\section{Introduction}\label{SECIntroduction}
Quantum Key Distribution (QKD) (reviewed in {e.g.}~\cite{Gisin02,Scarani08}) concerns the exchange of quantum states between two legitimate parties, conventionally named Alice and Bob. From these states, secret key data can be distilled. Unlike classical cryptography, whose security rests on unproven assumptions of computational difficulty, QKD can in principle be unconditionally secure.
The first QKD protocol, BB84~\cite{Bennett84}, was demonstrated experimentally in 1992~\cite{Bennett92c}. Since then, numerous other protocols have emerged, {e.g.}~\cite{B92,Ekert91,Ralph99,Silberhorn02b}. Such schemes have been established over very large distances, using both fibres (\unit[250]{km})~\cite{Stucki09} and free space (\unit[144]{km})~\cite{Schmitt-Manderbach07} as the quantum channel.
In the absence of quantum repeaters~\cite{Briegel98}, free space optics (FSO) is required for worldwide quantum communication via satellites~\cite{Villoresi08,Bonato09}. Furthermore, FSO facilitates urban free space communication which would bypass the need and expense of new fibres being laid. This is one solution to the ``last mile'' problem currently faced by the telecommunications industry~\cite{Majumdar08}.
Recently, we have demonstrated experimentally the feasibility of using continuous variables (CV), rather than single photons, to facilitate QKD~\cite{Lorenz04} in a real world free space environment with unrestrained daylight operation~\cite{Elser09}.
In this paper we explain why loss is of central importance to our scheme
and present a new, practical approach to beam collection that reduces this particular loss. A characterisation of our set-up can be found in~\cite{Elser09}.
\subsection{Noise and Attenuation}
Noise and attenuation in the quantum channel
are the limiting factors when determining the secure key rate. One of the central assumptions of any QKD protocol is that an adversary, Eve, has absolute control over the quantum channel. This means that any and all attenuation in the channel is attributed to Eve, as well as any excess noise picked up during transmission.
In the continuous variable regime, attenuation gives Eve additional information and increases Bob's errors,
both of which limit potential key rate~\cite{Heid06}.
In principle, it is always possible to generate a secret key even under high losses~\cite{Silberhorn02}, although the rate becomes negligible if losses are too high.
While we have shown that no polarisation excess noise appears to be present in the channel~\cite{Elser09}, it is worth considering the effects of intensity noise on quantum states~\cite{Dong08,Semenov09,Heim09}. Imperfect detection means the security analysis is more involved and further post-processing is required.
Security analysis under imperfect detection conditions, possibly caused by intensity noise, exists in the single photon regime~\cite{Fung09}. It remains to be seen what the implications are for continuous variables (we do not address these issues here); suffice it to say that detection efficiency should be optimised, as in all quantum information protocols, especially in the CV regime.
\section{Homodyne Detection}
We measure the signal states using a balanced homodyne detection scheme, as shown in Fig.~\ref{DetSet}. Homodyne detection uses a local oscillator (LO) which interferes with the signal beam at a beam splitter. The difference of the two resulting photocurrents gives the amplitude of the signal state. Successful detection relies on splitting the incident intensity equally and amplifying the resulting photocurrent difference electronically (using an appropriate low-noise amplifier). This technique allows quantum noise to be measured using standard PIN photodiodes.
Conventionally, as the name implies, the local oscillator is generated locally by the receiver, {e.g.} in~\cite{Lange06}. However, using polarisation variables, we are able to multiplex the signal and LO in the same spatial mode at the sender. This results in perfect mode matching at Bob's beam splitter (as well as numerous other benefits explained in~\cite{Elser09}). However, since the LO is actually part of the detection system, the effect of loss is compounded.
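As a toy numerical illustration of this subtraction (arbitrary values, not a model of our detector electronics), the sketch below interferes a strong LO with a weak signal on a 50/50 splitter; the difference of the two output intensities isolates the interference term $2\,\mathrm{Re}(E_{LO}^* E_{sig})$, i.e. the signal amplitude amplified by the LO.
\begin{verbatim}
import numpy as np

# Toy sketch of balanced homodyne detection (illustrative values only):
# a strong local oscillator (LO) and a weak signal interfere on a 50/50
# beam splitter; the photocurrent difference equals 2*Re(lo* x sig),
# i.e. the signal amplitude scaled up by the LO amplitude.
rng = np.random.default_rng(1)
lo = 1e4 + 0j                                 # LO amplitude (real phase)
sig = 0.5 + 0.1 * rng.standard_normal(10**5)  # weak, noisy signal
e1 = (lo + sig) / np.sqrt(2)                  # beam splitter outputs
e2 = (lo - sig) / np.sqrt(2)
i_diff = np.abs(e1)**2 - np.abs(e2)**2        # subtracted photocurrents
print(np.mean(i_diff) / (2 * abs(lo)))        # ~0.5, the mean signal
\end{verbatim}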
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{detection_diagram_wollaston}
\caption{Our free space homodyne detection. We use a Wollaston prism as the polarising beam splitter (PBS) in our set-up.}\label{DetSet}
\end{figure}
\subsection{Atmospheric Fluctuations}
In standard FSO, an intensity-modulated beam is focussed on a single diode~\cite{Majumdar08}. For a beam propagating from a source far away, all paths are considered paraxial, therefore only a lens is required to maximise detection efficiency.
In our case, the homodyne detection set-up used to measure the polarisation states requires more optics. A telescope reduces the beam size before it is split at a polarising beam splitter (PBS) and focussed on the photodiodes, as shown in Fig.~\ref{DetSet}. These additional optics cause some paths of the beam to become non-paraxial in the presence of atmospheric beam jitter. In practice, this leads to uncorrelated partial detection noise on each photodiode, i.e. an imbalance across the diodes which may affect the homodyne detection~\cite{Elser09}. It is therefore desirable to remove the spatial dependence of detection due to a jittering beam.
\subsubsection{Compensating for Beam Jitter}
In principle, the active area of the photodiode can be increased, such that it captures the entire jittering beam. However, capacitance scales with diode area, which in turn limits bandwidth, reducing the speed of operation and thus the key rate.
Another strategy could be to focus using suitable lenses. However, imperfectly aligned aspheric lenses are susceptible to poor focussing of beams that are not incident normal to the surface of the lens ({i.e.} are not paraxial).
Atmospheric beam jitter translates to angular deviations from the optical axis and the focus is no longer well defined.
This effect is more pronounced in moving target implementations, such as surface to aircraft communications, {e.g.}~\cite{Horvath09}.
Hence the motivation to design an optical component which offers angular as well as spatial tolerance in its transmission behaviour for the receiver.
\section{Improved Optical Tapers}
Microscale optical tapers exist to couple beams between fibres and waveguides, see {e.g.~\cite{Burns86,Love91}}.
They typically operate in the single-mode to single-mode regime. A similar implementation has been used for detectors in high energy physics~\cite{Hinterberger66,Winston70}, with ideas extending to solar power collectors using compound parabolic concentrators (CPCs)~\cite{Winston74} and generalised in terms of non-imaging optics~\cite{Winston05}.
While larger tapers have been suggested to operate in free space communication applications~\cite{Yun90}, to the best of our knowledge they have not been optimised and are not widely used.
Using numerical ray tracing simulations, we present an improved geometry of an optical taper for free space communications purposes.
\subsection{Taper Geometry}
The aim of our taper is to collect all light incident on a large aperture and transmit it onto a photodiode of much smaller size.
We want to effectively increase the active area of the photodiode without decreasing its speed of operation.
Furthermore, we require the transmittance of the taper to be both spatially invariant with respect to the incident beam and offer higher angular tolerance than lenses.
\subsubsection{Truncated Parabolic Mirror}
One solution to the problem of compressing a wide incident beam to one point is a parabolic mirror.
We therefore base our geometry on a parabolic fully-reflective surface, as shown in Fig.~\ref{Taper}~(left).
The equation of a parabola is given by~$z\left(r\right)=\alpha r^2$, where~$r$ is the radial extension and $\alpha$ is a constant.
In our case, the parabola is truncated in the~$z$-axis by the input and output apertures of radii~$r_1,r_2$ at~$z\left(r_1\right),z\left(r_2\right)$, respectively, such that the length of the taper is given by~$l=z\left(r_1\right)-z\left(r_2\right)$.
The constant~$\alpha=\frac{l}{r_1^2-r_2^2}$ is thus written in terms of these parameters.
\begin{figure}
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{taper}
\end{minipage}~\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{raytrace}
\end{minipage}
\caption{Geometry and definitions of the parabolic taper (left).
Example of ray trace of five rays (right).
The colours of the rays change according to reflection/refraction events.
Our actual simulations used over 10000 rays, from which transmission can be deduced by counting the resulting rays that pass through the aperture at the end of the taper.
Software: RayTrace~\cite{RayTrace08}.}\label{Taper}
\end{figure}
We require all the incident light to exit the taper, {i.e.} the focus $z_f$ of the parabola must lie outside the taper ($z_f>z\left(r_2\right)$). This imposes the condition that the gradient $r^\prime\left(z\right)$ of the parabola (with respect to the $z$-axis) can never exceed unity, otherwise some paths within the taper would be back-reflected. If we assume~$r_1$ and~$r_2$ are fixed by the size of beam jitter and diode area respectively, we therefore seek a taper of length~$l$ such that the condition~$\left|r^\prime\left(z\right)\right|<1$ is fulfilled. Given that the gradient~$r^\prime\left(z\right)=\frac{1}{2\alpha r}$ is maximal at~$r=r_2$, upon substituting for~$\alpha$ the condition above becomes
\begin{equation}
\left|\frac{r^2_1-r_2^2}{2lr_2}\right|<1~.
\end{equation}
This means that the length~$l$ is limited by
\begin{equation}
l>\frac{\left(\beta^2-1\right)r_2}{2}~,
\end{equation}
where~$\beta=\frac{r_1}{r_2}$ is the ratio of the input and output aperture radii of the taper.
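As a concrete check, for the taper simulated below (reading its quoted \unit[20]{mm} and \unit[2]{mm} apertures as diameters, so $r_1=\unit[10]{mm}$, $r_2=\unit[1]{mm}$ and $\beta=10$), the length must satisfy $l>\frac{\left(10^2-1\right)r_2}{2}=\unit[49.5]{mm}$.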
\subsection{Numerical Simulation}
It is important to stress that unlike conventional tapers ({e.g.}~\cite{Burns86,Love91}), we are operating in the highly multimode regime, rather than guiding single mode to single mode.
The large size of the taper compared to the wavelength of the light permits the use of ray approximations rather than wave optics.
Using the in-house numerical analysis programme RayTrace~\cite{RayTrace08}, the response of the taper to geometric rays can be simulated.
An example of a trace is shown in Fig.~\ref{Taper}~(right). Each incident ray represents an amount of energy governed by a Gaussian distribution centred on the middle ray.
The incident rays can be parameterised by a radial and angular displacement vector $\left(\delta,\theta\right)$, as used in ray transfer matrices.
Using this technique, the transmission of the taper for different values of $\delta$ and $\theta$ can be calculated.
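The sketch below is a minimal two-dimensional (meridional-ray) illustration of such a calculation, not the RayTrace program itself: it assumes a silvered parabolic wall $z=\alpha r^2$ with $r_1=\unit[10]{mm}$, $r_2=\unit[1]{mm}$ and $l=\unit[60]{mm}$ (satisfying the length condition of the previous subsection), treats every wall hit as a specular reflection, and counts reversed rays as lost. Its transmission figures are therefore only qualitative.
\begin{verbatim}
import numpy as np

# Minimal 2-D meridional-ray sketch of a truncated parabolic taper
# (assumed geometry; the actual simulations were 3-D, using RayTrace).
# Wall: z = ALPHA*r^2, traversed towards decreasing z.
R1, R2, L = 10.0, 1.0, 60.0              # mm; L > (beta^2-1)*R2/2 = 49.5
ALPHA = L / (R1**2 - R2**2)
Z1, Z2 = ALPHA * R1**2, ALPHA * R2**2    # input/output planes

def transmitted(r, theta_deg, max_bounces=100):
    """True if a ray entering at radius r with angle theta_deg to the
    optical axis leaves through the output aperture."""
    d = np.array([np.sin(np.radians(theta_deg)),
                  -np.cos(np.radians(theta_deg))])
    p = np.array([r, Z1])
    for _ in range(max_bounces):
        if d[1] >= 0:                    # reversed ray: count as lost
            return False
        # wall intersection: quadratic in the path length t
        a = ALPHA * d[0]**2
        b = 2 * ALPHA * p[0] * d[0] - d[1]
        c = ALPHA * p[0]**2 - p[1]
        ts = np.roots([a, b, c]) if abs(a) > 1e-12 else np.array([-c / b])
        ts = ts[np.isreal(ts)].real
        ts = ts[ts > 1e-9]
        t_exit = (Z2 - p[1]) / d[1]      # path length to the output plane
        if ts.size == 0 or t_exit < ts.min():
            return abs(p[0] + t_exit * d[0]) <= R2
        p = p + ts.min() * d             # bounce off the silvered wall
        n = np.array([-2 * ALPHA * p[0], 1.0])
        n /= np.linalg.norm(n)
        d = d - 2 * np.dot(d, n) * n     # specular reflection
    return False

for theta in (0, 2, 4, 6, 8):
    rs = np.linspace(-0.95 * R1, 0.95 * R1, 1001)
    T = np.mean([transmitted(r, theta) for r in rs])
    print(f"theta = {theta} deg: transmission ~ {T:.2f}")
\end{verbatim}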
\begin{figure}
\centering
\includegraphics[width=\textwidth]{transmission_both_2}
\caption{Taper and lens transmission as a function of radial displacement (left) and angle (right).
Both components show similar radial dependence.
However, the lens is only fully transmitting up to angular deviations of about 1$^\circ$, whereas the taper transmits fully up to about 4$^\circ$.
}\label{Trans}
\end{figure}
We simulate a beam of width \unit[2]{mm} (similar to beam widths used in our QKD set-up) incident on a taper with fully-reflecting side facets of initial aperture \unit[20]{mm} and final aperture of \unit[2]{mm}, as shown in Fig.~\ref{Taper}~(right).
An absorbing plane with a \unit[2]{mm} aperture is placed immediately after the taper.
The transmission through this aperture represents the total intensity that would impinge on a photodiode of diameter \unit[2]{mm}.
Figure~\ref{Trans} (left) shows the transmission as a function of radial displacement~$\delta$ (with~$\theta=0~\forall~\delta$) for a taper compared to a lens of equal aperture.
The ``photodiode'' is placed at the focus of the lens. As can be seen, both optical components serve to increase the effective diameter of the aperture to \unit[17]{mm}.
The angular dependence is tested slightly differently.
For a taper of non-linear geometry, whether or not a ray will be transmitted depends on both angle of incidence and point of entry into the taper.
We therefore simulate a number of beams of equal amplitude spread across the input aperture.
We test this at a number of different angles, with the results shown in Fig.~\ref{Trans}~(right).
The lens is only fully transmitting up to 1$^\circ$, whereas the taper offers four times more angular tolerance, remaining fully transmitting up to 4$^\circ$ and offering at least \unit[50]{\%} efficiency up to 7$^\circ$.
We also simulated how these general results translate to improvements in our specific set-up~\cite{Elser09}. The receiver telescope and taper are shown in the ray trace in Fig.~\ref{TelTrace} (left). Here, we vary the displacement incident on the receiver telescope, as would be the case for a jittering beam. For stationary sender and receiver stations, we do not need to consider angular variations since the beam propagating from a source far away is considered paraxial. However, radial displacement here is converted to angular deviations on the way to the photodiode,
as shown in Fig.~\ref{TelTrace} (left). These angular deviations arise from the realistic case of slight misalignment of the telescope or lens aberrations. Explicitly quantifying the tolerance of axial misalignment of the telescope is difficult. Here we simulate lens positions within \unit[1]{mm} of perfect alignment, which could be reasonably expected experimentally in our set-up.
The improvement of the taper over a lens is shown in Fig.~\ref{TelTrace} (right). In our set-up, the taper offers considerably more tolerance than a lens, remaining fully transmitting for radial deviations at the telescope of up to \unit[30]{mm}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{telescope_trace_both}
\caption{Left: ray trace of the detection set-up including telescope. We see how angular deviations build up due to the
optics. Right: transmission to a photodiode as a function of radial displacement at the initial telescope. The taper offers tolerance up to \unit[30]{mm}, amounting to a three-fold improvement over a lens when coping with beam jitter.}\label{TelTrace}
\end{figure}
\section{Outlook}
We have begun characterising such tapers with respect to our quantum communication experiment.
Their performance for a jittering beam can be analysed in our existing \unit[100]{m} free space channel~\cite{Elser09}.
We then plan to implement such tapers in our \unit[1.6]{km} point-to-point QKD link, which is currently under construction.
Conventional tapers are made out of glass and rely on total internal reflection to guide light.
In our case, the rays may strike the glass/air boundary at angles below the critical angle, such that some portion would be lost.
This leads to the undesired effect of position-dependent transmission.
To counter this, we coat our glass tapers with silver (offering high reflectivity for our wavelength of \unit[800]{nm}), negating the critical angle condition for reflection.
We also plan to coat the input and output apertures with an anti-reflective layer for our wavelength.
At the moment we study glass cores derived from the waste products of a fibre-drawing machine.
As a result, they are currently not machined precisely to our specifications.
However, constructing arbitrary taper geometries is possible in principle \cite{Birks92}.
An alternative approach would be to use a hollow taper with a highly-reflective inner surface.
\subsubsection{Acknowledgements} The authors would like to thank Silke Rammler and Leyun Zang for useful discussions, and Paul Lett for correspondence.
\section{Introduction}
Though unseen in the currently operating
first generation interferometric gravitational-wave antennae
(e.g., GEO \cite{httpGEO}, TAMA \cite{httpTAMA},
Virgo \cite{httpVirgo}, LIGO \cite{LIGO_RPP2009}),
instabilities which result from the transfer
of optical energy stored in the detector's Fabry-Perot cavities
to the mechanical modes of the mirrors which form these cavities
may confront the designers of second generation antennae.
The field of gravitational wave interferometry was introduced to the
concept of parametric instabilities (PI) couched in the language
of quantum mechanics \cite{Braginsky2001331}.
We present a classical framework for PI which uses the audio-sideband
formalism to represent the optical response of the interferometer \cite{Fritschel2001b,Regehr1995}.
This in turn allows the formalism to be extended to multiple interferometer configurations
without the need to rederive the relevant relationships,
an activity that has consumed considerable resources
\cite{Gurkovsky200791,Gurkovsky2007177,Ju2006360,Strigin200710}.
\section{Parametric Instabilities}
\slabel{PI_intro}
The process which can lead to PI can be approached as a classical feedback effect
in which mechanical modes of an optical system act on the
electro-magnetic modes of the system via scattering,
and the electro-magnetic modes act on the mechanical modes
via radiation pressure (see figure \fref{LoopPI}).
\begin{figure}[h]
\includegraphics[width=3.3in]{LoopPI.eps}
\caption{Parametric instabilities can be described as a classical feedback phenomenon.
A given mechanical mode interacts with the pump field to scatter energy into higher order
optical modes.
This interaction is represented with $\otimes$ above.
Its strength is given by the overlap integral of the mechanical mode,
pump mode, and scattered field mode ($B_{m,n}$ in equation \eref{Bmn}).
While circulating in the interferometer, the scattered field interacts with the
pump field and mechanical mode via radiation pressure,
introducing the overlap integral $\otimes$ a second time.}
\flabel{LoopPI}
\end{figure}
We start by considering a single mechanical mode of one
optic in the interferometer, and computing the parametric gain
of that mode \cite{Kells2002326}.
The resonant frequency of this mode determines the frequency of
interest for the feed-back calculation.\footnote{In general, all of the modes of all of the
optics should be considered simultaneously, but with typical mode quality factors
greater than $10^6$ it is very unlikely that two mechanical modes will participate
significantly at the same frequency.}
The nascent excitation of this mechanical mode,
possibly of thermal origin, begins the process by scattering
light from the fundamental mode of the optical system into
higher order modes (HOM).
The resulting scattered field amplitudes are
\begin{equation}
\elabel{E_scatter}
E_{scat,n} = \frac{2 \pi i}{\lambda_0} A_m E_{pump} B_{m, n}
\end{equation}
where $\lambda_0$ is the wavelength and $E_{pump}$ the amplitude,
of the ``pump field'' at the optic's surface,
$A_m$ is the amplitude of the motion of the mechanical mode,
and $B_{m, n}$ is its overlap coefficient with the $n^{th}$ optical mode.
The overlap coefficient results from an overlap integral of basis functions
on the optic's surface
\begin{equation}
\elabel{Bmn}
B_{m, n} = \iint\limits_{\mbox{\tiny{surface}}} f_0 f_n (\vec{u}_m \cdot \hat{z}) \,d\vec{r}_{\perp}
\end{equation}
where $\vec{u}_m$ is the displacement function of the mechanical mode,
$f_0$ and $f_n$ are the field distribution functions for the pump field,
typically a gaussian, and the $n^{th}$ HOM.
Each of these basis functions is normalized such that
\begin{eqnarray}
\iiint\limits_{\mbox{\tiny{optic}}} |\vec{u}_m|^2 \,d\vec{r} & = & V \\
\iint\limits_{\infty} |f_n|^2 \,d\vec{r}_{\perp} & = & 1
\end{eqnarray}
where $V$ is the volume of the optic.\footnote{
A more general normalization of $\vec{u}_m$ which allows for non-uniform density
would include the density function of the optic in the integral,
with the result equal to the total mass.}
The interferometer's response to the scattered field can be computed in the
modal basis used to express the HOMs via the audio sideband formalism.
The resulting transfer coefficients represent the gain and phase of the
optical system to the scattered field as it travels from
the optic's surface through the optical system and back.
The optical mode amplitudes of the field which returns to the optic's surface are
\begin{eqnarray}
\elabel{E_return}
E_{rtrn,n} & = & G_n E_{scat,n} \nonumber \\
& = & \frac{2 \pi i}{\lambda_0} A_m E_{pump} G_n B_{m, n}
\end{eqnarray}
where $G_n$ is the transfer coefficient from a field leaving
the optic's surface to a field incident on and then reflected from the same surface.
The transfer coefficient $G_n$ is complex, representing the amplitude and phase of the
optical system's response at the mechanical mode frequency.
Computation of $G_n$ for an optical system is discussed in
more detail in section \sref{OptTC}.
The scattered field which returns to the optic closes the PI feedback loop
by generating radiation pressure on that surface with a spatial profile
that has some overlap with the mechanical mode of interest.
This radiation pressure force is given by
\begin{eqnarray}
F_{rad} & = & \frac{2}{c} E_{pump}^{\ast} \sum_{n = 0}^{\infty} B_{m, n} E_{rtrn, n} \nonumber \\
& = & \frac{2 P}{c} \frac{2 \pi i}{\lambda_0} A_m \sum_{n = 0}^{\infty} G_n B_{m, n}^2
\elabel{F_rad}
\end{eqnarray}
where $P = |E_{pump}|^2$, and $c$ is the speed of light.
The factor of 2 appears since the force is generated by the fields
as they reflect from the surface.
Finally, the radiation pressure which couples into this mechanical
mode adds to the source amplitude according to the transfer function
of the mechanical system at its resonance frequency $\omega_m$,
\begin{eqnarray}
\elabel{A_prime}
\Delta A_{m} & = & \frac{-i Q_m}{M \omega_m^2} F_{rad} \nonumber \\
& = & \frac{Q_m}{M \omega_m^2} \frac{4 \pi P}{c \lambda_0} A_m
\sum_{n = 0}^{\infty} G_n B_{m, n}^2
\end{eqnarray}
where $M$ is the mass of the optic
and $Q_m$ the quality factor of the mechanical resonance.
The open-loop gain of the PI feedback loop is therefore,
\begin{equation}
\elabel{PI_LoopGain}
\frac{\Delta A_{m}}{A_{m}}
= \frac{4 \pi Q_m P}{M \omega_m^2 c \lambda_0} \sum_{n = 0}^{\infty} G_n B_{m, n}^2
\end{equation}
The parametric gain $\mathcal{R}$ is the real part of the open-loop gain
\begin{eqnarray}
\elabel{R_m}
\mathcal{R}_m & = & \real{\frac{\Delta A_{m}}{A_{m}}} \nonumber \\
& = & \frac{4 \pi Q_m P}{M \omega_m^2 c \lambda_0}
\sum_{n = 0}^{\infty} \real{G_n} B_{m, n}^2
\end{eqnarray}
with the usual implication of instability if $\mathcal{R}_m > 1$,
and optical damping if $\mathcal{R}_m < 0$.\footnote{
Appendix \sref{Comparison} relates this result to the results found
in previous works.}
\section{Optical Transfer Coefficients}
\slabel{OptTC}
When the pump field is phase modulated by the mechanical mode of the optic,
upper and lower scattering sidebands are produced.
In general, these scattering sidebands will have different optical transfer coefficients
in the interferometer; $G_n^+$ and $G_n^-$.
The combination of scattering sidebands which leads to radiation pressure is
\begin{equation}
G_n = G_n^- - G_n^{+\ast}
\end{equation}
(see appendix \sref{DeriveFrad} for derivation).
This section will describe a general method for computing $G_n^\pm$.
Given scattering matrices $\mathbb{S}_n^\pm$ which contain transfer coefficients
for the $n^{th}$ HOM of the scattering sidebands from one point to the next in the
optical system, we have
\begin{equation}
G_n^\pm = \vec{e}_x^{\hspace{2pt} T} \left( \mathbb{I} - \mathbb{S}_n^\pm \right)^{-1} \vec{e}_x
\end{equation}
where the basis vector $\vec{e}_x$ is
the $x^{th}$ column of the identity matrix $\mathbb{I}$,
and $\vec{e}_x^{\hspace{2pt} T}$ is its transpose.
The index $x$ is used to select the field which reflects from the optic of interest,
as demonstrated in the following section.\footnote{
A general framework for constructing scattering matrices is described in \cite{Corbitt2005}
and will not be reproduced here.}
\subsection{An Example: Fabry Perot Cavity}
\slabel{cavFP}
\begin{figure}[h]
\includegraphics[width=3.3in]{OptFP.eps}
\caption{A simple Fabry Perot cavity.
In the example calculation, fields are evaluated at each of the numbered circles.
Field 4 is used to compute the optical gain of the scattered field produced by,
and acting on, mirror B.}
\flabel{OptFP}
\end{figure}
As a simple and concrete example, we apply the above formalism
to a Fabry Perot cavity (FPC) of length $L$.
Figure \fref{OptFP} shows the configuration and indices for each of the fields in the FPC.
The scattering matrices for the upper and lower sidebands are
\begin{equation}
\mathbb{S}_n^\pm =
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
t_A & 0 & 0 & 0 & -r_A \\
0 & p_L^\pm & 0 & 0 & 0 \\
0 & 0 & -r_B & 0 & 0 \\
0 & 0 & 0 & p_L^\pm & 0
\end{pmatrix}
\end{equation}
where $t_A$ is the transmissivity for mirror A,
$r_A$ and $r_B$ are the reflectivities of the mirrors,
and $p_L^\pm = e^{i (\phi_n \pm \omega_m L / c)}$ is the propagation operator.
The reflectivity and transmissivity used in $\mathbb{S}_n^\pm$
are amplitude values and are related to a
mirror's power transmission by $t = \sqrt{T}$ and $r = \sqrt{1 - T}$.
The propagation phase depends on the phase of the $n^{th}$ HOM, $\phi_n$,
and the scattering sideband frequency $\pm \omega_m$.
If we wish to evaluate $\mathcal{R}_m$ for a mode of mirror B, we would use
\begin{equation}
\vec{e}_4 =
\begin{pmatrix}
0 \\ 0 \\ 0 \\ 1 \\ 0
\end{pmatrix}
\end{equation}
to select its reflected field, number 4 in figure \fref{OptFP}.
To arrive at a numerical result for $\mathcal{R}_m$,
we adopt the following parameters
\begin{align*}
P &= 1 \unit{MW} & \lambda_0 &= 1064 \unit{nm} \\
T_A &= 0.014 & T_B &= 10^{-5}\\
L &= 3994.5 \unit{m} & M &= 40 \unit{kg}
\end{align*}
which are representative of an Advanced LIGO arm cavity
operating at full power.
For the moment, we will consider a single optical mode,
the Hermite-Gauss TEM11 mode, and a single mechanical mode
\begin{align*}
Q_m &= 10^7 & \omega_m &= 2 \pi \times 29950 \unit{Hz} \\
B_{m,HG11} &= 0.21 & \phi_{HG11} &= -5.434
\end{align*}
both of which are shown in figure \fref{FPmodes}.
\begin{figure}[h]
\includegraphics[width=3.3in]{FPmodes.eps}
\caption{A mechanical mode of an Advanced LIGO test mass near $30\unit{kHz}$
and a HG TEM11 optical mode.
For the mechanical mode, surface displacement amplitude normal to the surface,
$\vec{u}_m \cdot \hat{z}$, is shown.
For the optical mode, the basis function $f_{HG11}$ amplitude is shown.
In both cases, red is positive, blue is negative and green is zero.
The X and Y-axes on both plots are in centimeters.}
\flabel{FPmodes}
\end{figure}
For this set of values, we evaluate the parametric gain
\begin{align*}
G_{HG11}^+ &= 0.554 + i 2.72 & G_{HG11}^- &= 0.502 + i 0.03 \\
\Rightarrow G_{HG11} &= -0.052 + i 2.75 & \mathcal{R}_{m,HG11} &= -6.5 \times 10^{-4}
\end{align*}
to find it is small and negative.
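The sketch below is our numerical reconstruction of this example (not the authors' own code): it builds $\mathbb{S}_n^\pm$ from the matrix above, extracts $G_n^\pm$ with the basis vector $\vec{e}_4$, and evaluates equation \eref{R_m}. Near an optical resonance $G_n^\pm$ is sensitive to the rounded values of $\phi_{HG11}$ and $\omega_m$ quoted above, so the printed numbers agree with those in the text only approximately, although the resonances near $27.4\unit{kHz}$ and $47.7\unit{kHz}$ and their signs are reproduced.
\begin{verbatim}
import numpy as np

# Sketch of the Fabry-Perot example: build S_n^+/- from the 5x5 matrix
# above, take G = e4^T (I - S)^{-1} e4, combine the sidebands, and
# evaluate the parametric gain R_m.  Parameters as quoted in the text.
c = 299792458.0
P, lam0, Lcav, M = 1e6, 1064e-9, 3994.5, 40.0      # aLIGO-like arm cavity
Q, B, phi = 1e7, 0.21, -5.434                      # ~30 kHz mode vs. HG11
rA, tA = np.sqrt(1 - 0.014), np.sqrt(0.014)
rB = np.sqrt(1 - 1e-5)

def G(w_m, sign):
    p = np.exp(1j * (phi + sign * w_m * Lcav / c)) # propagation operator
    S = np.zeros((5, 5), dtype=complex)
    S[1, 0], S[1, 4] = tA, -rA                     # mirror A
    S[3, 2] = -rB                                  # mirror B
    S[2, 1] = S[4, 3] = p                          # propagation A<->B
    e4 = np.zeros(5); e4[3] = 1.0                  # select field 4
    return e4 @ np.linalg.solve(np.eye(5) - S, e4)

def R_m(f_m):
    w = 2 * np.pi * f_m
    Gn = G(w, -1) - np.conj(G(w, +1))              # G_n = G_n^- - G_n^{+*}
    return 4 * np.pi * Q * P / (M * w**2 * c * lam0) * Gn.real * B**2

for f in (27_400, 29_950, 47_700):
    print(f"f_m = {f} Hz: R_m = {R_m(f):+.2e}")
\end{verbatim}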
Allowing the mechanical mode frequency to artificially vary from $20\unit{kHz}$ to $50\unit{kHz}$, we can plot $\mathcal{R}_m$ as a function of $\omega_m$.
The result is shown in figure \fref{FPgains11}.
The resonance at $27.4\unit{kHz}$ is the upper scattering sideband of the TEM11 mode
which has negative parametric gain, indicating optical damping.
At $47.7\unit{kHz}$ the lower scattering sideband of the TEM11 mode resonates,
this time resulting in positive feedback,
but not enough to produce instability.
\begin{figure}[h]
\includegraphics[width=3.3in]{FPgains11.eps}
\caption{Optical gains $G_{HG11}^+$ and $G_{HG11}^-$, and parametric gain $\mathcal{R}_{m,HG11}$
are shown as a function of mechanical mode frequency.
The resonance of the upper scattering sideband at $27.4\unit{kHz}$ has negative parametric gain,
indicating optical damping.
The lower scattering sideband resonance at $47.7\unit{kHz}$ has positive gain,
but does not produce instability.}
\flabel{FPgains11}
\end{figure}
Thus far only one optical mode and only one mechanical mode have been considered.
Extending the computation to higher order optical modes requires that we elaborate
our expression for $\phi_n$ to include the Gouy phase.
For an arbitrary Hermite-Gauss mode
\begin{equation}
\phi_n = \phi_0 - O_n \phi_G
\end{equation}
where $\phi_0$ is the propagation phase of the TEM00 mode,
and $O_n$ is the mode order of the $n^{th}$ HOM.
Considering other mechanical modes is a matter of computing the mode shapes and frequencies
for the mirrors which make up the FPC;
we use those of an Advanced LIGO test-mass.\footnote{
The Advanced LIGO test-mass mechanical modes used in this and the following examples
are the result of finite element modeling.
For a discussion of numerical and analytic methods for calculating test-mass mechanical
modes, see \cite{Strigin20086305}.}
The result of the full calculation of $\mathcal{R}_m$ for all mechanical modes
between $10\unit{kHz}$ and $90\unit{kHz}$, including HOMs up to $9^{th}$ order,
is shown in figure \fref{FPparaGains}.
\begin{figure}[h]
\includegraphics[width=3.3in]{FPparaGains.eps}
\caption{Parametric gains $\mathcal{R}_m$
are shown for mechanical modes between $10\unit{kHz}$ and $90\unit{kHz}$.
Red circles mark modes with positive parametric gain,
while green circles mark those with negative gain.
This calculation uses HOMs up to $9^{th}$ order,
but does not include clipping losses,
discussed in section \sref{Clipping}.}
\flabel{FPparaGains}
\end{figure}
\section{Clipping Losses}
\slabel{Clipping}
Thus far we have ignored losses in the optical system.
A lower limit on the losses is given by the loss of power
due to the finite size of the optics, known as clipping loss.
For low-order optical modes, these losses are usually insignificant
by design, but losses can strongly impact the parametric gain
when the contribution from high-order optical modes is dominant.
In optical systems such as gravitational-wave interferometers,
in which the beam size on the optics is made as large as possible
without introducing significant loss in the TEM00 mode,
high-order modes tend to fall off the cavity optics.
Specifically, for an interferometer designed to have a few
parts-per-million clipping losses for the TEM00 mode,
contributions to $\mathcal{R}_m$ from modes of order $O_n \gtrsim 4$ are limited,
and modes with $O_n \gtrsim 9$ are insignificant.
A more complete description of losses due to apertures includes
diffraction effects, but this requires a more complex
and interferometer dependent calculation.
Even better is to use the eigenmodes of the full interferometer,
and their associated losses, rather than the Hermite-Gauss basis.
This level of detail may not be rewarded,
however, since modes which differ significantly from their
Hermite-Gauss partners do so as a result of significant losses,
which in turn make them irrelevant to PI.
\subsection{An Example: Advanced LIGO}
\slabel{aLIGO}
\begin{figure}[h]
\includegraphics[width=3.3in]{OptSR.eps}
\caption{Fields in a power and signal recycled Fabry Perot Michelson.
This optical configuration is common to many of the $2^{nd}$ generation
gravitational-wave detectors.}
\flabel{OptSR}
\end{figure}
As a more interesting example, we apply the above formalism to an
Advanced LIGO interferometer.
Figure \fref{OptSR} shows the layout of the optical system and the assignment
of field evaluation points (FEPs).
In this case care has been taken to minimize the number of FEPs and to follow
each one with a propagation operation.
In this way we can number the propagation distances $L_x$ according to
their associated FEP, and the propagation operations become
$p_{n,x}^\pm = e^{i (\phi_{n,x} \pm \omega_m L_x / c)}$.
The scattering matrices for this interferometer can be split into
a diagonal propagation matrix populated by $p_{n,x}^\pm$,
and a mirror matrix populated by reflectivity and transmissivity
coefficients (essentially one $r$ and one $t$ in each column),
as follows
\begin{equation}
\elabel{S_n}
\mathbb{S}_n^\pm = \mathbb{M} \, \mathbb{P}_n^\pm
\hspace{8ex}
\mathbb{P}_n^\pm =
\left(
\begin{array}{*{3}c}
p_{n,1}^\pm & \cdot & \cdots \\
\cdot & p_{n,2}^\pm & \\
\vdots & & \ddots
\end{array}
\right)
\end{equation}
\begin{equation*}
\def\minus{{\scriptscriptstyle -}}
\renewcommand{\arraycolsep}{-1pt}
\renewcommand{\arraystretch}{0.7}
\mathbb{M} \hspace{-1pt} = \hspace{-2pt}
\left(
\begin{array}{*{12}c}
\cdot & \minus r_{EX} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\minus r_{IX} & \cdot & \cdot & \cdot & \cdot & t_{IX} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \minus r_{EY} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \minus r_{IY} & \cdot & \cdot & \cdot & \cdot & t_{IY} & \cdot & \cdot & \cdot & \cdot \\
t_{IX} & \cdot & \cdot & \cdot & \cdot & r_{IX} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & t_{BS} & \cdot & r_{BS} \\
\cdot & \cdot & t_{IY} & \cdot & \cdot & \cdot & \cdot & r_{IY} & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \minus r_{BS} & \cdot & t_{BS} \\
\cdot & \cdot & \cdot & \cdot & t_{BS} & \cdot & \minus r_{BS} & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \minus r_{PR} & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & r_{BS} & \cdot & t_{BS} & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \minus r_{SR} & \cdot
\end{array}
\right)
\end{equation*}
Losses can be included in the scattering matrix $\mathbb{S}_n^\pm$
most generally by allowing $\mathbb{M}_n$ to vary for each HOM.
A simpler approach which is sufficient for clipping losses is
to add a diagonal matrix $\mathbb{C}_n$ to equation \eref{S_n}
which effectively adds loss to each propagation step
\begin{gather}
\mathbb{S}_n^\pm = \mathbb{M} \, \mathbb{C}_n \, \mathbb{P}_n^\pm
\hspace{8ex}
\mathbb{C}_n =
\left(
\begin{array}{*{3}c}
t_{n,1} & \cdot & \cdots \\
\cdot & t_{n,2} & \\
\vdots & & \ddots
\end{array}
\right)
\end{gather}
where
\begin{equation}
t_{n,x} = \sqrt{\, \iint\limits_{\mbox{\tiny{surface}}} f_n^2 \, d\vec{r}_{\perp}}
\end{equation}
is the amplitude transmission of the aperture associated with each propagation step.
For this example, a $17\unit{cm}$ aperture is assumed for all of the optics,
except the beam-splitter for which we assume a $13.3\unit{cm}$ aperture.\footnote{
The beam-splitter aperture includes the aperture presented by the
electro-static actuators on IX and IY.}
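The sketch below illustrates how such aperture transmissions can be evaluated numerically for Hermite-Gauss modes. The beam radius $w$ is an assumed, illustrative value (it is not quoted in the text), and we read the $17\unit{cm}$ figure as the aperture radius; under these assumptions the TEM00 clipping loss indeed comes out at or below the parts-per-million level.
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite

# Hedged numerical sketch of the aperture transmission t_{n,x}:
# t^2 is the integral of f_n^2 over the disk r < a.  The beam radius w
# is an assumed illustrative value; a is our reading of the 17 cm
# aperture quoted in the text (as a radius).
w, a = 6.2e-2, 17e-2                       # beam radius, aperture radius [m]
x = np.linspace(-3 * a, 3 * a, 1201)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
mask = X**2 + Y**2 < a**2

def hg(m, n):
    """HG basis function on the optic, normalized numerically so that
    its squared integral over the whole plane is 1."""
    f = (eval_hermite(m, np.sqrt(2) * X / w) *
         eval_hermite(n, np.sqrt(2) * Y / w) *
         np.exp(-(X**2 + Y**2) / w**2))
    return f / np.sqrt((f**2).sum() * dA)

for m, n in [(0, 0), (1, 1), (3, 3), (4, 5)]:
    T = (hg(m, n)[mask] ** 2).sum() * dA   # power transmission t_{n,x}^2
    print(f"HG{m}{n} (order {m+n}): clipping loss = {1 - T:.2e}")
\end{verbatim}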
In order to compute the parametric gain as a function of mechanical mode frequency,
as in figure \fref{FPgains11} for the FPC example, we use the same values as above
\begin{gather*}
P = 1 \unit{MW} \quad \lambda_0 = 1064 \unit{nm} \\
M = 40 \unit{kg} \quad Q_m = 10^7
\end{gather*}
and add the transmission of the new optics
\begin{gather*}
T_{IX} = T_{IY} = 0.014 \quad T_{EX} = T_{EY} = 10^{-5}\\
T_{PR} = 0.03 \quad T_{SR} = 0.2 \quad T_{BS} = 0.5
\end{gather*}
new lengths
\begin{gather*}
L_{\{1,2,3,4\}} = 3994.5 \unit{m} \\
L_{\{5,6\}} = 4.85 \unit{m} \quad L_{\{7,8\}} = 4.9 \unit{m} \\
L_{\{9,10\}} = 52.3 \unit{m} \quad L_{\{11,12\}} = 50.6 \unit{m}
\end{gather*}
and phases
\begin{gather*}
\phi_{0,\{1-8,11,12\}} = 0 \quad \phi_{0,\{9,10\}} = \pi / 2 \\
\phi_{G,\{1,2,3,4\}} = 2.72 \quad \phi_{G,\{5,6,7,8\}} = 0 \\
\phi_{G,\{9,10\}} = 0.44 \quad \phi_{G,\{11,12\}} = 0.35
\end{gather*}
which represent $156^\circ$ of Gouy phase in the arms,
$25^\circ$ in the power recycling cavity and
$20^\circ$ in the signal recycling cavity.
The results are similar to the FPC alone, with negative gain near $27.4\unit{kHz}$
and positive gain near $47.7\unit{kHz}$.
The addition of the rest of the interferometer, however,
leads to narrow regions of high parametric gain for the HG11 mode, see figure \fref{ALIGOgains11}.
\begin{figure}[ht]
\includegraphics[width=3.3in]{ALIGOgains11.eps}
\caption{Comparison of $\mathcal{R}_{m, HG11}$ for a Fabry Perot cavity
and Advanced LIGO are shown as a function of mechanical mode frequency.
The Advanced LIGO computation is shown twice:
the green curve includes clipping losses but has perfectly matched arm cavities,
while the red curve adds a $0.1\%$ mismatch between the arm cavity Gouy phases.
The calculation is limited to the modes shown in figure \fref{FPmodes},
with the mechanical mode frequency artificially
adjusted to highlight the resonance near $47.7\unit{kHz}$.}
\flabel{ALIGOgains11}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=3.3in]{ALIGOparaGains.eps}
\caption{Parametric gain for all modes of an Advanced LIGO
test-mass between $10\unit{kHz}$ and $90\unit{kHz}$.
To show the effect of clipping losses, a few of the modes
which have significant gain in the absence of clipping are also included.
Empty circles represent parametric gains computed without clipping.
They are attached to their clipped partners, represented by filled circles, with dashed lines.}
\flabel{ALIGOparaGains}
\end{figure}
As noted in \cite{Strigin200710}, realistically imperfect matching of interferometer optics
can significantly change the parametric gains of the system.
Our formalism can be made to reproduce this result by allowing the Gouy phase
in one arm of the interferometer to differ from that of the other arm,
\begin{equation*}
\phi_{n,\{3,4\}} = (1 + \epsilon) \phi_{n,\{1,2\}}
\end{equation*}
where $\epsilon$ is the fractional departure of $\phi_{n,\{3,4\}}$ from their nominal values.
For our Advanced LIGO example,
where radius of curvature errors of a few meters are expected,
we take $\epsilon = 10^{-3}$.
This tiny difference is sufficient to move the sharp features associated with a perfectly
matched interferometer by more than their width, thereby changing the result of any PI
calculation (see figure \fref{ALIGOgains11}). These results are similar to those in
\cite{Gras2009CQG}, though their model of the power-recycling cavity was somewhat less general.
Finally, the full calculation for Advanced LIGO is plotted in figure \fref{ALIGOparaGains}.
To show the effect of clipping losses,
modes which have $\mathcal{R}_n > 1$ in the absence of clipping
are shown and connected to their clipped partners.
\section{Worst Case Analysis}
\slabel{WorseCase}
Evaluating the impact of PI on a gravitational-wave interferometer
is complicated by the sensitivity of the result to small changes
in the model parameters.
In particular, uncertainty in the radii of the optics used in the
arm cavities leads to changes in the Gouy phases which,
while quite small, are sufficient to move the optical
resonances of the cavity by more than their width.
Similarly, mechanical mode frequencies produced analytically or by
finite element modeling may not match the real articles due
to small variations in materials, assembly, and ambient temperature.
This section describes a robust means of estimating the
``worst case scenario'' for a given interferometer.
A simple approach to the ``worst case'' problem is to compute the parametric
gain of each mode for multiple sets of plausible interferometer
parameters.
Varying all of the parameters is impractical and unnecessary
as the results are primarily sensitive to the relative frequencies
of the mechanical modes and the optical resonances in high-finesse
cavities.
In the case of Advanced LIGO, explored in section \sref{aLIGO} above,
it is sufficient to vary the Gouy phases in the arm cavities by
a fractional amount of $5 \, \times \, 10^{-3}$, and the phases in the recycling cavities
by a couple of degrees.
We proceed with a Monte-Carlo type analysis in which we randomly
vary the Gouy phases around their nominal values.
We repeat the process for 120 thousand trials,
then set an upper limit on $\mathcal{R}_n$ for each mode at the
lowest value greater than $99\%$ of the results
(see figure \fref{ALIGOworstCase}).\footnote{
To speed the computation, only optical modes
with some overlap, $B_{m,n}^2 > 10^{-3}$,
and modest clipping losses, $t_{n,1}^2 > 0.7$, are considered.}
This provides us with a trial number insensitive statistic,
the accuracy of which is limited primarily by the
fidelity of our model.
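Structurally, this Monte-Carlo can be sketched as below. Here \texttt{parametric\_gains} is a hypothetical stand-in (a toy Lorentzian resonance whose position shifts with the randomized phases) for the full interferometer model of the previous sections; only the sampling and the $99\%$ upper-limit statistic are the point of the sketch.
\begin{verbatim}
import numpy as np

# Structural sketch of the worst-case Monte-Carlo.  parametric_gains()
# is a HYPOTHETICAL stand-in (toy Lorentzian resonance) for the full
# model of the previous sections; only the statistics are the point.
rng = np.random.default_rng(0)
N_TRIALS = 5_000                       # the text uses 120 000 trials
mode_freqs = np.linspace(10e3, 90e3, 200)

def parametric_gains(dphi_arm, dphi_prc, dphi_src):
    detune = mode_freqs - 30e3 * (1 + 20 * dphi_arm + dphi_prc + dphi_src)
    return 50.0 / (1 + (detune / 20.0) ** 2)       # toy R_m per mode

results = np.empty((N_TRIALS, mode_freqs.size))
for k in range(N_TRIALS):
    results[k] = parametric_gains(
        dphi_arm=2.72 * rng.uniform(-5e-3, 5e-3),  # arm Gouy phase jitter
        dphi_prc=np.radians(rng.uniform(-2, 2)),   # recycling cavity
        dphi_src=np.radians(rng.uniform(-2, 2)))   # phase jitter
upper = np.quantile(results, 0.99, axis=0)  # 99% upper limit per mode
print(f"{np.sum(upper > 1)} modes with upper limit R_m > 1")
\end{verbatim}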
We find that Advanced LIGO faces the possibility of a few unstable modes.
Among the 120 thousand cases considered, the mean value of the maximum
$\mathcal{R}_n$ among all modes was 5.8 and $99\%$ of cases had
a maximum parametric gain value less than 45.
Considering each mechanical mode independently, there are 32
modes which have the potential to be unstable, and all of the
highly unstable modes are between $15\unit{kHz}$ and $50\unit{kHz}$.
Taking into account that 4 test-masses make up the Advanced LIGO detector,
we find the mean number of unstable modes to be 10 (2.5 per test-mass),
with $99\%$ of cases having 6 or fewer unstable modes per test-mass.
One must keep in mind, however, that many of the parameters used in
this model are adjustable (e.g., power level in the interferometer,
mirror temperature and thus mechanical mode frequency and radius
of curvature, etc.) and others are speculative
(e.g., the quality factor of a mechanical mode may depend
strongly on its suspension \cite{Logan92, Rowan1998}).
Since the parametric gain scales directly with both power in the interferometer
and with mechanical mode Q, we have chosen round values for these parameters
which can be refined as higher fidelity numbers become known.
\begin{figure}[ht]
\includegraphics[width=3.3in]{ALIGOworstCase.eps}
\caption{Worst case parametric gain for all modes of an Advanced LIGO
test-mass between $10\unit{kHz}$ and $90\unit{kHz}$.
There are 32 potentially unstable modes,
and more than 200 modes with $\mathcal{R}_n > 0.1$.}
\flabel{ALIGOworstCase}
\end{figure}
\section{Conclusions}
\slabel{Conclusions}
Parametric instabilities are of particular interest to
the field of gravitational-wave interferometry where high mechanical
quality factors and a large amount of stored optical power
have the potential to produce instability.
We depart from previous work by constructing a flexible
analysis framework which can be applied to a variety of optical systems.
Though our examples use a Hermite-Gaussian modal basis to describe the
optical fields, this formalism can be implemented using the
modal basis best suited to the optical system at hand.
In our use of Advanced LIGO as an example application,
we find that parametric instabilities, if left unaddressed,
present a potential threat to the stability of high-power operation.
We hope that future work on solutions to parametric instabilities will be
guided by these results.
\newpage
\section{Introduction}
The Hilbert function of a scheme $X\subset\mathbb{P}^n$ encodes a great
deal of interesting information about the geometry of $X$ and so
the study of $HF(X,\cdot)$ has generated an enormous amount of
research. One of the most crucial and basic facts about the
Hilbert function of a scheme is that the function is eventually
polynomial. More precisely
\[HF(X,d)=hp(X,d), \mbox{ for } d\gg 0.\]
In general, knowledge of the Hilbert polynomial does not determine
the Hilbert function. But, there are some interesting situations
when this is the case. E.g. if $X$ is a generic set of $s$ points
in $\mathbb{P}^n$, it is well known, and not hard to prove, that
\[HF(X,d)=\min\left\{hp(\mathbb{P}^n,d)={n+d\choose d},hp(X,d)=s\right\},\]
for all $d\in\mathbb{N}$. A much harder result is due to
Hartshorne and Hirschowitz. In \cite{HartshorneHirschowitz} the
authors considered schemes $X\subset\mathbb{P}^n$ consisting of $s$
generic lines and they proved that
\[HF(X,d)=\min\left\{hp(\mathbb{P}^n,d)={n+d\choose d},hp(X,d)=s(d+1)\right\},\]
for all $d\in\mathbb{N}$.
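Already the first non-trivial case is instructive: for two generic (hence skew) lines in $\mathbb{P}^3$ and $d=1$ we have $hp(X,1)=2(1+1)=4={1+3\choose 3}=hp(\mathbb{P}^3,1)$, so $HF(X,1)=4$, i.e. no hyperplane contains two generic lines.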
Inspired by these results about points and lines, we restrict our
attention to that special family of schemes known as {\em
configurations of linear spaces}. We recall that a configuration
of linear spaces $\Lambda\subset \mathbb{P}^n$ is nothing more than a
finite collection of linear subspaces of $\mathbb{P}^n$; see
\cite{CarCatGer1,CaCat09} and \cite{DerksenSidman} for more on
these schemes and their connection with {\em subspace
arrangements}. We further say that a configuration of linear
spaces is {\em generic} when its components are generically
chosen.
The Hilbert polynomial of a generic configuration of linear spaces
is known, thanks to a result of Derksen, see \cite{Derksen}. Thus,
in light of the results on the Hilbert function of generic points
and generic lines, we propose the following
\begin{quote} {\bf Conjecture:} {\em if $\Lambda\subset \mathbb{P}^n$ is a
generic configuration of linear spaces with non-intersecting
components, then
\[HF(\Lambda,d)=\min\left\{hp(\mathbb{P}^n,d),hp(\Lambda,d)\right\},\]
for all $d\in\mathbb{N}$.}
\end{quote}
We will call a Hilbert function defined as above {\em
bipolynomial}. Hence, the conjecture states that generic
configurations of linear spaces with non-intersecting components
have bipolynomial Hilbert function.
As we mentioned above, this conjecture is true when
$\dim\Lambda=0$ (generic points) and when $\dim\Lambda=1$. The
conjecture holds in the dimension one case because of the result
about generic lines in \cite{HartshorneHirschowitz} and because
we know how adding generic points to a scheme
changes its Hilbert function, see \cite{GeramitaMarosciaRoberts}.
In this paper we produce new evidence supporting our conjecture.
Namely, we show that the union of one plane and $s$ generic lines
has bipolynomial Hilbert function.
The paper is structured as follows: in Section \ref{basicsection}
we introduce some basic notation and results we will use;
Sections \ref{p4section} and \ref{lemmas} contain the base cases for our inductive approach;
Section \ref{generalsection} contains our main result, Theorem
\ref{TeoremaInPn}. These sections are followed by a section on Applications and another in which we propose a possibility for the Hilbert function of any generic configuration of linear spaces, even one in which there are forced intersections.
The first two authors thank Queen's University for its hospitality during part of the
preparation of this paper. All the authors enjoyed support from
NSERC (Canada) and GNSAGA of INDAM (Italy). The first author was, furthermore,
partially supported by a ``Giovani ricercatori, bando 2008" grant
of the Politecnico di Torino.
\section{Basic facts and notation}\label{basicsection}
We will always work over an algebraically closed field $k$ of
characteristic zero. Let $R=k[x_0,...,x_n]$ be the coordinate ring
of $\mathbb{P}^n$, and denote by $I_X$ the ideal of a scheme $X \subset
\mathbb{P}^n$. The Hilbert function of $X$ is then $HF(X,d)=\dim
(R/I_X)_d$.
\begin{defn} \ Let $X$ be a subscheme of $\mathbb{P}^n$. We say that $X$ has a {\it bipolynomial Hilbert function} if
\[HF(X,d)=\min\left\{hp(\mathbb{P}^n,d),hp(X,d)\right\},\]
for all $d\in\mathbb{N}$.
\end{defn}
It will often be convenient to use ideal notation rather than Hilbert function notation, i.e. we will often describe $\dim I_X$ rather than $HF(X,d)$. It is clearly trivial to pass
from one piece of information to the other.
The following lemma gives a criterion for adding to a scheme, $X
\subseteq \Bbb P^ n$, a set of reduced points lying on a linear
space ${\Pi} \subseteq \Bbb P ^n$ and imposing independent
conditions to forms of a given degree in the ideal of $X$.
\begin{lem} \label{AggiungerePuntiSuSpazioLineare}
Let $d \in \Bbb N$. Let $X \subseteq \Bbb P^n$ be a scheme, and
let $P_1,\dots,P_s$ be generic distinct points on a linear space
$\Pi \subseteq \Bbb P^n$.
If $\dim (I_{X })_d =s$ and $\dim (I_{X +\Pi})_d =0$, then
$
\dim (I_{X+P_1+\cdots+P_s})_d = 0.$
\par
\end{lem}
\begin{proof}
By induction on $s$. Obvious for $s=1$. Let $s>1$ and let $X' =
X+P_s$. Obviously $\dim (I_{X ' + \Pi})_d =0$. Since $\dim (I_{X
+\Pi})_d=0$ and $P_s$ is a generic point in $\Pi$, then $\dim
(I_{X' })_d =s-1$. Hence, by the inductive hypothesis, we get
$\dim (I_{X'+P_1+\cdots+P_{s-1}})_d =\dim (I_{X+P_1+\cdots+P_s})_d
= 0.$
\end{proof}
Since we will make use of Castelnuovo's inequality several times in the
next sections, we recall it here in a form more suited to our use (for
notation and proof we refer to \cite{AH95}, Section 2).
\begin {defn}\label{ResiduoTraccia}
If $X, Y$ are closed subschemes of $\mathbb{P}^n$, we denote by $Res_Y X$
the scheme defined by the ideal $(I_X:I_Y)$ and we call it the
{\it residual scheme} of $X$ with respect to $Y$, while the scheme
$Tr_Y X \subset Y$ is the schematic intersection $X\cap Y$, called
the {\it trace} of $X$ on $Y$.
\end {defn}
\begin{lem} \label{Castelnuovo}{\bf (Castelnuovo's inequality):}
Let $d,\delta \in \mathbb N$, $d \geq \delta$, let ${Y} \subseteq \mathbb{P} ^n$ be a smooth hypersurface of degree $\delta$,
and let $X \subseteq \mathbb{P}
^n$ be a scheme. Then
$$
\dim (I_{X, \mathbb{P}^n})_d \leq \dim (I_{ Res_Y X, \mathbb{P}^n})_{d-\delta}+
\dim (I_{Tr _{Y} X, Y})_d.
$$
\qed
\end{lem}
Even though we will only use the following lemma in the cases
$m=2$, $m=3$ (see the notation in the lemma), it seemed
appropriate to give the more general argument since such easily
understood (and non trivial) degenerations occur infrequently.
\begin{lem} \label{sundial}
Let $X_1 \subset \Bbb P^n $ be the disconnected subscheme consisting of a line $L_1$ and a linear space $\Pi \simeq \mathbb P^m$ (so the linear span of $X_1$ is $<X_1> \simeq \Bbb P^{m+2} $). Then there exists a flat family of subschemes $$X_{\lambda}\subset <X_1> \ \ \ \ \ (\lambda \in k )$$
whose special fibre $X_0$ is the union of
\begin{itemize}
\item
the linear space $\Pi $,
\item a line $L$ which intersects $\Pi$ in a point $P$,
\item the scheme $2P|_ { <X_1>}$, that is, the schematic intersection of the double point
$2P$ of $\mathbb P^n$ and $<X_1>$.
\end{itemize}
Moreover, if $H \simeq \Bbb P^{m+1}$ is the linear span of $L$
and $\Pi$, then $Res_H(X_0)$ is given by the (simple) point $P$.
\end{lem}
\begin{proof} We may assume that the ideal of the line $L_1$ is
$$(x_1, \dots, x_{m}, x_{m+1} - x_{0} , x_{m+3}, \dots, x_n)$$ and the ideal of $\Pi$ is
$( x_{m+1} , \dots, x_n)$, so the ideal of $X_1$ is
$$I_{X_1}=(x_1, \dots, x_{m}, x_{m+1} - x_{0} , x_{m+3}, \dots, x_n) \cap ( x_{m+1} , \dots, x_n).
$$
Consider the flat family $\{ X_{\lambda}\}_{\lambda \in k}$, where for any fixed $\lambda \in k$, $X_\lambda$ is the union of $\Pi$ and the line
$$
x_1= \dots= x_{m}= x_{m+1} - \lambda x_{0} = x_{m+3}= \dots= x_n=0.$$
The ideal of $X_{\lambda}$ is
$$I_{X_{\lambda}}=(x_1, \dots, x_{m}, x_{m+1} - \lambda x_{0} , x_{m+3}, \dots, x_n) \cap ( x_{m+1} , \dots, x_n)
$$
$$=( x_1, \dots, x_{m}, x_{m+1}-\lambda x_0) \cap (x_{m+1},x_{m+2}) + (x_{m+3},\dots,x_{n})$$
$$=( x_1, \dots, x_{m}, x_{m+1}-\lambda x_0) \cdot (x_{m+1},x_{m+2}) + (x_{m+3},\dots,x_{n})$$
$$=( x_1 x_{m+1}, \dots, x_{m} x_{m+1}, (x_{m+1}-\lambda x_0) x_{m+1})
$$
$$+
( x_1x_{m+2}, \dots, x_{m}x_{m+2}, (x_{m+1}-\lambda x_0)x_{m+2})$$
$$+ (x_{m+3},\dots,x_{n}),$$
which for $\lambda =0$ gives:
$$I_{X_0}=( x_1 x_{m+1}, \dots, x_{m} x_{m+1}, x_{m+1}^2)+
( x_1x_{m+2}, \dots, x_{m}x_{m+2}, x_{m+1}x_{m+2})$$
$$+ (x_{m+3},\dots,x_{n}),$$
$$
=( x_1, \dots, x_{m+1}) \cdot (x_{m+1},x_{m+2}) + (x_{m+3},\dots,x_{n}).$$
Let $(x_{m+3},\dots,x_{n}) = J$.
We will prove that
\begin{equation} \label{idealediX0}
I_{X_0}
=( x_1, \dots, x_{m+1}) \cdot (x_{m+1},x_{m+2}) +J
\end{equation}
$$
=\left [( x_1, \dots, x_{m+1})+J\right ] \cap \left [(x_{m+1},x_{m+2})+J\right ] \cap
\left [ ( x_1, \dots,x_{m+2})^2 +J\right ]. $$
We use Dedekind's Modular Law several times in what follows (see \cite[page 6]{AtMac}).
We start by considering the intersection of the first two ideals, i.e.,
$$
\left [( x_1, \dots, x_{m+1})+J\right ] \cap \left [(x_{m+1},x_{m+2})+J\right ] $$
$$=
\left [( x_1, \dots, x_{m+1})+J\right ] \cap \left [((x_{m+1})+J)+(x_{m+2})\right ]$$
$$=
((x_{m+1})+J)+
\left\{ \left [( x_1, \dots, x_{m+1})+J \right ] \cap (x_{m+2})\right \}$$
$$=
((x_{m+1},x_1x_{m+2}, \dots, x_mx_{m+2})+ J).$$
\medskip
It remains to intersect this last ideal with the third ideal above, i.e.,
$$((x_{m+1},x_1x_{m+2}, \dots, x_mx_{m+2})+ J) \cap \left [ ( x_1, \dots,x_{m+2})^2 +J \right ]$$
$$=\left[(x_{m+1}) +((x_1x_{m+2}, \dots, x_mx_{m+2})+ J)\right] \cap \left [ ( x_1, \dots,x_{m+2})^2 +J \right ]$$
$$=\left[(x_{m+1}) \cap (( x_1, \dots,x_{m+2})^2 +J )\right ] + ((x_1x_{m+2}, \dots, x_mx_{m+2})+ J)$$
$$=\left\{(x_{m+1}) \cap \left[ (x_{m+1})\cdot ( x_1, \dots,x_{m+2}) + ( x_1, \dots,x_{m},x_{m+2})^2 +J
\right ] \right \}$$
$$+ ((x_1x_{m+2}, \dots, x_mx_{m+2})+J)$$
$$= \left[ (x_{m+1})\cdot ( x_1, \dots,x_{m+2}) \right ]+ \left [ (x_{m+1}) \cap \left ( ( x_1, \dots,x_{m},x_{m+2})^2 +J
\right ) \right ]
$$
$$+ ((x_1x_{m+2}, \dots, x_mx_{m+2})+ J).$$
Clearly the middle ideal is contained in the sum of the other two, and so the last ideal is equal to
$$\left[ (x_{m+1})\cdot ( x_1, \dots,x_{m+2}) \right ]+ ((x_1x_{m+2}, \dots, x_mx_{m+2})+ J)$$
$$=( x_1, \dots, x_{m+1}) \cdot (x_{m+1},x_{m+2}) +J.$$
So we have proved that $I_{X_0} $ is
$$\left [( x_1, \dots, x_{m+1})+J\right ] \cap \left [(x_{m+1},x_{m+2})+J\right ] \cap
\left [ ( x_1, \dots,x_{m+2})^2 +J\right ]. $$ Since $J $ is the
ideal of $<X_1>$, the first ideal in this intersection defines a
line $L$ in $<X_1>$ which meets the linear space $\Pi$ (defined by
the second ideal in this intersection) in the point
$P=[1:0:\dots:0]\in \mathbb P^n$, which is the support of the third
ideal in this intersection. The third ideal, in fact, describes
the scheme $2P|_{<X_1>}$, which is the double point $2P$ of
$\mathbb P^n$ restricted to the span of $X_1$.
The ideal of $H$ is $(x_{m+1})+J $, hence from (\ref {idealediX0}) we have that the ideal of $Res_H(X_0)$ is
$$I_{X_0} : I_{H} = \left[ ( x_1, \dots, x_{m+1}) \cdot (x_{m+1},x_{m+2}) + J \right]
:((x_{m+1})+J) $$
$$=(x_1,\dots ,x_n)= I_P.
$$
\end{proof}
\begin {defn}\label{conica degenere}
We say that $C$ is a {\it degenerate conic} if $C$ is the union
of two intersecting lines $L_{1}, L_{2}.$ In this case we write
$C=L_1+L_2$.
\end {defn}
\begin {defn}\label{defsundial}
Let $n\geq m+2$. Let $\Pi \simeq \Bbb P^m \subset \mathbb P^n$
be a linear space of dimension $m$, let $P\in \Pi$ be a point and
let $L \not\subset \Pi $ be a generic line through $P$. Let $T
\simeq \Bbb P^{m+2}$ be a generic linear space containing the
scheme $L+\Pi$. We call the scheme $L+\Pi+ 2P|_T$ an {\it
$(m+2)$-dimensional sundial}. (See, for instance, the scheme
$X_0$ of Lemma \ref{sundial}).
\medskip
Note that for $m=1$, the scheme $L+\Pi$ is a degenerate conic and
the $3-$dimensional sundial
$L+\Pi+ 2P|_T$ is a {\it degenerate conic with an embedded point}
(see \cite {HartshorneHirschowitz}).
\end{defn}
\begin{thm} [Hartshorne-Hirschowitz, \cite{HartshorneHirschowitz}] \label{HH}
Let $n, d \in \mathbb N$.
For $n\geq 3$, the ideal of the scheme $X\subset \Bbb P^n$ consisting of $s$ generic
lines has the expected dimension, that is,
$$
\dim (I_X)_d = \max \left \{ {d+n \choose n} -s(d+1), 0 \right \},
$$
or equivalently
$$
HF(X,d) = \min \left \{ hp(\mathbb{P}^n,d)={d+n \choose n}, hp(X,d)=s(d+1)
\right \}.
$$
\end{thm}
\qed
\medskip
Since a line imposes at most $d+1$ conditions to the forms of degree $d$, the first part of the following lemma is clear. The second statement of the lemma is obvious.
\begin{lem} \label{BastaProvarePers=e,e*}
Let $n,d, s \in \Bbb N$, $n \geq4$. Let $\Pi \subset \Bbb P^n$ be a plane, and let $L_1, \dots , L_s \subset \Bbb P^n$ be $s$ generic lines. Let
$$ X_s= \Pi + L_1+ \dots + L_s \subset \Bbb P^n .$$
\begin{itemize}
\item[(i)] If $ \dim (I_{X_s})_d = {d+n \choose n} - {d+2 \choose 2} -s(d+1)$, then
$ \dim (I_{X_{s'}})_d = {d+n \choose n} - {d+2 \choose 2} -s'(d+1)$ for any $s'<s$.
\par
\item[(ii)] If $ \dim (I_{X_s})_d = 0$, then
$ \dim (I_{X_{s'}})_d =0$ for any $s'>s$.
\par
\end{itemize}
\end{lem}
\qed
\section{The base for our induction}\label{p4section}
In this section we prove our main Theorem (see \ref{TeoremaInPn}) in $\mathbb{P}^4$.
\begin{thm} \label{TeoremaInP4} Let $d\in\mathbb{N}$ and $\Pi \subset \Bbb P^4$ be a plane,
and let $L_1, \dots , L_s \subset \Bbb P^4$ be $s$ generic lines. Set
$$X= \Pi + L_1+ \dots + L_s \subset \Bbb P^4.$$
Then
$$
\dim (I_{X})_d = \max \left \{ {d+4 \choose 4} - {d+2
\choose 2} -s(d+1), 0 \right \},
$$
or equivalently $X$ has bipolynomial Hilbert function.
\end{thm}
\begin{proof} We proceed by induction on $d$. Since the theorem is obvious for $d=1$, let $d>1$. By Lemma \ref{BastaProvarePers=e,e*}
it suffices to prove the theorem for $s=e$ and $s=e^*$, where
$$e= \left \lfloor {{{d+4 \choose 4} - {d+2 \choose 2} }\over {d+1} }\right \rfloor =
\left \lfloor {\frac{d (d+2)(d+7)}{24} }\right \rfloor ; \ \ \ \
e^*= \left \lceil {{{d+4 \choose 4} - {d+2 \choose 2} }\over {d+1} }\right \rceil .
$$
Let $$
\bar e = \left \lfloor {{{(d-1)+4 \choose 4} - {(d-1)+2 \choose 2} }\over {(d-1)+1} } \right \rfloor = \left \lfloor{{ (d-1)(d+1)(d+6)} \over 24} \right \rfloor .
$$
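Both closed forms are elementary to verify; the following Python fragment (included purely as an illustration, not as part of the proof) checks them for all $d\leq 200$:
\begin{verbatim}
from math import comb

for d in range(2, 201):
    num = comb(d + 4, 4) - comb(d + 2, 2)
    assert num // (d + 1) == d * (d + 2) * (d + 7) // 24       # e
    num1 = comb(d + 3, 4) - comb(d + 1, 2)
    assert num1 // d == (d - 1) * (d + 1) * (d + 6) // 24      # e bar
print("closed forms for e and e_bar verified")
\end{verbatim}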
We consider two cases.
\par
\medskip
{\it Case 1}: $d$ odd.
\par
For $s=e$, we have to prove that $\dim (I_{X})_d = {d+4
\choose 4} - {d+2 \choose 2} -e(d+1)$ (which is obviously positive).
Since $\dim (I_{X})_d \geq {d+4 \choose 4} - {d+2 \choose 2} -e(d+1),$ we have
only to show that $\dim (I_{X})_d \leq {d+4 \choose 4} -
{d+2 \choose 2} -e(d+1).$
For $s=e^*$, we have to prove that
$\dim (I_{X})_d =0.$
In order to prove these statements we construct a scheme $Y $
obtained from $X$ by
specializing the $s- \bar e$ lines $L_{\bar e +1}, \dots, L_{s}$
into a generic hyperplane $H \simeq \Bbb P^3$ (we can do this since $\bar e <s$).
If we can prove that $\dim(I_{Y})_d = \max \{ {d+4 \choose 4} - {d+2 \choose 2} -s(d+1); 0 \}$, that is,
if we can show that the plane and the $s$ lines give the expected number of conditions to the forms of degree $d$ of $\mathbb P^4$,
then (by the semicontinuity of the Hilbert function) we are done.
\par
Note that
$$Res_H Y = L_{1}+ \dots + L_{\bar e} + \Pi \subset \Bbb P^4 ,
$$
and
$$Tr_H Y = P_{1}+ \dots + P_{\bar e }+ L_{\bar e +1}+ \dots+L_{s}+ L \subset \Bbb P^3,$$
where $P_i = L_i \cap H$, ($1 \leq i \leq \bar e$), and $L$ is the line $\Pi \cap H$.
\medskip
\medskip
Since $ d$ is odd, the number ${{ (d-1)(d+1)(d+6)} \over 24} $ is an integer, so
$$
\bar e ={{ (d-1)(d+1)(d+6)} \over 24}.
$$
The inductive hypothesis applied to $Res_H Y $ in degree $d-1$ yields:
$$\dim (I_{Res_H Y })_{d-1} = {d+3 \choose 4} - {d+1 \choose 2} -\bar e\, d =0.
$$
By Theorem \ref{HH}, since the $P_i$ are generic points, we get
$$ \dim (I_{ Tr_H Y })_{d} =\max \left \{ {d+3 \choose 3} - \bar e - (s - \bar e +1) (d+1);0 \right \}$$
$$=
\max \left \{ {d+3 \choose 3} +\bar e d- (s +1) (d+1) ;0\right \}$$
$$=\left\{
\begin{array}{cc}
{d+4 \choose 4} - {d+2 \choose 2} - e(d+1) & \ {\rm for} \ s=e \\
0 & \ \ \ {\rm for} \ s=e^* \\
\end{array} \right. ,
$$
and the conclusion follows by Lemma \ref{Castelnuovo} with $\delta=1$.
\par
\medskip
{\it Case 2}: $d$ even. \par
In this case $e = e^* = {{d (d+2)(d+7)} \over 24} $, and so
we only have to prove that $\dim (I_{ X})_d = 0.$ Let
$$ x= {{d(d+2)}\over 8} ,
$$
and note that $x$ is an integer, $x <e$.
Let $H \simeq \Bbb P^3$ be a generic hyperplane containing the
plane $\Pi$, and let $ Y $ be the scheme obtained from $ X$ by
degenerating the $x$ lines $L_{1}, \dots, L_{x}$ into $H $. By
abuse of notation, we will again denote these lines by $L_{1},
\dots, L_{x}$. By Lemma \ref{sundial}, with $m=2$, we get
$$ Y = L_{1}+ \dots+ L_{x} + 2P_1+ \dots +2 P_x + \Pi + L_{x+1}+ \dots + L_{ e} ,
$$
where $P_i= L_i \cap \Pi$ ($1 \leq i \leq x $) and the $2P_i$ are double points in $\mathbb P^4$.
If we can prove that $\dim (I_{ Y})_d = 0$ we are done.
By Lemma \ref{sundial}, with $m=2$, we get
$$Res_H Y = P_1+ \dots +P_x+ L_{x+1}+ \dots + L_{ e} \subset \Bbb P^4 ,
$$
where the $P_i$ are generic points in $\Pi$.
Also,
$$
Tr_H Y = L_{1}+ \dots+ L_{x} + 2P_1|_ H+ \dots +2 P_x |_ H +
\Pi + Q_{x+1}+ \dots + Q_{ e}, $$ but, since $2P_i |_ H \subset
L_i + \Pi$, we get
$$
Tr_H Y = L_{1}+ \dots+ L_{x} + \Pi + Q_{x+1}+ \dots + Q_{ e}
\subset \Bbb P^3
$$
where $Q_i = L_i \cap H$, ($x+1 \leq i \leq e$).
Since $\Pi$ is a fixed component of the zero locus for the forms
of $I_{ Y \cap H }$, we get that
$$ \dim (I_{Tr_H Y })_{d} =\dim (I_{Tr_H Y - \Pi })_{d-1}.
$$
Since the $Q_i$ are generic points, we can apply Theorem \ref{HH} and get
\begin{equation} \label{traccia}
\dim (I_{Tr_H Y - \Pi })_{d-1}={d-1+3 \choose 3} - xd - (e-x) =
0 .
\end{equation}
Now we will prove that $\dim (I_{Res_H Y })_{d-1} =0$.
\par
By Theorem \ref{HH} we know that
\begin{equation} \label{numeropuntisuspazio}
\dim (I_{ L_{x+1}+ \dots + L_{ e}})_{d-1} = {d+3 \choose 4} - d(e-x) =x .
\end{equation}
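All the arithmetic used so far in this case (the integrality of $x$, the inequality $x<e$, and the identities (\ref{traccia}) and (\ref{numeropuntisuspazio})) can be confirmed mechanically; an illustrative Python check, not part of the argument, is:
\begin{verbatim}
from math import comb

for d in range(2, 200, 2):                  # d even
    e = d * (d + 2) * (d + 7) // 24
    assert (d * (d + 2)) % 8 == 0           # x is an integer
    x = d * (d + 2) // 8
    assert 0 < x < e
    assert comb(d + 2, 3) - x * d - (e - x) == 0     # (traccia)
    assert comb(d + 3, 4) - d * (e - x) == x         # (numeropuntisuspazio)
print("Case 2 bookkeeping verified")
\end{verbatim}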
Moreover, since the scheme $ \Pi+L_{x+1}+ \dots + L_{ e}$ has
$e-x $ lines,
and it is easy to show that
$$e-x={{d(d+2)(d+4)}\over 24} \geq
\left \lceil {{{(d-1)+4 \choose 4} - {(d-1)+2 \choose 2} }\over {(d-1)+1} } \right \rceil = \left \lceil{{(d-1) (d+1)(d+6)} \over 24} \right \rceil,
$$
then, by the inductive hypothesis, we get
\begin{equation} \label{nullasuspazio}
\dim (I_{ \Pi+L_{x+1}+ \dots + L_{ e}})_{d-1}=0.
\end{equation}
Now we apply Lemma \ref{AggiungerePuntiSuSpazioLineare}; by (\ref{numeropuntisuspazio}) and (\ref {nullasuspazio}) we have
\begin{equation} \label{residuo}
\dim (I_{Res_H Y })_{d-1}=0.
\end{equation}
Finally, by (\ref{traccia}), (\ref{residuo}) and Lemma
\ref{Castelnuovo} (with $\delta =1$) we get $\dim (I_{ Y})_d =
0$, and that completes the proof of our main theorem
for $\mathbb P^4$.
\end{proof}
\section{Some technical lemmata}\label{lemmas}
Although the base case for an inductive approach to our main theorem
was relatively straightforward, this is not the
case for the inductive step.
One aspect is relatively clear. We first specialize some lines and degenerate other pairs of lines and divide our calculation, via Castelnuovo, into a Residual scheme (which we can handle easily) and a Trace scheme in a lower dimensional projective space. It is here that the difficulties arise. The Trace scheme will consist of degenerate conics, points and lines. Unfortunately, it is not always the case that generic collections of degenerate conics behave well with respect to postulational questions. The following example makes that clear.
\begin{rem} If $C$ is a degenerate conic in $\mathbb P^3$ then imposing the passage through $C$ imposes $7$ conditions on the cubics of $\mathbb P^3$. One might then suspect that if $X$ is the union of three generic degenerate conics in $\mathbb P^3$ then $X$ would impose $3\cdot 7 = 21$ conditions on cubics, i.e. there would not be a cubic surface through $X$, although there obviously is one: the union of the three planes spanned by the conics.
\end{rem}
It is the existence of such examples that complicates the induction step. In fact, to get around this difficulty, we have to consider (at the same time) several auxiliary families combining both specializations and degenerations of a scheme consisting of a collection of generic lines and points.
Note that the first two lemmata deal with such families of auxiliary schemes in $\mathbb P^3$. These are needed to deal with the Trace scheme in $\mathbb P^4$ which occurs in the first inductive step from $\mathbb P^4$ to $\mathbb P^5$. These two lemmata also serve to point out the kinds of families we will need for the remainder of the proof.
\begin{lem} \label{RetteIncrociateInP3a} Let $d=2(4h+r+1)$, $h \in \Bbb N$, $r=0;1;3$, (that is, $d \equiv 0; 2; 4,$ mod 8). Let
$$ c= \left\lfloor{ {d+3 \choose 4} \over d} \right\rfloor ,
$$
and set
$$a = {d+3 \choose 4}-d c ; \ \ \ \ \ \ \ b={ {{d+3 \choose 3}- a(2d+1)-c } \over {d+1} } .$$
Then
\begin{itemize}
\item[(i)]
$b$ is an integer; \par
\item[(ii)]
if $x = {d+1 \choose 3}-(a+b)(d-1)$ we have $0\leq x<c ;$ \par
\item[(iii)]
if $W \subset \Bbb P^3$ is the following scheme
$$W = C_1+ \dots + C_a +M_1+ \dots +M_b + P_1+ \dots +P_c
$$
(where the $C_i$ are generic degenerate conics, the $M_i$ are generic lines, and the $P_i$ are generic points)
then $W$ gives the expected number of conditions to the forms of degree $d$, that is
$$ \dim (I_W)_d = {d+3 \choose 3} - a(2d+1)-b(d+1)-c =0.
$$
\end{itemize}
\end{lem}
\begin{proof}
{\rm (i)} An easy computation yields:
\begin{itemize}\item
for $d=8h+2$ (that is for $r=0$),
$$c={1\over4} {{d+3}\choose 3} -{1\over2};
\ \ a={d \over 2} = 4h+1; \ \ \hbox {and so} \
b= 8h^2+h+1 ;\ \ \
$$
\item
for $d=8h+4$ (that is for $r=1$),
$$c={1\over4} {{d+3}\choose 3} -{3\over4}; \ \
a={3d \over 4} = 6h+3; \ \ \hbox {and so} \
b= 8h^2+h;
$$
\item
for $d=8h+8$ (that is for $r=3$),
$$c={1\over4} {{d+3}\choose 3} -{1\over4}; \ \
a={d \over 4} = 2h+1; \ \ \hbox {and so} \
b= 8h^2+17h+10.$$
\end{itemize}
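These closed forms, and in particular the integrality of $b$, are easily confirmed by machine; an illustrative Python sketch (not part of the proof) is:
\begin{verbatim}
from math import comb

for h in range(0, 50):
    for r, d in ((0, 8*h + 2), (1, 8*h + 4), (3, 8*h + 8)):
        c = comb(d + 3, 4) // d
        a = comb(d + 3, 4) - d * c
        num = comb(d + 3, 3) - a * (2*d + 1) - c
        assert num % (d + 1) == 0             # (i): b is an integer
        b = num // (d + 1)
        if r == 0:
            assert 4*c + 2 == comb(d+3, 3) and a == d//2 and b == 8*h*h + h + 1
        if r == 1:
            assert 4*c + 3 == comb(d+3, 3) and a == 3*d//4 and b == 8*h*h + h
        if r == 3:
            assert 4*c + 1 == comb(d+3, 3) and a == d//4 and b == 8*h*h + 17*h + 10
print("closed forms of (i) verified")
\end{verbatim}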
{\rm (ii)} Using (i) and direct computation, (ii) easily follows.
{\rm (iii)} Observe that
$$ {d+3 \choose 3} - a(2d+1)-b(d+1)-c
$$
$$={d+3 \choose 3} - a(2d+1)-{d+3 \choose 3}+ a(2d+1)+c-c=0.$$
Thus we have to prove that $ \dim (I_W)_d =0.$\par
If $d=2$, that is, for $h=r=0$, we have $a= 1$, $b=1$, $c= 2$, and it is easy to see that there are no quadrics containing the scheme $C_1+M_1+P_1+P_2$.
\par
Let $d>2$.
Let $L_{i,1},L_{i,2}$ be the two lines which form the degenerate conic $C_i$, and let $Q$ be a smooth quadric surface.
Let $x$ be as in {\rm (ii)} and let $\widetilde W$ be the scheme obtained from $W$ by specializing $(c-x)$ of the $c$ simple points $P_i$ to generic points on $Q$ and by specializing the conics $C_i$ in such a way that the
lines $L_{1,1},\dots ,L_{a,1}$ become lines of the same ruling on $Q$ (the lines $L_{1,2},\dots ,L_{a,2}$ remain generic lines, not lying on $Q$).
The line $L_{i,2}$ meets $Q$ in two points: $L_{i,1} \cap L_{i,2}$ and a second point, which we denote by $R_{i,2}$. In the same way, $M_i$ meets $Q$ in the two points $S_{i,1}$, $S_{i,2}$.
We have
$$
Res_Q {\widetilde W} =L_{1,2}+\dots + L_{a,2} +M_1+ \dots+M_b+ P_1+\dots+P_x \subset \Bbb P^3 ,
$$
where the $L_{i,2}$ and the $ M_i$ are generic lines.
By Theorem \ref{HH} and the description of $x$ we get
$$\dim ( I_{Res_Q {\widetilde W}})_{d-2} = {d+1 \choose 3}- (a+b)(d-1)-x=0.
$$
Now consider $
Tr_Q {\widetilde W} $, which is
$$
L_{1,1}+ \dots +L_{a,1}+R_{1,2}+\dots + R_{a,2}+
S_{1,1}+S_{1,2}+ \dots+S_{b,1}+ S_{b,2}+ P_{x+1}+\dots+P_c .
$$
Note that the points $R_{i,2}, (1 \leq i \leq a); S_{i,1},S_{i,2}, (1 \leq i \leq b); P_i, (x+1 \leq i \leq c)$ are generic points on $Q$ and the lines all come from the same ruling on $Q$, hence
$$\dim ( I_{Tr_Q {\widetilde W}})_{d} = (d-a+1)(d+1) - a -2b-(c-x).$$
By a direct computation, we get
$\dim ( I_{Tr_Q {\widetilde W}})_{d} =0$.
So by Lemma \ref {Castelnuovo}, with $n=3$ and $\delta=2$, the conclusion follows.
\end{proof}
\begin{lem} \label{RetteIncrociateInP3b} Let $d \geq3$ be odd, or $d=8h+6$, $h \in \Bbb N$, (that is, $ d\equiv 1;3;5;6;7$, mod 8). Let
$$c= { {d+3 \choose 4}\over {d}}$$
and set
$$ b= \left\lfloor{ {d+4 \choose 4} \over {d+1} } \right\rfloor - c- 2; \ \ \ \ \
b^*= \left\lceil{ {d+4 \choose 4} \over {d+1} } \right\rceil - c- 2.$$
Then
\begin{itemize}
\item[(i)]
$b >0$ and $c$ is an integer; \par
\item[(ii)]
if $x = {d+1 \choose 3}-b(d-1)$, then $0\leq x<c;$ \par
\item[(iii)]
if $W, W^* \subset \Bbb P^3$ are the following schemes
$$W = C+ 2P+M_1+ \dots +M_b + P_1+ \dots +P_c,
$$
$$W^* = C+ 2P+M_1+ \dots +M_{b^*} + P_1+ \dots +P_c,
$$
(where $C= L_1+L_2$ is a degenerate conic, formed by the two lines $L_1$, $L_2$; where $2P$ is a double point with support in $P= L_1 \cap L_2$; where the $M_i$ are generic lines and the $P_i$ are generic points)
then $W$ and $W^*$ give the expected number of conditions to the forms of degree $d$, that is
$$ \dim (I_W)_d = {d+3 \choose 3} - (2d+2)-b(d+1)-c ,
$$
$$ \hbox {and} \ \ \ \dim (I_{W^*})_d =0 .
$$
\end{itemize}
\end{lem}
\begin{proof}
Computing directly it is easy to verify {\rm (i)} and {\rm (ii)}.
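For the convenience of the reader, here is an illustrative Python verification of {\rm (i)} and {\rm (ii)} (again, not part of the argument):
\begin{verbatim}
from math import comb

for d in [d for d in range(3, 300) if d % 2 == 1 or d % 8 == 6]:
    assert comb(d + 3, 4) % d == 0            # c is an integer
    c = comb(d + 3, 4) // d
    t = comb(d + 4, 4) // (d + 1)
    b = t - c - 2
    assert b > 0                              # (i)
    x = comb(d + 1, 3) - b * (d - 1)
    assert 0 <= x < c                         # (ii)
print("(i) and (ii) verified")
\end{verbatim}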
{\rm (iii)} Since the scheme $C+2P$ is a degeneration of two skew lines it imposes $2d+2$ conditions to forms of degree $d$ (see Lemma \ref{sundial}). It follows that
$$ \dim (I_W)_d \geq {d+3 \choose 3} - (2d+2)-b(d+1)-c .$$
Hence, it suffices to prove that $ \dim (I_W)_d \leq {d+3 \choose 3} - (2d+2)-b(d+1)-c .$
Let $Q$ be a smooth quadric surface.
Let $x$ be defined as in {\rm (ii)} and let $\widetilde W$ be the scheme obtained from $W$ by specializing $(c-x)$ of the $c$ simple points $P_i$ onto $Q$ and by specializing the line $M_1$ and the conic $C$ in such a way that the
lines $M_1$ and $L_{1}$ become lines of the same ruling on $Q$ (the line $L_{2}$ remains a generic line, not lying on $Q$, while the point $P$ becomes a point lying on $Q$).
We have $L_{2} \cap Q = P+ R$, and set $M_i \cap Q = S_{i,1}+S_{i,2}$, ($2 \leq i \leq b$).
Then
$$
Res_Q {\widetilde W} =L_{2}+M_2+ \dots+M_b+ P_1+\dots+P_x \subset \Bbb P^3 ;
$$
$$
Tr_Q {\widetilde W}
= L_{1}+ M_1+2P|_Q +R + S_{2,1}+S_{2,2}+ \dots+S_{b,1}+ S_{b,2}+ P_{x+1}+\dots+P_c .
$$
By Theorem \ref{HH} we immediately get
$$\dim ( I_{Res_Q {\widetilde W}})_{d-2} = {d+1 \choose 3}- b(d-1)-x=0.
$$
Thinking of $Q$ as $\mathbb P^1 \times \mathbb P^1$, we see that the forms of degree $d$ in the ideal of $L_{1}+M_1+2P|_Q$
correspond to curves of type $(d-2,d)$ in $\mathbb P^1 \times \mathbb P^1$ passing through $P$:
since $P$ already belongs to $L_1$, the double point $2P|_Q$ imposes only one further condition. With that observation, it is easy to check that
$$\dim ( I_{Tr_Q {\widetilde W}})_{d} =( d-1)(d+1)- 2 - 2(b-1)-c+x$$
$$= {d+3 \choose 3} - (2d+2)-b(d+1)-c. $$
So by Lemma \ref {Castelnuovo}, with $n=3$ and $\delta=2$, it follows that
$$ \dim (I_W)_d = {d+3 \choose 3} - (2d+2)-b(d+1)-c ,$$
and we are finished with the schemes $W$.
\par
\medskip
We now consider the schemes $W^*$. If $b =b^*$ (i.e., if $d \equiv 5, 6$, mod 8), we have $W^*=W$. In this case it is easy to verify that the number
$${d+3 \choose 3} - (2d+2)-b(d+1)-c $$ is zero and so we are done.
So we are left with the case
$b^*=b+1$.
Let $\widetilde W^*$ be the scheme obtained from $W^*$ by specializing $(c-x)$ of the $c$ simple points $P_i$, the lines $M_1$ and $M_2$ and the conic $C$ in such a way that the
lines $M_1, M_2, L_1$ are lines of the same ruling on $Q$, and the line $L_{2}$ remains a generic line not lying on $Q$. Note that the point $P$ becomes a point of $Q$.
Set $L_{2} \cap Q = P+ R$, and set $M_i \cap Q = S_{i,1}+S_{i,2}$, ($3 \leq i \leq b^*$).
We have
$$
Res_Q {\widetilde W^*} =L_{2}+M_3+ \dots+M_{b^*}+ P_1+\dots+P_x \subset \Bbb P^3
$$
and
$$
Tr_Q {\widetilde W^*}
= L_{1}+ M_1+M_2+ 2P|_Q +R $$
$$+
S_{3,1}+S_{3,2}+ \dots+S_{b^*,1}+ S_{b^*,2}+ P_{x+1}+\dots+P_c .
$$
By Theorem \ref{HH} we immediately get
$$\dim ( I_{Res_Q {\widetilde W^*}})_{d-2} = {d+1 \choose 3}- b(d-1)-x=0.
$$
Using the same reasoning as above, it is easy to check that
$$\dim ( I_{Tr_Q {\widetilde W^*}})_{d} =\max \left\{0; ( d-2)(d+1)- 2 - 2(b-1)-c+x\right\}=0. $$
By Lemma \ref {Castelnuovo}, with $n=3$ and $\delta=2$, it follows that $ \dim (I_{W^*})_d = 0 .$
\par
\end{proof}
We now formalize what we did in these last lemmata.
\medskip
Let $n,d, a, b, c \in \Bbb N$, $n \geq 3$, $d >0$, $a+b \leq d-1$, and let
$$t= \left\lfloor{ {d+n \choose n} \over {d+1} } \right\rfloor; \ \ \ \ \ \ \ t^*= \left\lceil{ {d+n \choose n} \over {d+1} } \right\rceil.$$
Let $c \leq t-2(a+b)$, $c^* \geq t^*-2(a+b)$.
Let $ \widehat C_i$ be a 3-dimensional sundial (see Definition \ref{defsundial}), that is a generic degenerate conic with an embedded point, and let $M_i$ be a generic line.
Note that $t \geq 2(d-1)$.
\par
\medskip
Consider the following statements:
\medskip
\begin{itemize}
\item $S(n,d)$: {\it The scheme \par \noindent
$W(n,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t-2(d-1)} \subset \Bbb P^n,$
\par \noindent
imposes the expected number of conditions to forms of degree $d$, that is:
\par \noindent
$ \dim (I_{W(n,d)})_d= {d+n \choose n} - (2d+2)(d-1) - (d+1)(t-2(d-1))
$ \par
$= {d+n \choose n} - t (d+1); $}
\medskip
\item $S^*(n,d)$: {\it The scheme \par \noindent
$W^*(n,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t^*-2(d-1)} \subset \Bbb P^n,$
\par \noindent
imposes the expected number of conditions to forms of degree $d$, that is:
\par \noindent
$ \dim (I_{W^*(n,d)})_d= 0. $}
\medskip
\item $S(n,d;a,b,c)$: {\it The scheme \par \noindent
$W(n,d;a,b,c) = \widehat C_1 + \dots +\widehat C_{a} + D_1+\dots+D_{b}+R_1+ \dots +R_b+M_1+\dots+M_{c} \subset \Bbb P^n,$
\par \noindent
where the $D_i$ are generic degenerate conics, and the $R_i$
are generic points, imposes the expected number of conditions to forms of degree $d$,
that is:
\par \noindent
$ \dim (I_{W(n,d;a,b,c)})_d= {d+n \choose n} - (2a+2b+c) (d+1).
$}
\medskip
\item $S^*(n,d;a,b,c^*)$: {\it The scheme \par \noindent
$W^*(n,d;a,b,c^*) = \widehat C_1 + \dots +\widehat C_{a} + D_1+\dots+D_{b}+R_1+ \dots +R_b+M_1+\dots+M_{c^*} \subset \Bbb P^n,$
\par \noindent
where the $D_i$ are generic degenerate conics, and the $R_i$
are generic points, imposes the expected number of conditions to forms of degree $d$,
that is:
\par \noindent
$ \dim (I_{W^*(n,d;a,b,c^*)})_d= 0.
$}
\end{itemize}
\begin{lem} \label{DegenerareRette} Notation as above,
\begin{itemize}
\item[(i)] if $S(n,d)$ holds, then $S(n,d;a,b,c)$ holds;
\item[(ii)] if $S^*(n,d)$ holds, then $S^*(n,d;a,b,c^*)$ holds.
\end{itemize}
\end{lem}
\begin{proof}
A degenerate conic with an embedded point is either a degeneration of two generic lines, or a specialization of a scheme which is the union of a degenerate conic and a simple generic point.
Then by the semicontinuity of the Hilbert function, and
since a line imposes at most $d+1$ conditions to the forms of degree $d$, we get (i).
(ii) immediately follows from the semicontinuity of the Hilbert function.
\end{proof}
\begin{lem} \label{S(4,d)} Notation as above, let
$$t= \left\lfloor{ {d+4 \choose 4} \over {d+1} } \right\rfloor; \ \ \ \ \ \ \ t^*= \left\lceil{ {d+4 \choose 4} \over {d+1} } \right\rceil.$$
Then $S(4,d)$ and $S^*(4,d)$ hold, that is
$$ \dim (I_{W(4,d)})_d= {d+4 \choose 4} - t (d+1) \ \ \ \ \hbox{and} \ \ \ \dim (I_{W^*(4,d)})_d= 0,$$
where
$$W(4,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t-2(d-1)} \subset \Bbb P^4,$$
$$W^*(4,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t^*-2(d-1)} \subset \Bbb P^4,$$
and $ \widehat C_i = C_i+2P_i|_{H_i}=L_{i,1}+L_{i,2}+2P_i|_{H_i}$.
\end{lem}
\begin{proof} By induction on $d$. For $d=1$ both conclusions follow from Theorem \ref {HH}. \par
Let $d>1$.
We consider two cases: \par
{\it Case 1}: $d=2(4h+r+1)$, $h \in \Bbb N$, $r=0;1;3$, (that is, $d\equiv 0; 2; 4$, mod 8).
In this case $t=t^*$, and we will prove that $\dim (I_{W(4,d)})_d=0$.
Consider
$$c= \left\lfloor{ {d+3 \choose 4} \over {d} } \right\rfloor \ \ \ \hbox {and} \ \ \
a = {d+3 \choose 4} - dc.$$
Note that:
\begin{itemize}\item
for $d=8h+2$ (that is for $r=0$):
$$
c={1\over4} {{d+3}\choose 3} -{1\over2}; \ \ \ a={d \over 2};
$$
\item
for $d=8h+4$ (that is for $r=1$):
$$
c={1\over4} {{d+3}\choose 3} -{3\over4}; \ \ \ a={3d \over 4};
$$
\item
for $d=8h+8$ (that is for $r=3$):
$$
c={1\over4} {{d+3}\choose 3} -{1\over4}; \ \ \ a={d \over 4}.
$$
\end{itemize}
It is easy to check that
$$ 1 \leq a \leq d-1; \ \ \ \ \ \ 0 \leq t-2a-c \leq t-2(d-1)
.$$
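(These bounds too can be checked mechanically; an illustrative Python fragment is the following.)
\begin{verbatim}
from math import comb

for h in range(0, 50):
    for d in (8*h + 2, 8*h + 4, 8*h + 8):
        t = comb(d + 4, 4) // (d + 1)
        c = comb(d + 3, 4) // d
        a = comb(d + 3, 4) - d * c
        assert 1 <= a <= d - 1
        assert 0 <= t - 2*a - c <= t - 2*(d - 1)
print("bounds of Case 1 verified")
\end{verbatim}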
Let $H \simeq \Bbb P^3$ be a generic hyperplane.
Let $W_s(4,d) $ be the scheme obtained from $W(4,d)$ by specializing
$t-2a-c$ lines $M_1,\dots , M_{t-2a-c}$ into $H$
and by specializing $a$ degenerate conics $\widehat C_1, \dots , \widehat C_{a}$, in such a way that
$L_{i,1}+L_{i,2} \subset H$, but $2P_i|_{H_i} \not\subset H $, for $1 \leq i \leq a$.
So
$$
Res_H {W_s(4,d)} =P_1 + \dots +P_{a} + \widehat C_{a+1} + \dots +\widehat C_{d-1} $$
$$+ M_{t-2a-c+1}+\dots+M_{t-2(d-1)} \subset \Bbb P^4,
$$
where $P_1, \dots ,P_{a}$ are generic points lying on $H$;
$$
Tr_H {W_s(4,d)} =C_1 + \dots +C_{a} + R_{{a+1} ,1}+R_{{a+1} ,2} + \dots + R_{{d-1} ,1}+ R_{{d-1} ,2}
+$$
$$ +M_1+\dots + M_{t-2a-c} +S_{t-2a-c+1}+\dots+S_{t-2(d-1)} \subset \Bbb P^3,
$$
where $R_{{i} ,1}+R_{{i} ,2} = \widehat C_i \cap H = L_{i,1}\cap H+L_{i,2}\cap H$ and $S_{i}= M_i \cap H$.
By Lemma \ref{RetteIncrociateInP3a},
since the $R_{{i} ,j}$ and the $S_i$ are
$$2(d-1-a)+ (t-2(d-1)-t+2a+c) = c $$ generic points, and $t-2a-c =b$ ($b$ as in Lemma \ref{RetteIncrociateInP3a}), we get
$$\dim (I_{ Tr_H {W_s(4,d)}} )_d =0.$$
If we can prove that $\dim (I_{ Res_H {W_s(4,d)}} )_{d-1} =0$ then, by Lemma \ref{Castelnuovo},
with $\delta=1$, we are done.
If $d=2$,
we have $a=1$, $c=2$ and
$$
Res_H {W_s(4,d)} =P_1 + M_{2}+\dots+M_{3} \subset \Bbb P^4.
$$
Clearly $\dim (I_{ Res_H {W_s(4,d)}} )_1 =0.$ \par
Now let $d>2$ and set
$$X= \widehat C_{a+1} + \dots +\widehat C_{d-1} + M_{t-2a-c+1}+\dots+M_{t-2(d-1)} \subset \Bbb P^4,
$$
(hence $ Res_H {W_s(4,d)} = X +P_1 + \dots +P_{a} $).
So
$X$ is the union of $d-1-a$ degenerate conics with an embedded point and $2a+c-2(d-1)$ lines.
The first step here is to show that $X$ imposes the right number of conditions to the forms of degree $d-1$.
By the induction hypothesis we have that $S(4,d-1)$ holds. Since $d-1-a \leq d-2$,
and
$$X=
\widehat C_{a+1} + \dots +\widehat C_{d-1} + M_{t-2a-c+1}+\dots+M_{t-2(d-1)}$$
is a
$$
W(4,d-1; d-1-a,0,2a+c-2(d-1)),
$$
it follows from Lemma \ref{DegenerareRette} (i) that $X$ imposes independent conditions to the forms of degree $d-1$.
Thus
$$\dim (I_{X })_{d-1} = {d-1+4 \choose 4}- d (2(d-1-a) + 2a+c-2(d-1) ) $$
$$= {d+3 \choose 4}- dc =a.$$
To finish the argument we apply Lemma \ref{AggiungerePuntiSuSpazioLineare}. This requires us to prove that
$\dim (I_{X +H})_{d-1}=0$. But
$$\dim (I_{X +H})_{d-1}=\dim (I_{X})_{d-2}.$$
For $d=2$, we obviously have
$\dim (I_{X})_{d-2}=0.$
For $d>2$, by the inductive hypothesis $S^*(4,d-2)$ holds.
Since the parameters of $X$ (perhaps with fewer lines) satisfy the restrictions necessary to use Lemma \ref{DegenerareRette} (ii), we get that
$$S^*(4,d-2; d-1-a, 0, 2a+c-2(d-1))$$
holds, that is, $\dim (I_{X})_{d-2}=0.$
So, by Lemma \ref{AggiungerePuntiSuSpazioLineare}, we have
$$\dim (I_{ X+P_1 + \dots +P_{a} })_{d-1} = \dim (I_{Res_H {W_s(4,d)} })_{d-1} = 0,
$$
and we are done.
\par
\medskip
{\it Case 2}: $d$ odd, or $d=8h+6$, $h \in \Bbb N$, (that is, $ d\equiv 1;3;5;6;7$, mod 8).
Let
$$c= { {d+3 \choose 4}\over {d}}; \ \ \ \ \ b= t - c- 2;\ \ \ \ \ b^*= t^* - c- 2,$$
(note that $c$ is an integer).
It is easy to check that
$$0 < b \leq t-2(d-1) \ \ \hbox{ and } \ \ 0 < b^* \leq t^*-2(d-1).$$
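(Again, an illustrative machine check of these inequalities:)
\begin{verbatim}
from math import comb

for d in [d for d in range(3, 300) if d % 2 == 1 or d % 8 == 6]:
    c = comb(d + 3, 4) // d
    t = comb(d + 4, 4) // (d + 1)
    ts = -(comb(d + 4, 4) // -(d + 1))        # ceiling
    b, bs = t - c - 2, ts - c - 2
    assert 0 < b <= t - 2*(d - 1)
    assert 0 < bs <= ts - 2*(d - 1)
print("bounds on b and b* verified")
\end{verbatim}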
Let $W_s(4,d) $ be the scheme obtained from $W(4,d)$ by specializing
the $b$ lines $M_1,\dots , M_{b}$ and $\widehat C_{d-1}$
into a hyperplane $H \simeq \Bbb P^3$.
Let $W^*_s(4,d) $ be the scheme obtained from $W^*(4,d)$ by specializing
into $H$ the lines $M_1,\dots , M_{b^*}$ and the degenerate conic with an embedded point $\widehat C_{d-1}$.
We have
$$ Res_H {W_s(4,d)} = \widehat C_1 + \dots +\widehat C_{d-2} + M_{b+1}+\dots+M_{t-2(d-1)} \subset \Bbb P^4,
$$
$$ Res_H {W^*_s(4,d)} = \widehat C_1 + \dots +\widehat C_{d-2} + M_{b^*+1}+\dots+M_{t^*-2(d-1)} \subset \Bbb P^4,
$$
that is, both $Res_H {W_s(4,d)} $ and $Res_H {W^*_s(4,d)}$ are the union of $d-2$ degenerate conics with an embedded point and $c-2d+4$ lines.
By the inductive hypothesis we immediately get
$$\dim (I_{Res_H {W_s(4,d)} })_{d-1} =$$
$$=\dim (I_{Res_H {W^*_s(4,d)} })_{d-1} = {d+3 \choose 4} - d(
2(d-2)+c-2d+4)=0.
$$
Now we consider the traces:
$$
Tr_H {W_s(4,d)} =R_{{1} ,1}+R_{{1} ,2} + \dots + R_{{d-2} ,1}+ R_{{d-2} ,2} + \widehat C_{d-1} +
$$
$$ +M_1+\dots + M_{b} +S_{b+1}+\dots+S_{t-2(d-1)} \subset \Bbb P^3,
$$
$$
Tr_H {W^*_s(4,d)} =R_{{1} ,1}+R_{{1} ,2} + \dots + R_{{d-2} ,1}+ R_{{d-2} ,2} + \widehat C_{d-1} +
$$
$$ +M_1+\dots + M_{b^*} +S_{b^*+1}+\dots+S_{t^*-2(d-1)} \subset \Bbb P^3,
$$
where $R_{{i} ,1}+R_{{i} ,2} = \widehat C_i \cap H$, and $S_{i}= M_i \cap H$.
$Tr_H {W_s(4,d)}$ is the union of $2(d-2)+c+4-2d= c$ simple generic points, a degenerate conic with an embedded point, and $b$ lines. So, by Lemma
\ref{RetteIncrociateInP3b} we get
$$ \dim (I_{ Tr_H {W_s(4,d)} })_d = {d+3 \choose 3} - (2d+2)-b(d+1)-c .
$$
Thus, by Lemma \ref{Castelnuovo}, with $\delta=1$, we have
$$ \dim (I_ {W_s(4,d)} )_d \leq {d+3 \choose 3} - (2d+2)-b(d+1)-c = {d+4 \choose 4} - t(d+1).
$$
Since $ \dim (I_ {W(4,d)} )_d \leq \dim (I_ {W_s(4,d)} )_d$ and ${d+4 \choose 4} - t(d+1)$ is the expected dimension for $(I_ {W(4,d)} )_d$, we have
$ \dim (I_ {W(4,d)} )_d = {d+4 \choose 4} - t(d+1)$.
\par
Finally,
$Tr_H {W^*_s(4,d)}$ is the union of $ c$ simple generic points, one degenerate conic with an embedded point, and $b^*$ lines.
So, by Lemma \ref{RetteIncrociateInP3b} we get
$$ \dim (I_{ Tr_H {W^*_s(4,d)} })_d =0,
$$
and by Lemma \ref{Castelnuovo}, with $\delta=1$, the conclusion follows.
\end{proof}
\begin{lem} \label{S(n,d)} Let $n \geq4$, $d\geq1$,
$$t= \left\lfloor{ {d+n \choose n} \over {d+1} } \right\rfloor; \ \ \ \ \ \ \ t^*= \left\lceil{ {d+n \choose n} \over {d+1} } \right\rceil.$$
Then $S(n,d)$ and $S^*(n,d)$ hold, that is
$$ \dim (I_{W(n,d)})_d= {d+n \choose n} - t (d+1); \ \ \ \ \dim (I_{W^*(n,d)})_d= 0,$$
where
$$W(n,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t-2(d-1)} \subset \Bbb P^n,$$
$$W^*(n,d) = \widehat C_1 + \dots +\widehat C_{d-1} + M_1+\dots+M_{t^*-2(d-1)} \subset \Bbb P^n,$$
and $ \widehat C_i = C_i+2P_i|_{H_i}=L_{i,1}+L_{i,2}+2P_i|_{H_i}$.
\end{lem}
\begin{proof} By induction on $n+d$. The case $d=1$ follows from Theorem \ref {HH}. For $n=4$, see Lemma \ref{S(4,d)}.
\par
Let $n+d>6$, $d>1$, $n>4$. Let
$$ a= {d+n-1 \choose n} - d \left\lfloor{ {d+n-1 \choose n} \over {d} } \right\rfloor \ \ \ \hbox{ and } \ \ \ c=\left\lfloor{ {d+n-1 \choose n} \over {d} } \right\rfloor
.$$
Note that, by a direct computation, we have $$0 \leq a \leq d-1 \ \ \ \hbox{ and } \ \ \ \ a \leq c \leq t-2(d-1).$$
\par
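As elsewhere, the direct computation can be delegated to a machine; an illustrative Python check over a finite range of $n$ and $d$ is:
\begin{verbatim}
from math import comb

for n in range(5, 15):
    for d in range(2, 60):
        c, a = divmod(comb(d + n - 1, n), d)
        t = comb(d + n, n) // (d + 1)
        assert 0 <= a <= d - 1
        assert a <= c <= t - 2 * (d - 1)
print("parameter bounds verified")
\end{verbatim}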
Let $W_s(n,d) $ be the scheme obtained from $W(n,d)$ by specializing,
into a generic hyperplane $H \simeq \Bbb P^{n-1}$, the $d-1-a$
degenerate conics with an embedded point $\widehat C_{a+1}$, $\dots, \widehat C_{d-1}$
and the $ t-2(d-1)- c $ lines $M_{c+1}, \dots ,M_{t-2(d-1)}$. We further specialize
the $a$ degenerate conics $\widehat C_1, \dots , \widehat C_{a}$, in such a way that $L_{i,1}+L_{i,2}\subset H$, but
$2P_i|_{H_i} \not\subset H $, for $1 \leq i \leq a$.
\par
Analogously, let $W^*_s(n,d) $ be the scheme obtained from $W^*(n,d)$ by specializing,
into a generic hyperplane $H \simeq \Bbb P^{n-1}$,
the degenerate conics with an embedded point $\widehat C_{a+1}, \dots ,\widehat C_{d-1}$,
and the $ t^*-2(d-1)- c $ lines $M_{c+1}, \dots , M_{t^*-2(d-1)}$.
We further specialize
the $a$ degenerate conics $\widehat C_1, \dots , \widehat C_{a}$, in such a way that $L_{i,1}+L_{i,2}\subset H$, but
$2P_i|_{H_i} \not\subset H $.
From these specializations we have
$$
Res_H {W_s(n,d)} = Res_H {W^*_s(n,d)} =P_1 + \dots +P_{a} + M_{1}+\dots+M_{c} \subset \Bbb P^n,
$$
where $P_1, \dots ,P_{a}$ are generic points of $H$;
$$
Tr_H {W_s(n,d)} =
$$
$$
C_1 + \dots +C_{a} + \widehat C_{a+1} + \dots +\widehat C_{d-1} +S_{1}+\dots+S_{c} +M_{c+1}+\dots + M_{t-2(d-1)} \subset \Bbb P^{n-1},
$$
and
$$
Tr_H {W^*_s(n,d)} =
$$
$$
C_1 + \dots +C_{a} + \widehat C_{a+1} + \dots +\widehat C_{d-1} +S_{1}+\dots+S_{c} +M_{c+1}+\dots + M_{t^*-2(d-1)} \subset \Bbb P^{n-1},
$$
where $S_{i}= M_i \cap H$.
Consider the schemes
$$
X= Tr_H {W_s(n,d)} - (S_{a+1}+\dots+S_{c})$$
$$= \widehat C_{a+1} + \dots +\widehat C_{d-1} +C_1 + \dots +C_{a} +S_{1}+\dots+S_{a} +M_{c+1}+\dots + M_{t-2(d-1)} \subset \Bbb P^{n-1},
$$
and
$$
X^*= Tr_H {W^*_s(n,d)} - (S_{a+1}+\dots+S_{c})$$
$$= \widehat C_{a+1} + \dots +\widehat C_{d-1} +C_1 + \dots +C_{a} +S_{1}+\dots+S_{a} +M_{c+1}+\dots + M_{t^*-2(d-1)} \subset \Bbb P^{n-1}.
$$
By the inductive hypothesis, $S(n-1,d)$ holds. By a direct
computation we check that
$$t-c \leq t^* -c \leq \left\lfloor{ {d+n-1 \choose d} \over {d+1} } \right\rfloor .$$
Hence, by Lemma \ref{DegenerareRette},
we have that $S(n-1,d; d-1-a, a, t-2(d-1)-c)$ and $S(n-1,d; d-1-a, a, t^*-2(d-1)-c)$ hold.
It follows that
$\dim (I_X)_{d}$ and $\dim (I_{X^*})_{d}$ are as expected, that is,
$$\dim (I_X)_{d} = {d+n-1 \choose {n-1}} - (d+1)(2(d-1)+t-2(d-1)-c)
$$
$$= {d+n-1 \choose {n-1}} - (d+1)(t-c)= {d+n \choose {n}} -t(d+1)+c-a,
$$
and
$$\dim (I_{X^*})_{d} = {d+n-1 \choose {n-1}} - (d+1)(2(d-1)+t^*-2(d-1)-c)
$$
$$= {d+n-1 \choose {n-1}} - (d+1)(t^*-c)= {d+n \choose {n}} -t^*(d+1)+c-a.
$$
Now, since $S_{a+1}, \dots,S_{c}$ are generic points and $ {d+n \choose {n}} -t^*(d+1) \leq 0 $, it follows that
$$\dim (I_{ Tr_H {W_s(n,d)} })_{d} ={d+n \choose {n}} -t(d+1),
$$
and
$$\dim (I_{ Tr_H {W^*_s(n,d)} })_{d} =\max \left\{ 0 ; {d+n \choose {n}} -t^*(d+1) \right\} =0.
$$
If we prove that $\dim (I_{ Res_H {W_s(n,d)} })_{d-1} = \dim (I_{ Res_H {W^*_s(n,d)} })_{d-1} =0$ then, by Lemma \ref{Castelnuovo} with $\delta=1$, we are done.
Recall that
$$
Res_H {W_s(n,d)} = Res_H {W^*_s(n,d)} =P_1 + \dots +P_{a} + M_{1}+\dots+M_{c} \subset \Bbb P^n,
$$
where $P_1, \dots ,P_{a} $ are generic points in $H$. By Lemma \ref{AggiungerePuntiSuSpazioLineare} it suffices to prove that
$\dim (I_{M_{1}+\dots+M_{c} })_{d-1} =a$ and $\dim (I_{M_{1}+\dots+M_{c}+H})_{d-1} =0$.
By Theorem \ref{HH} we immediately get
$$\dim (I_{M_{1}+\dots+M_{c} })_{d-1} ={d+n-1 \choose {n}} - dc= a.
$$
Moreover, since $\dim (I_{M_{1}+\dots+M_{c}+H})_{d-1} =\dim (I_{M_{1}+\dots+M_{c}})_{d-2}$,
by Theorem \ref{HH} we have
$$\dim (I_{M_{1}+\dots+M_{c}})_{d-2} = \max \left\{ 0; {d+n-2 \choose
{n}} - (d-1) c \right\}=0,$$ and the conclusion follows.
\end{proof}
\section{The general case}\label{generalsection}
Having collected all the preliminary lemmata necessary, we are
ready to prove the main theorem of the paper.
\begin{thm} \label{TeoremaInPn} Let $n,d \in \Bbb N$, $n \geq 4$, $d \geq1$. Let $\Pi \subset \Bbb P^n$ be a plane, and let $L_1, \dots , L_s \subset \Bbb P^n$ be $s$ generic lines.
If
$$X= \Pi + L_1+ \dots + L_s \subset \Bbb P^n$$
then
$$
\dim (I_{X})_d = \max \left \{ {d+n \choose n} - {d+2 \choose 2} -s(d+1), 0 \right
\},
$$
or equivalently $X$ has bipolynomial Hilbert function.
\end{thm}
\begin{proof}
We proceed by induction on $n+d$. The result is obvious for $d=1$ and any $n$, while for $n=4$ see Theorem \ref{TeoremaInP4}. \par
Let $d>1$, $n>4$.
By Lemma \ref{BastaProvarePers=e,e*} it suffices to prove the theorem for $s=e$ and $s=e^*$, where
$$e= \left \lfloor {{{d+n \choose n} - {d+2 \choose 2} }\over {d+1} }\right \rfloor ; \ \ \ \
e^*= \left \lceil {{{d+n \choose n} - {d+2 \choose 2} }\over {d+1} }\right \rceil .
$$
Let
$$e_\rho = \left \lfloor {{{d+n-1 \choose n} - {d+1 \choose 2} }\over {d} }\right \rfloor ; \ \ \ \
\rho= {{d+n-1 \choose n} - {d+1 \choose 2} }- e_\rho d ;$$ $$e_T = s - e_\rho - 2\rho, \ \ \ \ \ (s=e, e^*).
$$
A direct computation shows that
$e - e_\rho - 2\rho \geq 0$.
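An illustrative Python check of this inequality over a finite range of parameters is:
\begin{verbatim}
from math import comb

for n in range(5, 15):
    for d in range(2, 60):
        e = (comb(d + n, n) - comb(d + 2, 2)) // (d + 1)
        e_rho, rho = divmod(comb(d + n - 1, n) - comb(d + 1, 2), d)
        assert e - e_rho - 2 * rho >= 0
print("e - e_rho - 2*rho >= 0 verified")
\end{verbatim}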
Let $\widehat C_i $ be the degenerate conic with an embedded
point obtained by degenerating the pair of lines $L_{2i-1}, L_{2i}$, $1
\leq i \leq \rho$, as in Lemma \ref{sundial} with $m=1$. By abuse
of notation, we write $\widehat C_i $ as $L_{2i-1}+ L_{2i}+2P_i|
_{H_i}$ (recall that $H_i \simeq \Bbb P^3 $ is a generic linear
space through $P_i$). Let $H \simeq \Bbb P^{n-1}$ be a generic
hyperplane. Now specialize $\widehat C_1, \dots, \widehat C_\rho
$ in such a way that $L_{2i-1}+ L_{2i} \subset H$ and $2P_i|_{H_i}
\not\subset H $, specialize the $e_T$ lines $L_{2\rho+1},
\dots, L_{2\rho+e_T}$ into $H$, and denote by $Y $ the resulting
scheme. We have
$$
Res_H { Y} = \Pi + P_1+ \dots + P_\rho + L_{2\rho+e_T+1}+ \dots
+L_{s} \subset \Bbb P^{n}
$$
($P_1, \dots ,P_{\rho}$ are generic points of $H$),
$$
Tr_H { Y} =L + C_1+ \dots + C_\rho + L_{2\rho+1}+ \dots +
L_{2\rho+e_T}+ P_{2\rho+e_T+1}+ \dots +P_{s} \subset \Bbb P^{n-1}
$$
where $L =\Pi \cap H$ and $P_i = L_i \cap H$ , for $2\rho+e_T+1 \leq i \leq s$.
$Res_H { Y} $ is the union of one plane, $e_\rho$ lines and
$\rho$ generic points of $H$. By the inductive hypothesis we have
$$\dim (I_{ \Pi + L_{2\rho+e_T+1}+ \dots +L_{s} })_{d-1} =\rho.
$$
Moreover
$$\dim (I_{ H+ \Pi + L_{2\rho+e_T+1}+ \dots +L_{s} })_{d-1} =\dim (I_{ \Pi + L_{2\rho+e_T+1}+ \dots +L_{s} })_{d-2}=0,
$$
(obvious, for $d=1,2$; by induction, for $d>2$).
Hence by
Lemma \ref{AggiungerePuntiSuSpazioLineare} we get
$$\dim (I_{ Res_H { Y}})_{d-1} =0.
$$
$Tr_H { Y} $ is the union of $\rho$ degenerate conics, $e_T +1$
lines, and $e_\rho$ generic points. We will compute $\dim (I_{
Tr_H { Y}})_{d} $ by using Lemma \ref{DegenerareRette} and Lemma
\ref{S(n,d)}. We have to check that $\rho \leq d-1$ and $\rho
\leq e_\rho$. The first inequality is obvious, and it is not
difficult to verify the other one. So we get
$$\dim (I_{ Tr_H { Y}})_{d} = \max \left\{ 0; {d+n-1 \choose d} - (2d+1)\rho - (d+1)(e_T+1)-e_\rho\right\} ,
$$
and from here
$$\dim (I_{ Tr_H { Y}})_{d} = {d+n \choose n} - {d+2 \choose 2} -s(d+1), \ \ \ \ \hbox{for} \ \ s=e ;$$
$$\dim (I_{ Tr_H { Y}})_{d} =0 \ \ \ \ \hbox{for} \ \ s=e^* .$$
The conclusion now follows from Lemma \ref{Castelnuovo}, with $\delta=1$.
\end{proof}
\section{Applications}\label{applicationsection}
We now mention two applications of Theorem \ref{TeoremaInPn}. The
first is to a very classical problem concerning the existence of
rational normal curves having prescribed intersections with
various dimensional linear subspaces of $\mathbb{P}^n$. For example, the
classical Theorem of Castelnuovo which asserts that there exists a
unique rational normal curve through $n+3$ generic points of
$\mathbb{P}^n$, is the kind of result we have in mind.
The second application is to writing polynomials in several
variables in a simple form. For example, the classical theorem which says that
in $S=\mathbb C[x_0, \dots, x_n]$ every quadratic form is a sum of
at most $n+1$ squares of linear forms, is the kind of theorem we
have in mind.
\medskip
\noindent {\bf Rational normal curves.} The problem of deciding
whether or not there exists a rational normal curve with
prescribed intersections with generic configurations of linear spaces, is
well known and, in general, unsolved. Various results and
applications of answers to this problem can be found in
\cite{CaCat07} and \cite{CaCat09}.
Of particular importance in such questions is the Hilbert function of the resulting configuration of linear spaces.
It is for this reason that the results of this paper can be applied to such a problem.
To illustrate the relationships we will look at the following
special problem (left open in \cite{CaCat09}): consider in
$\mathbb P^4$, $P_1,P_2,P_3$ generic points, $L_1,L_2$ generic
lines and $\pi$ a generic plane. Does there exist a rational
normal curve $\mathcal{C}$ in $\mathbb P^4$ such that:
(i) $\mathcal{C}$ passes through the $P_i$ ($i=1,2,3$);
(ii) $\deg(\mathcal{C}\cap L_i)\geq 2$ for $i=1,2$;
(iii) $\deg(\mathcal{C}\cap\pi)\geq 3$.
An expected answer is described in \cite{CaCat09} and can be obtained by arguing as follows:
inside the 21 dimensional parameter space for rational normal
curves in $\mathbb{P}^4$ it is expected that those satisfying the
conditions enumerated above form a subvariety of codimension $20$. In other words,
we expect that there is a rational normal curve in $\mathbb P^4$
satisfying the conditions above.
To see that this is not the case we consider the schemes
$${X}=P_1+P_2+L_1 +L_2+\pi,$$
$${Y}={X}+P_3.$$
Using Theorem \ref{TeoremaInPn} we know that $\dim (I_{ X})_2 =1$
and $\dim (I_{ Y})_2 =0$. Let $Q$ be the unique quadric containing ${X}$. If $\mathcal{C}$ existed, then
$Q\supset{X}$ would imply $Q\supset\mathcal{C}$ by a standard
Bezout type argument, and so we would get $Q\supset {Y}$, a
contradiction.
\medskip
\noindent{\bf Polynomial decompositions.} We consider the rings
$S=\mathbb{C}[x_0,\ldots,x_n]$ and $T=\mathbb{C}[y_0,\ldots,y_n]$,
and we denote by $S_d$ and $T_d$ their homogeneous pieces of
degree $d$. We consider $T$ as an $S$-module by letting the action
of $x_i$ on $T$ be that of partial differentiation with respect to
$y_i$. We also use some basic notions about apolarity (for more on
this see \cite{Ge,IaKa}).
Let $I\subset S$ be a subset and denote by $I^\perp\subset T$ the
submodule of $T$ annihilated by every element of $I$. If $I$ is a
homogeneous ideal, we recall that $(I_d)^\perp=(I^\perp)_d$.
Given linear forms $a,b,c,l_i,m_i\in T_1,i=1,\ldots,s,$ one can ask
the following question $(\star)$:
\begin{quote}{\em
For which values of $d$ is it true that any form $f\in T_d$ can be
written as
$$f(y_0,\ldots,y_n)=f_1(l_1,m_1)+\ldots+f_s(l_s,m_s)+g(a,b,c)$$ for
suitable forms $f_i$ and $g$ of degree $d$?}
\end{quote}
\noindent More precisely, we ask whether the following vector space equality
holds:
$$T_d=\left(\mathbb{C}[l_1,m_1]\right)_d+\ldots
+\left(\mathbb{C}[l_s,m_s]\right)_d+\left(\mathbb{C}[a,b,c]\right)_d,$$
where $\left(\mathbb{C}[l_i,m_i]\right)_d$, respectively
$\left(\mathbb{C}[a,b,c]\right)_d$, is the degree $d$ part of the
subring of $T$ generated by the $l_i,m_i$'s for a fixed $i$,
respectively generated by $a,b$ and $c$. A more general question
can be considered as described in \cite{CarCatGer1}, but a
complete answer is not known. We now give a complete answer in the case of $(\star)$.
The connection with configurations of linear spaces is given by the
following results.
\begin{lem} Let $\Lambda\subset\mathbb{P}^n$ be an $i$ dimensional
linear space having defining ideal $I$. Then, for any $d$, we have
the following:
$$ I_d^\perp=\left(\mathbb{C}[l_0,\ldots,l_i]\right)_d $$
where the linear forms $l_0,\ldots,l_i\in T_1$ generate $I_1^\perp$.
\end{lem}
\begin{proof}
After a linear change of variables, we may assume
$$I=(x_0,\ldots,x_{n-i-1}).$$ As this is a monomial ideal the
conclusion follows by straightforward computations.
\end{proof}
\begin{prop}
Let $\Lambda=\Lambda_1+\ldots+\Lambda_s\subset\mathbb{P}^n$ be a
configuration of linear spaces having defining ideal $I$ and such
that $\dim \Lambda_i=n_i$. Then, for any $d$, the following holds:
$$ I_d^\perp=\left(\mathbb{C}[l_{1,0},\ldots,l_{1,n_1}]\right)_d+\ldots +\left(\mathbb{C}[l_{s,0},\ldots,l_{s,n_s}]\right)_d$$
where the linear forms $l_{i,j}\in T_1$ are such that the degree
$1$ piece of $(l_{i,0},\ldots,l_{i,n_i})^\perp$ generates the
ideal of $\Lambda_i$.
\end{prop}
\begin{proof}
The proof follows readily from the previous lemma once we recall
that $(I\cap J)^\perp=I^\perp + J^\perp$.
\end{proof}
Now we can make clear the connection with question $(\star)$.
Given the linear forms $a,b,c,l_i,m_i\in T_1$ for $i=1,\ldots,s$,
we consider the ideal $I\subset S$ generated by the degree 1 piece
of $(a,b,c)^\perp$ and the ideals $I_i$ generated by the degree 1
pieces of $(l_i,m_i)^\perp, i=1,\ldots,s$. Note that $I\cap
I_1\cap\ldots\cap I_s$ is the ideal of the union of $s$ lines and
one plane in $\mathbb{P}^n$. Denote this scheme by ${X}$. Now we can give
an answer to question $(\star)$ using Theorem \ref{TeoremaInPn}.
\begin{prop} With notation as above, we have: the values of $d$ answering question $(\star)$ are exactly the ones
for which $\dim (I_{ X})_d=0$.
\end{prop} \qed
\section{Final remarks}
Theorem \ref{TeoremaInPn} gives new evidence for the conjecture we
stated in the Introduction of the paper. As our conjecture deals
with generic configurations of linear spaces with non-intersecting
components, we would like to say something in case there are components which are forced to intersect.
Let $\Lambda=\bigcup\Lambda_i\subset\mathbb{P}^n$ be a generic
configuration of linear spaces such that $m_i=\dim\Lambda_i\geq
m_j=\dim\Lambda_j$ if $i\leq j$. Then, there exist components of
$\Lambda$ which intersect if and only if $m_1+m_2\geq n$.
The first interesting case where generic configurations of linear
spaces have intersecting components occurs in $\mathbb{P}^3$ by taking lines
and at least one plane.
\begin{rem} Theorem \ref{TeoremaInPn} is not stated in $\mathbb{P}^3$,
but it can easily be extended to include this case. If
$X=L_1+\ldots+L_s+\Pi\subset\mathbb{P}^3$ we consider the exact sequence
\[0\rightarrow I_{L_1+\ldots +L_s}(-1)\rightarrow R \rightarrow R/I_X \rightarrow 0\]
where the first map is multiplication by a linear form
defining $\Pi$. We can compute $HF(X,\cdot)$ by taking
dimensions in degree $d$ and obtain:
\[HF(X,d)={d+3\choose 3}-\max\left\{0,{d+2\choose 3}-sd\right\}\]
for $d>0$ and $HF(X,0)=1$. We also notice that
\[hp(X,d)={d+2\choose 2}+s(d+1)-s={d+3\choose 3}-{d+2\choose 3}+sd.\]
Thus $X$ has bipolynomial Hilbert function.
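The agreement between the two descriptions of $HF(X,d)$ can also be checked mechanically; an illustrative Python fragment is:
\begin{verbatim}
from math import comb

for d in range(1, 100):
    for s in range(0, 200):
        HF = comb(d + 3, 3) - max(0, comb(d + 2, 3) - s * d)
        hpX = comb(d + 2, 2) + s * (d + 1) - s
        assert HF == min(comb(d + 3, 3), hpX)
print("bipolynomial Hilbert function in P^3 verified")
\end{verbatim}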
\end{rem}
Hence our conjecture holds for the union of generic lines and {\bf one}
plane even in $\mathbb{P}^3$, where forced intersections appear. But, in
general, our conjecture is false for configurations of linear
spaces with intersecting components, as shown by the following
example.
\begin{ex}\label{notmaxrem}{
Consider $\Lambda\subset\mathbb{P}^3$ a generic configuration of linear
spaces consisting of one line and three planes. By Derksen's
result in \cite{Derksen} we have $hp(\Lambda,1)=3$ but clearly no
plane containing $\Lambda$ exists. Hence,
\[HF(\Lambda,1)=4\neq\min\{hp(\mathbb{P}^3,1)=4,hp(\Lambda,1)=3\}\] and
the Hilbert function is not bipolynomial.}
\end{ex}
We are not aware of any general result providing evidence for the
behavior of $HF(\Lambda,d)$ when the components of $\Lambda$ are
intersecting. We did, however, conduct experiments using the
computer algebra system CoCoA \cite{cocoa} and the results
obtained suggest the following:
\begin{quote}{\em let $\Lambda \subset\mathbb{P}^n$ be a generic configuration of
linear spaces. There exists an integer $d(\Lambda)$ such that
\[HF(\Lambda,d)=hp(\mathbb{P}^n,d), \mbox{ for } d\leq d(\Lambda)\]
and
\[HF(\Lambda,d)=hp(\Lambda,d), \mbox{ for } d> d(\Lambda).\]}
\end{quote}
This seems to be a reasonable possibility for the Hilbert function
of generic configurations of linear spaces (even with forced
intersections), but the evidence is still too sparse to call it a
conjecture.
\section{Introduction}
One of the greatest challenges of intermediate energy heavy ion physics
is the experimental measurement of the modifications to the symmetry energy $c_{sym}$
induced by density and temperature conditions different from those of ground state nuclei\cite{baoan}.
For the different astrophysical applications linked
to the evolution and structure of compact stars\cite{skin,pierre},
the symmetry energy behavior at densities far from saturation is of utmost importance.
The high density behavior of the isovector equation of state is almost unconstrained by experimental observations, and considerable uncertainties exist also in the behavior of
the symmetry energy at subcritical density\cite{baran}. In this regime, nuclear matter is unstable with respect to phase separation, mean-field estimates can become severely incorrect\cite{horowitz}, and clusterization has to be considered\cite{ropke,lehaut}.
Another important point concerns the temperature dependence of the symmetry energy, which is schematically treated or even neglected in supernova explosion and proto-neutron star cooling simulations\cite{horowitz,page}.
A possible approach to this problem consists in comparing selected isospin observables to the output of a transport model where the isovector part of the equation of state can be varied\cite{msu}. This strategy has recently led to very stringent constraints on the symmetry energy behavior\cite{tsang}, which appear reasonably consistent with the experimental results extracted from collective modes\cite{pigmy}, nuclear masses\cite{pawel} and neutron skins\cite{skins}.
The drawback of these analyses is that the results are model dependent, and different models do not produce fully consistent results\cite{baran,chimera}; moreover no information can be extracted about the finite temperature behavior.
Alternatively, a simple formula has been proposed\cite{botvina} to extract directly
the symmetry energy
from experimental cluster properties obtained
in the fragmentation of two systems
of charge $Z_1,Z_2$, mass $A_1,A_2$ at the same temperature $T$:
\begin{equation}
4 \frac{c_{sym}^0}{T} = \frac{\alpha}{\left(Z_1^2/A_1^2\right)-\left(Z_2^2/A_2^2\right)}
\label{equBotvina}
\end{equation}
where $\alpha$ is the so-called isoscaling parameter that can be
measured from isotopic yields\cite{isoscaling}.
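In practice, $\alpha$ is obtained from a linear fit of $\ln\left[Y_2(n,j)/Y_1(n,j)\right]$, where $Y_i(n,j)$ is the yield of the fragment with $n$ neutrons and $j$ protons measured for system $i$, following the isoscaling relation $Y_2/Y_1\propto e^{\alpha n+\beta j}$ \cite{isoscaling}. A schematic sketch of such an extraction (in Python, with hypothetical yield tables; this is an illustration, not the actual analysis code) is:
\begin{verbatim}
import numpy as np

def isoscaling_alpha(yields1, yields2):
    """Least-squares fit of ln[Y2/Y1] = alpha*N + beta*Z + const.

    yields1, yields2: dicts mapping (N, Z) -> measured yield."""
    keys = sorted(set(yields1) & set(yields2))
    A = np.array([[n, z, 1.0] for (n, z) in keys])
    r = np.log([yields2[k] / yields1[k] for k in keys])
    (alpha, beta, const), *_ = np.linalg.lstsq(A, r, rcond=None)
    return alpha, beta

# toy input obeying perfect isoscaling with alpha=0.5, beta=-0.6
y1 = {(n, z): 1.0 for z in range(1, 6) for n in range(z - 1, z + 3)}
y2 = {k: np.exp(0.5 * k[0] - 0.6 * k[1]) for k in y1}
print(isoscaling_alpha(y1, y2))   # ~ (0.5, -0.6)
\end{verbatim}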
Applications of this formula to different fragmentation data\cite{shetty,indra}
show that the measured symmetry energy tends to decrease with increasing collision
violence.
However eq.(\ref{equBotvina}) is not an exact expression. It was derived in the framework of macroscopic statistical models~\cite{botvina}, where many-body correlations are supposed to be entirely exhausted by clusterisation, and it appears to be strongly affected by conservation laws and combinatorial effects\cite{dasgupta,claudio}; secondary decays may strongly affect the value of $\alpha$\cite{smf}; finally, the $c_{sym}$ coefficient appearing
in eq.(\ref{equBotvina}) should correspond to the symmetry free-energy\cite{natowitz}, which is equivalent to the symmetry energy only in the $T\to 0$ limit.
In particular, S. Das Gupta et al. have shown that the difference in the neutron chemical potential $\Delta\mu_n$ increases with the temperature if $c_{sym}$
is taken as a constant\cite{dasgupta}. In the grand-canonical ensemble, the difference of chemical potential between two sources at a common temperature $T$ is linked to the isoscaling parameter by
\begin{equation}
\Delta\mu_n=\alpha T
\label{deltamu}
\end{equation}
This means that eq.(\ref{equBotvina}) cannot be exact at finite temperature.
Experimentally the measured value of $\alpha T$ decreases with increasing incident energy and/or collision violence.
Let us suppose that the grand-canonical equality (\ref{deltamu})
is true also in the data, and
that increasing collision violence does indeed correspond to increasing temperature.
Then this implies that the physical symmetry energy coefficient explored in fragmentation data is not constant, but decreases more steeply than obtained from the raw data\cite{shetty,indra}.
This is consistent with the interpretation of ref.\cite{shetty,indra,botvina_csym,samaddar,baoan2}. But it is inconsistent with the statistical model calculations of ref.\cite{raduta}, where a constant input symmetry energy coefficient produces an apparent $c_{sym}$
from eq.(\ref{equBotvina}) qualitatively coherent with the experiment.
The statistical model MMM used in ref.\cite{raduta} is similar, but not identical,
to the statistical model CTM used in ref.\cite{dasgupta}, which raises
once again the question of the model dependence of the results.
However, refs.\cite{dasgupta},\cite{raduta} do not compute the same observables either, and use different prescriptions for the symmetry energy.
In these conditions, it is difficult to understand the real origin of the observed discrepancy.
To progress on this issue, we present in this paper calculations with CTM
made under similar conditions with respect to MMM. We will show that the two models produce
very similar results.
Moreover, we address the question of an ``improved'' formula
which would be valid out of the $T\to0$ limit. We will show that, in the framework of this model, none of the different formulas proposed in the literature allows a reliable direct measurement of the symmetry energy. However, both the isoscaling observable and isotopic widths appear very well correlated with the physical symmetry energy, implying that ratios of these isotopic observables measured in different systems should allow to extract the physical trend.
\section{The model}
The canonical thermodynamic McGill model is based on the analytic evaluation of
the canonical partition function for the fragmenting source with $A$ nucleons and $Z$
protons (neutron number $N=A-Z$) at a given temperature $T$ written as
\begin{eqnarray}
Q_{A,Z}=\sum_{\{n_{a,j}\}}\prod_{a,j} \frac{\omega_{a,j}^{n_{a,j}}}{n_{a,j}!}
\end{eqnarray}
where the sum runs over all possible break-up channels
$\{n_{a,j}\}$ which satisfy the conservation laws $\sum_{a,j} n_{a,j}\,a = A$ and $\sum_{a,j} n_{a,j}\,j = Z$; $n_{a,j}$ is
the number of composites of type $(a,j)$ in the given channel, and $\omega_{a,j}$
is the partition function of one composite with
nucleon number $a$ and proton number $j$:
\begin{eqnarray}
\omega_{a,j}=\frac{V_f}{h^3}(2\pi maT)^{3/2}\times z_{a,j}(int)
\end{eqnarray}
Here $ma$ is the mass of the composite ($m$ being the nucleon mass) and
$V_f= V - V_0$ is the volume available for translational motion,
where $V$ is the volume to which the system has expanded at break up and $V_0$ is the normal volume of $A$
nucleons and $Z$ protons.
Concerning the choice of $z_{a,j}(int)$ used in this work, the
proton and the neutron are fundamental building blocks
thus $z_{1,0}(int)=z_{1,1}(int)=2$
where 2 takes care of the spin degeneracy. For
deuteron, triton, $^3$He and $^4$He we use $z_{a,j}(int)=(2s_{a,j}+1)\exp(-
\beta e_{a,j}(gr))$ where $\beta=1/T, e_{a,j}(gr)$ is the ground state energy
of the composite and $(2s_{a,j}+1)$ is the experimental spin degeneracy
of the ground state. Excited states for these very low mass
nuclei are not included. For mass number $a=5$ and greater we use
the liquid-drop formula:
\begin{eqnarray}
z_{a,j}(int)=\exp\frac{1}{T}[W_0 a-\sigma(T)a^{2/3}-\kappa\frac{j^2}{a^{1/3}}
-c_{sym}\frac{(a-2j)^2}{a}+\frac{T^2a}{\epsilon_0}]
\label{partfunc}
\end{eqnarray}
The expression includes the
volume energy, the temperature dependent surface energy, the Coulomb
energy, the symmetry energy and contribution from excited states in the continuum
since the composites are at a non-zero temperature.
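For illustration, the internal partition function above can be coded directly. In the Python sketch below the constants $W_0$, $\kappa$, $\epsilon_0$ and the temperature dependent surface tension $\sigma(T)$ are placeholder values of the kind used in ref.\cite{das1}, not necessarily those adopted in our calculations:
\begin{verbatim}
import math

W0, KAPPA, EPS0, SIG0, TC = 15.8, 0.72, 16.0, 18.0, 18.0  # MeV, placeholders

def sigma(T):
    """Surface tension, vanishing at the critical temperature TC."""
    if T >= TC:
        return 0.0
    return SIG0 * ((TC**2 - T**2) / (TC**2 + T**2)) ** 1.25

def z_int(a, j, T, csym=23.5):
    """Liquid-drop internal partition function of a composite (a, j)."""
    E = (W0 * a - sigma(T) * a ** (2.0 / 3.0)
         - KAPPA * j * j / a ** (1.0 / 3.0)
         - csym * (a - 2 * j) ** 2 / a
         + T * T * a / EPS0)
    return math.exp(E / T)

print(z_int(100, 44, 5.0))
\end{verbatim}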
In this paper we will try different prescriptions for the symmetry energy coefficient, namely the same mass dependent prescription employed in the MMM model\cite{raduta}
\begin{equation}
c_{sym}=c_i c_v - c_i c_s a^{-1/3},
\label{csurf}
\end{equation}
with $c_i=1.7826$, $c_v=15.4941$ MeV, $c_s=17.9439$ MeV,
or alternatively a more sophisticated surface and temperature dependent expression\cite{das1}, accounting for the vanishing of all surface contributions at the critical point:
\begin{equation}
c_{sym}\left (a,T\right )=c_0-c_i T_c a^{-1/3}\left (\frac{T_c^2-T^2}{T_c^2+T^2} \right )^{5/4},
\label{ctempsurf}
\end{equation}
where $c_0=28.165$ MeV, and $T_c=18$ MeV is the critical temperature.
To test the sensitivity of the different observables to the symmetry energy, a schematic constant coefficient will also be used.
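For the reader's convenience, the two parametrizations (\ref{csurf}) and (\ref{ctempsurf}) are transcribed in the short Python sketch below (illustrative only):
\begin{verbatim}
def csym_mass(a, ci=1.7826, cv=15.4941, cs=17.9439):
    """Mass dependent prescription, as in the MMM model."""
    return ci * cv - ci * cs * a ** (-1.0 / 3.0)

def csym_mass_temp(a, T, c0=28.165, ci=1.7826, Tc=18.0):
    """Surface and temperature dependent prescription (valid for T < Tc)."""
    surf = Tc * a ** (-1.0 / 3.0) * ((Tc**2 - T**2) / (Tc**2 + T**2)) ** 1.25
    return c0 - ci * surf

print(csym_mass(100), csym_mass_temp(100, 5.0))
\end{verbatim}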
In using the thermodynamic model one needs to specify which composites
are allowed in the channels. For mass numbers $a$=5 and 6, we include proton
numbers 2 and 3 and for mass number $a$=7, we include proton numbers 2,3 and 4. For $a\ge 8$, we include all nuclei within drip-lines
defined by the liquid-drop formula.
The Coulomb interaction between different composites is included in
the Wigner-Seitz approximation\cite{das1,Bondorf1}.
For further details, see ref.\cite{das1}.
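In practice the sum over channels is evaluated through the recursion relation $Q_{A,Z}=\frac{1}{A}\sum_{a,j}a\,\omega_{a,j}\,Q_{A-a,Z-j}$ (see ref.\cite{das1}). A minimal schematic Python implementation, with a strongly simplified one-composite $\omega_{a,j}$ (no Coulomb term, no special treatment of the lightest nuclei, placeholder constants), could read:
\begin{verbatim}
from functools import lru_cache
import math

def omega(a, j, T=5.0, vf=1000.0):
    """Strongly simplified one-composite partition function (illustration)."""
    if not (0 <= j <= a) or a < 1:
        return 0.0
    kinetic = vf * (a * T) ** 1.5        # translational part, constants dropped
    if a == 1:
        return 2.0 * kinetic             # nucleon: spin degeneracy only
    binding = 15.8 * a - 18.0 * a ** (2 / 3) - 23.5 * (a - 2 * j) ** 2 / a
    return kinetic * math.exp(binding / T)

@lru_cache(maxsize=None)
def Q(A, Z):
    """Q_{A,Z} = (1/A) * sum_{a,j} a * omega_{a,j} * Q_{A-a,Z-j}."""
    if A == 0:
        return 1.0 if Z == 0 else 0.0
    if Z < 0 or Z > A:
        return 0.0
    return sum(a * omega(a, j) * Q(A - a, Z - j)
               for a in range(1, A + 1)
               for j in range(0, min(a, Z) + 1)) / A

# average multiplicity of the composite (a, j) = (4, 2):
A, Z = 50, 22
print(omega(4, 2) * Q(A - 4, Z - 2) / Q(A, Z))
\end{verbatim}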
\section{Symmetry energy evaluations}
In this section we present the results of calculations made with the CTM model
with the aim of reconstructing the input symmetry energy of the model from measurable cluster observables. When not explicitly stated, we will consider an excited fragmenting source composed of $Z=75$ protons with two different mass numbers $A=168$, $A=186$. This specific choice of source size was already employed in previous works\cite{dasgupta,gargi}.
Table \ref{tab1} gives the value of the isoscaling parameter obtained in the model and the resulting apparent symmetry energy from eq.(\ref{equBotvina}) for different values of the temperature and the break-up volume. The isoscaling parameter $\alpha$ is the value extracted from the slopes of differential cluster yields\cite{dasgupta} averaged over $Z=1,2,3,4,5$, similar
to the procedure employed in the analysis of heavy ion data\cite{msu}. For these light isotopes, an excellent isoscaling is observed in the model\cite{dasgupta}.
The input symmetry energy in this exploratory calculation is fixed to $c_{sym}=23.5$ MeV.
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{c|c|c|c}
Temperature & $V/V_0$ & $\alpha$ & $c_{sym}^0$ \\
\hline
7.5 $MeV$ & 4 & 0.511 & 26.10 $MeV$ \\
6.5 $MeV$ & 4 & 0.557 & 24.65 $MeV$ \\
5.5 $MeV$ & 4 & 0.606 & 22.72 $MeV$ \\
4.5 $MeV$ & 4 & 0.703 & 21.56 $MeV$ \\
3.5 $MeV$ & 4 & 0.870 & 20.75 $MeV$ \\
7.5 $MeV$ & 6 & 0.462 & 23.6 $MeV$ \\
6.5 $MeV$ & 6 & 0.514 & 22.75 $MeV$ \\
5.5 $MeV$ & 6 & 0.578 & 21.65 $MeV$ \\
4.5 $MeV$ & 6 & 0.673 & 20.63 $MeV$ \\
3.5 $MeV$ & 6 & 0.923 & 22.02 $MeV$ \\
\end{tabular}
\end{center}
\caption{ Isoscaling parameter averaged over $Z=1-5$ and apparent symmetry energy from eq.(\ref{equBotvina}) for a fragmenting source with $Z=75$ at different temperatures and break-up volumes. The input symmetry energy in the model is
$c_{sym}=23.5$ MeV, independent of the temperature.
}
\label{tab1}
\end{table}
The isoscaling parameter $\alpha$ decreases with increasing temperature independent of the break-up volume. This is in agreement with the results of previous works \cite{raduta,dasgupta}, as well as with the experimental observation of a decreasing $\alpha$ with increasing collision violence. Concerning the extracted values of $c_{sym}$, an important dependence on the break-up volume is observed. For small break-up volumes, the apparent $c_{sym}$ monotonically increases with temperature,
as already observed in a preceding study with the CTM model where $\alpha$ was deduced from the chemical potential via the grand-canonical expression eq.(\ref{deltamu})
and not deduced from the slope of fragment yields\cite{dasgupta}.
For higher volumes, the apparent $c_{sym}$ initially decreases as in ref.\cite{raduta} and in the data, and it increases again when $\alpha$ saturates. This second regime may not be explored in the data because the break-up temperatures saturate with increasing collision violence\cite{shetty}. The same may be true for the analysis of ref.\cite{raduta} which was made in the microcanonical ensemble; indeed in this latter the temperature does not increase linearly with excitation energy, producing a saturation in the apparent $c_{sym}$. The existence of these different behaviors
shows that grand-canonical formulas have to be handled with care: in the canonical
or microcanonical models the slope of isotopic yields may not be directly related to the chemical potential.
For the higher break-up volume, the input symmetry energy coefficient is well
recovered at the lowest temperature. This was expected since eq.(\ref{equBotvina}) has been derived in the limit of vanishing temperature.
Surprisingly, this does not seem to be the case if the break-up volume is small. In this case, the limit may be attained at lower temperatures, where our fragmentation model cannot be safely applied any more. However,
for all the situations considered, which cover a large range of thermodynamic conditions typically accessed in fragmentation experiments, the deviation between the input $c_{sym}$ and the approximation extracted from eq.(\ref{equBotvina}) never exceeds $12\%$, which is a reasonable precision considering the inevitable error bars
induced by efficiency, event selection and thermometry in heavy ion collision experiments.
To further progress on this analysis, we plot on the left part of Fig.\ref{fig1} the isoscaling $\alpha$ parameter as a function of the temperature with different choices of the symmetry energy parametrization.
In all cases a decreasing isoscaling parameter is found.
The middle part of the same figure shows the resulting symmetry energy coefficient obtained by applying eq.(\ref{equBotvina}).
We can see that the functional form of $c_{sym}$ does not affect strongly the trend of the results. In particular, a decreasing $\alpha$ does not necessarily imply a decreasing physical symmetry energy. Moreover, the temperature and surface dependence of the physical symmetry energy affects the predictive power of eq.(\ref{equBotvina}).
In no case does the extracted coefficient approach the symmetry energy of the fragments
used for the isoscaling analysis; however, at a given value of the temperature, it qualitatively follows the trend of the input symmetry energy of the fragmenting source, as expected in the Weisskopf regime\cite{weisskopf}.
This means that the isoscaling properties of the lightest fragments appear well correlated to the symmetry energy of their emitting source, even out of the evaporation regime.
From the observations of Fig.\ref{fig1} and table \ref{tab1} we can already draw some partial conclusions. An important point concerns the fact that observing $\alpha$ or $c_{sym}^0$ decreasing with the collision violence cannot be taken as evidence that the physical symmetry energy does so. Only a detailed comparison with a model
may allow one to extract the physical symmetry energy.
Different models have to be very carefully compared to a large set of independent observables before one can extract any conclusion.
As a second remark, both MMM and CTM models tend to agree that at low temperature eq.(\ref{equBotvina}) gives a good reproduction of the physical symmetry energy.
This means that results obtained from intermediate impact parameter collisions in the neck region\cite{rizzo,casini} (where in principle the matter is at low density but also relatively cold),
or analyses of quasi-projectiles produced in peripheral collisions\cite{shetty} (where the system is close to normal density and the temperature behavior could be disentangled from the density behavior) are better suited to this study than central collisions in the multifragmentation regime.
To progress on the issue of the determination of the in-medium modifications
to the symmetry energy, it would be extremely useful to have a formula better adapted to the finite-temperature case.
To this purpose, we now examine two other expressions proposed in the literature
to access the symmetry energy from fragment observables.
\section{The influence of fractionation}
In nuclear multifragmentation reactions, the asymmetry term influences
the neutron-proton composition of the break-up fragments.
Interpreting multifragmentation in the light of first-order phase transitions
in multicomponent systems,
the neutron enrichment of the gas phase with respect to the liquid phase
comes out as a natural consequence of the Gibbs equilibrium criteria,
and a connection between the chemical composition of the phases and the symmetry term
can be established \cite{mueller,gulminelli}.
Interestingly enough, the phenomenon of isospin fractionation, which is
systematically observed in analyses of multifragmentation
data \cite{xu,geraci,martin,shetty,botvina}, seems to be a generic feature of phase
separation, independent of the equilibrium Gibbs construction \cite{isospinfrac}.
Indeed, dynamical models of heavy ion collisions \cite{baoan,baran},
where fragment formation is essentially governed by the out-of-equilibrium process of
spinodal decomposition, also exhibit fractionation.
Adopting an equilibrium scenario for the break-up stage of a multifragmenting system,
Ono {\it et al.} \cite{ono} derive an approximate grandcanonical expression which connects
the symmetry term with the isotopic composition of
fragments obtained in the break-up stage of two
sources with similar sizes in identical thermodynamical states
and differing in their isospin content,
\begin{equation}
c_{sym}(j)=\frac{\alpha(j) T}{4 \left[ \left( \frac{j}{<a>_1}\right)^2-
\left( \frac{j}{<a>_2}\right)^2\right] },
\label{eq:csym_ono}
\end{equation}
under the hypothesis that the isotopic distributions are essentially Gaussian
and that the free energies contain only bulk terms.
Here, $\alpha(j)$ is the isoscaling slope parameter of a fragment of charge $j$,
and $<a>_i$ stands for the average mass number of a fragment of charge $j$ produced
by the source $i(=1,2)$ at the temperature $T$.
In the limit of vanishing temperature, fractionation can be neglected
and $j/<a>_i$ can be replaced by the corresponding quantity of the sources, $Z_i/A_i$ \cite{botvina}, giving back eq.(\ref{equBotvina}).
In the opposite case, fragment yields are predicted to be sensitive to their own symmetry energy and not to the symmetry energy of the emitting source.
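As a practical illustration, eq.(\ref{eq:csym_ono}) is straightforward to evaluate from measured isoscaling slopes and mean fragment masses; a minimal sketch in Python (the numerical inputs are illustrative placeholders, not model output) is:
\begin{verbatim}
# Apparent symmetry energy coefficient from isoscaling, eq. (csym_ono):
# c_sym(j) = alpha(j)*T / (4*[(j/<a>_1)^2 - (j/<a>_2)^2]).
def csym_apparent(alpha_j, T, j, a_mean_1, a_mean_2):
    """alpha_j: isoscaling slope for charge j; T: temperature (MeV);
    a_mean_i: mean mass <a>_i of charge-j fragments from source i."""
    denom = 4.0 * ((j / a_mean_1) ** 2 - (j / a_mean_2) ** 2)
    return alpha_j * T / denom  # in MeV

# Illustrative values only (neutron-poor source 1, neutron-rich source 2):
print(csym_apparent(alpha_j=0.5, T=4.0, j=10, a_mean_1=21.0, a_mean_2=22.0))
\end{verbatim}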
Figure \ref{fig2} gives, as a function of the cluster atomic number $j$, the apparent symmetry energy coefficient extracted from eq.(\ref{eq:csym_ono}) for different conditions of temperature, free volume, source isotopic content, and source size.
In all cases the input symmetry energy has been fixed to the constant value $c_{sym}=23.5$ MeV. We can see that eq.(\ref{eq:csym_ono}) leads to a global systematic
overestimation of the input symmetry energy.
The response of the different fragments depends on their size. Unrealistic values
are obtained for the lightest fragments.
The isotopic distribution $j/<a>$ of the lightest fragments is very sensitive to the number of isotopes considered in the calculation, which show strong binding-energy fluctuations. It is not surprising that such fragments cannot be treated by eq.(\ref{eq:csym_ono}), which assumes a bulk-dominated behavior for all fragments.
Turning to the increase in the apparent symmetry energy for the heaviest fragments, this is most probably due to the failure of the grandcanonical approximation in eq.(\ref{eq:csym_ono}) when the fragment size becomes comparable to the source size, as previously discussed in ref.\cite{raduta}. One should also note that for heavy fragments isoscaling tends to be violated in the model\cite{dasgupta}, and the determination of an isoscaling slope becomes largely arbitrary. Clusters of charge $j>5$ and smaller than approximately one tenth of the source size are best suited to this analysis. If we limit ourselves to such intermediate-mass fragments, we can see that the apparent $c_{sym}$ coefficient is reasonably independent of the available volume, source isospin and mass. A temperature dependence is still apparent and, as in the case of Fig.\ref{fig1}, does not exceed 10\%. These results are in good agreement with the findings of ref.\cite{raduta} in the framework of the MMM model.
Figure \ref{fig3} shows the response of eq.(\ref{eq:csym_ono}) to a symmetry energy depending on the temperature and on the fragment size through eq.(\ref{ctempsurf}).
The behavior is very similar to the one displayed in Figure \ref{fig2} above.
This means that it is not possible to extract the surface dependence by looking at the behavior as a function of the charge. The temperature dependence for a fixed charge (right part of Figure \ref{fig3}) conversely shows a good correlation, meaning that the temperature dependence could be extracted by studying the isoscaling of a given charge at different excitation energies.
An alternative expression has been derived in ref.\cite{raduta2} connecting
the symmetry energy of a cluster of size $a$ to the width of its isotopic distribution.
Indeed, a Gaussian approximation on the grandcanonical expression for cluster yields
gives
\begin{equation}
\sigma_I^2(a) \approx \frac{aT}{2c_{sym}(a)},
\label{eq:csym_fluct_fr}
\end{equation}
where $\sigma_I^2(a)$ indicates the width of the isotopic distribution of a cluster of size $a$, and $I=a-2j$.
In principle the $c_{sym}$ coefficients appearing in eqs.(\ref{eq:csym_ono}) and
(\ref{eq:csym_fluct_fr}) correspond to symmetry free-energy coefficients, that is,
they include an entropic contribution.
However, if we neglect the $I$ dependence of the excitation energy and entropy
associated with a given mass $a$, they coincide with the $c_{sym}$ defined by eq.(\ref{partfunc}).
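The fluctuation formula is equally simple to apply; a corresponding sketch (again with placeholder numbers, not model output) is:
\begin{verbatim}
# Apparent symmetry energy from isotopic widths, eq. (csym_fluct_fr):
# sigma_I^2(a) ~ a*T / (2*c_sym(a)), with I = a - 2j.
def csym_from_width(a, T, sigma_I2):
    return a * T / (2.0 * sigma_I2)  # in MeV

print(csym_from_width(a=20, T=4.0, sigma_I2=1.7))  # ~23.5 MeV
\end{verbatim}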
Fig.\ref{fig4} shows the apparent $c_{sym}$ extracted from the fluctuation formula
eq.(\ref{eq:csym_fluct_fr}) as a function of the cluster charge $j$ at different temperatures and isospin
values. This observable shows a linearly increasing behavior similar to the one
displayed by eq.(\ref{eq:csym_ono}), but the overall quality of the reproduction is improved.
In this picture the input symmetry energy was taken as a constant. To see if eqs.(\ref{eq:csym_ono}),(\ref{eq:csym_fluct_fr}) can be used to extract from observable cluster data the possible surface/temperature/density dependence of
the symmetry energy, we additionally show in Figure \ref{fig5}, for a cluster charge $j=10$,
the apparent $c_{sym}$ extracted from eqs.(\ref{eq:csym_ono}),(\ref{eq:csym_fluct_fr}) as a function of the input $c_{sym}$. We can see that both formulas
produce a bias. This implies that a quantitative estimation of the symmetry energy
coefficient cannot be obtained from these expressions. We recall that
the presented calculation completely neglects secondary decay, which is expected to considerably increase this bias.
Although the quantitative values do not match, we nevertheless observe a good linear
correlation in both cases.
This means that the analyzed observables show an excellent sensitivity to the isovector equation of state. In particular, the relative variation
of the extracted symmetry coefficient should be reliable, and it would be very important to check whether the linear dependence survives secondary decay.
We recall that in the framework of the MMM model, which (at variance with CTM)
contains an afterburner, the proportionality is preserved even after secondary decay\cite{raduta2}.
\section{Conclusions}
In this paper we have studied the sensitivity to the symmetry energy of different isotopic observables measurable in heavy ion collisions, in the framework of the McGill Canonical Thermodynamic Model.
We conclude that, even in the idealized limit of thermal equilibrium, no isotopic observable allows one to reconstruct the physical symmetry energy of the excited fragmenting source in a model-independent way. Different models have to be very carefully compared to a large set of independent observables before any conclusion can be drawn.
In the low-temperature limit, the widely used expression eq.(\ref{equBotvina})
gives a good measurement of the symmetry energy of the source, and is never sensitive to the symmetry energy of the fragments. At high temperature, in the multifragmentation regime, no formula gives a satisfactory reproduction of the input $c_{sym}$.
However, we confirm, in agreement with the results of previous studies with different models\cite{dasgupta,raduta,raduta2,shetty,msu,tsang}, that both the isoscaling variable and the isotopic widths show a very strong sensitivity to the strength of the symmetry energy.
This means that the use of the different formulas proposed in the literature should allow one to better constrain $c_{sym}$.
In particular, if a drastic reduction of the effective $c_{sym}$
in the nuclear medium occurs, we should be able to detect it
by calculating the differential response of eqs.(\ref{eq:csym_ono}),(\ref{eq:csym_fluct_fr})
at different excitation energies.
It is important to stress that this study neglects secondary decay, which may
have dramatic consequences on isotopic observables\cite{smf}.
It is clear that this issue should be investigated. Moreover, differential observables\cite{msu,tsang}, where the effect of secondary decay may
cancel out, should also be analysed.
\section{INTRODUCTION}\label{section:intro}
Compact Steep Spectrum (CSS) sources are a population of powerful radio
sources with projected linear size less than 20 kpc and steep high radio
frequency spectrum $\alpha<-0.5$ \footnote{In the present paper, the
spectral index is defined as $S_\nu\propto\nu^\alpha$.} (Peacock \& Wall
1982, Fanti et al. 1990, and reviews by O'Dea 1998 and Fanti 2009).
Kinematical studies
of the hot spots and analysis of the high-frequency turnover in the
radio spectrum due to radiative cooling imply ages for CSS sources in
the range 10$^2$--10$^5$ yr (e.g., Owsianik, Conway \& Polatidis 1998;
Murgia et al. 1999). The sub-galactic size of CSS sources has been used
to argue that CSS sources are probably young radio sources, (the `youth'
model: Fanti et al. 1995; Readhead et al. 1996). However, another
interpretation attributes the apparent compactness of the CSS sources to
being strongly confined by the dense ISM in the host galaxy (the
`frustration' model: van Breugel, Miley \& Heckman 1984). Spectroscopic
observations of CSS sources provide evidence for abundant gas reservoirs
in the host galaxies and strong interaction between the radio sources
and the emission-line clouds \cite{ODe02}. Some CSS sources have been
observed to have high-velocity clouds (as high as $\sim500$
km\,s$^{-1}$) in the Narrow-Line Region (NLR), presumably driven by
radio jets or outflows; an example is 3C~48 \cite{Cha99,Sto07}. In
addition, many CSS sources show distorted radio structures, suggestive
of violent interaction between the jet and the ambient interstellar
medium \cite{Wil84,Fan85,Nan91,Spe91,Nan92,Aku91}. The ample supply of
cold gas in their host galaxies and their strong radio activity, which
results in a detection rate as high as $\sim30$ per cent in flux-density
limited radio source surveys \cite{Pea82,Fan90}, make CSS sources good
laboratories for the study of AGN triggering and feedback.
3C~48 ($z=0.367$) is associated with the first quasar to be discovered
\cite{Mat61,Gre63} in the optical band. Its host galaxy is brighter than
that of most other low redshift quasars. The radio source 3C~48 is
classified as a CSS source due to its small size and steep radio
spectrum \cite{Pea82}. Optical and NIR spectroscopic observations
suggest that the active nucleus is located in a gas-rich environment and
that the line-emitting gas clouds are interacting with the jet material
\cite{Can90,Sto91,Cha99,Zut04,Kri05,Sto07}. VLBI images
\cite{Wil90,Wil91,Nan91,Wor04} have revealed a disrupted jet in 3C~48,
indicative of strong interactions between the jet flow and the dense
clouds in the host galaxy. Although some authors
\cite{Wil91,Gup05,Sto07} have suggested that the vigorous radio jet is
powerful enough to drive massive clouds in the NLR at speeds up to 1000
km~s$^{-1}$, the dynamics of the 3C~48 radio jet have yet to be well
constrained. Due to the complex structure of the source, kinematical
analysis of 3C~48 through tracing proper motions of compact jet
components can only be done with VLBI observations at 4.8 GHz and higher
frequencies, but until now the required multi-epoch high-frequency VLBI
observations had not been carried out.
In order to study the kinematics of the radio jet for comparison with
the physical properties of the host galaxy, we observed 3C~48 in full
polarization mode with the VLBA at 1.5, 4.8 and 8.3 GHz in 2004, and
with the EVN and MERLIN at 1.65 GHz in 2005. Combined with earlier VLBA
and EVN observations, these data allow us to constrain the dynamics of the
jet on various scales. Our new observations and our interpretation of
the data are presented in this paper. The remainder of the paper is laid
out as follows. Section 2 describes the observations and data reduction;
Section 3 presents the total intensity images of 3C~48; and Section 4
discusses the spectral properties and the linear polarization of the
components of the radio jet. In Section 5, we discuss the implications
of our observations for the kinematics and dynamics of the radio jet.
Section 6 summarizes our results. Throughout this paper we adopt a
cosmological model with Hubble constant $H_0$=70 km~s$^{-1}$~Mpc$^{-1}$,
$\Omega_m=0.3$, and $\Omega_{\Lambda}=0.7$. Under this cosmological
model, a 1-arcsec angular separation corresponds to a projected linear
size of 5.1 kpc in the source frame at the distance of 3C 48
($z=0.367$).
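For reference, this projected scale follows directly from the angular diameter distance in the adopted cosmology; a minimal sketch using the astropy package (an illustrative cross-check, not part of the original analysis) is:
\begin{verbatim}
# Projected linear scale at z = 0.367 for H0 = 70, Omega_m = 0.3 (flat).
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
scale = cosmo.kpc_proper_per_arcmin(0.367).to(u.kpc / u.arcsec)
print(scale)   # ~5.1 kpc / arcsec
\end{verbatim}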
\section{OBSERVATIONS AND DATA REDUCTION}
The VLBA observations (which included a single VLA antenna) of 3C~48
were carried out at 1.5, 4.8, and 8.3 GHz on 2004 June 25. The EVN and
MERLIN observations at 1.65 GHz were simultaneously made on 2005 June 7.
Table \ref{tab:obs} lists the parameters of the VLBA, EVN and MERLIN
observations. In addition to our new observations, we made use of the
VLBA observations described by Worrall et al. (2004) taken in 1996 at
1.5, 5.0, 8.4 and 15.4 GHz.
\subsection{VLBA observations and data reduction}
The total 12 hours of VLBA observing time were evenly allocated among the
three frequencies. At each frequency the effective observing time on
3C~48 is about 2.6 hours. The data were recorded in four intermediate-frequency (IF) bands at 1.5 GHz and in two IF bands at each of the other two frequencies, each IF initially split into 16 channels, in full polarization mode.
The total bandwidth in each case was 32 MHz. The detailed data reduction
procedure was as described by Worrall et al. (2004) and was carried out
in {\sc aips}. We used models derived from our 1996 observations to
facilitate fringe fitting of the 3C~48 data. Because the source
structure of 3C~48 is heavily resolved at 4.8 and 8.3 GHz, and missing
short baselines adds noise to the image, the initial data were not
perfectly calibrated. We carried out self-calibration to further correct
the antenna-based phase and amplitude errors. This progress improves the
dynamic range in the final images.
Polarization calibration was also carried out in the standard manner.
Observations of our bandpass calibrator, 3C~345, were used to determine
the R-L phase and delay offsets. The bright calibrator source DA~193 was
observed at a range of parallactic angles and we used a model image of
this, made from the Stokes $I$ data, to solve for instrumental
polarization. Our observing run included a snapshot observation of the
strongly polarized source 3C~138. Assuming that the polarization
position angle (or the E-Vector position angle in polarization images,
`EVPA') of 3C~138 on VLBI scales at 1.5 GHz is the same as the value
measured by the VLA, and we used the measured polarization position
angle of this source to make a rotation of $94\degr$ of the position
angles in our 3C~48 data. We will show later that the corrected EVPAs of
3C~48 at 1.5 GHz are well consistent with those derived from the
1.65-GHz EVN data that are calibrated independently. At 4.8 GHz, 3C~138
shows multiple polarized components; we estimated the polarization angle
for the brightest polarized component in 3C~138 from Figure 1 in Cotton
et al. 2003, and determined a correction of $-55\degr$ for the 3C~48
data. After the rotation of the EVPAs, the polarized structures at
4.8~GHz are basically in agreement with those at 1.5 GHz. At 8.3~GHz the
polarized emission of 3C~138 is too weak to be used to correct the
absolute EVPA; we therefore did not calibrate the absolute
EVPAs at 8.3 GHz.
\subsection{EVN observations and data reduction}
The effective observing time on 3C~48 was about 8 hours. Apart from
occasional RFI (radio frequency interference), the whole observation ran
successfully. The data were recorded in four IFs. Each IF was split into
16 channels, each of 0.5-MHz channel width. In addition to 3C~48 we
observed the quasars DA~193 and 3C~138 for phase calibration. 3C 138 was
used as a fringe finder due to its high flux density of $\sim$9~Jy at
1.65 GHz.
The amplitude of the visibility data was calibrated using the system
temperatures, monitored during the observations, and gain curves of
each antenna that were measured within 2 weeks of the observations. The
parallactic angles were determined on each telescope and the data were
corrected appropriately before phase and polarization calibration. We
corrected the ionospheric Faraday rotation using archival ionospheric
model data from the CDDIS. DA~193 and OQ~208 were used to calibrate the
complex bandpass response of each antenna. We first ran fringe fitting
on DA~193 over a 10-minute time span to align the multi-band delays.
Then a full fringe fitting using all calibrators over the whole
observing time was carried out to solve for the residual delays and
phase rates. The derived gain solutions were interpolated to calibrate
the 3C~48 visibility data. The single-source data were split for hybrid
imaging. We first ran phase-only self-calibration of the 3C 48 data to
remove the antenna-based residual phase errors. Next we ran three
iterations of both amplitude and phase self-calibration to improve the
dynamic range of the image.
DA~193 is weakly polarized at centimetre wavelengths (its fractional
polarization is no more than 1 per cent at 5 GHz, Xiang et al. 2006),
and was observed over a wide range of parallactic angles to calibrate
the feed response to polarized signals. The instrumental polarization
parameters of the antenna feeds (the so-called `D-terms') were
calculated from the DA~193 data and then applied to correct the
3C~48 data. The absolute EVPA was then calibrated from
observations of 3C~138 \cite{Cot97b,Tay00}. A comparison between the
apparent polarization angle of 3C~138 and the value from the VLA
calibrator monitoring program (i.e., $-15\degr$ at 20 cm
wavelength) leads to a differential angle $-22\degr$, which was
applied to correct the apparent orientation of the E-vector for the
3C~48 data. After correction of instrumental polarization and absolute
polarization angle, the cross-correlated 3C~48 data were used to produce
Stokes $Q$ and $U$ images, from which maps of linear polarization intensity
and position angle were produced.
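For reference, the conversion from Stokes $Q$ and $U$ maps to polarized intensity, fractional polarization and EVPA used here and below is standard; a minimal sketch (with no debiasing of the polarized intensity, which would be an additional step) is:
\begin{verbatim}
import numpy as np

def pol_quantities(I, Q, U):
    """Polarized intensity, fractional polarization and EVPA (degrees)
    from Stokes I, Q, U images (numpy arrays)."""
    P = np.hypot(Q, U)                        # linearly polarized intensity
    m = P / I                                 # fractional polarization
    chi = 0.5 * np.degrees(np.arctan2(U, Q))  # E-vector position angle
    return P, m, chi
\end{verbatim}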
\subsection{MERLIN observations and data reduction}
The MERLIN observations of 3C~48 were performed in the fake-continuum
mode: the total bandwidth of 15 MHz was split into 15 contiguous
channels of 1 MHz each. A number of strong, compact
extragalactic sources were interspersed into the observations of 3C~48
to calibrate the complex antenna gains.
The MERLIN data were reduced in {\sc aips} following the standard
procedure described in the MERLIN cookbook. The flux-density scale was
determined using 3C~286 which has a flux density of 13.7 Jy at 1.65 GHz.
The phases of the data were corrected for the varying parallactic angles
on each antenna. Magnetized plasma in the ionosphere results in an
additional phase difference between the right- and left-handed signals,
owing to Faraday rotation. This time-variable Faraday rotation tends to
defocus the polarized image and to give rise to erroneous estimates of
the instrumental polarization parameters. We estimated the ionospheric
Faraday rotation on each antenna based on the model suggested in the
{\sc aips} Cookbook, and corrected the phases of the visibilities
accordingly. DA~193, OQ~208, PKS~2134+004 and 3C~138 were used to
calibrate the time- and elevation-dependent complex gains. These gain
solutions from the calibrators were interpolated to the 3C~48 data. The
calibrated data were averaged in 30-second bins for further imaging
analysis. Self-calibration in both amplitude and phase was performed to
remove residual errors.
The observations of OQ~208 were used to calculate the instrumental
polarization parameters of each antenna assuming a point-source model.
The derived parameters were then applied to the multi-source data. We
compared the right- and left-hand phase difference of the 3C~286
visibility data with the phase difference value derived from the VLA
monitoring program (i.e., $66\degr$ at 20 cm, Cotton et al. 1997b;
Taylor \& Myers 2000), and obtained a differential angle of $141\degr$.
This angle was used to rotate the EVPA of the polarized data for 3C~48.
\subsection{Combination of EVN and MERLIN data}
After self-calibration, the EVN and MERLIN data of 3C~48 were combined
to make an image with intermediate resolution and high sensitivity. The
pointing centre of the MERLIN observation was offset by 0.034 arcsec to
the West and 0.378 arcsec to the North with respect to the EVN pointing
centre (Table \ref{tab:obs}). Before combination, we first shifted the
pointing centre of the MERLIN data to align with that of the EVN data.
The Lovell and Cambridge telescopes took part in both the EVN and MERLIN
observations. We compared the amplitude of 3C~48 on the common
Lovell--Cambridge baseline in the EVN and MERLIN data, and re-scaled the
EVN visibilities by multiplying them by a factor of 1.4 to match the
MERLIN flux. After combination of EVN and MERLIN visibility data, we
performed a few iterations of amplitude and phase self-calibration to
eliminate the residual errors resulting from minor offsets in
registering the two coordinate frames and flux scales.
\section{RESULTS -- total intensity images}
Figures \ref{fig:MERcont} and \ref{fig:vlbimap} exhibit the total
intensity images derived from the MERLIN, VLBA and EVN data. The final
images were created using the {\sc aips} and {\sc miriad} software
packages as well as the {\sc mapplot} program in the Caltech VLBI
software package.
\subsection{MERLIN images}
Figure \ref{fig:MERcont} shows the total intensity image of 3C~48 from
the MERLIN observations. We used the multi-frequency synthesis
technique to minimize the effects of bandwidth smearing, and assumed an
optically thin synchrotron spectral index ($\alpha=-0.7$) to scale the
amplitude of the visibilities with respect to the central frequency
when averaging the data across multiple channels. The final image was
produced using a hybrid of the Clark (BGC CLEAN) and Steer (SDI CLEAN)
deconvolution algorithms. The image shows that the source structure is
characterized by two major features: a compact component contributing
about half of the total flux density (hereafter referred to as the
`compact jet'), and an extended component surrounding the compact jet
like a cocoon (hereafter called the `extended envelope'). The compact
jet is elongated in roughly the north-south direction, in alignment
with the VLBI jet. The galactic nucleus corresponding to the central
engine of 3C~48 is associated with VLBI component A
\cite{Sim90,Wil91}. It is embedded in the southern end of the compact
jet. The emission peaks at a location close to the VLBI jet component
D; the second brightest component in the compact jet is located in the
vicinity of the VLBI jet component B2 (Figure \ref{fig:vlbimap}: see
Section \ref{section:vlbimap}). The extended envelope extends out to
$\sim$1 arcsec north from the nucleus. At $\sim$0.25 arcsec north of
the nucleus, the extended component bends and diffuses toward the
northeast. The absence of short baselines ($uv<30 k\lambda$) results
in some negative features (the so-called `negative bowl' in synthesis
images) just outside the outer boundary of the envelope.
The integrated flux density over the whole source is 14.36$\pm$1.02 Jy
(very close to the single-dish measurement), suggesting that there is
not much missing flux on short spacings. The uncertainty we assign
includes both the systematic errors and the {\it r.m.s.} fluctuations in
the image. Since the calibrator of the flux density scale, 3C~286, is
resolved on baselines longer than 600 k$\lambda$ \cite{An04,Cot97a}, a
model with a set of CLEAN components was used in flux density
calibration instead of a point-source model. We further compared the
derived flux density of the phase calibrator DA~193 from our
observations with published results \cite{Sta98,Con98}. The comparison
suggests that the flux density of DA~193 from our MERLIN observation was
consistent with that from the VLBI measurements to within 7 per cent. We
note that this systematic error includes both the amplitude calibration
error of 3C~286 and the error induced by the intrinsic long-term
variability of DA~193; the latter is likely to be dominant.
The optical and NIR observations \cite{Sto91,Cha99,Zut04} detect a
secondary continuum peak, denoted 3C~48A, at $\sim$1 arcsec northeast
of the optical peak of 3C~48. Although MERLIN would be sensitive to
any compact structure with this offset from the pointing centre, we
did not find any significant radio emission associated with 3C~48A.
There is no strong feature at the position of 3C~48A even in
high-dynamic-range VLA images \cite{Bri95,Feng05}. It is possible that
the radio emission from 3C~48A is intrinsically weak, whether 3C~48A is a
disrupted nucleus of the companion galaxy without an active AGN
\cite{Sto91} or an active star-forming region \cite{Cha99}.
In either case, the emission power of 3C~48A would be dominated by
thermal sources and any radio radiation would be highly obscured by
the surrounding interstellar medium.
\subsection{VLBA and EVN images}\label{section:vlbimap}
Figure \ref{fig:vlbimap} shows the compact radio jet of 3C~48 on various
scales derived from the VLBA and EVN observations. Table
\ref{tab:figpar} gives the parameters of the images.
The VLBI data have been averaged on all frequency channels in individual
IFs to export a single-channel dataset. The visibility amplitudes on
each IF have been corrected on the assumption of a spectral index of
$-0.7$.
The total-intensity images derived from the 1.5-GHz VLBA and 1.65-GHz
EVN data are shown in Figures \ref{fig:vlbimap}-a to
\ref{fig:vlbimap}-c. The jet morphology we see is consistent with
other published high-resolution images
\cite{Wil90,Wil91,Nan91,Wor04,Feng05}. The jet extends $\sim$0.5
arcsec in the north-south direction, and consists of a diffuse plume
in which a number of bright compact knots are embedded. We label these
knots in the image using nomenclature consistent with the previous
VLBI observations \cite{Wil91,Wor04} (we introduce the labels B3 and
D2 for faint features in the B and D regions revealed by our new
observations). The active nucleus is thought to be located at the
southern end of the jet, i.e., close to the position of component A
\cite{Sim90,Wil91}. The bright knots, other than the nuclear
component A, are thought to be associated with shocks that are created
when the jet flow passes through the dense interstellar medium in the
host galaxy \cite{Wil91,Wor04,Feng05}. Figure \ref{fig:vlbimap}-b
enlarges the inner jet region of 3C~48, showing the structure
between A and B2. At $\sim$0.05 arcsec north of the core A, the
jet brightens at the hot spot B, which is in fact the brightest jet
knot in the VLBI images. Earlier 1.5-GHz images (Figure 1: Wilkinson
et al. 1991; Figure 5: Worrall et al. 2004) show only weak emission
($\sim4\sigma$) between A and B, but in our high-dynamic-range image in
Figure \ref{fig:vlbimap}-b, a continuous jet is distinctly seen to
connect A and B. From component B, the jet curves to the northwest. At $\sim$0.1 arcsec north of the nucleus, there is a
bright component B2. After B2, the jet position angle increases
significantly, and the jet bends into a second curve with a
larger radius. At 0.25 arcsec north of the nucleus, the jet runs into
a bright knot C which is elongated in the East-West direction. Here a plume of emission turns toward the northeast. The outer
boundary of the plume feature is ill-defined in this image since its
surface brightness is dependent on the {\it r.m.s.} noise in the
image. The compact jet continues northward from component
C, but bends with an even larger radius of curvature. Beyond component D2, the
compact VLBI jet is too weak to be detected.
At 4.8 and 8.3 GHz, most of the extended emission is resolved out
(Figures \ref{fig:vlbimap}-d to \ref{fig:vlbimap}-g) and only a few
compact knots remain visible. Figure \ref{fig:vlbimap}-e at 4.8 GHz
highlights the core-jet structure within 150 pc ($\sim$30 mas); the
ridge line appears to oscillate from side to side. At the resolution
of this image the core A is resolved into two sub-components, which we
denote A1 and A2. Figure \ref{fig:vlbimap}-g at 8.3 GHz focuses on the
nuclear region within 50 pc ($\sim$10 mas) and clearly shows two
well-separated components. Beyond this distance the brightness of the
inner jet is below the detection threshold. This is consistent with
what was seen in the 8.4- and 15.4-GHz images from the 1996 VLBA
observations \cite{Wor04}.
Figure \ref{fig:core} focuses on the core A and inner jet out to
the hot spot B. Figure \ref{fig:core}-a shows the 1.5-GHz
image from 2004. Unlike the image already shown in Figure
\ref{fig:vlbimap}-b, this image was produced with a super-uniform
weighting of the $uv$ plane (see the caption of Figure \ref{fig:core}
for details). The high-resolution
1.5-GHz image reveals a quasi-oscillatory jet extending to a distance of
$\sim$40 mas ($\sim$200~pc) to the north of the core A. Interestingly,
Figure \ref{fig:core}-b shows similar oscillatory jet structure at
4.8-GHz on both epoch 2004 (contours) and epoch 1996 (grey-scale,
Worrall et al. 2004). The consistency of the jet morphology seen in both
1.5- and 4.8-GHz images and in both epochs may suggest that the
oscillatory pattern of the jet seen on kpc scales (Figure
\ref{fig:vlbimap}) may be traced back to the innermost jet on parsec
scales. Figure \ref{fig:core}-c shows the 8.3-GHz images in 2004
(contours) and 1996 (grey scale, Worrall et al. 2004). In 1996 (the
image denoted `1996X') the core is only slightly resolved into the two
components A1 and A2, while these are well separated by 3.5 mas (2
times the synthesized beam size) in the 2004
observations (`2004X'). Direct comparison of 1996X and 2004X images thus
provides evidence for a northward position shift of A2 between 1996 and
2004. Figure \ref{fig:core}-d overlays the 2004X contour map on the
1996U (15.4 GHz, Worrall et al. 2004) grey-scale map. Neglecting the
minor positional offset of A1 between 1996U and 2004X, possibly due to
opacity effects, this comparison of 1996U and 2004X maps is also
consistent with the idea that A2 has moved north between 1996 and 2004. We will
discuss the jet kinematics in detail in Section \ref{section:pm}.
\section{IMAGE ANALYSIS}
\subsection{Spectral index distribution along the radio jet}
In order to measure the spectral properties of the 3C~48 jet, we
re-mapped the 4.99-GHz MERLIN data acquired on 1992 June 15
\cite{Feng05} and compared it with the 1.65-GHz EVN+MERLIN data
described in the present paper. The individual data sets were first
mapped with the same {\it uv} range, and convolved with the same
40$\times$40 (mas) restoring beam. Then we compared the intensities of
the two images pixel by pixel to calculate the spectral index
$\alpha^{4.99}_{1.65}$. The results are shown in Figure
\ref{fig:spix}. Component A shows a rather flat spectrum with a
spectral index $\alpha^{4.99}_{1.65}=-0.24\pm0.09$. All other bright
knots show steep spectral indices, ranging from $-0.66$ to $-0.92$.
The extended envelope in general has an even steeper spectrum with
$\alpha\lesssim-1.10$. Spectral steepening in radio sources is a
signature of a less efficient acceleration mechanism and/or the
depletion of high-energy electrons through synchrotron/Compton
radiation losses and adiabatic losses as a result of the expansion of
the plasma as it flows away from the active acceleration region. The
different spectral index distribution seen in the compact jet and
extended envelope may indicate that there are different electron
populations in these two components, with the extended component
arising from an aged electron population.
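For reference, the pixel-by-pixel spectral index follows from the definition $S_\nu\propto\nu^\alpha$ given in Section \ref{section:intro}; a minimal sketch (assuming the two images have already been matched in $uv$ range and restoring beam, with an illustrative blanking threshold) is:
\begin{verbatim}
import numpy as np

def spectral_index_map(I_low, I_high, nu_low, nu_high, clip):
    """alpha = ln(I_high/I_low) / ln(nu_high/nu_low), per pixel;
    pixels below the intensity threshold 'clip' are blanked."""
    mask = (I_low > clip) & (I_high > clip)
    alpha = np.full(I_low.shape, np.nan)
    alpha[mask] = (np.log(I_high[mask] / I_low[mask])
                   / np.log(nu_high / nu_low))
    return alpha

# e.g. alpha = spectral_index_map(img_1p65, img_4p99, 1.65e9, 4.99e9, 5*rms)
\end{verbatim}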
\subsection{Linear polarization images}
\subsubsection{MERLIN images}
\label{section:rm-reference}
Figure \ref{fig:MERpol} displays the polarization image made from the
MERLIN data.
The majority of the polarized emission is detected in the inner region
of the source, in alignment with the compact jet. The polarized
intensity peaks in two locations. The brightest one is near the VLBI
jet component C, with an integrated polarized intensity of 0.31 Jy and
a mean percentage of polarization (defined as $\frac{\Sigma
\sqrt{Q_i^2+U_i^2}}{\Sigma I_i}$, where $i$ represents the $i$th
polarized sub-components) of $m=5.8$ per cent. The secondary one is
located between VLBI jet components B and B2, with an integrated
polarized intensity of 0.23 Jy and a mean degree of polarization
$m=9.5$ per cent. Both peaks are clearly displaced from the
total intensity peaks in Figure \ref{fig:MERcont}. These measurements
of polarization structure and fractional polarization are in good
agreement with those observed with the VLA at 2-cm wavelength with a
similar angular resolution \cite{Bre84}. The integrated polarized flux
density in the whole source is 0.64$\pm$0.05 Jy and the integrated
fractional polarization is (4.9$\pm$0.4) per cent. Since the
integrated polarized intensity is in fact a vector sum of different
polarized sub-components, the percentage polarization calculated in
this way represents a lower limit. We can see from the image (Figure
\ref{fig:MERpol}) that the percentage of the polarization at
individual pixels is higher than 5 per cent, and increases toward the
south of the nucleus. A maximum value of $m\gtrsim30$ per cent is
detected at $\sim$0.045 arcsec south of the nucleus. The fractional
polarization ($m>4.9$ per cent) measured from our MERLIN observation
at 18 cm is at least an order of magnitude higher than the VLA
measurement at 20 cm, although it is consistent with the values
measured by the VLA at 6 cm and shorter wavelengths. This difference
in the fractional polarization at these very similar wavelengths is
most likely to be an observational effect due to beam depolarization,
rather than being due to intrinsic variations in the Faraday depth
(R.~Perley, private communication).
The average polarization angle (EVPA) over the polarized structure is
$-18\degr\pm5\degr$. On the basis of the new measurements of the Rotation
Measure (RM) towards 3C~48 by Mantovani et al. (2009), i.e.,
RM=$-64$ rad~m$^{-2}$ and intrinsic position angle $\phi_0=116\degr$
\cite{Sim81,Man09}, we get a polarization angle of $-4\degr$ at
1.65 GHz. This result suggests that the absolute EVPA calibration of
3C~48 agrees with the RM-corrected EVPA within 3$\sigma$. We show
in Figure \ref{fig:MERpol} the RM-corrected EVPAs. The EVPAs are well
aligned in the North-South direction, indicating an ordered magnetic
field in the Faraday screen.
\subsubsection{EVN and VLBA images}
At the resolution of the EVN, most of the polarized emission from
extended structures is resolved out. In order to map the polarized
emission with modest sensitivity and resolution, we created Stokes $Q$
and $U$ maps using only the European baselines. Figure
\ref{fig:VLBIpol}-a shows the linear polarization of 3C~48 from the
1.65 GHz EVN data. The polarized emission peaks at two components to
the East (hereafter, `C-East') and West (hereafter, `C-West') of
component C. The integrated polarized flux density is 24.8 mJy in
`C-West' and 22.9 mJy in `C-East', and the mean percentage
polarization in the two regions is 6.3 per cent and 10.7 per cent
respectively. The real fractional polarization at individual pixels is
much higher, for the reasons discussed above (Section 4.2.1). There is
clear evidence for the existence of sub-components in `C-West' and
`C-East'; these polarized sub-components show a variety of EVPAs, and
have much higher fractional polarization than the `mean' value. The
polarization is as high as 40 per cent at the inner edge of the knot
C, which would be consistent with the existence of a shear layer
produced by the jet-ISM interaction and/or a helical magnetic field
(3C~43: Cotton et al. 2003; 3C~120: G\'{o}mez et al. 2008). Component
B, the brightest VLBI component, however, is weakly polarized with an
intensity $<$4.0 mJy~beam$^{-1}$ (percentage polarization less than
1 per cent). The nucleus A shows no obvious polarization.
The 20-cm VLBA observations were carried out in four 8-MHz bands,
centred at 1404.5, 1412.5, 1604.5 and 1612.5 MHz. In order to compare
with the 1.65-GHz EVN polarization image, we made a VLBA polarization
image (Figure \ref{fig:VLBIpol}-b) using data in the latter two bands.
This image displays a polarization structure in excellent agreement
with that detected at 1.65 GHz with the EVN, although the angular
resolution is 3 times higher than the latter: the polarized emission
mostly comes from the vicinity of component C and the fractional
polarization increases where the jet bends; the hot spot B and the
core A are weakly polarized or not detected in polarization. The 1.65-
and 1.61-GHz images show detailed polarized structure in the component-C region on a spatial scale of tens of parsecs: the polarization angle
(EVPA) shows a gradual increase across component C, with a total range
of $160\degr$, and the percentage of polarization gradually increases
from 5 per cent to $\gtrsim$30 per cent from the Western edge to the
Eastern edge at both `C-West' and `C-East'.
Figure \ref{fig:VLBIpol}-c and \ref{fig:VLBIpol}-d show the 4.8- and
8.3-GHz polarization images made with the VLBA data. Both images were made
by tapering the visibility data using a Gaussian function in order to
increase the signal-to-noise ratio of the low-surface-brightness emission.
Similar to what is seen in the 1.65 and 1.61-GHz images, component
`C-West' shows a polarization angle that increases by $80\degr$ across
the component, but these images show the opposite sense of change of
fractional polarization -- fractional polarization decreases from 60
per cent down to 20 per cent from the northwest to the southeast.
Another distinct difference is that hot spot B shows increasing
fractional polarization toward the higher frequencies, $m \sim2.0$ per
cent at 4.8 GHz and $m\sim12$ per cent at 8.3 GHz in contrast with
$m\lesssim1$ per cent at 1.6 GHz. The difference in the fractional
polarizations of B at 1.6/4.8 GHz and 8.3 GHz implies that a component
of the Faraday screen is unresolved at 1.6 and 4.8 GHz and/or that
some internal depolarization is at work. The non-detection of
polarization from the core A at all four frequencies may suggest a
tangled magnetic field at the base of the jet.
\subsection{EVPA gradient at component C and RM distribution}
We found at all four frequencies that the polarization angles undergo
a rotation by $\gtrsim 80\degr$ across the jet ridge line at both the
`C-East' and `C-West' components. There are four possible factors that
may affect the observed polarization angle: (1) the calibration of the
absolute EVPAs; (2) Faraday rotation caused by Galactic ionized gas;
(3) Faraday rotation due to gas within the 3C~48 system and (4)
intrinsic polarization structure changes. The correction of absolute
EVPAs applies to all polarization structure, so it cannot explain the
position-dependent polarization angle changes at component C; in any
case, the fact that we see similar patterns at four different
frequencies, calibrated following independent procedures, rules out
the possibility of calibration error. Galactic Faraday rotation is
non-negligible (Section \ref{section:rm-reference}; $-64$ rad m$^{-2}$
implies rotations from the true position angle of $168\degr$ at 1.4
GHz, $129\degr$ at 1.6 GHz, $14.3\degr$ at 4.8 GHz and $4.8\degr$ at
8.3 GHz), and means that we expect significant differences between the
EVPA measured at our different frequencies; however, the Galactic
Faraday screen should vary on much larger angular scales than we
observe. Only factors (3) and (4), which reflect the situation
internal to the 3C~48 system itself, will give rise to a
position-dependent rotation of the EVPAs. The EVPA gradient is related
to the gradient of the RM and the intrinsic polarization angle by:
$\frac{{\rm d}\phi}{{\rm d}x}=\lambda^2\frac{{\rm d}(RM)}{{\rm
d}x}+\frac{{\rm d}\phi_0}{{\rm d}x}$, where the first term
represents the RM gradient and the latter term represents the
intrinsic polarization angle gradient. If the systematic gradient of
EVPAs, $\frac{{\rm d}\phi}{{\rm d}x}$, were solely attributed to an RM
gradient, then $\frac{d\phi}{dx}$ would show a strong frequency
dependence; on the other hand, if $\frac{{\rm d}\phi}{{\rm d}x}$ is
associated with the change of the intrinsic polarization angle, there
is no frequency-dependence. We compared the $\frac{{\rm d}\phi}{{\rm
d}x}$ at 1.6 and 4.8 GHz and found a ratio $\frac{{\rm d}\phi/{\rm
d}x (1.6GHz)}{{\rm d}\phi/{\rm d}x(4.8GHz)}=1.8$. This number
falls between 1.0 (the value expected if there were no RM gradient)
and 8.8 (the ratio of $\lambda^2$), suggesting that a combination of
RM and intrinsic polarization angle gradients are responsible for the
systematic gradient of EVPAs at C. Accordingly, it is worthwhile to
attempt to measure the RM in the VLBI components of 3C~48.
The first two bands of the 20-cm VLBA data (centre frequency 1.408
GHz) are separated from the last two bands (centre frequency 1.608
GHz) by 200 MHz, indicating a differential polarization angle of $\sim
40\degr$ across the passband. The low integrated rotation measure
means that the effects of Faraday rotation are not significant
($<10\degr$) between 4.8 and 8.3 GHz, while the absolute EVPA
calibration at 8.3 GHz is uncertain; moreover, the {\it uv} sampling
at 8.3 GHz is too sparse to allow us to image identical source
structure at 1.5 and 4.8 GHz. Therefore we used the 1.408, 1.608 and
4.78 GHz data to map the RM distribution in 3C~48.
We first re-imaged the Stokes $Q$ and $U$ data at the three
frequencies with a common {\it uv} cutoff at $>$400 k$\lambda$ and
restored with the same convolving beam. We tapered the $uv$ plane
weights when imaging the 4.78-GHz data
in order to achieve a similar intrinsic resolution to that of the
images at the two lower frequencies. We then made polarization angle
images from the Stokes $Q$ and $U$ maps. The three polarization angle
images were assembled to calculate the RM (using {\sc aips} task RM). The
resulting RM image is shown in Figure \ref{fig:RM}. The image shows a
smooth distribution of RM in the component-C region except for a
region northeast of `C-West'. The superposed plots present the
fits to the RM and intrinsic polarization angle ($\phi_0$, the
orientation of polarization extrapolated at $\lambda=0$) at four
selected locations. The polarization position angles at individual
frequencies have multiples of $\pi$ added or subtracted to remove the
$n\pi$ ambiguity. The errors in the calculated RMs and $\phi_0$ are
derived from the linear fits. We note that the systematic error due
to the absolute EVPA calibration feeds into the error on the observed
polarization angle. All four fits show a good match with a $\lambda^2$
law. The fitted parameters at `P4' in the `C-East' region are
consistent with those derived from the single-dish measurements for
the overall source \cite{Man09}. The western component (`C-West')
shows a gradient of RM from $-95$ rad m$^{-2}$ at `P1' to $-85$ rad
m$^{-2}$ at `P3', and the intrinsic polarization angle varies from
$123\degr$ (or $-57\degr$) at `P1', through $146\degr$ (or $-34\degr$)
at `P2' to $5\degr$ at `P3'. This result is in good agreement with the
qualitative analysis of the EVPA gradients above. A straightforward
interpretation of the gradients of the RMs and the intrinsic
polarization angles is that the magnetic field orientation gradually
varies across the jet ridge line; for example, a helical magnetic
field surrounding the jet might have this effect. An alternative
interpretation for the enhancement of the rotation measure at the
edge of the jet is that it is associated with thermal electrons in a
milliarcsecond-scale Faraday screen surrounding or inside the jet,
produced by jet-ISM interactions \cite{Cot03,Gom08}. More observations are needed
to investigate the origins of the varying RM and $\phi_0$.
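The $\lambda^2$ fits described above amount to a weighted linear regression of polarization angle against $\lambda^2$, with multiples of $\pi$ added or subtracted to resolve the wrap ambiguity; a simplified, single-pixel sketch of this procedure (unweighted, unlike the {\sc aips} task RM) is:
\begin{verbatim}
import numpy as np
from itertools import product

def fit_rm(freqs_hz, chi_rad):
    """Fit chi = chi0 + RM*lambda^2, trying n*pi wraps per frequency.
    Returns (RM [rad/m^2], chi0 [rad]) of the best least-squares fit."""
    lam2 = (2.998e8 / np.asarray(freqs_hz)) ** 2
    best = None
    for wraps in product([-1, 0, 1], repeat=len(chi_rad)):
        chi = np.asarray(chi_rad) + np.pi * np.asarray(wraps)
        rm, chi0 = np.polyfit(lam2, chi, 1)
        resid = np.sum((chi - (chi0 + rm * lam2)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, rm, chi0)
    return best[1], best[2]
\end{verbatim}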
The hot spot B shows a much larger difference of EVPAs between
4.8 and 8.3 GHz than is seen in component C. This
might be a signature of different RMs at B and C. A rough
calculation suggests an RM of $-330\pm60$ rad~m$^{-2}$ at B. The
high rotation measure and high fractional polarization (Section 4.2) are
indicative of a strong, ordered magnetic field in the vicinity of B. This might
be expected in a region containing a shock in which the
line-of-sight component of the magnetic field and/or the density of
thermal electrons are enhanced; in fact, the proper motion of B (Section
4.5) does provide some evidence for a stationary shock in this region.
\subsection{Physical properties of compact components in VLBI images}
In order to make a quantitative study of the radiation properties of
the compact VLBI components in 3C~48, we fitted the images
of compact components in the VLBI images from our new observations and
from the VLBA data taken in 1996 \cite{Wor04} with Gaussian models.
Measurements from the 1996 data used mapping parameters consistent with
those for the 2004 images. Table \ref{tab:model} lists the fitted
parameters of bright VLBI components in ascending frequency order.
The discrete compact components in the 4.8- and 8.3-GHz VLBA images are
well fitted with Gaussian models along with a zero-level base and slope
accounting for the extended background structure. The fit to extended
emission structure is sensitive to the {\it uv} sampling and the
sensitivity of the image. We have re-imaged the 1.5-GHz VLBA image using
the same parameters as for the 1.65-GHz EVN image, i.e., the same {\it
uv} range and restoring beam. At 1.5 and 1.65 GHz, Gaussian models are
good approximations to the emission structure of compact sources with
high signal-to-noise ratio, such as components A and B. For extended
sources (i.e., components B2 to D2) whose emission structures
are either not well modelled by Gaussian distribution, or blended with
many sub-components, model fitting with a single Gaussian model gives a
larger uncertainty for the fitted parameters. In particular, the
determination of the integrated flux density is very sensitive to the
apparent source size.
The uncertainties for the fitted parameters in Table \ref{tab:model}
are derived from the output of the {\sc aips} task {\sc JMFIT}. These
fitting errors are sensitive to the intensity fluctuations in the
images and source shapes. In most cases, the fitting errors for the
peak intensities of Gaussian components are roughly equal to the {\it
r.m.s.} noise. We note that the uncertainty on the integrated flux
density should also contain systematic calibration errors propagated
from the amplitude calibration of the visibility data, in addition to
the fitting errors. The calibration error normally dominates over the
fitting error. The amplitude calibration for the VLBI antennas was
made from the measurements of system temperature ($T_{sys}$) at
two-minute intervals during the observations combined with the antenna
gain curves measured at each VLBI station. For the VLBA data, this
calibration has an accuracy $\lesssim$5 per cent of the amplitude
scale\footnote{See the online VLBA status summary at
http://www.vlba.nrao.edu/astro/obstatus/current/obssum.html .}.
Because of the diversity of the antenna performance of the EVN
elements, we adopted an averaged amplitude calibration uncertainty of
5 per cent for the EVN data.
The positions of the VLBI core A1 at 4.8, 8.3 and 15.4 GHz show good
alignment within 0.4~mas at different frequencies and epochs. The
positions of the unresolved core A at 1.5 and 1.65 GHz show a systematic
northward offset by 2--4 mas relative to the position of A1 at higher
frequencies. Due to the low resolution and high opacity at 1.5 GHz, the
position of A at this frequency reflects the centroid of the blended
emission structure of the active galactic nucleus and inner 40-pc jet.
The parameters that we have derived for the compact components A, B and
B2 in epoch 1996 are in good agreement with those determined by Worrall
et al. (2004) at the same frequency band. The results for fitting to
extended knots at 1.5 and 1.65 GHz are in less good agreement. This is
probably because of the different {\it uv} sampling on short spacings,
meaning that the VLBA and EVN data sample different extended structures
in the emission.
The integrated flux densities of the VLBI components A1 and A2 in 1996X
(8.3~GHz) are higher than those in 2004X (8.3~GHz) by $\sim$100 per cent
(A1) and $\sim$60 per cent (A2), respectively. The large discrepancy in
the flux densities of A1 and A2 between epochs 1996X and 2004X cannot
easily be interpreted as an amplitude calibration error of larger than
60 per cent since we do not see a variation at a comparable level
in the flux densities of components B, B2 and D. Although the {\it total} flux
densities of CSS sources in general exhibit no violent variability at radio
wavelengths, the possibility of small-amplitude ($\lesssim$100 per cent)
variability in the VLBI core and inner jet components is not ruled out.
Component A1 has a flat spectrum with
$\alpha^{8.3}_{4.8}=-0.34\pm0.04$ between 4.8 and 8.3 GHz in epoch
2004; component A2 has a rather steeper spectrum with
$\alpha^{8.3}_{4.8}=-1.29\pm0.16$ (epoch 2004). The spectral
properties of these two components support the idea that A1 is
associated with the active nucleus and suffers from synchrotron
self-absorption at centimetre radio wavelengths; in this picture, A2
is the innermost jet. The spectral indices of components B and B2
in epoch 2004 are $\alpha^{8.3}_{4.8}=-0.82\pm0.10$ (B) and
$\alpha^{8.3}_{4.8}=-0.79\pm0.10$ (B2), respectively. This is
consistent with the measurements from the 1.65 and 4.99 GHz images
(Figure \ref{fig:spix}). Component D shows a comparatively flat
spectrum in epoch 2004, with $\alpha^{8.3}_{4.8}=-0.46\pm0.06$, in
contrast to the other jet knots. While this spectral index is
consistent with those of the shock-accelerated hot spots in radio
galaxies, the flattening of the spectrum in D might also arise from a
local compression of particles and magnetic field.
Table \ref{tab:tb} lists the brightness temperatures ($T_b$) of the
compact VLBI components A1, A2 and B. All these components have
brightness temperatures ($T_b$) higher than $10^8$~K, confirming their
non-thermal origin. These brightness temperatures are well below the
$10^{11-12}$~K upper limit set by the inverse Compton
catastrophe \cite{KP69}, suggesting that the relativistic jet plasma is
only mildly beamed toward the line of sight. The $T_b$ of A1 is about 3
times higher than that of A2 at 4.8 and 8.3~GHz, and the $T_b$ of A1
decreases toward higher frequencies. Together with the flat spectrum and
variability of A1, the observed results are consistent with A1 being the
self-absorbed core harbouring the AGN. $T_b$ is much higher in 1996X than
in 2004X for both A1 and A2, a consequence of the measured flux density
variation between the two epochs.
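For reference, the brightness temperatures in Table \ref{tab:tb} can be evaluated from the fitted Gaussian parameters with the standard Rayleigh-Jeans expression for an elliptical Gaussian component,
\[
T_b \simeq 1.22\times10^{12}\,(1+z)\,
\left(\frac{S}{\rm Jy}\right)
\left(\frac{\nu}{\rm GHz}\right)^{-2}
\left(\frac{\theta_{maj}\,\theta_{min}}{\rm mas^2}\right)^{-1}\ {\rm K},
\]
where $S$ is the integrated flux density, $\theta_{maj}$ and $\theta_{min}$ are the fitted FWHM axes, and the factor $(1+z)$ converts to the source frame.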
\subsection{Proper motions of VLBI components}\label{section:pm}
The Gaussian fitting results presented in Table \ref{tab:model} may be
used to calculate the proper motions of VLBI components. In order to
search for proper motions in 3C~48, maps at different epochs
should be aligned at a compact component such as the core
\cite{Wor04}. However, thanks to our new VLBI observations we know
that aligning the cores at 1.5-GHz is not likely to be practical,
since the core structure appears to be changing on the relevant
timescales. Even at 4.8 GHz, the core still blends with the inner
jet A2 in epoch 1996C (Figure \ref{fig:core}). In contrast to these
two lower frequencies, the 8.3-GHz images have higher resolution,
better separation of A1 and A2, and less contamination from
extended emission. These make 8.3-GHz images the best choice for the
proper motion analysis. In the following discussion of proper motion
measurements we rely on the 8.3~GHz images.
We have already commented on the shift of the peak of A2 to the north
from epochs 1996X to 2004X in Figure \ref{fig:core}. A quantitative
calculation based on the model-fitting results gives a positional
variation of 1.38 mas to the north and 0.15 mas to the west over a
time span of 8.43 yr, assuming that the core A1 is stationary. This corresponds to a
proper motion of $\mu_\alpha = -0.018\pm0.007$ mas yr$^{-1}$ (negative
values denote westward motion) and $\mu_\delta=0.164\pm0.015$ mas yr$^{-1}$,
corresponding to an apparent transverse velocity of $v_\alpha =
-0.40\pm0.16 \,c$ and $v_\delta=3.74\pm0.35 \,c$. The error quoted
here includes
both the positional uncertainty derived from Gaussian fitting and the
relative offset of the reference point ({\it i.e.}, A1). We therefore
detect a significant ($>10\sigma$) proper motion of A2 toward the
north. The apparent transverse velocity of A2 is similar to
velocities derived from other CSS and GPS sources in which
apparent superluminal motions in the pc-scale jet have been detected, e.g.,
3.3--9.7$c$ in 3C~138 \cite{Cot97b,She01}.
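For reference, the conversion from angular proper motion to apparent transverse speed uses the projected scale of Section \ref{section:intro} (1 mas corresponds to 5.1 pc at $z=0.367$) together with a factor $(1+z)$ for cosmological time dilation; a minimal numerical sketch is:
\begin{verbatim}
# Apparent transverse speed (units of c) from proper motion mu (mas/yr).
# Scale: 1 mas = 5.1 pc at z = 0.367; 1 pc = 3.262 light years.
def beta_app(mu_mas_per_yr, pc_per_mas=5.1, z=0.367):
    v_ly_per_yr = mu_mas_per_yr * pc_per_mas * 3.262  # 1 ly/yr = 1 c
    return v_ly_per_yr * (1.0 + z)

print(beta_app(0.164))   # component A2: ~3.7
\end{verbatim}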
We also searched for evidence for proper motions of the other jet knots.
The proper motion measurement is limited by the accuracy of the
reference-point alignment, by our ability to make high-precision position
determinations at each epoch, and by contamination from extended
structure. We found only a
$3\sigma$ proper motion from B2, which shows a position change of
$\Delta\alpha=0.22\pm0.07$ mas and $\Delta\delta=0.48\pm0.13$ mas in
8.43 yr, corresponding to an apparent velocity of
$\beta_{app}=1.43\pm0.33\,c$ to the northeast. The measurements of the
position variation of the hot spot B between 1996X and 2004X show no
evidence for proper motion with $\mu_\alpha = 0.012\pm0.007$ mas
yr$^{-1}$ and $\mu_\delta=0.005\pm0.015$ mas yr$^{-1}$. Worrall et al.
(2004) earlier reported a $3\sigma$ proper motion for B by comparing the
1.5-GHz VLBA image taken in 1996 with Wilkinson et al.'s 1.6-GHz image
from 11.8 years previously. However, as mentioned above, the 1.5-GHz
measurements are subject to the problems of lower angular resolution,
poor reference point alignment and contamination from structural
variation. In particular, if we extrapolate the observed angular motion
of A2 back in time, jet component A2 was created around 1984; in 1996,
therefore, A2 would still have been blended with A1 in the 1.5-GHz image,
within $\frac{1}{4}$ of a beam. Fitting a single Gaussian to
the combination of A1 and A2 at 1.5 GHz in epoch 1996 would then have
suffered from the effects of structural changes in the core due
to the expansion of A2. For these reasons we conclude that the hot spot B
is stationary to the limit of our ability to measure motions. For the other
jet components, the complex source structure does not permit any
determination of proper motions.
\section{Kinematics of the radio jet}
\label{section:kinematics}
\subsection{Geometry of the radio jet}
Most CSS sources show double or triple structures on kpc scales,
analogous to classical FR~I or FR~II galaxies. However, some CSS
sources show strongly asymmetric structures. At small viewing angles,
the advancing jet looks much brighter than the receding one, due to
Doppler boosting. The sidedness of radio jets can be characterized by
the jet-to-counterjet intensity ratio $R$. In VLA images
\cite{Bri95,Feng05}, 3C~48 shows a two-sided structure in the north-south
direction. The southern (presumably receding) component is much weaker
than the northern (advancing) one. In VLBI images (Wilkinson et al. 1991;
Worrall et al. 2004; the present paper) 3C~48 shows a one-sided jet to
the north of the nucleus. If the non-detection of the counterjet is
attributed solely to Doppler deboosting, the sidedness parameter $R$ can
be estimated from the intensity ratio of the jet knots to the detection
limit (derived from the $3\sigma$ off-source noise). Assuming the source is
intrinsically symmetric out to a projected separation of 600~pc (the
distance of B2 from A1), the sidedness parameter would be $>200$
for B2 and B in the 1.5-GHz image (Figure \ref{fig:vlbimap}-a).
In the highest-sensitivity image, from epoch 2004C (Figure
\ref{fig:vlbimap}-d), the off-source noise is 40 $\mu$Jy
beam$^{-1}$, so that the derived $R$ at component B is $\gtrsim$900.
For a smooth jet that consists of a number of unresolved components,
the jet-to-counterjet brightness ratio $R$ is related to the jet
velocity ($\beta$) and viewing angle ($\Theta$) by
\begin{equation}
R=\left(\frac{1+\beta\cos\Theta}{1-\beta\cos\Theta}\right)^{2-\alpha}\,.
\end{equation}
Assuming an optically thin spectral index $\alpha=-1.0$ for the
3C~48 jet (Figure \ref{fig:spix}), the sidedness parameter $R\gtrsim900$
estimated above gives a limit of $\beta\cos\Theta>0.81$ on the velocity
component along the line of sight (in units of $c$).
Using the combination $\beta\cos\Theta$ alone it is not possible
to determine both the kinematics (jet speed $\beta$) and the geometry
(viewing angle $\Theta$) of the jet flow. An additional constraint
comes from the apparent transverse velocity, which is related to the
jet velocity by $\beta_{app} =
\frac{\beta\sin\Theta}{1-\beta\cos\Theta}$. In Section \ref{section:pm} we determined the
apparent velocities of components B and B2,
$\beta_{app}(B)=(3.74\pm0.35)c$ and $\beta_{app}(B2)=(1.43\pm0.33)c$,
so we can combine $\beta\cos\Theta$ and $\beta_{app}$ to place a
constraint on the kinematics and orientation of the outer jet.
The resulting constraints on the jet velocity and source orientation
are shown in Figure \ref{fig:viewangle}.
They imply that the 3C~48 jet moves at $v>0.85c$
at a viewing angle of less than $35\degr$.
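As a rough numerical illustration of how these constraints combine (the
full analysis is shown in Figure \ref{fig:viewangle}; the short sketch
below is ours and uses only the two relations above):
\begin{verbatim}
import numpy as np

# invert the sidedness relation R = ((1+x)/(1-x))^(2-alpha),
# with x = beta*cos(Theta)
R, alpha = 900.0, -1.0
rr = R ** (1.0 / (2.0 - alpha))
x = (rr - 1.0) / (rr + 1.0)
print("beta*cos(Theta) >", round(x, 2))       # ~0.81

# beta_app = beta*sin(Theta)/(1 - beta*cos(Theta)),
# solved for beta at fixed viewing angle Theta
th = np.radians(np.linspace(0.5, 60.0, 2000))
for ba in (3.74, 1.43):                       # apparent speeds of B and B2
    beta = ba / (np.sin(th) + ba * np.cos(th))
    ok = (beta < 1.0) & (beta * np.cos(th) > x)  # physical + sidedness limit
    print(ba, "Theta <", round(np.degrees(th[ok]).max(), 1),
          "deg, beta >", round(beta[ok].min(), 3))
\end{verbatim}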
\subsection{Helical radio jet structure}
\label{section:helic}
As discussed in Section \ref{section:vlbimap} the bright jet knots
define a sinusoidal ridge line. This is the expected appearance of a
helically twisted jet projected on to the plane of the sky. Helical
radio jets, or jet structure with multiple bends, can be triggered by
periodic variations in the direction of ejection (e.g., precession of
the jet nozzle), and/or random perturbations at the start of the jet
(e.g., jet-cloud collisions). For example, the wiggles in the
ballistic jets in SS~433 are interpreted in terms of periodic
variation in the direction of ejection \cite{Hje81}. Alternatively,
small perturbations at the start of a coherent, smooth jet stream
might be amplified by the Kelvin-Helmholtz (K-H) instability and grow
downstream in the jet. In this case, the triggering of the helical
mode and its actual evolution in the jet are dependent on the
fluctuation properties of the initial perturbations, the dynamics of
the jet flow, and the physical properties of the surrounding
interstellar medium \cite{Har87,Har03}. In the following subsections
we consider these two models in more detail.
\subsubsection{Model 1 -- precessing jet}
We use a simple precession model \cite{Hje81}, taking into account
only kinematics, to model the apparently oscillatory structure of the
3C~48 radio jet. Figure \ref{fig:sketch} shows a sketch map of a 3-D
jet projected on the plane of the sky. The X- and Y-axis are defined
so that they point to the Right Ascension and Declination directions,
respectively. In the right-handed coordinate system, the Z-axis is
perpendicular to the XOY plane and the minus-Z direction points to the
observer. The jet axis is tilting toward the observer by an
inclination angle of ($90-\theta$). The observed jet axis lies at a
position angle $\alpha$. In the jet rest frame, the kinematic equation
of a precessing jet can be parameterized by jet velocity ($V_j$),
half-opening angle of the helix cone ($\varphi$) and angular velocity
(or, equivalently, precession period $P$).
To simplify the calculations, we assume a constant jet flow velocity
$V_j$, a constant opening angle $\varphi$ of the helix, and a constant
angular velocity. We ignore the width of the jet itself, so we are
actually fitting to the ridge line of the jet. The jet thickness does
not significantly affect the fitting unless it is far wider than the
opening angle of the helix cone. (We note that, although we have
measured lower proper motion velocities in B and B2 than the velocity in
the inner jet A2, this does not necessarily imply deceleration in the
outer jet flow, since the brightening at B, and to some extent at B2,
may arise mostly from stationary shocks; the proper motions of B and B2
thus represent a lower limit on the actual bulk motions of the jet.) We
further assume that the precession originates from the central
black hole and accretion disk system, so that ($X_0$,$Y_0$,$Z_0$) can be
taken as zero. In the observer's frame, the jet trajectory seen in the
CLEAN image is obtained by projecting the 3-D jet on the plane of
the sky and then performing a rotation by an angle $\alpha$ in that
plane, so that the Y-axis aligns with the North (Declination) direction
and the X-axis points to the East (Right Ascension). In addition to the
above parameters, we need to define a rotation sign parameter $s_{rot}$
($s_{rot}=+1$ means counterclockwise rotation) and a jet side parameter
$s_{jet}$ ($s_{jet}=+1$ means the jet moves toward the observer). Since
we are dealing with the advancing jet, the jet side parameter is set to 1.
From our calculations we find that a clockwise rotation pattern
($s_{rot}=-1$) fits the 3C~48 jet.
To estimate the kinematical properties of the precessing jet flow, we
use the proper motion measurements of component A2 as an estimate of
the jet velocity and orientation (Figure \ref{fig:viewangle}). We have
chosen a set of parameters consistent with the curve for $V_{\rm
app,j}=3.7c$ and a viewing angle of $17\degr$. Other combinations of
angles to the line of sight and velocities give qualitatively similar
curves. For example, if we use a lower flow speed instead, a similar
model structure can be produced by adjusting the other parameters
accordingly, e.g. by increasing the precession period by the same
factor. The high-resolution VLBI images (Figure \ref{fig:core}) show
that the innermost jet aligns with the north, so an initial position
angle of $\alpha=0$ is a reasonable estimate. The VLBI images
(Figure \ref{fig:vlbimap}) suggest that the position angle of the jet
ridge line shows an increasing trend starting from the hot spot B.
Moreover, we found that a model with a constant position angle does
not simultaneously fit both the inner and outer jet. To simplify
the calculation, we introduced a parameter $\frac{{\rm d}\alpha}{{\rm
d}t}$ to account for the increasing position angle in the outer
jet.
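Before turning to the fit itself, the ballistic kinematics described
above can be sketched numerically. In the following minimal sketch the
parameter values are placeholders rather than the fitted values of
Table \ref{tab:helicalfit}, and light-travel-time effects as well as
the ${\rm d}\alpha/{\rm d}t$ axis drift are omitted:
\begin{verbatim}
import numpy as np

phi     = np.radians(2.0)    # half-opening angle of the helix cone
P       = 3500.0             # precession (nutation) period, yr
theta_v = np.radians(17.0)   # angle between jet axis and line of sight
s_rot   = -1.0               # clockwise rotation, as found for 3C 48
beta    = 0.965              # flow speed in units of c
t_obs   = 9000.0             # observing epoch, yr after reference time

t_ej = np.linspace(0.0, t_obs, 4000)    # ejection times of the knots
age  = t_obs - t_ej
psi  = s_rot * 2.0 * np.pi * t_ej / P   # precession phase at ejection

# ballistic knot positions in the jet frame (z1 along the jet axis)
r  = beta * age
x1 = r * np.sin(phi) * np.cos(psi)
y1 = r * np.sin(phi) * np.sin(psi)
z1 = r * np.cos(phi)

# tilt toward the observer and project on the plane of the sky
X = x1                                           # Right Ascension offset
Y = y1 * np.cos(theta_v) + z1 * np.sin(theta_v)  # Declination offset
# (X, Y) traces the wiggling ridge line of the jet envelope
\end{verbatim}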
The fitted jet ridge line is shown (thick green line) in the upper
panel of Figure \ref{fig:helicalfit} overlaid on the total intensity
image. The assumed and fitted parameters are listed in Table
\ref{tab:helicalfit}. The modelled helix fits the general wiggling jet
structure with at least two complete periods of oscillation. The
fitted opening angle of $2.0\degr$ suggests that the line of sight
falls outside the helix cone. The initial phase angle $\phi_0$ is
loosely constrained; it is related to the reference time of the
ejection of the jet knot, $\phi_0 = 2\pi t_{ref}/P$. The fits suggest
that the reference time is $t_{ref}=-480$ yr. Given the
gradual tilting of the jet axis as well as the helical coiling around
it, the fit most likely represents a superposition of the
precession of the jet knots and the nutation of the jet axis,
analogous to SS~433 (e.g. Katz et al. 1982; Begelman, King \& Pringle
2006). The fitted period of 3500 yr is then a nutation period, about 0.4
times the dynamical time scale of the jet (assuming a flow speed of
$0.965c$), while the precession period is much longer. From the rate of
the jet axis tilting, we estimate a precession period of
$\sim2\times10^5$ yr. The ratio of the estimated precession period
to the nutation period is 57:1, 2.2 times the ratio in SS~433
(which has a 162-day periodic precession and 6.3-day nodding motion:
see Begelman, King \& Pringle 2006 and references therein). The
precessing jet model predicts a smooth structure on small scales, and
a constant evolution of the wavelength so long as the jet kinetic
energy is conserved and the helix cone is not disrupted (the opening
angle of the helix cone is constant). However, the real 3C~48 jet
probably does not conserve kinetic energy, as it is characterized by a
disrupted jet and violent jet-ISM interactions. In particular, the
inner-kpc jet is seen to be physically interacting with a massive gas
system, and the observed blue-shifted NIR clouds could be driven by
the radio jet to move at velocities up to 1000 km~s$^{-1}$
\cite{Cha99,Gup05,Sto07}. The 3C~48 radio jet might thus lose a
fraction of its kinetic energy, resulting in a slowing of the jet
flow and a shrinking of the wavelength in the outer jet, assuming
that the precession periodicity is not destroyed.
\subsubsection{Model 2 -- Kelvin-Helmholtz instabilities}
We next investigate a hydrodynamic or magnetized jet instability as the
origin of the helical structure \cite{Har87,Cam86}. We used
the simple analytic model described in Steffen et al. (1995) to fit
the helical jet trajectory in 3C~48. The kinematic equations of this
toy model are solved on the basis of the conservation of kinetic
energy $E_{kin}$ and of the specific momentum in the direction of the
jet motion (case 2 of Steffen et al. 1995). It is in fact identical to
the isothermal hydrodynamic model \cite{Har87} in the limit of a
small helix opening angle. Model fitting with an adiabatically
expanding jet yields a similar helically twisted jet, but the initial
amplitude growth is much faster \cite{Har87} than for the isothermal
jet. In this analysis we confine our discussion to the isothermal case.
To keep the calculations simple without losing generality, we adopted
assumptions similar to those of Model 1 for the jet kinematics and
geometry. (We note that, although we used an apparent velocity
$V_{\rm app,j}$ with the same value as in Model 1, the jet speed $V_j$ in
the K-H model is a pattern speed, and therefore the real flow speed
and the viewing angle in the K-H model are more uncertain than in the
ballistic case.) In addition, we assume that the initial perturbations
originate from a region very close to the central engine. The
calculations thus start from an initial distance of zero along the jet
axis and a small displacement $r_0$ in the rotation plane away from
the jet axis. Moreover, we assumed an initial position angle
$\alpha_0=0\degr$, and again introduced a rate ${\rm d}\alpha/{\rm
d}t$ to account for the eastward tilting of the jet axis. The
half-opening angle, which is a parameter to be fitted, is assumed constant.
This assumption is plausible since the jet width seems not to change
much within 0.5 arcsec, indicating that the trajectory of the jet is
not disrupted even given the occurrence of a number of jet-ISM
interactions. In addition to the above morphological assumptions, the
model also assumes the conservation of specific momentum and kinetic
energy $E_{\rm kin}$ along the jet axis. The conservation of specific
momentum is equivalent to a constant velocity along the jet axis if
mass loss or entrainment are negligible. The combination of the
conservation of specific momentum and kinetic energy along the jet
axis results in a constant pitch angle along the helical jet.
Furthermore, the constant jet opening angle and pitch angle lead to a
helical geometry in which the oscillatory wavelength linearly
increases with time. The parameter $r_0$ controls how fast the
wavelength varies (equation 12 of Steffen et al. 1995). The model
describes a self-similar helical trajectory with a number of
revolutions as long as the helical amplitude is not damped too
rapidly.
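The resulting geometry (constant cone angle, constant axial and total
speeds, hence a constant pitch angle and a linearly growing wavelength)
can be illustrated with a toy integration; all parameter values below
are placeholders:
\begin{verbatim}
import numpy as np

v_par   = 1.0    # speed along the jet axis (constant specific momentum)
v_rot   = 0.5    # transverse rotation speed (constant, since |v| is too)
r0      = 1.0    # initial displacement off the jet axis
tan_phi = 0.05   # tangent of the constant half-opening angle

t  = np.linspace(0.0, 200.0, 20000)
dt = t[1] - t[0]
z  = v_par * t                      # distance along the jet axis
r  = r0 + z * tan_phi               # radius grows linearly on the cone
theta = np.cumsum(v_rot / r) * dt   # angular velocity drops as 1/r
x, y  = r * np.cos(theta), r * np.sin(theta)
lam   = 2.0 * np.pi * v_par * r / v_rot  # local wavelength, linear in t
\end{verbatim}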
The modelled curve is shown in the lower panel of Figure
\ref{fig:helicalfit}. The assumed and fitted parameters are listed in
Table \ref{tab:helicalfit}. As mentioned above, this K-H instability
model predicts that, when the helical amplitude is not damped and the
opening angle $\varphi$ is small ($\varphi\ll
\arctan{\frac{r_0}{\lambda_0}}$), the oscillation wavelength (or period)
along the jet axis increases linearly with time. The fits give an
initial wavelength of 60 mas and an initial period of 370 yr. The period
increases to $1.3\times10^4$ yr at the end of the plot window of 9000
yr. The fitted curve displays more oscillations in the inner part of the
jet and smoother structure in the outer part, owing to the decreasing
angular velocity downstream. The initial transverse distance $r_0$ marks
the location where the K-H instability starts to grow on the surface
of the jet, and it sets how rapidly the wavelength varies. A
value of $r_0=1.8$ mas corresponds to a projected linear distance of 9.2
pc off the jet axis. As discussed above, the major discrepancy between the
helical model and the real 3C~48 jet could be the assumption of the
conservation of kinetic energy $E_{kin}$. We have tried to fit the helical
model without the conservation of kinetic energy but with
conserved angular momentum, which is in principle similar to Case
4 in Steffen et al. (1995). However, in this case, the modelled helix
rapidly evolves into a straight line, and thus fails to reproduce the
observed 3C~48 jet on kpc scales.
\subsubsection{Comparison of the two models}
Both models fit the overall jet structure of 3C~48 within
0.45 arcsec with 2--3 complete revolutions, but they differ in
detail. The helical shape of the precessing jet is a
superposition of ballistic jet knots modulated by a nodding motion
(nutation). In this case, the whole jet envelope wiggles and shows
a restricted periodicity, whereas the observed jet structure displays a
smooth shape on rather smaller scales. If, alternatively, the coherent, smooth jet
stream is initially disturbed at the jet base, and is amplified by the
Kelvin-Helmholtz instability downstream in the jet, the jet stream
itself is bent. The resulting helical jet flow rotates faster at the
start and gradually slows down as it moves further away. If the twisted inner
jet morphology detected at 1.5 and 4.8 GHz (Figure \ref{fig:core}) is
real, this would support the K-H instability model. Further
high-dynamic-range VLBI maps of the inner jet region could test this
scenario.
In addition to the morphological discrepancy, the two models require
different physical origins. In the precessing-jet model, ballistic
knots are ejected in different directions which are associated with an
ordered rotation in the jet flow direction in the vicinity of the
central engine. If the precession results from a rotating injector at
the jet base (see discussion in Worrall et al. 2007), the precession
period of 0.2 million yr requires a radius of $17 \times
(\frac{M_{\bullet} }{10^9 M_{\odot}})^{1/3}$ pc, assuming the injector
is in a Keplerian motion around the black hole. This size scale is
much larger than the accretion disk, and so we may simply rule out the
possibility of an injection from the rotating accretion disk. Instead
the long-term precession can plausibly take place in a binary SMBH
system or a tilted accretion disk (e.g. \cite{Beg80,Lu05}). For
example, the precession period caused by a tilted disk is $\sim
2\times10^5$ yr, assuming a $3\times10^9 M_{\sun}$ SMBH for 3C~48, a
dimensionless viscosity parameter $\alpha=0.1$ and a dimensionless
specific angular momentum of the black hole $a=0.5$ \cite{Lu05}. In
this scenario, the short-term nodding motion can then be triggered by
the tidally-induced torque on the outer rim of the wobbling accretion
disk, analogous to SS~433 \cite{Kat82,Bat00}.
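The quoted Keplerian radius follows from Kepler's third law,
$r=(GM_{\bullet}P^2/4\pi^2)^{1/3}$; a quick numerical check with
standard constants reproduces it:
\begin{verbatim}
import numpy as np

G, M_sun, yr, pc = 6.674e-11, 1.989e30, 3.156e7, 3.086e16  # SI units

M = 1e9 * M_sun      # reference black-hole mass
P = 2e5 * yr         # estimated precession period
r = (G * M * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
print(r / pc)        # ~17, scaling as (M / 1e9 M_sun)^(1/3)
\end{verbatim}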
On the other hand, the helical K-H instability modes can be
triggered by ordered or random perturbations of the jet flow. The fits
with Model 2 give an initial perturbation period $\sim370$ yr,
which leads to a radius of $\sim 0.25 \times (\frac{M_{\bullet} }{10^9
M_{\odot}})^{1/3}$ pc where perturbations take place. This radius is
still larger than the size of the accretion disk, but at this size
scale it is still plausible for the perturbations to be due to
interactions between the jet flow and the broad-line-region clouds
(e.g. 3C120: \cite{Gom00}). However, the high Faraday depth and/or the
possible internal depolarization structure in the radio core A makes
it difficult to investigate this scenario through VLBI polarimetric
measurements. In addition, K-H instabilities would not only produce
simple helical modes, but also many other instability modes mixed
together; the K-H interpretation of the oscillatory 3C~48 jet on both
pc and kpc scales requires a selection of modes or a simple mix of
low-order modes. However, it is difficult to see how these required
modes are excited while others with higher growth rates are suppressed
(see the discussion of the wiggling filament in NGC~315 by Worrall et
al. 2007). Moreover, the K-H model has no ready explanation for
the observed large-scale gradual bending of the jet axis. Simple
kinematical models, such as reflection by an oblique shock or a
pressure gradient in the Narrow-Line-Region ISM, may not be adequate to
explain the bending of such a fast ($\gtrsim0.9c$) jet flow.
\section{Summary}
We have observed 3C~48 at multiple frequencies with the VLBA, EVN
and MERLIN with spatial resolutions between tens and hundreds of
parsec. Our principal results may be summarized as follows:
(1) The total-intensity MERLIN image of 3C~48 is characterized by two
components with comparable integrated flux density. A compact
component aligns with the VLBI jet, while an extended envelope
surrounds it. The extended emission structure becomes diffuse and
extends toward the northeast at $\sim$0.25 arcsec from the nucleus.
The extended component shows a steeper spectrum than the compact jet.
(2) In the VLBA and EVN images, the compact jet seen in the MERLIN
image is resolved into a series of bright knots. Knot A is
further resolved into two smaller features A1 and A2 in 4.8- and 8.3-GHz
VLBA images. A1 shows a flat spectrum with spectral index
$\alpha^{4.8}_{8.3}=-0.34\pm0.04$. A2 shows a steep spectrum with
$\alpha^{4.8}_{8.3}=-1.29\pm0.16$, and may be identified with the inner jet. The
brightness temperature of A1 is $>10^9$~K, much higher than the $T_b$
of A2. The flux densities of A1 and A2 in epoch 2004 show decreases of
100 and 60 per cent, respectively, compared with those in 1996. The high brightness
temperature, flat spectrum and variability imply that A1 is the
synchrotron self-absorbed core found close to the active nucleus.
(3) Comparison of the present VLBA data with those of 1996 January
20 strongly suggests that A2 is moving, with an apparent velocity
of $(3.7\pm0.4)c$ to the north. Combining the apparent superluminal
motion with the jet-to-counterjet intensity ratio yields a
constraint on the jet kinematics and geometry: the jet is
relativistic ($>0.85c$) and closely aligned with the line of sight ($<35\degr$).
(4) We present for the first time VLBI polarization images of 3C~48, which
reveal polarized structures with multiple sub-components in
component C. The fractional polarization peaks at the interface between
the compact jet and the surrounding medium, perhaps consistent with a
local jet-induced shock. The systematic gradient of the EVPAs across
the jet width at C can be attributed to the combination of a gradient
in the emission-weighted intrinsic polarization angle across the jet
and possibly a systematic gradient in the RM. Changing magnetic field
directions are a possible interpretation of the RM gradient, but other
alternatives cannot be ruled out. The fractional polarization of the
hot spot B increases towards higher frequencies, from $\sim1$ per cent
(1.6 GHz) through $\sim2.0$ per cent (4.8 GHz) to $12$ per cent (8.3 GHz). The
relatively low degree of polarization at lower frequencies probably
results from an unresolved Faraday screen associated with the NLR clouds
and/or internal depolarization in the jet itself. Hot spot B has a
higher RM than C, which can perhaps be attributed to a stationary shock
in the vicinity of B. The core A is unpolarized at all frequencies,
which may be the result of a tangled magnetic field in the inner part of
the jet.
(5) The combined EVN+MERLIN 1.65-GHz image and 1.5-GHz VLBA images show
that the bright knots trace out a wave-like shape within the jet. We
fitted the jet structure with a simple precession model and a K-H
instability model. Both models in general reproduce the observed
oscillatory jet trajectory, but neither of them is able to explain all
the observations. More observations are required to investigate the
physical origin of the helical pattern. Further monitoring of the proper
motion of the inner jet A2 should be able to constrain the ballistic
motion in the framework of the precessing jet. High-resolution VLBI
images of the inner jet region will be required to check whether or not
the jet flow is oscillating on scales of tens of mas, which might give a
morphological means of discriminating between the two models.
Sophisticated simulations of the jet would be needed to take into
account the deceleration of the jet flow due to kinetic energy loss via
jet-cloud interaction and radiation loss, but these are beyond the scope
of the present paper.
\section*{Acknowledgments}
TA and XYH are grateful for partial support for this work from the
National Natural Science Foundation of PR China (NSFC 10503008,
10473018) and Shanghai Natural Science Foundation (09ZR1437400). MJH
thanks the Royal Society (UK) for support. We thank Mark Birkinshaw
for helpful discussions on the jet kinematics. The VLBA is an
instrument of the National Radio Astronomy Observatory, a facility of
the US National Science Foundation operated under cooperative
agreement by Associated Universities, Inc. The European VLBI Network
(EVN) is a joint facility of European, Chinese, South African and
other radio astronomy institutes funded by their national research
councils. MERLIN is a National Facility operated by the University of
Manchester at Jodrell Bank Observatory on behalf of the UK Science and
Technology Facilities Council (STFC).
\section{Introduction}
Recently, F-theory compactifications on a Calabi--Yau 4-fold down to four dimensions have been argued to have interesting phenomenological features \cite{Beasley:2008dc,Beasley:2008kw}. In these compactifications the Calabi--Yau 4-fold is described by an elliptic fibration of a K3 over a complex 2-dimensional surface $S$. This surface $S$ is wrapped by 7-branes which are space-time filling in the non-compact dimensions. The gauge groups of the wrapped 7-branes can be obtained geometrically through the $A$-$D$-$E$-singularities of the elliptically fibered K3. Special attention has
been given to the case of exceptional gauge groups.
Seven-branes with exceptional gauge groups are poorly understood objects in type IIB string theory. We will study the symmetry enhancement from an open string point of view making use of a recent analysis of the simplest (flat world-volume) supergravity 7-brane solutions \cite{Bergshoeff:2006jj}, which emphasizes the supersymmetry properties of the solutions. This analysis has yielded a graphical representation of the 7-brane solutions that summarizes the global branch cut structure of the two analytic functions in terms of which the entire solution can be described. Here we will improve on these graphical representations in a way that allows us to study open strings in a background of 7-branes. The analysis will be concerned with the case of 24 (flat world-volume) 7-branes forming F-theory on K3 \cite{Vafa:1996xn}.
Open string descriptions of the $A$-$D$-$E$-singularities have already been studied a long time ago \cite{Johansen:1996am,Gaberdiel:1997ud,Gaberdiel:1998mv}.
The motivation to re-study the open string description of these singularities is that, with the work of \cite{Bergshoeff:2006jj}, we now have
full knowledge of the global branch cut structure of the complex axi-dilaton field $\tau$. This has led us to a different approach to the
problem, in which we avoid the use of the so-called B- and C-branes \cite{Johansen:1996am,Gaberdiel:1997ud,Gaberdiel:1998mv}. In our picture these branes are represented by (1,0) 7-branes which are hidden behind $S$ branch cuts. These $S$ branch cuts play an important role in our analysis.
\section{Seven-branes: a short review}\label{sec:sevenbranes}
We will start in Subsection \ref{subsec:solutions} with a short review of the 24 7-brane solution. The $A$-$D$-$E$-singularities are next discussed in Subsection \ref{subsec:ADEsingularities}. In Subsection \ref{subsec:graphical} we discuss the graphical representation \cite{Bergshoeff:2006jj} of the 7-brane solution presented in Subsection \ref{subsec:solutions}. The notion of 7-brane charges in the global solution is discussed in Subsection \ref{subsection:charges}. This section ends with Subsection \ref{subsec:othersolutions} where we make some comments regarding other 7-brane solutions.
\subsection{Solutions}\label{subsec:solutions}
The basic 7-brane solution with a compact transverse space requires 24 non-coincident 7-branes \cite{Greene:1989ya,Gibbons:1995vg}. This transverse space has the topology of $S^2$ with 24 punctures, the locations of the 24 7-branes. The solution is described by two analytic functions $\tau(z)$ and $f(z)$ in terms of which the metric (in Einstein frame) and the Killing spinor are given by
\begin{eqnarray}
ds^2 & = & -dx_{1,7}^2+\text{Im}\,\tau\vert f\vert^2dzd\bar z\,,\label{IIBbackgroundmetric}\\
\epsilon &=&\left(\frac{\bar f}{f}\right)^{1/4}\epsilon_0\,,\hskip 1truecm \text{with}
\hskip 1truecm \gamma_z\epsilon_0=0\,,\label{Killingspinor}
\end{eqnarray}
for some constant spinor $\epsilon_0$. The metric $dx_{1,7}^2$ denotes
8-dimensional Minkowski space-time. The 7-brane transverse space is
parametrized in terms of $z,\bar z$ which are fixed up to an
$SL(2,\mathbb{C})$ coordinate transformation. The solution preserves 16 supersymmetries provided that the Killing
spinor $\epsilon$ is given by Eq. \eqref{Killingspinor}
\cite{Bergshoeff:2006jj}\footnote{The conventions for the unbroken
supersymmetries we use here are slightly different from the ones
used in \cite{Bergshoeff:2006jj}.}. The holomorphic functions $\tau(z)$ and $f(z)$ are given by
\begin{eqnarray}
j(\tau) & = & \frac{P_8^3}{P_8^3+Q_{12}^2}\,,\label{IIBbackgroundtau}\\
f(z) & = & c\,\eta^2(\tau)\left(P_8^3+Q_{12}^2\right)^{-1/12}\,,\label{IIBbackgroundf}
\end{eqnarray}
for some nonzero complex constant $c$. The functions $j$ and
$\eta$ are Klein's modular $j$-function and the Dedekind
eta-function, respectively. Furthermore, $P_8$ and $Q_{12}$ are
arbitrary polynomials of degree $8$ and $12$, respectively, in the
complex coordinate $z$.
The complex axi-dilaton field can be interpreted as the modulus of a
2-torus that is elliptically fibered over the 7-brane transverse
space. If we describe this torus locally via a complex coordinate
$w$ then the complex 2-dimensional space parameterized in terms of
$z$ and $w$ forms a K3 surface \cite{Greene:1989ya,Vafa:1996xn}. The function $f$ has the
interpretation that $f\,dz\,dw$ is the holomorphic (2,0) form of the K3
\cite{Greene:1989ya}.
We have the following scale transformation (with complex parameter $\lambda$):
\begin{equation}
P_8\rightarrow \lambda^2P_8\,,\qquad Q_{12}\rightarrow\lambda^3Q_{12}\,.
\end{equation}
This transformation leaves $j(\tau)$ invariant and, provided we replace $c\rightarrow\lambda^{1/2}c$, it also leaves $f$ invariant. Combining this scale transformation with the $SL(2,\mathbb{C})$ coordinate freedom, we conclude that we can fix at will four complex parameters appearing in the polynomials $P_8$ and $Q_{12}$. Since $P_8$ and $Q_{12}$ together depend on 22 complex parameters, after fixing 4 of them we are left with 18 adjustable complex parameters. Hence, the 24 positions of the 7-branes are parameterized in terms of 18 complex parameters. The absolute value of $c$ can be associated with a real K\"ahler modulus, while the 18 complex parameters can be associated with the complex structure moduli of the K3. The argument of $c$ can be absorbed into a redefinition of $\epsilon_0$ and does not represent a modulus.
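Explicitly, using Eqs. \eqref{IIBbackgroundtau} and \eqref{IIBbackgroundf}, one checks that
\begin{eqnarray}
P_8^3+Q_{12}^2&\rightarrow&\lambda^6\left(P_8^3+Q_{12}^2\right)\,,\hskip 1truecm j(\tau)\rightarrow\frac{\lambda^6P_8^3}{\lambda^6\left(P_8^3+Q_{12}^2\right)}=j(\tau)\,,\nonumber\\
f&\rightarrow&\left(\lambda^{1/2}c\right)\eta^2(\tau)\,\lambda^{-1/2}\left(P_8^3+Q_{12}^2\right)^{-1/12}=f\,,\nonumber
\end{eqnarray}
so that the combined rescaling indeed acts trivially on the solution.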
The $z$-dependence of the axi-dilaton $\tau$ is summarized in
Fig. \ref{fig:mappingproperties}. The top left figure indicates the chosen fundamental domain
$F$ of $PSL(2,\mathbb{Z})$, together with its orbifold points
$\tau=i\infty,\tau = \rho$ and $ \tau = i$. The $j$-function maps
these orbifold points to the points $j=\infty, j=0$ and $j=1$,
respectively. The top right figure indicates the branch cuts of the
inverse $j$-function. The bottom figure
shows that the $z$-plane covers the $j$-plane 24 times. Under
the inverse mapping the point $j=\infty$ is mapped to 24 distinct points
$z_{i\infty}$ which are the 24 zeros of the polynomial $P_8^3 +
Q_{12}^2$. Similarly, the points $j=0$ and $j=1$ are mapped to 8
distinct points $z_\rho$ (which are the 8 zeros of $P_8$) and 12
distinct points $z_i$ (which are the 12 zeros of $Q_{12}$),
respectively. The points $z_{i\infty}$, $z_\rho$ and $z_i$ are those points where $\tau$ takes the values
$i\infty$, $\rho$ and $i$, respectively\footnote{This definition applies to a situation in which $\tau$ does not take its values in the covering space, but always in the fundamental domain.}. The branch cuts of the inverse $j$-function, i.e. of $\tau$ as a function of $z$, are indicated schematically in the lower figure of Fig. \ref{fig:mappingproperties}. The precise branch cut structure will be discussed in Subsection \ref{subsec:graphical}.
\begin{figure}
\centering
\vskip -30cm
\psfrag{iinfty}{$i\infty$}
\psfrag{infty}{$\infty$}
\psfrag{0}{0}
\psfrag{1}{1}
\psfrag{i}{$i$}
\psfrag{jt}{$j(\tau)$}
\psfrag{jtz}{$j(\tau(z)) = \frac{P^3_8}{P^3_8 + Q^2_{12}}$}
\psfrag{F}{$F$}
\psfrag{rho}{$\rho$}
\psfrag{ziinfty}{$z_{i\infty}$}
\psfrag{zrho}{$z_\rho$}
\psfrag{zi}{$z_i$}
\psfrag{x24}{$\times 24$}
\psfrag{x8}{$\times8$}
\psfrag{x12}{$\times 12$}
\psfrag{tplane}{$\angle^\tau$}
\psfrag{jplane}{$\angle^j$}
\psfrag{zplane}{$\angle^z$}
\includegraphics{Fig1}
\caption{In the top left figure we indicate our choice of fundamental domain of
the group $PSL(2,\mathbb{Z})$. The top right figure summarizes the
transformation properties of the $j$-function. The bottom figure is a
schematic representation of the branch cuts of the function
$\tau(z)$. The solid (dashed) line indicates a branch cut with a $T (S)$ transformation.}\label{fig:mappingproperties}
\end{figure}
Going counterclockwise around the branch points $z_{i\infty}$ or $z_i$, we measure a $T$ or $S$ $PSL(2,\mathbb{Z})$ transformation on $\tau$,
with $T$ and $S$ given by
\begin{equation}T= \begin{pmatrix}
1&1\cr 0&1\end{pmatrix}\,,\hskip 2truecm
S= \begin{pmatrix}
0&1\cr -1&0\end{pmatrix}\,.
\end{equation}
We will indicate a branch cut with a $T$ ($S$) transformation by a solid (dashed) line, as in Fig. \ref{fig:mappingproperties}. Later, we will refine this notation in order to also include the transformation of $f$, which transforms under $SL(2,\mathbb{Z})$ instead of $PSL(2,\mathbb{Z})$:
\begin{equation}\label{trafoSL2Z}
\tau\rightarrow\ \frac{a\tau+b}{c\tau+d}\,,\hskip 2truecm
f\rightarrow (c\tau+d)f\,.
\end{equation}
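These branch cut rules can be checked with elementary matrix algebra. The following minimal sketch (illustrative only) verifies the identities $S^2=-\mathbbm{1}$ and $(TS)^3=\mathbbm{1}$ used repeatedly below, and shows how crossing several cuts composes their monodromies:
\begin{verbatim}
import numpy as np

T = np.array([[1, 1], [0, 1]])
S = np.array([[0, 1], [-1, 0]])
I = np.eye(2, dtype=int)

assert np.array_equal(S @ S, -I)                            # S^2 = -1
assert np.array_equal(np.linalg.matrix_power(T @ S, 3), I)  # (TS)^3 = 1

def act(M, tau):
    # Moebius action of M = [[a, b], [c, d]] on the axi-dilaton tau
    (a, b), (c, d) = M
    return (a * tau + b) / (c * tau + d)

# crossing several branch cuts multiplies their monodromy matrices, e.g.
loop = S @ T @ T @ T
print(act(loop, 0.1 + 1.2j))
\end{verbatim}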
Sometimes we will use a description in which $\tau$ and $f$ take their values in the covering space, which for $\tau$ is the entire upper half plane; in that case, when a branch cut is crossed, $\tau$ and $f$ continuously change their values from one branch into an adjacent one. At other times we will use a description in which $\tau$ and $f$ take their values on a particular branch, which for $\tau$ is a fundamental domain; in that case, when a branch cut is crossed, $\tau$ and $f$ change their values discontinuously.
\subsection{Singularities and Gauge Groups}\label{subsec:ADEsingularities}
The 24 non-coincident 7-branes are located at the points
$z_{i\infty}$. At each of these points a 1-cycle of the fibered 2-torus shrinks to a point.
When a number of such points are made to
coincide, different types of singularities are formed. The type of
singularity depends on the details of the zeros of $P_8^3+Q_{12}^2$,
i.e. whether or not the zero of $P_8^3+Q_{12}^2$ is also a zero of
either $P_8$ and/or $Q_{12}$ and what the orders of the zeros of
$P_8$, $Q_{12}$ and $P_8^3+Q_{12}^2$ are. The singularities of an
elliptically fibered 2-torus have been classified by Kodaira (see
for example \cite{Barth}) and the relation between the singularity
type of the singular fibre with the order of the zeros of $P_8$,
$Q_{12}$ and $P_8^3+Q_{12}^2$ follows from applying Tate's algorithm
\cite{Tate}. The possible singularities are listed in Table
\ref{Tatealgorithm}, which has been adapted from
\cite{Bershadsky:1996nh}.
Table \ref{Tatealgorithm} is useful in determining the non-Abelian
factors of the 7-brane gauge groups. An $A_{n-1}$ singularity for
$n\ge 2$ leads to a gauge group $SU(n)$, a $D_{n+4}$ singularity to
a gauge group $SO(2(n+4))$ and the $E_6$, $E_7$ and $E_8$
singularities lead to the exceptional gauge groups $E_6$, $E_7$ and
$E_8$. The third to fifth rows of Table \ref{Tatealgorithm}
correspond to the Argyres--Douglas singularities
\cite{Argyres:1995jj,Argyres:1995xn}. The singularity type indicated by ``none'' in the first row
of Table \ref{Tatealgorithm} refers to the fact that the 7-brane
gauge group is trivial (there is no 7-brane since the order of the zero of $P_8^3+Q_{12}^2$ is zero), while the same singularity type in the third row means that the gauge group is Abelian.
Besides the non-Abelian gauge groups predicted by Table \ref{Tatealgorithm} there can additionally be various $U(1)$ factors coming from the 7-branes. We recall, as discussed in Subsection \ref{subsec:solutions}, that the solution has 18 free complex parameters, which can be associated with the complex structure moduli of the K3. From the 8-dimensional point of view these complex moduli reside in minimal vector supermultiplets. Therefore there are 18 $U(1)$ factors for 24 7-branes\footnote{We remark that there are additionally two more vectors residing in the minimal 8-dimensional gravity supermultiplet. These vectors do not participate in the symmetry enhancement.}. The fact that there is not a one-to-one relationship between $U(1)$ factors and 7-branes is due to certain global obstructions to the positioning of the 24 7-branes. This will be further discussed in Subsection \ref{subsection:charges}.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Order zero & Order zero & Order zero & Singularity & Conjugacy \\
$P_8$ & $Q_{12}$ & $P_8^3+Q_{12}^2$ & Type & class \\
\hline
&&&&\\
$\ge 0$ & $\ge 0$ & $0$ & none & $[\mathbbm{1}]$\\
&&&&\\
$0$ & $0$ & $n$ & $A_{n-1}$ & $[T^n]$\\
&&&&\\
$\ge 1$ & $1$ & $2$ & none & $[T^{-1}S]$ \\
&&&&\\
$1$ & $\ge 2$ & $3$ & $A_1$ & $[S]$ \\
&&&&\\
$\ge $2 & $2$ & $4$ & $A_2$ & $[(T^{-1}S)^2]$\\
&&&&\\
$2$ & $\ge 3$ & $n+6$ & $D_{n+4}$ & $[-T^n]$\\
&&&&\\
$\ge 2$ & $3$ & $n+6$ & $D_{n+4}$ & $[-T^n]$\\
&&&&\\
$\ge 3$ & $4$ & $8$ & $E_6$ & $[-T^{-1}S]$\\
&&&&\\
$3$ & $\ge 5$ & $9$ & $E_7$ & $[-S]$\\
&&&&\\
$\ge 4$ & $5$ & $10$ & $E_8$ & $[-(T^{-1}S)^2]$\\
&&&&\\
\hline
\end{tabular}
\end{center}
\caption{{\small{The Kodaira classification of singular fibres of an
elliptically fibered 2-torus. When the singularity in the fourth column is
called `none' it means that the group contains no non-Abelian
factor. The last column indicates the $SL(2,\mathbb{Z})$ conjugacy class of the singularity.}}}
\label{Tatealgorithm}
\end{table}
We now consider the local geometry of the 7-brane solution in the
neighborhood of a singularity. Consider for example the $E_6$
singularity for which the order of the zero of $Q_{12}$ must be
four. Suppose that this singularity is located at the point $z_0$ in the
transverse space. Then near $z_0$ we have
$Q_{12}\approx \rm{cst}(z-z_0)^4$. Further, according to Table
\ref{Tatealgorithm} we have $P_8 \approx \rm{cst}(z-z_0)^n$ with $n\ge 3$
and $P_8^3+Q_{12}^2 \approx \rm{cst}(z-z_0)^{8}$ so that near $z_0$ we have
$f \approx \rm{cst}\,\eta^2(\tau)(z-z_0)^{-2/3}$ and
$j(\tau)\approx\rm{cst}(z-z_0)^{3n-8}$. Since $n\ge 3$ we have $j=0$ at $z=z_0$, i.e. $\tau=\Lambda\rho$ where $\Lambda$ is some $PSL(2,\mathbb{Z})$ transformation. The most general transformation
that $\tau$ may undergo compatible with this local
expression is of the form
$\Lambda(\pm T^{-1}S)\Lambda^{-1}$. Next, the sign can be fixed by comparing the transformation of $f$ with Eq. \eqref{trafoSL2Z}. Thus we conclude that $E_6$
singularities are formed at fixed points of $SL(2,\mathbb{Z})$
transformations that belong to the $-T^{-1}S$ $SL(2,\mathbb{Z})$
conjugacy class, that we will denote by $[-T^{-1}S]$.
In a similar way one can relate the other singularities to $SL(2,\mathbb{Z})$ conjugacy classes. These conjugacy classes are indicated in the last column of Table \ref{Tatealgorithm}.
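As a small symbolic illustration (not part of the derivation), one can verify which points of the fundamental domain are fixed by representatives of these conjugacy classes, e.g. that $S$ fixes $\tau=i$ while $T^{-1}S$ fixes $\tau=\rho$:
\begin{verbatim}
import sympy as sp

tau = sp.symbols('tau')
T = sp.Matrix([[1, 1], [0, 1]])
S = sp.Matrix([[0, 1], [-1, 0]])

def fixed_points(M):
    # solve M(tau) = tau for the Moebius action of the 2x2 matrix M
    (a, b), (c, d) = M.tolist()
    return sp.solve(sp.Eq((a * tau + b) / (c * tau + d), tau), tau)

print(fixed_points(S))            # [-I, I]: tau = i in the upper half plane
print(fixed_points(T.inv() * S))  # contains rho = (-1 + sqrt(3)*I)/2
\end{verbatim}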
\subsection{Graphical Representations}\label{subsec:graphical}
We first discuss a refinement of Fig. \ref{fig:mappingproperties}, which also takes into
account the transformation properties of $f$. Our starting point is
the top right figure in Fig. \ref{fig:mappingproperties}. We will first distill out of this
figure a new Fig. \ref{fig:branchcutstructure} which contains the detailed branch cut
structure of the function $\tau(z)$. This new figure can be obtained
as follows. We start by taking all the points
$z_{\rho}^1,\ldots,z_{\rho}^8$, $z_{i}^1,\ldots, z_i^{12}$ and
$z_{i\infty}^1,\ldots,z_{i\infty}^{24}$ non-coinciding. The inverse
$j$-function has branch cuts running from $j=\infty$ to $j=0$ and
from $j=0$ to $j=1$. This means that each point $z_{i\infty}^1$ to
$z_{i\infty}^{24}$ has a branch cut connecting it to one of the eight points $z_{\rho}^1,\ldots,z_{\rho}^8$. At each point $z_{\rho}$ we thus have three $T$ branch cuts. Next we must include the $S$ branch cuts that connect the points $z_{\rho}$ and $z_i$. Since there are three $T$ branch cuts meeting at $z_{\rho}$ we must have three $S$ branch cuts meeting at $z_{\rho}$ as well. These eight times three $S$ branch cuts must then end on twelve points $z_{i}$. At each point $z_i$ two $S$ branch cuts meet. Now, we need to put the $T$ and $S$ branch cuts that meet at $z_{\rho}$ in such an order that the monodromy around $z_{\rho}$ is the identity in $PSL(2,\mathbb{Z})$. This will be the case if
we put the three $T$ and three $S$ branch cuts meeting at $z_\rho$ in alternating order. This follows from the $SL(2,\mathbb{Z})$ identity $(TS)^3=\mathbbm{1}$. Next we need to find the positioning of the $S$ branch cuts. At this point there are two choices:
\begin{enumerate}
\item We take two of the three $S$ branch cuts that originate from the same branch point $z_\rho$ to go to the
{\it same} point $z_i$.
\item We take all three $S$ branch cuts to go to three {\it different} points $z_i$.
\end{enumerate}
This shows that our construction of the graphical representation is not unique. If we always choose the first option then we obtain the $PSL(2,\mathbb{Z})$ part of Fig. \ref{fig:branchcutstructure}. The minus signs through some of the $S$ branch cuts refer to the $SL(2,\mathbb{Z})$ properties of the figure as will be explained next.
Now that we have fixed the positioning of the $S$ branch cuts, we
can consider the transformation properties of the function $f$. From
the expression for $f$, Eq. \eqref{IIBbackgroundf}, we know that $f$
does not transform when going locally around any of the points
$z_{i\infty}$, $z_\rho$ or $z_i$. Let us first consider those points
$z_i$ at which two $S$ branch cuts meet that come from the same
point $z_\rho$. Since $S^{2}=-\mathbbm{1}$ and the function $f$ transforms under the
$-\mathbbm{1}$ element of $SL(2,\mathbb{Z})$ we need to take one of
these two $S$ branch cuts to be a $-S$ branch cut. This has no
effect on the transformation properties of $\tau$ and realizes that
$f\rightarrow f$ when going around these $z_i$ points. There are
eight $z_i$ points in Fig. \ref{fig:branchcutstructure} which can be treated in this way. Consider now the
points $z_\rho$. In order that $f\rightarrow f$ when going around
$z_\rho$ we need to turn the $S$ branch cut that goes to a point
$z_i$ at which it meets an $S$ branch cut coming from another point
$z_\rho$ into a $-S$ branch cut, so that going counterclockwise
around $z_\rho$, not encircling any other branch points, gives the
$+\mathbbm{1}$ element of $SL(2,\mathbb{Z})$. As a result we are now
left with four points $z_i$ at which two $-S$ branch cuts meet. The
only way to realize that $f\rightarrow f$ when going around any of
these four points $z_i$ is to introduce a new branch cut. This
new branch cut cannot have any effect on $\tau$, so it must be a
$-\mathbbm{1}$ branch cut. Further, because the only points around
which $f$ does not yet transform to itself are these four points $z_i$, the
$-\mathbbm{1}$ branch cut must go from one point $z_i$ to another
point $z_i$. Since we have four points $z_i$ at which a
$-\mathbbm{1}$ branch cut ends we need two such new branch cuts.
Finally, in order not to introduce additional branch points these
$-\mathbbm{1}$ branch cuts are not allowed to intersect each
other. This explains all the
necessary steps to construct Fig. \ref{fig:branchcutstructure}. This figure provides a representation of the branch
cuts of the pair $(\tau(z),f(z))$.
Note that the way in which we have chosen to place the $-S$ and
$-\mathbbm{1}$ branch cuts is not unique. We could, for example, not
use any $-S$ branch cuts and instead place a $-\mathbbm{1}$ branch
cut between pairs of points $z_i$ in such a way that the
$-\mathbbm{1}$ branch cuts do not intersect.
It is useful to highlight all the choices that went into Fig. \ref{fig:branchcutstructure}.
These choices have been:
\begin{enumerate}
\item Choose a fundamental domain.
\item Choose a positioning of the $S$ branch cuts that leads to a proper representation of
$\tau(z)$.
\item Choose a positioning of the $-S$ and $-\mathbbm{1}$ branch
cuts that also takes into account the transformation properties of $f(z)$.
\end{enumerate}
Even though the branch cut representations are not unique, the allowed choices for the positioning of the $S$ branch cuts lead to certain global obstructions. We note that for each choice of positioning of the $S$ branch cuts it is possible to put certain 7-branes on top of each other (without modifying the $S$ branch cuts). This leads to manifest $SU$ symmetry groups. It will be shown later that different positionings of the $S$ branch cuts correspond to different embeddings of these $SU$ groups into the $A$-$D$-$E$ gauge groups that are eventually formed.
\subsection{Charges}\label{subsection:charges}
The charges of a 7-brane located at a point $z_{i\infty}$ will be
measured around an infinitesimal loop encircling the point
$z_{i\infty}$. A single 7-brane has charges $p$ and $q$ when
the $\tau$-monodromy along an infinitesimal loop around $z_{i\infty}$ is of the form:
\begin{equation}\label{pqcharges}
\Lambda T\Lambda^{-1} \hskip 2truecm \text{with} \hskip 2truecm
\Lambda=\begin{pmatrix} p&r\cr q&s\end{pmatrix}\,,
\end{equation}
and $sp-qr=1$. With this definition each point $z_{i\infty}$
corresponds to a $(1,0)$ 7-brane due to our choice of fundamental
domain. However, there are some $(1,0)$ 7-branes that are encircled
by $\pm S$ branch cuts. Thus, from the point of view of a base point
that lies outside the region enclosed by two $S$ branch cuts these
7-branes appear to have different charges $p$ and $q$ depending on
the loop that one uses to encircle such $S$ branch cut locked $(1,0)$ 7-branes. From now on we will refer to such $S$ branch cut locked 7-branes simply as locked 7-branes.
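For illustration, the monodromy \eqref{pqcharges} can be evaluated explicitly with a short symbolic computation (assuming only $sp-qr=1$); the result depends on the charges $p$ and $q$ alone:
\begin{verbatim}
import sympy as sp

p, q, r, s = sp.symbols('p q r s')
T       = sp.Matrix([[1, 1], [0, 1]])
Lam     = sp.Matrix([[p, r], [q, s]])
Lam_inv = sp.Matrix([[s, -r], [-q, p]])  # inverse of Lam when s*p - q*r = 1

M = sp.expand(Lam * T * Lam_inv)
M = M.subs(p * s, 1 + q * r)             # impose the unit determinant
print(sp.simplify(M))  # Matrix([[1 - p*q, p**2], [-q**2, 1 + p*q]])
\end{verbatim}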
The monodromy loops giving rise to different $p$
and $q$ charges are always large\footnote{For example, in the orientifold limit introduced by Sen \cite{Sen:1997gv} the monodromy around the two 7-branes that describe the split orientifold is computed at a finite distance from the two branes.} and never infinitesimally close to
the locked points $z_{i\infty}$. From this point of view one cannot view the locked 7-brane as being some $(p,q)$ 7-brane. For instance, although one can always put two $(1,0)$ 7-branes on top of each other one cannot force two locked 7-branes to coincide without altering the branch cut structure. The infinitesimal monodromy around a 7-brane
is determined by the choice of fundamental domain and this
can be chosen only once. If we denote the fundamental domain displayed
in the top left figure of Fig. \ref{fig:mappingproperties} by $F$ then the fundamental domain
$\Lambda[F]$ contains $(p,q)$ 7-branes as
defined in \eqref{pqcharges}.
In the graphical representation of Fig. \ref{fig:branchcutstructure} there are 16 $(1,0)$
7-branes that are not locked by $S$ branch cuts. This number depends on the graphical representation.
For instance, another choice of positioning the
$S$ branch cuts exists that leads to 18
$(1,0)$ 7-branes that are not locked by $S$ branch cuts. This branch cut representation can be obtained by taking twice the upper figure of Fig. \ref{fig:alternatives} in which the five coinciding 7-branes are taken apart. It is not possible to position the $S$ branch cuts in such a way that we have more than 18 unlocked branes. This agrees with the earlier observation that the maximal rank of the gauge group is 18. In the orbifold limits of $K3$ this can be observed from the analysis of \cite{Dasgupta:1996ij}.
\subsection{Other 7-brane solutions}\label{subsec:othersolutions}
We end this section with a few comments about other 7-brane
solutions. The non-compact solutions with 6 or 12 7-branes have a
graphical representation that can be inferred from Fig. \ref{fig:mappingproperties} by
simply considering the 24 7-brane solution as four copies of a
solution with 6 7-branes or two copies of a solution with 12 7-branes. Further,
it can be shown that any other supersymmetric 7-brane solution with
$\tau(z=\infty)$ arbitrary can be formed out of the 6, 12 or 24 7-brane solutions by taking certain 7-branes to coincide. This includes both solutions containing so-called Q7-branes and
solutions for which the $\tau$ monodromy group is a subgroup of
$PSL(2,\mathbb{Z})$. An example of the latter kind is given in \cite{Bergshoeff:2006jj}.
The name Q7-brane has been coined in \cite{Bergshoeff:2007aa} to
refer to a solution with $\tau(z)$ non-constant that contains
deficit angles at the points $z_{\rho}$ and $z_i$. Such deficit angles arise
when a locked 7-brane is put on top of an unlocked one\footnote{For a world-volume discussion supporting the point of view that a Q7-brane corresponds to a stack of 7-branes in which some 7-branes have charges that are $PSL(2,\mathbb{Z})$ transformed with respect to other 7-branes in the stack, see \cite{Bandos:2008bn}.}. The global branch cut structure of solutions with either Q7-branes or a monodromy group that is a subgroup of $PSL(2,\mathbb{Z})$ cannot be obtained by continuously deforming the $T$ and $S$ branch cuts of a solution with 24 non-coinciding 7-branes and must therefore be studied separately using new branch cut rules.
In the supergravity approximation, the result of putting a locked 7-brane on top of an unlocked one appears as a single brane that couples to an 8-form potential that is outside the $SL(2,\mathbb{Z})$ orbit of the RR 8-form. By electro-magnetic duality this same 8-form potential magnetically sources so-called Q-instantons \cite{Bergshoeff:2008qq}. The Q-instantons relate to new vacua of the quantum axi-dilaton moduli space $SO(2) \backslash PSL(2,\mathbb{R})\slash PSL(2,\mathbb{Z})$ and are argued to be relevant for the IIB theory in the neighborhood of the orbifold points $\tau_0=i,\rho$ (and their $PSL(2,\mathbb{Z})$ transforms) of the quantum moduli space.
\section{BPS open strings}\label{sec:BPSstrings}
Consider any of the 24 7-branes of Fig. \ref{fig:branchcutstructure} and consider an open $(1,0)$ string that has not yet crossed any branch cuts and that has one endpoint ending on the 7-brane which is thus a $(1,0)$ 7-brane. Let us now follow this string along some path $\gamma$ that will generically cross some number of branch cuts going from this 7-brane to another one.
If we allow $\tau$ (and $f$) to take values in the covering space then when a branch cut is crossed $\tau$ changes its values continuously from one branch (fundamental domain) into an adjacent one.
The string tension of a $(1,0)$ string is then only continuous across the branch cut if the string charges do not transform, i.e. a $(1,0)$ string remains a $(1,0)$ string. If the $(1,0)$ string would cross a number of branch cuts whose overall $SL(2,\mathbb{Z})$ transformation equals $\Lambda$ and subsequently approaches a 7-brane that 7-brane would appear to be some $(p,q)$ 7-brane with monodromy $\Lambda T\Lambda^{-1}$, i.e.~the 7-brane charges have changed. Alternatively, we can assume that $\tau$ and $f$ always take their values on some particular branch. In this case $\tau$ and $f$ change their values discontinuously when crossing a branch cut. If we do this then all the 7-branes are always of $(1,0)$ type. In order for the string tension to change continuously when the string crosses a number of branch cuts whose overall $SL(2,\mathbb{Z})$ transformation is $\Lambda$ the string charges at the end of the string must be $\Lambda\textstyle{\left(\begin{array}{c}1\\0
\end{array}\right)}$. In summary, from the point of view of a string, working in the covering space means that the 7-brane charges transform, while working with a fixed branch means that the string charges transform.
In any case, regardless of one's point of view, going back to the $(1,0)$ string with one endpoint on the 7-brane: in order for it to end on another 7-brane it must cross a number of branch cuts for which $\Lambda$ is such that it leaves the $(1,0)$ string invariant, i.e. $\Lambda=\pm T^k$ for some $k\in\mathbb{Z}$.
Paths $\gamma$ along which the overall $SL(2,\mathbb{Z})$ transformation is $\pm T^k$ can in general have self-intersections. However, if these paths are to represent possible profiles of open strings self-intersections are not allowed. Classically, this is because the endpoints of an open string move at the speed of light in order to counteract the tension of the string preventing it from collapsing to a point. There is no way to sustain a closed loop that would be formed if the string were to self-intersect. Therefore we must restrict to simple paths, i.e.~paths without self-intersections, along which the overall $SL(2,\mathbb{Z})$ transformation is given by $\Lambda=\pm T^k$.
We can always have the $(1,0)$ string loop a sufficient number of times around the initial or final brane, or, when a number of 7-branes coincide at one point, have the $(1,0)$ string cross, in a suitable manner, a sufficient number of $T$ branch cuts, so that the overall $SL(2,\mathbb{Z})$ transformation along the string is $\pm\mathbbm{1}$. As will be explained in the next section, this does not lead to inequivalent strings. Therefore, from now on we will restrict our attention to those simple curves along which $\Lambda=\pm\mathbbm{1}$.
A string stretched between two non-coinciding 7-branes is massive and can only become massless when the two 7-branes are made to coincide; it must always lie along a simple path connecting the two 7-branes. Hence, in order to understand the open string origin of 7-brane gauge groups we must find those strings that lie along simple curves with $\Lambda=\pm\mathbbm{1}$ whose masses are BPS. Once these BPS strings have been identified one can try to associate them to the various generators of the 7-brane gauge group.
We will assign an orientation to each open string. When $\Lambda=-\mathbbm{1}$ along the BPS string then, with $\tau, f$ taking values in the covering space, the $(1,0)$ string starts on a $(1,0)$ 7-brane and ends on a $(-1,0)$ 7-brane. Alternatively, with $\tau, f$ taking their values on some fixed branch, the string starts as a $(1,0)$ string on a $(1,0)$ 7-brane and ends as a $(-1,0)$ string on a $(1,0)$ 7-brane. In the latter case the directionality along the string has flipped.
Due to our choice of fundamental domain a string always starts and ends on a $(1,0)$ 7-brane (up to signs). If we allow $\tau$ and $f$ to take values in the covering space the string charges do not transform and it is sufficiently general to consider only the tension of a $(1,0)$ string in order to compute the mass of the stretched string. The mass of a $(1,0)$
string that is stretched along some simple curve $\gamma$, denoted by $m_{1,0}$, is given by
\begin{equation}
m_{1,0}=\int_{\gamma}T_{1,0}ds=\int_{\gamma}\vert fdz\vert=\int_{\gamma}\vert dw_{1,0}\vert\,,
\end{equation}
where $ds$ is the Einstein frame line element, $T_{1,0}$ is
the tension of a $(1,0)$ string,
\begin{equation}
T_{1,0}=\left(\text{Im}\,\tau\right)^{-1/2}\,,
\end{equation}
and $dw_{1,0}$ is defined to be
\begin{equation}
dw_{1,0}=fdz\,.
\end{equation}
Denoting a path $\gamma$ along which the mass of the $(1,0)$ string
attains its lower bound by $\gamma_{\text{BPS}}$, the BPS mass is given by
\begin{equation}\label{BPS}
m_{1,0}^{\text{BPS}}=\Big\vert\int_{\gamma_{\text{BPS}}}dw_{1,0}\Big\vert\,.
\end{equation}
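This expresses the usual BPS bound, obtained by applying the triangle inequality pointwise along the path:
\begin{equation}
m_{1,0}=\int_{\gamma}\vert dw_{1,0}\vert\;\ge\;\Big\vert\int_{\gamma}dw_{1,0}\Big\vert\,,
\end{equation}
with equality precisely when the image of $\gamma$ in the $w_{1,0}$-plane is a straight line, i.e.~when $\arg(f\,dz)$ is constant along the string.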
Hence BPS strings lie along $\gamma_{\text{BPS}}$. These are non-self-intersecting paths from $z^i_{i\infty}$ to $z_{i\infty}^j$ with $i\neq j$ and $i,j=1,\ldots,24$ along which the overall $SL(2,\mathbb{Z})$ transformation is $\pm\mathbbm{1}$. This definition applies to a situation in which none of the 7-branes coincide. As we will see, when some 7-branes are put on top of each other at, say, $z_{i\infty}^1$, there can also exist BPS strings that lie along non-self-intersecting paths, with overall $SL(2,\mathbb{Z})$ transformation $-\mathbbm{1}$, that go from $z_{i\infty}^1$ back to itself along some non-contractible loop.
\section{Open strings and the $A$-$D$-$E$-singularities}\label{sec:gaugegroups}
Before we discuss specific cases we first make some general observations.
In order to study symmetry enhancement using open strings we need to isolate those 7-branes which when made to coincide give rise to a certain gauge group. These branes can be identified as follows.
In the solution with no coinciding 7-branes, the branes that, when made to coincide, give rise to the gauge groups of Table
\ref{Tatealgorithm} can be found by encircling a group of 7-branes that satisfies two criteria:
\begin{enumerate}
\item The number of 7-branes that is encircled is given by the third column of Table 1.
\item When encircling these 7-branes by a loop (with
winding number one) the $SL(2,\mathbb{Z})$ monodromy must belong to the
conjugacy class\footnote{This loop can be continuously deformed to any other loop that encircles the same number of branes and that belongs to the same $SL(2,\mathbb{Z})$ conjugacy class.} that is given in the fifth column of Table 1.
\end{enumerate}
Examples of monodromy loops encircling a group of 7-branes which when made to coincide give rise to certain $A$- and $D$-type gauge groups are given in Fig. \ref{fig:conjugacyclassloops}.
One could also encircle a group of 7-branes that does not satisfy the above two criteria. For those cases there is no limit in which the group of 7-branes can be made to coincide, so that BPS strings cannot become massless. Such groups of non-collapsable 7-branes have been studied in \cite{DeWolfe:1998eu,DeWolfe:1998pr} and will not be considered further here.
The loop encircling a number of 7-branes around which the monodromy belongs to some $SL(2,\mathbb{Z})$ conjugacy class can be considered to form the boundary of a punctured disk where each puncture corresponds to some 7-brane. In general, a necessary condition for two strings to be inequivalent is that they lie along two non-self-intersecting $\Lambda=\pm\mathbbm{1}$ paths that are homotopically distinct in the sense of the homotopy of this punctured disk. There is an important exception to this statement which concerns strings crossing branch cuts of branes they can end on. A $(1,0)$ string going from point $A$ to point $B$ (both lying inside the disk) along $\gamma_a$ crossing a $T$ branch cut (and no other branch cuts) is equivalent to a $(1,0)$ string going from point $A$ to point $B$ along $\gamma_b$ without crossing any branch cuts, i.e.~the string along $\gamma_b^{-1}\gamma_a$ is contractible. Put another way, for strings that cross only $T$ branch cuts the inequivalence of strings depends only on the starting point and endpoint and not on the homotopy.
Before we embark on a discussion of open BPS strings and gauge group generators let us briefly recall the symmetry enhancement for a stack of $n$ D7-branes in perturbative string theory. In this case the symmetry group is $U(n)$. The Cartan subalgebra of $U(n)$, which is $(U(1))^n$, has as many $U(1)$ factors as there are D7-branes. Each string has a definite orientation and the charges at the string endpoints couple to vectors that are associated to the $U(1)$'s of the Cartan subalgebra. The symmetry enhancement comes from massive BPS strings stretched between the different D7-branes. Taking all possible orientations into account there are $n(n-1)$ such BPS strings \cite{Witten:1995im}. In the context of perturbative string theory this analysis has been generalized to include orientifolds \cite{Gimon:1996rq}. For example consider a stack of 4 D7-branes and one O7-plane\footnote{In the context of the global 7-brane solution the O7-plane can be viewed as an approximate solution in which the two locked 7-branes of Fig. \ref{fig:branchcutssu(4)} have been put on top of each other, see \cite{Sen:1996vd}. An exact solution with an O7-plane is obtained only once the four D7-branes are put on top of the O7-plane.}. In this case the gauge group is $SO(8)$. The 4 D7-branes give rise to the gauge group $U(4)$. By including additional BPS strings that go from the stack of 4 D7-branes to the O7-plane and back, additional Chan--Paton states arise that, together with the $U(4)$ states, give rise to $SO(8)$. The Cartan subalgebra of $SO(8)$ is given by the Cartan subalgebra of $U(4)$.
This familiar situation from perturbative string theory does not straightforwardly apply to the case of F-theory on K3, for a number of reasons. First of all we note that the number of 7-branes in the third column of Table~\ref{Tatealgorithm} does not equal the rank of the gauge groups. For $A_{n-1}$, $D_{n+4}$, $E_6$, $E_7$ and $E_8$ we have
\begin{equation}\label{ranknumberbranes}
\text{\#7-branes}=\text{rank of non-Abelian part of 7-brane gauge group}+m\,,
\end{equation}
where $m=1$ for the $A$ series related to the $[T^n]$ conjugacy classes and $m=2$ for the $A$ series related to the Argyres--Douglas singularities as well as for the $D$ and $E$ series. Hence, the Cartan subalgebra is not related to the individual branes. Further, as discussed earlier, the total number of $U(1)$ factors coming from 7-branes is 18 while the total number of 7-branes is 24.
Another complication that follows from \eqref{ranknumberbranes} is that, except for the $[T^n]$ conjugacy classes, the number of BPS strings is larger than the number of generators in the gauge group that lie outside the Cartan subalgebra. One can draw BPS strings between each pair of 7-branes inside the loop that encircles the 7-branes forming the singularity. Further, one can draw homotopically inequivalent BPS strings between the same pair of 7-branes. We cannot unambiguously say when the different BPS strings represent different generators because we have no means to assign the $U(1)$ charges to the string endpoints. We conclude that the relation between BPS strings and gauge group generators for the cases in which no branes are coinciding is not one-to-one.
The situation is improved when a subset of the 7-branes is made to coincide. This has two effects. First of all it enables us to use irreps of the non-Abelian algebra that is formed by putting some branes on top of each other to label the Chan--Paton states at the endpoints of the string. We can thus associate Chan--Paton states at the endpoints of the open strings with the definite location of a stack of some number of 7-branes. Secondly, the number of BPS strings will be much smaller than in a situation in which all branes are non-coincident\footnote{The reduction in the number of BPS strings is larger than the number of BPS strings that disappear due to the symmetry enhancement on the stack of branes.}. This latter fact can be explained as follows. Suppose two 7-branes are made to coincide along some path $\gamma$. Then any BPS string which crosses $\gamma$ ceases to exist once these two 7-branes are coincident. In fact, as will be shown in the $E_6$ case, putting a sufficiently large number of branes coincident can even lead to a situation in which no BPS strings exist at all. The effect of putting some number of 7-branes on top of each other is to make certain subgroups of the gauge group manifest.
Using Chan--Paton states for string endpoints on a stack of 7-branes only partially solves the problem of relating open strings to gauge group generators. Only those strings that have both their endpoints on a stack, which we will refer to as stack-to-stack-strings, can be related to gauge group generators. Since the $U(1)$ charges of the Cartan subalgebra are not manifest it is not possible to tell when two strings ending on at least one single brane are (in)equivalent, so that we cannot map each individual string to some generator. Such strings occur in two types: 1) strings with both endpoints on different single branes, referred to as single-to-single-brane-strings, and 2) strings with one endpoint on a stack and one endpoint on a single brane, referred to as stack-to-single-brane-strings. Below we will relate the stack-to-stack-strings in number to certain gauge group generators, and for the stack-to-single-brane- and single-to-single-brane-strings we will derive consistency conditions in order for them to describe the remaining gauge group generators.
\subsection{$A$ series}\label{Aseries}
As follows from Table \ref{Tatealgorithm} the $A$-type gauge groups
are related to the following $SL(2,\mathbb{Z})$ conjugacy classes:
$[T^n]$, $[T^{-1}S]$, $[S]$ and $[(T^{-1}S)^2]$. For $[T^{-1}S]$ the gauge group
is Abelian.
The case of $[T^n]$ corresponds to the familiar situation of $n$ coinciding D7-branes. We know from \cite{Witten:1995im} that the gauge group in this case is $U(n)$. We cannot, however, state that the $U(1)$ factor in $U(n)=U(1)\times SU(n)$ corresponds to this group of branes for reasons explained above.
The $SL(2,\mathbb{Z})$ conjugacy classes $[T^{-1}S]$, $[S]$ and $[(T^{-1}S)^2]$ correspond to the Argyres--Douglas singularities \cite{Argyres:1995jj,Argyres:1995xn}. The groups of 7-branes for these cases are shown in Fig. \ref{fig:conjugacyclassloops}.
For the $[T^{-1}S]$ case there does not exist a BPS string that lies inside the region bounded by the $[T^{-1}S]$ loop. Hence, no symmetry enhancement can occur. From Table \ref{Tatealgorithm} we know that indeed the gauge group is Abelian.
For the $[S]$ and $[(T^{-1}S)^2]$ cases we can draw BPS strings between any pair of 7-branes inside the $[S]$ and $[(T^{-1}S)^2]$ loops of Fig. \ref{fig:conjugacyclassloops}. The generators
that lie outside the Cartan subalgebra correspond to strings starting
and ending on different 7-branes. For $SU(n)$ groups the
number of generators outside the Cartan subalgebra is $n(n-1)$. Hence, for the $[S]$ and $[(T^{-1}S)^2]$ cases we need one and three BPS strings (taking into account that each string can have two orientations), respectively. The fact that we can draw strings between any pair of 7-branes means that there are more BPS strings than gauge group generators. This situation is resolved (and not just improved) by putting some of the 7-branes on top of each other. For the $[S]$ and $[(T^{-1}S)^2]$ cases we can put two and three 7-branes coincident, respectively, see Fig. \ref{fig:noBPSstringADsings}. Now we have manifest $SU(2)$ and $SU(3)$ symmetries and it can be shown that just as in the $[T^{-1}S]$ case there do not exist any BPS strings that lie stretched between the stack of coinciding 7-branes and the 7-brane that is locked by the $\pm S$ branch cuts nor do there exist strings that go from the stack back to itself around some non-contractible loop.
\subsection{$D$ series}\label{Dseries}
The relation between BPS open strings and gauge group generators for the $D_4$ case can be studied by making an $SU(4)$ subgroup manifest. Fig. \ref{fig:branchcutssu(4)} shows the branch cut representation when four 7-branes are made to coincide inside the $[-\mathbbm{1}]$ loop encircling six 7-branes. The only BPS string that still exists in this case is the one drawn in Fig. \ref{fig:BPSSO(8)}.
Fig. \ref{fig:BPSSO(8)} represents the three locations of the 7-branes of Fig. \ref{fig:branchcutssu(4)} but without the branch cuts. Strictly speaking one should also draw those but in order not to make the pictures too messy we leave them out. Branes 1 and 2 correspond to the locked 7-branes of Fig. \ref{fig:branchcutssu(4)}. Brane 3 corresponds to the stack of four 7-branes. It turns out that the only admissible BPS string is the one that starts and ends on the stack of four 7-branes and that loops around the other two locked 7-branes.
The generators outside the $SU(4)$ subgroup of $SO(8)$ can be represented by their $SU(4)$ representation. We have the following branching rule for the decomposition of the adjoint representation of $SO(8)$ in terms of irreps of the subalgebra $SU(4)\times U(1)$:
\begin{equation}
\mathbf{28} \rightarrow \mathbf{1}+\mathbf{15}+\mathbf{6}+\bar{\mathbf{6}}\,,
\end{equation}
where we have suppressed the $U(1)$ labels because we will not be able to relate those to properties of the BPS strings anyway. The $SU(4)$ singlet $\mathbf{1}$ is one of the elements of the Cartan subalgebra of $SO(8)$. The $\mathbf{15}$ of $SU(4)$ is made manifest and does not require any massive BPS strings. We are left with the $\mathbf{6}$ and $\bar{\mathbf{6}}$ of $SU(4)$ which are the antisymmetric rank two tensor of $SU(4)$ and its conjugate, respectively.
The BPS string drawn in Fig. \ref{fig:BPSSO(8)} has $\Lambda=-\mathbbm{1}$. This means that the brane in the stack on which the string starts and the brane on which it ends must be different. Counting both orientations along the string there are 4 times 3, i.e.~12 such strings. For each orientation there are thus 6 such strings. These are the sought-after $\mathbf{6}$ and $\bar{\mathbf{6}}$ irreps of $SU(4)$. We see that conjugation of the $SU(4)$ irrep corresponds to orientation reversal along the string.
Instead of making $SU(4)$ manifest we could also have made, say, an $SU(2)\times SU(2)\times SU(2)$ subgroup of $SO(8)$ manifest using a different branch cut representation. The reason that $SU(4)$ is attractive is that the generators outside the $U(4)$ subgroup of $SO(8)$ are antisymmetric rank two tensors, and for such irreps we know how to relate them (in contrast to, for example, singlet representations) to the BPS strings. In perturbative string theory the $U(4)$ subgroup is clearly the natural one to explain the emergence of $SO(8)$. Also in the split orientifold case studied in \cite{Sen:1996vd} the $SU(4)$ group plays an important role. It is not a priori guaranteed that there exists an open string description of $SO(8)$ via some other subgroup such as $SU(2)\times SU(2)\times SU(2)$. It can for example happen that making such a symmetry manifest leads to fewer BPS strings than generators. When this is the case such a construction will not work. This is in fact what happens when $SU(2)\times SU(2)\times SU(2)$ is made manifest.
The $D_{n+4}$-type gauge groups are related to the $[-T^n]$ conjugacy classes. The set of 7-branes giving rise to $SO(10)$ are encircled in Fig. \ref{fig:conjugacyclassloops} by the $[-T]$ loop. In this case in order to have a one-to-one relation between BPS strings and gauge group generators we have to make an $SU(5)$ subgroup manifest. The analysis proceeds analogously to the $D_4$ case and will not be given here. For the $D_{n+4}$ case we must make an $SU(n+4)$ subgroup manifest.
\subsection{$E$ series}\label{Eseries}
In the case of the $D_{n+4}$ series, making $SU(n+4)$ manifest, we always have one antisymmetric rank two tensor irrep of $SU(n+4)$ and one BPS string going from the stack of $n+4$ branes back to itself. As we will see, for the $E_n$ series with $n=6,7,8$ making, for example, a certain $SU(m)\times (U(1))^k$ subgroup with $n=m-1+k$ manifest will in general lead to more than one antisymmetric rank two tensor irrep of the $SU(m)$ subgroup. These antisymmetric rank two tensors differ only in their $k$ $U(1)$ charges of the $U(1)$'s in the decomposition $E_n\rightarrow SU(m)\times (U(1))^k$. Since these $U(1)$ charges cannot be made manifest, one may wonder how we can reliably state that the number of inequivalent BPS strings going from the stack to itself equals the number of antisymmetric rank two tensor irreps of $SU(m)$. The reason is the following.
Suppose we have two strings $a$ and $b$ lying along the paths $\gamma_a$ and $\gamma_b$, respectively, each of which is non-contractible starting and ending on the stack. Suppose that the BPS strings $a$ and $b$ have exactly the same Chan--Paton labels with respect to the $SU(m)\times (U(1))^k$ subgroup. Then it follows that the string lying along $\gamma_b^{-1}\gamma_a$ has trivial Chan--Paton labels and must correspond to a Cartan generator. These are formed by massless strings starting and ending on the stack. Inside the region, $D$, bounded by the loops encircling the collapsable 7-brane configurations of Table \ref{Tatealgorithm} such massless strings lie along contractible paths (in the sense of the homotopy of $D$) and hence if strings $a$ and $b$ have the same Chan--Paton labels then $\gamma_b^{-1}\gamma_a$ must be contractible or in other words $\gamma_a$ and $\gamma_b$ are homotopically equivalent. Therefore, homotopically inequivalent strings starting and ending on the same stack must have different sets of $k$ $U(1)$ charges and it follows that the number of such inequivalent BPS strings must match the number of antisymmetric rank two tensors in the decomposition of some $E_n$ group according to $E_n\rightarrow SU(m)\times (U(1))^k$. We will verify that this is indeed the case.
The matching of stack-to-stack-strings with generators transforming as antisymmetric rank two tensors of $SU(m)$ provides a nontrivial step towards an open string interpretation of the exceptional gauge groups. However, since in the decomposition of the adjoint representation of $E_n$ according to $E_n\rightarrow SU(m)\times (U(1))^k$ there also appear states that are in the fundamental or singlet of $SU(m)$ it is difficult to match all the generators to strings because for strings with only one or no endpoints on the stack we have no general argument to relate them in number to the fundamental or singlet irreps of $SU(m)$. Still, for each of the cases $E_n$ with $n=6,7,8$, as we will see below, it is possible to perform certain consistency checks regarding the number of such strings.
We will next motivate why we make $SU(m)\times (U(1))^k$ subgroups of $E_n$ manifest and fix what $m$ and $k$ should be for each of the cases $n=6,7,8$. The higher the symmetry we make manifest, the fewer BPS strings there generically will be. In general when we make $SU(k)\times SU(l)$ subgroups manifest the adjoint of $E_n$ decomposes into irreps that are e.g.~in the fundamental of $SU(k)$ and in the antisymmetric rank two of $SU(l)$. Such states cannot be realized using open strings. If we make $SU(m)$ subgroups manifest with a relatively high value of $m$, such as $m=6$ for $E_6$ or $m=7$ for $E_7$, then there typically appear antisymmetric tensor representations of rank higher than two, and these too cannot be realized with open strings. For branching rules of the exceptional symmetry groups we refer to e.g.~\cite{Slansky:1981yr}. Open strings can only account for singlets, fundamental and antisymmetric rank two tensor irreps, and only in such a way that their conjugates also appear. This is because for each BPS string we always have both orientations.
In general the branching rule of the adjoint decomposition of some to-be-formed gauge group with respect to some $SU$ subgroup (or possibly a direct product of $SU$ subgroups) depends on the embedding. A simple condition the subgroup must satisfy is that its rank must equal that of the gauge group; the embedding therefore always goes via a maximal subalgebra. We will choose subgroups of the $E$-type gauge groups for which the adjoint decomposition always gives the same irreps of the subgroup (ignoring possible differences in $U(1)$ charges) regardless of the maximal subalgebra via which it is embedded. Having branching rules independent of the embedding is convenient because it means that when we study BPS strings we do not need to worry about which embedding is realized by a certain branch cut representation compatible with some manifest $SU$ group.
In order that the adjoint decomposition of $E_n$ only leads to open-string-realizable states such that the irreps (apart from $U(1)$ charges) are the same regardless of the embedding (which must always go via a maximal subalgebra), we choose the $SU(m)\times (U(1))^k$ subgroups of $E_n$ of Table \ref{Esubgroups}. We will thus work with a manifest $SU(5)$ symmetry group.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ccc}
\hline
$n$ & $m$ & $k$ \\
\hline
6 & 5 & 2 \\
7 & 5 & 3 \\
8 & 5 & 4\\
\hline
\end{tabular}
\end{center}
\caption{{\small{$SU(m)\times (U(1))^k$ subgroups of $E_n$ for which the adjoint decomposition of $E_n$ gives the same number of open string realizable irreps of $SU(m)$ for every embedding of $SU(m)\times (U(1))^k$ into $E_n$ with $n=m-1+k$.}}}
\label{Esubgroups}
\end{table}
Any BPS string that exists inside a $[-T^{-1}S]$, $[-S]$ or $[-(T^{-1}S)^2]$ loop with a manifest $SU(5)$ automatically carries a representation with respect to $SU(5)$, and these representations appear in the adjoint decomposition of $E_n$. Granted that there exists an open string description of the exceptional singularities, we have the following consistency conditions for these open strings.
\begin{enumerate}
\renewcommand{\labelenumi}{\Roman{enumi}.}
\item For the strings that start and end on the stack with $\Lambda=-\mathbbm{1}$, by the argument given at the beginning of this section, we know that, regardless of the branch cut representation, there should always be as many such strings as there are antisymmetric rank two tensors of $SU(5)$ in the adjoint decomposition of $E_n$.
\newcounter{last}
\setcounter{last}{\value{enumi}}
\end{enumerate}
For the strings with at least one endpoint on a single brane we cannot state for a given branch cut structure exactly how many such strings there should be. In fact the number of such strings varies depending on the positioning of the branch cuts\footnote{The number of branes on the same branch is the number of branes that can be connected by non-self-intersecting $\Lambda=+\mathbbm{1}$ paths that do not cross any branch cuts. This number determines which $SU$ symmetry groups corresponding to the $[T^n]$ conjugacy classes can be realized without changing the positioning of the $S$ branch cuts. BPS strings that cross the direct paths between such branes no longer exist once these branes are put on top of each other. Since making a certain $SU$ symmetry group manifest can in general be done in different ways depending on the positioning of the $S$ branch cuts the number of BPS strings can differ for different branch cut representations each of which makes, with the same value for $m$, an $SU(m)$ group manifest.}. A simple consistency condition is then
\begin{enumerate}
\setcounter{enumi}{\value{last}}
\renewcommand{\labelenumi}{\Roman{enumi}.}
\item There are always at least as many stack-to-single-brane-strings as there are fundamentals in the adjoint decomposition of $E_n$ and there are at least as many single-to-single-brane-strings (counting both orientations) as there are singlets outside the Cartan subalgebra.
\end{enumerate}
\bigskip
\noindent We will now discuss the cases of $E_6, E_7$ and $E_8$ separately.
\bigskip
\noindent $\bullet\ \ E_6$
\medskip
We start with the case of the $E_6$ gauge group. When we make an $SU(5)$ symmetry manifest one possible branch cut representation is the one given in Fig.~\ref{fig:branchcutsEgroups} where for $E_6$ we should consider the $[-T^{-1}S]$ loop. In Fig.~\ref{fig:branchcutsEgroups} there are three locked 7-branes denoted by 1, 2 and 3 and there is a stack of five 7-branes located at the point labelled 4. Because there is now a manifest $SU(5)$ group, 20 of the 72 generators outside the Cartan subalgebra of $E_6$ are taken care of.
The decomposition of the adjoint representation of $E_6$ according to the subalgebra $SU(5)\times (U(1))^2$ is
\begin{equation}
\mathbf{78}\rightarrow\mathbf{24}+2\cdot\mathbf{1}+2\cdot\left(\mathbf{10}+\bar{\mathbf{10}}\right)+\mathbf{5}+
\bar{\mathbf{5}}+2\cdot\mathbf{1}\,.
\end{equation}
The $U(1)$ charges have been suppressed. By $2\cdot\mathbf{1}$ we mean two singlets and by $2\cdot\left(\mathbf{10}+\bar{\mathbf{10}}\right)$ we mean two sets of $\mathbf{10}+\bar{\mathbf{10}}$ irreps. There are in total four singlets, separated into two sets of two. The $U(1)$ charges can still depend on the embedding; we only need two facts about these charges that are independent of it. The first set of states $2\cdot\mathbf{1}$ contains two singlets whose $U(1)$ charges all vanish and corresponds to two Cartan generators. The second set of states $2\cdot\mathbf{1}$ contains two singlets with nonzero $U(1)$ charges, the charges of one singlet being opposite to those of the other, and corresponds to generators outside the Cartan subalgebra.
From the argument at the beginning of this subsection we know that the number of antisymmetric rank two tensors equals the number of inequivalent strings that start and end on the stack of five 7-branes. By inspection it can be seen that there are two such strings, see Fig.~\ref{fig:stackstackE6}, namely:
\begin{enumerate}
\renewcommand{\labelenumi}{\alph{enumi}.}
\item From 4 to 4 with $\Lambda=-\mathbbm{1}$ around 1 and 2.
\item From 4 to 4 with $\Lambda=-\mathbbm{1}$ around 1 and 3.
\end{enumerate}
The two BPS strings going from 4 to 4 have $\Lambda=-\mathbbm{1}$ so that they cannot start and end on the same brane in the stack. Therefore, counting both orientations, each of these BPS strings gives rise to 20 generators. Since orientation reversal corresponds to conjugation each BPS string corresponds to one set of $\mathbf{10}+\bar{\mathbf{10}}$ generators. BPS strings a and b are homotopically distinct and must therefore carry different $U(1)$ charges.
This leaves us with the challenge of relating the $\mathbf{5}+\bar{\mathbf{5}}+2\cdot\mathbf{1}$ states to BPS strings. What we can say with certainty is that the $\mathbf{5}+\bar{\mathbf{5}}$ states correspond to strings with one endpoint on the stack of five 7-branes and one endpoint on a single brane and that the $2\cdot\mathbf{1}$ states correspond to strings that lie stretched between two different single branes. By inspection it can be checked that such BPS strings exist in sufficient number. As shown in Fig.~\ref{fig:stacksingleE6} there are BPS strings starting at 4 and going (along suitable paths) to any of the points 1, 2 or 3. Fig.~\ref{fig:singlesingleE6} shows that there also exist strings going from brane 3 to 2 and from 3 to 1. Hence, the required type of strings can be constructed but we do not know how to relate them to the gauge group generators. We verified that the branch cut representation with manifest $SU(5)$ symmetry of
Fig.~\ref{fig:branchcutsEgroups} allows for BPS strings that satisfy the above-mentioned consistency conditions I and II.
To see the effect of choosing different branch cuts consider Fig.~\ref{fig:alternatives} which shows two alternative ways of placing the $S$ branch cuts while having an $SU(5)$ inside a $[-T^{-1}S]$ loop. In both cases it can be verified that conditions I and II are met. In both cases there are two homotopically inequivalent $\Lambda=-\mathbbm{1}$ loops from the stack to itself and there are sufficiently many stack-to-single-brane- as well as single-to-single-brane-strings to in principle account for all the states in the adjoint decomposition of $E_6$.
The two branch cut representations of Fig.~\ref{fig:alternatives} allow us to make an $SU(5)\times SU(2)$ and an $SU(6)$ symmetry manifest. In the lower figure of Fig.~\ref{fig:alternatives} we can make an $SU(2)$ manifest by putting the two 7-branes that are encircled by the same $S$ branch cuts on top of each other. In the upper figure of Fig.~\ref{fig:alternatives} there exists a path from the stack of five branes to a single brane that does not cross any branch cuts. We can thus take the single brane along this path and put it on top of the stack forming a stack of six branes (without changing the $S$ branch cuts). In the decomposition of the adjoint representation of $E_6$ with respect to either $SU(5)\times SU(2)\times U(1)$ or $SU(6)\times U(1)$, which are given by ($U(1)$ charges suppressed)
\begin{eqnarray}
\mathbf{78} & \rightarrow & (\mathbf{24},\mathbf{1})+(\mathbf{1},\mathbf{3})+(\mathbf{1},\mathbf{1})+(\mathbf{10},\mathbf{2})+(\bar{\mathbf{10}},\mathbf{2})+(\mathbf{5},\mathbf{1})+(\bar{\mathbf{5}},\mathbf{1})\,,\\
\mathbf{78} & \rightarrow & \mathbf{35}+\mathbf{1}+2\cdot\mathbf{20}+2\cdot\mathbf{1}\,,
\end{eqnarray}
respectively, there occur non-open string realizable representations of the manifest symmetry groups such as the $(\mathbf{10},\mathbf{2})$ of $SU(5)\times SU(2)$ and the $\mathbf{20}$ of $SU(6)$. Hence when $SU(5)\times SU(2)$ or $SU(6)$ are made manifest we should not expect there to be an open string interpretation of the symmetry enhancement. Indeed when $SU(5)\times SU(2)$ is manifest we cannot construct a $\mathbf{5}+\bar{\mathbf{5}}$ string that necessarily would have to go from the stack of five 7-branes to the only single brane. Likewise when $SU(6)$ is manifest we cannot construct strings corresponding to singlets which necessarily would have to lie between the only two single branes.
One could try to use multi-pronged strings to describe representations such as $(\mathbf{10},\mathbf{2})+(\bar{\mathbf{10}},\mathbf{2})$. A strategy could be to take the two branes forming $SU(2)$ apart so that now stack-to-stack-strings in the
$\mathbf{10}+\bar{\mathbf{10}}$ of $SU(5)$ exist and to transform those into multi-pronged strings using rules similar to those used in \cite{Gaberdiel:1997ud,Gaberdiel:1998mv} and then to put the branes on top of each other after the multi-pronged strings have been formed. It would be interesting to work out the details of such an analysis using our global branch cut structure.
We pause here to make a comment on statements regarding (non-)existence of certain BPS strings that are made at various places in the text. When we say that there are no strings of a certain type or no more than drawn in one of the figures this means that we did not manage to construct those after an extensive search. We did not try to prove these statements in a rigorous way. Such an attempt would probably greatly benefit from some computer program that computes $SL(2,\mathbb{Z})$ transformations going from one connected region of the domain of $(\tau,f)$ to an adjacent one tracing out paths such that no self-intersections occur.
\medskip
\noindent $\bullet\ \ E_7$
\medskip
For $E_7$ the adjoint decomposes into irreps of $SU(5)\times (U(1))^3$ as follows:
\begin{equation}
\mathbf{133}\rightarrow\mathbf{24}+3\cdot\mathbf{1}+3\cdot\left(\mathbf{10}+\bar{\mathbf{10}}\right)+4\cdot\left(\mathbf{5}+\bar{\mathbf{5}}\right)+6\cdot\mathbf{1}\,,
\end{equation}
where we have suppressed the $U(1)$ labels. The first set of singlets corresponds to Cartan generators while the second set of singlets to generators outside the Cartan subalgebra. This second set of singlets, denoted by $6\cdot\mathbf{1}$, can be divided into two sets where one set has opposite $U(1)$ charges with respect to the other set.
Independent of the branch cut representation, according to conditions I and II we need exactly three homotopically inequivalent strings from the stack to itself, four or more strings going from the stack to a single brane and three or more strings stretched between different single branes.
Consider the branch cut representation for a manifest $SU(5)$ of Fig.~\ref{fig:branchcutsEgroups} where we take the $[-S]$ loop.
The antisymmetric rank two irreps must be related in number to strings with $\Lambda=-\mathbbm{1}$ that start and end on the stack. These BPS strings are:
\begin{enumerate}
\renewcommand{\labelenumi}{\alph{enumi}.}
\item From 4 to 4 with $\Lambda=-\mathbbm{1}$ around 1 and 2.
\item From 4 to 4 with $\Lambda=-\mathbbm{1}$ around 1 and 3.
\item From 4 to 4 with $\Lambda=-\mathbbm{1}$ around 2, 3 and 5.
\end{enumerate}
Each of the strings a to c, counting both orientations, accounts for one set of $\mathbf{10}+\bar{\mathbf{10}}$ generators. String a also exists in the $D_4$ case, and strings a and b exist in the $E_6$ case. Fig.~\ref{fig:stackstackE7} shows string c that does not exist for the $E_6$ case.
The same is true for the stack-to-single-brane- and single-to-single-brane-strings that exist in the $E_6$ case. These also exist in the $E_7$ case. Stack-to-single-brane-strings that exist for $E_7$ but not for
$E_6$ are drawn in Fig.~\ref{fig:stacksingleE7} and single-to-single-brane-strings that exist for $E_7$ but not for $E_6$ are drawn in Fig.~\ref{fig:singlesingleE7}. Since these are sufficient in number and since there are exactly three homotopically inequivalent stack-to-stack-strings with $\Lambda=-\mathbbm{1}$ we once again verified conditions I and II.
We do not know if the strings drawn in Figs. \ref{fig:stacksingleE7} and \ref{fig:singlesingleE7} are really all the strings with at least one endpoint on a single brane. One way to check if there might be more BPS strings is to combine certain strings with others to form new strings. The resulting path can be interpreted as a single new BPS string as long as it does not self-intersect and as long as there are no problems with charge conservation. For example a string going from a single brane back to itself along some non-contractible loop with $\Lambda=-\mathbbm{1}$ is not allowed. An example of an allowed combination is the joining of the first and third strings of Fig.~\ref{fig:singlesingleE7}. When this is done the resulting string is homotopically equivalent to the second string of Fig.~\ref{fig:singlesingleE6}. There are many examples of such combinations. We checked that the total set of $E_7$ strings given in
Figs. \ref{fig:stackstackE6} to \ref{fig:singlesingleE6} and Figs.
\ref{fig:stackstackE7} to \ref{fig:singlesingleE7} is closed under such combinations.
Besides the conditions I and II we can in the $E_7$ case perform another consistency check on the BPS strings. From the branch cut representation for $E_7$ shown in Fig.~\ref{fig:branchcutsEgroups} it is clear that we could make an $SU(6)$ symmetry manifest by putting brane 5 on top of the stack at 4. The path along which this is done is the profile of the fourth open string of Fig.~\ref{fig:stacksingleE7}. By looking at the stack-to-single-brane-strings of Figs.~\ref{fig:stacksingleE6} and \ref{fig:stacksingleE7} it can be concluded that all of them, with the exception of the 3rd, 4th, 5th, 8th and 9th strings of Fig.~\ref{fig:stacksingleE7}, disappear when $SU(6)$ is realized in this way. When brane 5 is put on top of the stack at 4 the 3rd and 8th strings of Fig.~\ref{fig:stacksingleE7} become identical. Each of the surviving strings of Fig.~\ref{fig:stacksingleE7} goes from the stack of five branes to brane number 5 and hence now goes from the stack to itself, though it always ends on the same brane in the stack. If we consider the stack-to-stack-strings of Figs.~\ref{fig:stackstackE6} and \ref{fig:stackstackE7} then first of all each of them survives putting brane 5 on top of the stack and secondly these strings do not end on brane 5. Combining them with those of Fig.~\ref{fig:stacksingleE7} we obtain all the stack-to-stack strings for a stack consisting of six branes and we thus find three sets of $\mathbf{15}+\bar{\mathbf{15}}$ irreps of $SU(6)$. Further, only three of the single-to-single-brane-strings of Figs. \ref{fig:singlesingleE6} and \ref{fig:singlesingleE7} survive. These are the first string of Fig.~\ref{fig:singlesingleE6} and the fourth and fifth strings of Fig.~\ref{fig:singlesingleE7}. There exists a decomposition of the adjoint of $E_7$ with respect to $SU(6)\times (U(1))^2$ that reads
\begin{equation}
\mathbf{133}\rightarrow\mathbf{35}+2\cdot\mathbf{1}+3\cdot\left(\mathbf{15}+\bar{\mathbf{15}}\right)+6\cdot\mathbf{1}\,.
\end{equation}
Each of these states is represented by the strings of Figs. \ref{fig:stackstackE6} to \ref{fig:singlesingleE6} and Figs. \ref{fig:stackstackE7} to \ref{fig:singlesingleE7} after brane 5 has been put on top of the stack. It can further be checked that the resulting branch cut representation does not allow for any stack-to-single-brane-strings, consistent with the above decomposition.
The branching rule of the adjoint decomposition of $E_7$ with respect to $SU(6)\times (U(1))^2$ depends on the embedding. There exists an embedding of $SU(6)\times (U(1))^2$ that goes via $SU(8)$ that has a different branching rule. In this case the branching rule is
\begin{equation}
\mathbf{133}\rightarrow\mathbf{35}+2\cdot\mathbf{1}+\mathbf{15}+\bar{\mathbf{15}}+2\cdot\left(\mathbf{6}+\bar{\mathbf{6}}\right)+2\cdot\mathbf{20}+2\cdot\mathbf{1}\,.
\end{equation}
There now appears the non-open-string-realizable state $\mathbf{20}$ of $SU(6)$. Clearly, this embedding is not described by Fig.~\ref{fig:branchcutsEgroups} with brane 5 on top of the stack. However, consider the right figure of Fig.~\ref{fig:alternatives} and the branes inside the $[-S]$ loop. There now exist more than two $\mathbf{6}+\bar{\mathbf{6}}$ stack-to-single-brane-strings as well as one stack-to-stack string with $\Lambda=-\mathbbm{1}$ representing the $\mathbf{15}+\bar{\mathbf{15}}$ irreps of $SU(6)$. This example makes explicit that the embedding of a symmetry group is related to the branch cut representation. Even though in this case there still exist some open strings, these are not capable of describing the enhancement to $E_7$ starting from this embedding of $SU(6)$.
\medskip
\noindent $\bullet\ \ E_8$
\medskip
We conclude with some brief remarks about $E_8$. The adjoint decomposes into irreps of $SU(5)\times (U(1))^4$ as
\begin{equation}
\mathbf{248}\rightarrow\mathbf{24}+4\cdot\mathbf{1}+5\cdot\left(\mathbf{10}+\bar{\mathbf{10}}\right)+
10\cdot\left(\mathbf{5}+\bar{\mathbf{5}}\right)+20\cdot\mathbf{1}\,,
\end{equation}
where the four singlets $4\cdot\mathbf{1}$ correspond to Cartan generators and the 20 singlets, $20\cdot\mathbf{1}$, can be divided into two sets with opposite $U(1)$ charges, so that at least 10 single-to-single-brane-strings are needed. A branch cut representation for a manifest $SU(5)$ inside the $[-(T^{-1}S)^2]$ loop is given in Fig.~\ref{fig:branchcutsEgroups}. It can be checked that now there are, besides the three homotopically inequivalent stack-to-stack-strings with $\Lambda=-\mathbbm{1}$ that exist in the $E_7$ case, two more such strings, see Fig.~\ref{fig:stackstackE8}. The number of stack-to-single-brane- and single-to-single-brane-strings that exist for $E_8$ but not for $E_7$ is rather large. Conditions I and II are trivially met because the number of stack-to-single-brane- and single-to-single-brane-strings in the $E_7$ case with $SU(5)$ manifest, which is included in the $E_8$ case, is already sufficiently high.
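As a bookkeeping check on the string multiplicities quoted in this and the previous sections, the short sketch below verifies that, for a manifest $SU(m)$, the manifest non-Cartan generators plus the states carried by the various strings add up to the number of generators outside the Cartan subalgebra. The multiplicities are the ones quoted in the text; the check is pure dimension counting.
\begin{verbatim}
# Dimension-counting check: for a manifest SU(m), each stack-to-stack
# string carries m(m-1) states (both orientations), each pair of
# conjugate fundamentals from stack-to-single-brane strings carries 2m
# states, and each charged singlet corresponds to one single-to-single-
# brane string.  The manifest non-Cartan generators number m(m-1).
cases = {
    # name: (dim, rank, m, stack-stack, fund pairs, charged singlets)
    "D4": (28, 4, 4, 1, 0, 0),
    "E6": (78, 6, 5, 2, 1, 2),
    "E7": (133, 7, 5, 3, 4, 6),
    "E8": (248, 8, 5, 5, 10, 20),
}
for name, (dim, rank, m, n_ss, n_f, n_1) in cases.items():
    total = m * (m - 1) * (1 + n_ss) + 2 * m * n_f + n_1
    assert total == dim - rank, name
    print(name, "generators outside Cartan:", dim - rank, "(check passed)")
\end{verbatim}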
\section{Discussion}
Let us recapitulate the situation for the $E$-type symmetry groups. Due to the fact that we cannot identify the $U(1)$ charges of the open strings we cannot relate BPS strings to specific gauge group generators. The number of BPS strings with at least one endpoint on a single brane is generically larger than the number of gauge group generators. The number and type of BPS strings strongly depends on which symmetry group is made manifest and via which embedding this is done, i.e.~which branch cut representation is used.
We used homotopy inequivalence of BPS strings (except for the $[T^n]$ conjugacy classes) as a necessary condition to distinguish between different strings. A stronger condition would be to say that homotopically distinct BPS strings are only to be considered inequivalent when they have different $U(1)$ charges. Certainly, from the point of view of gauge group generators that should be sufficient. The incorporation of $U(1)$ charges into the analysis, however, remains an open problem.
In this work we have shown that it is conceivable that an open string interpretation of the $A$-$D$-$E$-type symmetry groups exists provided a sufficient number of branes is non-coinciding. We stress that we cannot follow step by step what happens when an $A$-$D$-$E$-gauge group is actually formed with the exception of the $A$-groups corresponding to the $[T^n]$ conjugacy classes. This is either because the open string description breaks down once too much symmetry is made manifest or, more generically, because at some point one has to put 7-branes on top of locked 7-branes and such a process does not correspond to a continuous change of the branch cuts, so that it is not clear how to do this graphically. We therefore cannot trace the fate of the BPS strings all the way down to the formation of the singularity. It would be interesting to see how, with our global branch cut structure and with so much symmetry made manifest that ordinary open BPS strings have ceased to exist, the open string description of the BPS states gets replaced by a description in terms of multi-pronged strings
\cite{Gaberdiel:1997ud,Gaberdiel:1998mv}.
Finally, in this work we have been using the Kodaira classification, see Table \ref{Tatealgorithm}, and reasoned our way towards the known singularity structures. It would be rather satisfying to {\it{derive}} the Kodaira classification from the open string perspective presented here.
\section*{Acknowledgments}
The authors wish to thank Radu Tatar and, in particular, Matthias Gaberdiel for useful discussions. We are also grateful to Teake Nutma for help with making the pictures. This work was supported in part by the Swiss National Science Foundation and the ``Innovations- und Kooperationsprojekt C-13'' of the Schweizerische Universit\"atskonferenz SUK/CRUS. J. H. wishes to thank the University of Groningen for its hospitality.
\section{Introduction}
There have been many developments in hadron spectroscopy during the past year. It is impossible
to do justice to all of them in a brief review. I will focus on two topics: new
hadrons with $b$ quarks and the so called charmonium $X$, $Y$, $Z$ states, many of which
do not seem to be understood as conventional hadrons. There are many other developments that
deserve attention but which I will not discuss: the $f_{D_s}$ puzzle and possible hints for new
physics,
the $Y(2175)$ and $Y(10890)$ states, further measurements of $B_c$
properties, new measurements of hadronic transitions in quarkonium,
recent results by the BESIII collaboration on $J/\psi$ decays,
the $N^*$ program at Jefferson Lab,
new measurements of quarkonium annihilation decays, and so on.
I apologize for the many interesting topics I am not able to cover in a brief
update. Some recent reviews of hadron spectroscopy are given in
Ref.~\cite{Godfrey:2008nc,Eichten:2007qx,Harris:2008bv,Pakhlova:2008di,Olsen:2009ys}.
Quantum Chromodynamics (QCD) is the theory of the strong interactions but it has been a challenge
to calculate the properties of hadrons directly from the QCD Lagrangian in the regime
where the theory
is non-perturbative. Instead, alternative approaches have been used; Lattice QCD, effective
field theories, chiral dynamics, and the constituent quark model. Measurement of hadron
properties provide an important test of these calculational approaches. On the one hand, there
has been much progress in recent years, while on the other hand, a large number of states
have been discovered with properties that are not easily or consistently explained by theory.
In this context I use the constituent quark model (CQM) as a benchmark against which to identify
and compare the properties of the newly discovered states \cite{Brambilla:2004wf}.
Constituent quark models
typically assume a QCD motivated potential that includes a Coulomb-like one-gluon-exchange
potential at small separation and a linearly confining potential at large separation. The
potential is included in a Schrodinger or relativistic equation to solve for the eigenvalues
of radial and orbital angular momentum excitations. For the case of mesons, the quantum numbers
are characterized by $J^{PC}$ quantum numbers where $S$ is the total spin of the quark-antiquark
system, $L$ is the orbital angular momentum, $P=(-1)^{L+1}$, and for self-conjugate mesons,
$C=(-1)^{L+S}$. With these rules, the quark model predicts the allowed quantum numbers of
$J^{PC}=0^{-+}$, $1^{--}$, $1^{+-}$, $0^{++}$, $1^{++}$, $2^{++}\ldots$. Quantum
numbers not allowed by the CQM such as $J^{PC}=0^{--}$, $0^{+-}$, $1^{-+}$, and $2^{+-}$,
are often referred to as {\it exotic} combinations and, if such states were discovered, would
unambiguously signify hadronic states outside the quark model.
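A short sketch makes these selection rules explicit: enumerating $J^{PC}$ from $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$ over low values of $L$ shows that the exotic combinations never occur.
\begin{verbatim}
# Enumerate quark-model J^PC values from P = (-1)^(L+1), C = (-1)^(L+S).
allowed = set()
for L in range(5):
    for S in (0, 1):
        for J in range(abs(L - S), L + S + 1):
            P = "+" if (L + 1) % 2 == 0 else "-"
            C = "+" if (L + S) % 2 == 0 else "-"
            allowed.add("%d%s%s" % (J, P, C))
for jpc in ("0--", "0+-", "1-+", "2+-"):
    print(jpc, "allowed in the quark model?", jpc in allowed)
\end{verbatim}
All four exotic combinations come out as not allowed, for any $L$ in the scanned range.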
In addition to the spin-independent potential there are spin-dependent potentials that
are relativistic corrections, typically assuming a Lorentz vector one-gluon-exchange and a
Lorentz scalar confining potential. This leads to a short distance
spin-spin contact interaction which splits the spin-triplet and spin-singlet $S$-wave states.
If the spin-spin interaction were not short range it would result in a splitting between
the spin-singlet and spin-triplet centre of gravity of the $L\neq 0$ states.
There is also a spin-spin tensor interaction which contributes to splittings in $S=1$,
$L\neq 0$ multiplets in addition to mixings between states with the same $J^{PC}$ quantum
numbers. Finally, there are spin-orbit interactions
that contribute to splittings between $S=1$, $L\neq 0$ states and mix states with unequal
quark and antiquark masses where $C$ is not a good quantum number and with the same $J^P$ such
as $^3P_1-^1P_1$ pairs. The tensor and spin-orbit interactions give rise to the splittings
in, for example, the $\chi_{c0}, \chi_{c1}, \chi_{c2}$ multiplet.
Strong Zweig allowed decays, annihilation decays, hadronic transitions, and electromagnetic
transitions have also been calculated using various models \cite{Brambilla:2004wf}.
Putting all these predictions
together one can build up a fairly complete picture of a quark model state's properties that can
be compared to experimental measurements.
In addition to these conventional CQM hadrons, models of hadrons predict the existence
of additional states:
\begin{description}
\item[Hybrids] are states with an excited gluonic degree of freedom.
Some hybrids are predicted to have exotic quantum numbers which would signal a non-$q\bar{q}$ state.
Almost all models of hybrids predict that hybrids with conventional quantum numbers
will have very distinctive decay modes that can be used to distinguish them from conventional
states.
\item[Multiquark States]
{\it Molecular States} are a loosely bound state of a pair of mesons near threshold. One
signature of these states is that they exhibit large isospin violations.
{\it Tetraquarks} are tightly bound diquark-diantiquark states. A prediction of
tetraquark models is that they are predicted to come in flavour multiplets.
\item[Threhold-effects] come about from rescattering near threshold due to the interactions
between two outgoing mesons. They result in mass shifts due to thresholds. A related effect are
coupled channel effects that result in the mixing of two-meson states with $q\bar{q}$
resonances.
\end{description}
One can think of an analogy in atomic physics for multiquark states and hybrids.
Say we know about atomic spectroscopy but theorists
predict something they call molecules that have never been discovered. Whether molecules
really exist would be an important test of theory. Likewise, the unambiguous discovery of
hybrids and multiquark states is an important test of our models of QCD.
\section{Bottomonium $\eta_b$ State}
The observation of the $\eta_b$ is an important validation of lattice QCD and other calculations.
One means of producing the $\eta_b$ is via radiative transitions, specifically M1 transitions
from the $n^3S_1(b\bar{b})$ states \cite{Godfrey:2001eb,Godfrey:2008zz}.
The partial width for this transition is given by
\begin{equation}
\Gamma (^3S_1 \to ^1S_0 + \gamma)= \frac{4}{3} \alpha \frac{e_Q^2}{m_Q^2}
|\langle f | j_0(kr/2)| i\rangle |^2 k_\gamma^3
\end{equation}
where $e_Q$ is the quark charge in units of $e$, $m_Q$ is the mass of the quark, $k_\gamma$ is
the energy of the emitted photon, and $j_0$ is the spherical Bessel function.
Hindered decays are those that occur between initial and final states
with different principle quantum numbers. In the non-relativistic limit,
the wavefunctions are
orthogonal so that these decays are forbidden. However, hindered decays
have large phase space so that even
small deviations from the non-relativistic
limit can result in observable decays. In contrast, the allowed
transitions have very little phase space so the partial widths are likely to be
too small to be observed.
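To give a feel for the scales involved, the sketch below evaluates the M1 width formula for $\Upsilon(3S)\to\eta_b\gamma$. The constituent mass and, especially, the overlap integral are assumed illustrative values; for hindered transitions the overlap is strongly model dependent, so the resulting width should be read as an order-of-magnitude estimate only.
\begin{verbatim}
# Evaluate the hindered M1 width formula quoted above (units: GeV).
# m_b and the overlap integral are assumed, illustrative values.
alpha = 1.0 / 137.036
eQ2 = (1.0 / 3.0) ** 2         # bottom-quark charge squared, units of e
m_b = 4.98                     # GeV, a typical constituent-quark value
M_i, M_f = 10.3552, 9.3904     # GeV, Upsilon(3S) and eta_b masses

k = (M_i**2 - M_f**2) / (2.0 * M_i)   # photon energy (recoil included)
overlap = 0.05                        # assumed |<f|j0(kr/2)|i>|

gamma = (4.0 / 3.0) * alpha * eQ2 * k**3 / m_b**2 * overlap**2
print("k_gamma = %.0f MeV" % (1e3 * k))
print("Gamma(M1) ~ %.3f keV for overlap %.2f" % (1e6 * gamma, overlap))
\end{verbatim}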
Last year the BaBar collaboration announced the discovery of the $\eta_b(1^1S_0)$ state
in the transition $\Upsilon(3S)\to \eta_b \gamma$ \cite{:2008vj} with
$B( \Upsilon(3S) \to \eta_b \gamma)=(4.8\pm 0.5 \pm 1.2)\times 10^{-4}$.
More recently BaBar confirmed the $\eta_b$
in the transition $\Upsilon(2S)\to \eta_b \gamma$
with $B( \Upsilon(2S) \to \eta_b \gamma)=(4.2^{+1.1}_{-1.0} \pm 0.9)\times 10^{-4}$ \cite{:2009pz}.
The photon spectrum for this transition is shown in Fig~\ref{fig:babar_etab}.
The average $\eta_b$ mass from the two measurements is
\begin{equation}
M(\eta_b)=9390.4\pm 3.1 \hbox{ MeV}
\end{equation}
This is in agreement with Lattice QCD and the predictions of QCD-based models.
After the DPF conference, the CLEO collaboration \cite{Bonvicini:2009hs}
reported evidence for the $\eta_b$ in
$\Upsilon(3S)\to \eta_b \gamma$ with statistical significance of $\sim 4\sigma$
and with $M(\eta_b)=9391.8\pm 6.6 \pm 2.0$~MeV and
$B( \Upsilon(3S) \to \eta_b \gamma)=(7.1\pm 1.8 \pm 1.2)\times 10^{-4}$ which are consistent
with the BaBar measurements.
Both the measured mass and branching ratios support the models of heavy quarkonium spectroscopy.
\begin{figure}[t]
\centering
\includegraphics[width=75mm, clip]{babar_etab.eps}
\caption{From Babar Ref.~\cite{:2009pz}.
Photon spectrum for the $\Upsilon(2S)\to \eta_b \gamma$ transition after subtracting
the non-peaking background component, with PDFs for $\chi_{bJ}(2P)$ peak (light solid),
ISR $\Upsilon(1S)$ (dot), $\eta_b$ signal (dash) and the sum of all three (dark solid). }
\label{fig:babar_etab}
\end{figure}
\section{Bottomonium $\Upsilon(1D)$ State}
Another $b\bar{b}$ state I want to mention is the $\Upsilon(1D)$ state.
It was suggested that
the $\Upsilon(1D)$ states could be observed in the cascade decays consisting of four
E1 transitions in the decay chain $3^3S_1 \to 2^3P_J \to 1^3D_J \to 1^3P_J \to 1^3S_1$
\cite{Godfrey:2001vc,Kwong:1988ae}.
The E1 partial widths and branching ratios can be estimated using the quark model.
The CLEO collaboration followed this search strategy and observed an $\Upsilon(1D)$
\cite{Bonvicini:2004yj}.
The data are dominated by the production of one $\Upsilon(1D)$ state consistent with the $J=2$
assignment. It was measured to have a mass of $M=10161.1\pm 0.6 \pm 1.6$~MeV which
is in good agreement with predictions of potential models and Lattice QCD.
The measured BR for the decay chain is $B=(2.5\pm 0.5 \pm 0.5)\times 10^{-5}$ which compares well
to the predicted BR of $B=2.6\times 10^{-5}$.
This is not a new result; I mention it because CLEO's discovery
was based on a data sample of $5.8\times 10^6$ $\Upsilon(3S)$ decays. In contrast,
BaBar has collected a sample of $(109\pm 1)\times 10^6$ $\Upsilon(3S)$'s, almost 20 times the size
of the CLEO sample. BaBar has the potential to observe all three of the $1^3D_J$ states which
would be a nice test of our understanding of the $^3D_J$ splittings.
\section{Baryons with $b$ Quarks}
In the last year a number of baryons with $b$-quarks were observed for the first time by the
D0 \cite{Abazov:2008qm} and CDF collaborations \cite{Aaltonen:2009ny}.
The ground state baryons with $b$ quarks and their quark content are given by:
\begin{eqnarray}
& & \Lambda_b^0 = |bud\rangle \cr
& & \Sigma_b^{(*)+} = |buu\rangle \cr
& & \Xi_b^{(*,\prime)-} = |bsd\rangle \cr
& & \Omega_b^{(*)-} = |bss\rangle
\end{eqnarray}
The splittings of the ground state baryons can be described reasonably well by including only
the colour hyperfine interaction between pairs of quarks \cite{Rosner:2006yk}:
\begin{equation}
\Delta H_{ij}^{hyp}={{16\pi\alpha_s}\over{9m_i m_j}}
\vec{S}_i\cdot\vec{S}_j \; \langle \delta^3(\vec{r}_{ij}) \rangle
\sim \gamma \; {{\vec{S}_i\cdot\vec{S}_j}\over {m_i m_j}}
\end{equation}
where we made the simplifying approximation that the wavefunction at the origin and $\alpha_s$
are roughly the same for all states. This results in a number of predictions
\begin{equation}
M(\Sigma_b^*)-M(\Sigma_b)=[M(\Sigma_c^*)-M(\Sigma_c)] \times (\frac{m_c}{m_b})
\simeq 25\hbox{ MeV}
\end{equation}
\begin{eqnarray}
M(\Sigma_b)-M(\Lambda_b)& = & [M(\Sigma_c)-M(\Lambda_c)] \times \frac{(1-m_u/m_b)}{(1-m_u/m_c)} \cr
& \simeq & 192\hbox{ MeV}
\end{eqnarray}
to be compared to the measured splittings of $21.2^{+2.0}_{-1.9}$~MeV and 192~MeV respectively,
which is very good agreement \cite{Karliner:2006ny}.
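The two estimates above follow from simple arithmetic once constituent masses are assumed. The sketch below uses illustrative mass values (the charm-baryon splittings are the measured ones); with these choices it returns roughly 21~MeV and 198~MeV, in the same ballpark as the quoted estimates, and the output shifts by a few MeV for other reasonable mass sets.
\begin{verbatim}
# Scaling of hyperfine splittings from the charm to the bottom sector.
# Constituent masses are assumed, illustrative values (GeV).
m_u, m_c, m_b = 0.336, 1.55, 4.73

d_sigma_c = 0.0644     # M(Sigma_c*) - M(Sigma_c), measured, GeV
d_siglam_c = 0.167     # M(Sigma_c)  - M(Lambda_c), measured, GeV

print("M(Sigma_b*) - M(Sigma_b)  ~ %.1f MeV"
      % (1e3 * d_sigma_c * m_c / m_b))
print("M(Sigma_b)  - M(Lambda_b) ~ %.0f MeV"
      % (1e3 * d_siglam_c * (1 - m_u / m_b) / (1 - m_u / m_c)))
\end{verbatim}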
While not all predictions are in such good agreement, this simple picture does
work quite well. A more careful
analysis by Karliner {\it et al.} \cite{Karliner:2008sv}
that includes wavefunction effects predicts
\begin{equation}
M(\Omega_b)= 6052.1 \pm 5.6 \hbox{ MeV}.
\end{equation}
This state was recently observed by the Fermilab D0 \cite{Abazov:2008qm}
and CDF \cite{Aaltonen:2009ny} collaborations in $J/\psi \Omega^-$. The mass distributions
are shown in Fig.~\ref{fig:d0omegab} for D0 and in Fig.~\ref{fig:cdfomegab} for CDF
with measured masses of
$M(\Omega_b)=6165 \pm 10 \pm 13 $~MeV and
$M(\Omega_b)=6054.4 \pm 6.8 \;(stat) \; \pm 0.9 \; (sys)$~MeV by D0 and CDF
respectively. The two measurements are inconsistent.
The CDF measurement is in good agreement with the quark model prediction and the lattice result
while the D0 measurement is significantly larger. The lattice results \cite{Lewis:2008fu}
along with observed ground state baryon masses
with a $b$-quark are shown in Fig.~\ref{fig:hbmass}.
\begin{figure}[ht]
\centering
\includegraphics[width=38mm, clip]{d0_omegab.eps} \\
\caption{ From D0 Ref.~\cite{Abazov:2008qm}.
The $M(\Omega_b^-)$ distribution of the $\Omega_b^-$ candidates. The
dotted curve is an unbinned likelihood fit to the model of a constant background plus a
Gaussian signal. }
\label{fig:d0omegab}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=78mm, clip]{cdf_omegabv2.eps}
\caption{From CDF Ref.~\cite{Aaltonen:2009ny}. The invariant mass distribution
of $J/\psi \Omega^-$. }
\label{fig:cdfomegab}
\end{figure}
\begin{figure}[hb]
\centering
\includegraphics[width=65mm, clip]{hbmass.eps}
\caption{Masses of single-b baryons. The diagonally-hatched boxes are lattice results
with combined statistical and systematic errors. Solid bars (red) are experimental values
with the exception of the $\Omega_b$. For the $\Omega_b$ the upper (red) bar is the D0 result and
the lower (blue) bar is the CDF result. From Ref.~\cite{Lewis:2008fu} }
\label{fig:hbmass}
\end{figure}
\section{The Charmonium-like $X$, $Y$, $Z$ States}
Over the last five years or so, numerous charmonium-like states have been discovered with many
of them not fitting into conventional charmonium spectroscopy
\cite{Godfrey:2008nc,Eichten:2007qx,Pakhlova:2008di,Olsen:2009ys}.
This has led to considerable
theoretical speculation that some of these new states are non-conventional hadrons like hybrids,
molecules, or tetraquarks, or possibly some sort of threshold effect. More and more of these
states seem to appear every other day and it is far from clear what most of them actually are. The
charmonium spectrum is summarized in Fig.~\ref{fig:charmoniumb}. This is a very cluttered figure
which underlines the complexity of the current situation. A more
detailed summary of these states is given in Table~\ref{tab:xyz_states}.
One can see that
there are many of these charmonium-like states. I will restrict myself to the following.
I will start with the most recently observed states, the
$Y(4140)$ seen by CDF \cite{Aaltonen:2009tz} and the $X(3915)$ seen by Belle \cite{Olsen:2009ys}.
I will next report on
the $Z^+$ states, charmonium-like states observed by Belle that carry charge so cannot be
conventional $c\bar{c}$ states. I will then briefly discuss the $X(3872)$ which was the first
charmonium-like state to be observed and is the most robust, having been observed by many
experiments in different processes. The final group is the $1^{--}$
$Y$ states observed in $e^+e^- \to \gamma_{ISR} +Y$.
\begin{figure}[t]
\centering
\includegraphics[width=90mm, clip]{charmonium_spectrum.eps}
\caption{The Charmonium spectrum. The solid lines are quark model predictions
\cite{Godfrey:1985xj}
the shaded lines are the observed conventional charmonium states \cite{Amsler:2008zzb},
the horizontal dashed lines represent various $D^{(*)}_{s} \bar{D}^{(*)}_{s}$ thresholds,
and the (red) dots are
the newly discovered charmonium-like states placed in the column with the
most probable spin assignment.
The states in the last column do not fit elsewhere and appear to be truly exotic.}
\label{fig:charmoniumb}
\end{figure}
\begin{table*}[t]
\caption{Summary of the Charmonium-like $XYZ$ states.}
\begin{center}
\begin{tabular}{lccclcll}
\hline\hline
\label{tab:xyz_states}
state & $M$~(MeV) &$\Gamma$~(MeV) & $J^{PC}$ & Seen In & Observed by: & Comments \\ \hline
$Y_s(2175)$& $2175\pm8$&$ 58\pm26 $& $1^{--}$ & $(e^+e^-)_{ISR}, J/\psi \to Y_s(2175)\to\phi f_0(980)$ & BaBar, BESII, Belle & \\
$X(3872)$& $3871.4\pm0.6$&$<2.3$& $1^{++}$ & $B\to KX(3872)\to \pi^+\pi^- J/\psi$, $\gamma J/\psi$, $D\bar{D^*}$ & Belle, CDF, D0, BaBar & Molecule?\\
$X(3915)$& $3914\pm4$& $28^{+12}_{-14}$ & $?^{++}$ & $\gamma\gamma\to \omega J/\psi$ & Belle & \\
$Z(3930)$& $3929\pm5$&$ 29\pm10 $& $2^{++}$ & $\gamma\gamma\to Z(3930)\to D\bar{D}$ & Belle & $2^3P_2 (c\bar{c})$ \\
$X(3940)$& $3942\pm9$&$ 37\pm17 $& $0^{?+}$ & $e^+e^-\to J/\psi X(3940)\to D\bar{D^*}$ (not
$D\bar{D}$ or $\omega J/\psi$) & Belle & $3^1S_0 (c\bar{c})$? \\
$Y(3940)$& $3943\pm17$&$ 87\pm34 $&$?^{?+}$ & $B\to K Y(3940)\to \omega J/\psi$ (not
$D\bar{D^*}$) & Belle, BaBar & $2^3P_1 (c\bar{c})$?\\
$Y(4008)$& $4008^{+82}_{-49}$&$ 226^{+97}_{-80}$ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4008)\to \pi^+\pi^- J/\psi$ & Belle & \\
$Y(4140)$ & $4143\pm 3.1$ & $ 11.7^{+9.1}_{-6.2}$ & $?^{?}$ & $B\to K Y(4140) \to J/\psi \phi $ & CDF & \\
$X(4160)$& $4156\pm29$&$ 139^{+113}_{-65}$ &$0^{?+}$& $e^+e^- \to J/\psi X(4160)\to D^*\bar{D^*}$
(not $D\bar{D}$) & Belle & \\
$Y(4260)$& $4264\pm12$&$ 83\pm22$ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4260) \to \pi^+\pi^- J/\psi$ & BaBar, CLEO, Belle & Hybrid? \\
$Y(4350)$& $4324\pm24$&$ 172\pm33$ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4350) \to \pi^+\pi^- \psi'$ & BaBar & \\
$Y(4350)$& $4361\pm13$&$ 74\pm18$ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4350) \to \pi^+\pi^- \psi'$ & Belle & \\
$Y(4630)$& $4634^{+9.4}_{-10.6}$ & $ 92^{+41}_{-32} $ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4630)\to \Lambda_c^+\Lambda_c^-$ & Belle & \\
$Y(4660)$& $4664\pm12$&$ 48\pm15 $ &$1^{--}$& $(e^+e^-)_{ISR}\to Y(4660)\to \pi^+\pi^- \psi'$ & Belle & \\
$Z_1(4050)$& $4051^{+24}_{-23}$&$ 82^{+51}_{-29}$ & ? &
$B\to K Z_1^{\pm}(4050)\to \pi^{\pm}\chi_{c1}$ & Belle & \\
$Z_2(4250)$& $4248^{+185}_{-45}$&$ 177^{+320}_{-72}$ & ? &
$B\to K Z_2^{\pm}(4250)\to \pi^{\pm}\chi_{c1}$ & Belle & \\
$Z(4430)$& $4433\pm5$&$ 45^{+35}_{-18}$ & ? & $B\to KZ^{\pm}(4430)\to \pi^{\pm}\psi'$ & Belle & \\
$Y_b(10890)$ & $10\,890\pm 3$ & $55\pm 9$ & $1^{--}$ &
$e^+e^-\to Y_b\to \pi^+\pi^-\Upsilon(1,2,3S)$ & Belle & \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{The $Y(4140)$}
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=cdf_y4140v2.eps,width=60mm,clip=}}
\end{center}
\caption{From CDF \cite{Aaltonen:2009tz}.
Evidence for the $Y(4140)$ seen in $B\to J/\psi \phi$ with $J/\psi\to \mu^+\mu^-$ and
$\phi \to K^+K^-$.
The mass difference, $\Delta M$, between $\mu^+\mu^- K^+K^-$ and $\mu^+\mu^-$, in
the $B^+$ mass window. The dash-dotted (blue) curve is the background contribution and the solid (red)
curve is the total unbinned fit. \label{cdf_y4140}}
\end{figure}
The CDF collaboration found evidence for the
$Y(4140)$ in the $J/\psi \phi$ invariant mass distribution
from the decay $B^+\to J/\psi \phi K^+$ which is
shown in Fig.~\ref{cdf_y4140} \cite{Aaltonen:2009tz}.
The state has a significance of $3.8\sigma$ with
$M=4143.0 \pm 2.9 \pm 1.2$~MeV/c$^2$ and $\Gamma=11.7^{+8.3}_{-5.0}\pm 3.7$~MeV/c$^2$.
Because both the $J/\psi$ and $\phi$ have $J^{PC}=1^{--}$, the $Y(4140)$ has positive $C$ parity. Some
argue that there are similarities to the $Y(3940)$ seen in $B\to J/\psi \omega K$
\cite{Abe:2004zs}.
The question asked about each of these new $XYZ$ states is: what is it?
As with all of these states, we consider the different
possibilities, comparing the state's properties to theoretical predictions.
\begin{description}
\item[Conventional State] The $Y(4140)$ is above open charm threshold, so it would be expected
to have a large width, in contradiction with its narrow measured width.
Hence, the $Y(4140)$ is unlikely to be a conventional $c\bar{c}$ state.
\item[$c\bar{c}s\bar{s}$ Tetraquark] A number of authors argue that the $Y(4140)$ is a
tetraquark \cite{Mahajan:2009pj,Stancu:2009ka,Liu:2009ei}. However a tetraquark
is expected to decay via rearrangement of the quarks with a width of $\sim 100$~MeV. It is
also generally expected to have similar widths to both hidden and open charm final states.
The tetraquark interpretation does not, therefore, appear to be consistent with the data.
\item[Charmonium Hybrid] Charmonium hybrid states are predicted to have masses in the 4.0 to 4.4 GeV
mass range. The $Y(4140)$'s mass lies in this range. Hybrids are expected to decay
predominantly to $SP$ meson pair final states with decays to $SS$ final state meson pairs
suppressed. If the $Y(4140)$ were below $D^{**}D$ threshold, the allowed decays to $D\bar{D}$
would be suppressed, leading to a relatively narrow width. The $D^*\bar{D}$ is an important mode
to look for.
\item[Rescattering via $D_sD^*_s$] Other possibilities are that the $Y(4140)$ is due to
$D_sD^*_s$ rescattering \cite{Rosner:2007mu} or the opening up of a new final state
channel \cite{vanBeveren:2009dc,vanBeveren:2009jk}.
\item[$D^{*+}_s D^{*-}_s$ Molecule] The molecule explanation has been examined by a number of
authors \cite{Mahajan:2009pj,Branz:2009yt,Zhang:2009vs,Liu:2009ei,Albuquerque:2009ak,Ding:2009vd}.
The $D^{*+}_s D^{*-}_s$ threshold is $\sim 4225$~MeV, implying a binding energy of $\sim 80$~MeV (see the estimate after this list).
If one interprets the $Y(3940)$ to be a $D^*\bar{D}^*$ molecule, the binding energies of the
two systems are similar. Furthermore, the decay $Y(4140)\to J/\psi \phi$ is similar
to the decay $Y(3940)\to J/\psi \omega$, although the widths are different. The molecule picture
predicts that decays proceed via rescattering with decays to hidden and open charm final states
equally probable. One should search for decays to open charm modes
like $D\bar{D}$ and $D\bar{D}^*$. Another prediction is that the constituent mesons
can decay independently, so observation of decays such as $Y(4140)\to D_s^{*+}D_s^-\gamma$
and $Y(4140)\to D_s^{+}D_s^{*-}\gamma$ would provide evidence for the molecule
picture \cite{Liu:2009ei,Branz:2009yt,Liu:2009pu}.
A $D^{*+}D_s^{*-}$ molecule is also predicted with mass
$\sim 4040$~MeV with $J/\psi \rho$ as a prominent final state to look for~\cite{Mahajan:2009pj}.
\end{description}
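For orientation, the quoted threshold and binding energy follow from the PDG
$D_s^{*\pm}$ mass, $m(D_s^{*\pm})\simeq 2112$~MeV (the arithmetic below is ours,
not part of the cited analyses):
\[
2\, m(D_s^{*\pm}) \simeq 2 \times 2112~\hbox{MeV} \simeq 4225~\hbox{MeV},
\qquad
E_B \simeq 4225~\hbox{MeV} - 4143~\hbox{MeV} \approx 80~\hbox{MeV}.
\]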
None of these explanations is compelling.
A necessary first step to understanding the $Y(4140)$ is to confirm its existence in another
measurement, as it has so far only been observed in one measurement at $3.8\sigma$. It is then necessary
to observe other decay modes to help to distinguish between the various possibilities.
\subsection{The $X(3915)$}
The $X(3915)$ is the most recent addition to the collection of $XYZ$ states (at least at the time
of the conference). It was observed
by Belle in $\gamma\gamma\to \omega J/\psi$ with a statistical significance of
$7.5\sigma$ \cite{Olsen:2009ys}. It has a measured mass and width of
$M=3914 \pm 3 \pm 2$~MeV and $\Gamma=23\pm 9 ^{+2}_{-3}$~MeV. These parameters are consistent
with those of the $Y(3940)$. The $2\gamma$ width times BR to $\omega J/\psi$ is
$\Gamma_{\gamma\gamma} \times {\cal B}(X(3915)\to \omega J/\psi )= 69\pm 16^{+7}_{-18}$~eV
assuming $J^P=0^+$ or $21\pm 4 ^{+2}_{-5}$~eV for $J^P=2^+$. For comparison
$\Gamma_{\gamma\gamma} \times {\cal B}(Z(3930)\to D\bar{D})= 180\pm 50 \pm 30$~eV.
\subsection{The $Z^+(4430)$, $Z_1^+(4050)$ and $Z_2^+(4250)$ States}
Belle observed a number of charmonium-like states in $B$ decays that carry charge \cite{:2007wga},
thus indicating that they cannot be conventional $c\bar{c}$ states. The first state to be
discovered was the $Z^+(4430)$. The $\pi^+\psi(2S)$
invariant mass distribution is shown in Fig.~\ref{belle_z4430}. The observed peak has a statistical
significance of $6.5\sigma$. Its measured properties are $M=4433\pm 4\pm 2$~MeV,
$\Gamma=45^{+18}_{-12}\; ^{+30}_{-13}$~MeV, and
${\cal B}(B^0\to K^{\mp}Z^{\pm})\times {\cal B} (Z^\pm\to \pi^\pm \psi')
=(4.1\pm 1.0\pm1.4)\times 10^{-5}$. The unusual properties of the $Z^+(4430)$ led to the
usual explanations:
\begin{itemize}
\item $[cu][\bar{c}\bar{d}]$ Tetraquark \cite{Maiani:2007wz}
\item $D^*\bar{D}_1(2420)$ Threshold effect \cite{Rosner:2007mu}
\item $D^*\bar{D}_1(2420)$ $J^P=0^-, \; 1^-$ Molecule \cite{Meng:2007fu}. The molecule
explanation predicts that the $Z^+(4430)$ will decay into $D^*\bar{D}^*\pi$ and that
it decays into $\psi(2S) \pi$ via rescattering.
\end{itemize}
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=belle_z4430.eps,width=60mm,clip=}}
\end{center}
\caption{From Belle \cite{:2007wga}. The $M(\pi^+\psi')$ distribution. The shaded histogram shows
the scaled results from the sideband region and the solid curves show the results of the fits.
\label{belle_z4430}}
\end{figure}
The Belle $Z^+(4430)$ observation was followed by a search by the BaBar
collaboration in $B\to K \pi^\pm \psi(2S)$ \cite{:2008nk}. BaBar performed a detailed
analysis of the $K\pi^-$ system, corrected for efficiency, and included $S$, $P$, and $D$ waves
in their analysis. Fig.~\ref{belle_babar1} shows the invariant mass distributions from Belle and
BaBar \cite{:2008nk}. While there appears to be an excess of events in the $Z^+(4430)$ region
in the BaBar data, BaBar finds no conclusive evidence in their data for the $Z^+(4430)$.
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=belle_babar1.eps,width=70mm,clip=}}
\end{center}
\caption{From BaBar Ref.~\cite{:2008nk}. (a) The $\psi(2S)\pi^-$ mass distribution from
Ref.~\cite{:2007wga}; the data points represent the signal region and the shaded histogram
represents the background contribution. (b) shows the corresponding distribution from the BaBar
analysis. The dashed vertical line indicates $M_{\psi(2S)\pi^-}=4.433$~GeV/c$^2$.
\label{belle_babar1}}
\end{figure}
More recently Belle performed a complete Dalitz analysis \cite{:2009da}. Belle confirms the
$Z^+(4430)$ with $M=4443^{+15+17}_{-12-13}$~MeV and $\Gamma = 109^{+86+57}_{-43-52}$~MeV.
The width is larger than the original measurements but the uncertainties are large.
The Belle collaboration has also observed two resonance structures in $\pi^+ \chi_{c1}$ mass
distributions shown in Fig.~\ref{belle_zs} \cite{Mizuk:2008me} with masses
and widths of $M_1=4051 \pm 14 ^{+20}_{-41}$~MeV, $\Gamma_1=82^{+21+47}_{-17-22}$~MeV
and $M_2=4248 ^{+44+180}_{-29-35}$~MeV, $\Gamma_2=177^{+54+316}_{-39-61}$~MeV.
Belle has now found evidence for three charged charmonium-like objects. If confirmed, they
represent clear evidence for some sort of multiquark state, either a molecule or tetraquark.
Confirmation is needed for all three of them.
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=belle_zs.eps,width=60mm,clip=}}
\end{center}
\caption{From Belle Ref.~\cite{Mizuk:2008me}. The $M(\chi_{c1}\pi^+)$ distribution for the
Dalitz plot slice $1.0\hbox{ GeV}^2/c^4 < M^2(K^-\pi^+) < 1.75\hbox{ GeV}^2/c^4$. The
dots with error bars represent data, the solid (dashed) histogram is the Dalitz plot fit
result for the fit model with all known $K^*$ and two (without any) $\chi_{c1}\pi^+$
resonances, and the dotted histograms represent the contribution of the two $\chi_{c1}\pi^+$
resonances.
\label{belle_zs}}
\end{figure}
\subsection{The $X(3872)$}
The $X(3872)$ is probably the most robust of all the charmonium-like objects. It was first
observed by Belle as a peak in $\pi^+\pi^-J/\psi$ in
$B^+\to K^+ \pi^+\pi^-J/\psi$ \cite{Choi:2003ue}.
It was subsequently confirmed by CDF \cite{Acosta:2003zx},
D0 \cite{Abazov:2004kp}, and BaBar \cite{Aubert:2004ns}. The PDG \cite{Amsler:2008zzb}
values for its mass and width are $M=3872.2\pm 0.8$~MeV and $\Gamma=3.0^{+2.1}_{-1.7}$~MeV.
Unlike most other $XYZ$ states there is a fair amount known about the $X(3872)$ properties.
The radiative transition $X(3872)\to \gamma J/\psi$ has been observed by Belle \cite{Abe:2005ix}
and by BaBar \cite{Aubert:2006aj} and more recently $X(3872)\to \psi(2S) \gamma$ by
BaBar \cite{:2008rn}. This implies that the $X(3872)$ has $C=+$. A study of
angular distributions by Belle favours $J^{PC}=1^{++}$ \cite{Abe:2005iya} while a higher
statistics study by CDF allows $J^{PC}=1^{++}$ or $2^{-+}$ \cite{Abulencia:2006ma}.
In the decay $X(3872)\to \pi^+\pi^-J/\psi$ the dipion invariant mass is consistent
with originating from $\rho\to \pi^+\pi^-$ \cite{Abulencia:2006jp}.
The decay $c\bar{c}\to \rho J/\psi$ violates isospin and should be strongly suppressed.
The decay $X(3872)\to D^0\bar{D}^0\pi^0$ has been seen by Belle \cite{Gokhroo:2006bt}
and the decay $X(3872)\to D^0\bar{D}^0 \gamma$ by BaBar \cite{Aubert:2007rva}. These decays imply
that the $X(3872)$ decays predominantly via $D^0\bar{D}^{*0}$. To understand the nature of the
$X(3872)$ we work through the now familiar possibilities and compare the theoretical predictions
for each case to the $X(3872)$ properties.
\subsubsection{Conventional Charmonium}
These possibilities were discussed in
Ref.~\cite{Barnes:2003vb,Eichten:2004uh,Barnes:2005pb,Eichten:2005ga}. The
$1^1D_2$ and the $2^3P_1$ are the only conventional states with the correct quantum numbers
that are close enough in mass to be associated with the $X(3872)$. However, both these
possibilities have problems. Another new state, the $Z(3930)$, is identified with the $2^3P_2$
state, implying that the $2P$ mass is $\sim 3940$~MeV. Identifying the $X(3872)$ with the
$2^3P_1$ implies a spin splitting much larger than would be expected. If the $X$ were the
$1^1D_2(c\bar{c})$, the radiative
transition $1^1D_2 \to \gamma 1^3S_1$ would be a highly suppressed M2 transition so that the
observation of $X(3872)\to \gamma J/\psi$ disfavours identifying the $X(3872)$ as
the $1^1D_2$ state.
\subsubsection{Tetraquark}
This possibility was proposed in Ref.~\cite{Maiani:2004vq}. This scenario predicts additional
nearly degenerate states, including charged states, which have yet to be observed. A high statistics
study by CDF of the $X(3872)$ mass and width tested the hypothesis of two states and finds
$\Delta m < 3.6$~MeV at $95\%$~C.L., with $M=3871.61\pm 0.16 \pm 0.19$~MeV \cite{Aaltonen:2009vj}.
The mass splitting of the
$X(3872)$ states
produced in charged and neutral $B$ decays is consistent with zero. These measurements
disfavour the tetraquark interpretation.
\subsubsection{$D^0\bar{D}^{*0}$ Molecule}
The molecule explanation appears to be the most likely interpretation of the $X(3872)$
\cite{Close:2003sg,Voloshin:2003nt,Swanson:2003tb,Braaten:2005ai}. It is very close to the
$D^0\bar{D}^{*0}$ threshold, so it is quite reasonable that it is
an $S$-wave bound state. One of the early predictions of the molecule interpretation
is that \cite{Swanson:2003tb}:
\begin{equation}
\Gamma(X(3872)\to \rho J/\psi) \simeq \Gamma(X(3872)\to \omega J/\psi)
\end{equation}
so that large isospin violations are expected. On the other hand, the
decays $X(3872)\to \gamma J/\psi$ and $X(3872)\to \gamma \psi(2S)$ \cite{:2008rn}
indicate it has $c\bar{c}$ content. The most likely explanation is that both the $X(3872)$
and $Y(3940)$ have
more complicated structure, consisting of mixing with both $2^3P_1(c\bar{c})$ and
$D^0\bar{D}^{*0}$ components
\cite{Godfrey:2006pd,Danilkin:2009hr,Ortega:2009hj,Matheus:2009vq,Kalashnikova:2009gt}.
This may also explain the unexpectedly large partial width for
$Y(3940)\to J/\psi \omega$ \cite{Aubert:2007vj}.
\subsection{$Y$ States in ISR ($J^{PC}=1^{--}$)}
There are now six ``$Y$'' states seen in $e^+e^-\to \gamma_{ISR} Y$. Because they are seen
in ISR they have $J^{PC}=1^{--}$.
Because of time and space constraints I will only discuss
two of these states; the $Y(4630)$ observed by Belle \cite{Pakhlova:2008vn}
which is one of the newest
$Y$ states and the $Y(4260)$ first observed by BaBar \cite{Aubert:2005rm}
which is one of the oldest.
\subsubsection{$Y(4630)$}
The $Y(4630)$ was seen by the Belle collaboration in
$e^+e^-\to \Lambda_c^+\Lambda_c^- \gamma_{ISR}$ with mass and width
$M=4634^{+8+5}_{-7-8}$~MeV and $\Gamma=92^{+40+10}_{-24-21}$~MeV \cite{Pakhlova:2008vn}. The
$\Lambda_c^+\Lambda_c^-$ mass distribution is shown in Fig.~\ref{belle_y4630}.
There has been some speculation about what it might be. A possible explanation
is that it is a baryon-antibaryon threshold effect. A similar effect is also seen by Belle in
$B\to \Lambda_c^+ \bar{p} \pi^-$ \cite{Abe:2004sr} with a $6.2\sigma$ peak observed at
threshold in the $\Lambda_c^+ \bar{p} $ invariant mass distribution. Other possibilities
put forward are to identify the $Y(4630)$ with the $Y(4660)$, also observed in ISR,
but in the $\pi^+\pi^-\psi'$ final state, or to identify the $Y(4630)$ with the $5^3S_1$
charmonium state.
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=belle_y4630.eps,width=65mm,clip=}}
\end{center}
\caption{From Belle Ref.~\cite{Pakhlova:2008vn}. The $M_{\Lambda^+_c\Lambda^-_c}$ spectrum.
(a) With $\bar{p}$ tag. The solid curve represents the result of the fit, the threshold function
is shown by the dashed curve, and the combinatorial background parametrization is shown by the
dashed-dotted curve. (b) With proton (wrong-sign) tag. Histograms show the normalized
contributions from $\Lambda_c^+$ sidebands.
\label{belle_y4630}}
\end{figure}
\subsubsection{$Y(4260)$}
The $Y(4260)$ was the first of the $Y$ states to be observed. It was first observed
by the BaBar collaboration as an enhancement in the $\pi\pi J/\psi$ final state in
$e^+e^- \to \gamma_{ISR} J/\psi \pi\pi$ \cite{Aubert:2005rm}.
The $\pi^+\pi^- J/\psi$ invariant mass
distribution is shown in Fig.~\ref{babar_y4260} \cite{Aubert:2005rm}.
BaBar found further evidence for the $Y(4260)$ in $B\to K (\pi^+\pi^- J/\psi)$
\cite{Aubert:2005zh} and it was also independently confirmed by
CLEO \cite{Coan:2006rv} and Belle \cite{:2007sj}. Thus, it is the oldest and most robust
of the $Y$ states. The possibilities for the $Y(4260)$ are:
\begin{description}
\item[Conventional Charmonium] The first unaccounted-for $1^{--}$ state is the $\psi(3D)$
with predicted mass $M[\psi(3D)]\sim 4500$~MeV, which is much heavier than the observed
mass. Thus, the $Y(4260)$ appears to represent an overpopulation of the expected $1^{--}$
states. In addition, the absence of open charm production speaks against it being
a conventional $c\bar{c}$ state. There was the suggestion that the $Y(4260)$ could be
identified as the $\psi(4S)$ state \cite{LlanesEstrada:2005hz}, displacing the $\psi(4415)$
from that slot, although the authors acknowledge this fit is somewhat forced.
\item[Tetraquark] Maiani {\it et al.}, \cite{Maiani:2005pe} proposed that the $Y(4260)$
is the first radial excitation of the $[cs][\bar{c}\bar{s}]$. They predict that the
$Y(4260)$ should decay predominantly to $D_s \bar{D}_s$ and predict a full nonet of
related four-quark states.
\item[$D_1D^*$ Bound State] Close and Downum \cite{Close:2009ag}
suggest that two $S$-wave mesons can be bound via pion exchange leading to a spectroscopy
of quasi-molecular states above 4~GeV and a possible explanation of the $Y(4260)$ and
$Y(4360)$. They suggest searches in $D\bar{D}3\pi$ channels as well as in $B$ decays.
\item[$c\bar{c}$ Hybrid] This has been suggested in a number of papers
\cite{Zhu:2005hp,Close:2005iz,Kou:2005gt}. This possibility has a number of attractive features.
The flux tube model \cite{Isgur:1984bm} and lattice QCD \cite{Lacock:1996ny}
predict the lowest $c\bar{c}$ hybrid at $\sim 4200$~MeV. Lattice QCD also suggests searching for other
closed charm modes with $J^{PC}=1^{--}$ such as $J/\psi \eta$, $J/\psi \eta'$,
$\chi_{cJ} \omega, \ldots$ \cite{McNeile:2002az}.
Most models predict that the lowest mass hybrid mesons will
decay to $S+P$-wave meson final states \cite{Kokoski:1985is,Close:1994hc}.
The dominant decay mode is expected to be $D\bar{D}_1(2420)$. The $D_1(2420)$ has a width of
$\sim 300$~MeV to $D^*\pi$, which suggests searching for the $Y(4260)$ in $DD^*\pi$ final states.
Evidence of a large $D\bar{D}_1(2420)$ signal would be strong evidence for the hybrid interpretation.
Note that searches for these decays by Belle find no evidence \cite{Pakhlova:2007fq}.
Another prediction of the hybrid explanation is the existence of partner states. The flux tube model
predicts a multiplet of states nearby in mass
with conventional quantum numbers; $0^{-+}$, $1^{+-}$, $2^{-+}$,
$1^{++}$, $1^{--}$ and states with {\it exotic} quantum numbers $0^{+-}$, $1^{-+}$, $2^{+-}$.
Identifying some of these $J^{PC}$ partners would further validate the hybrid scenario.
\end{description}
\begin{figure}[t]
\begin{center}
\centerline{\epsfig{file=babar_y4260.eps,width=68mm,clip=}}
\end{center}
\caption{From BaBar Ref.~\cite{Aubert:2005rm}. The $\pi^+\pi^- J/\psi$ invariant mass spectrum
in the range 3.8--5.0~GeV/c$^2$ and (inset) over a wider range that includes the $\psi(2S)$.
The points with error bars represent the selected data and the shaded histogram represents
the scaled data from neighbouring $e^+e^-$ and $\mu^+\mu^-$ mass regions. The solid curve
shows the result of the single-resonance fit and the dashed curve represents
the background component.
\label{babar_y4260}}
\end{figure}
\subsubsection{$Y$ States in ISR: What are they?}
There are now six $Y$ states observed in ISR. I've described the possibilities
for the $Y(4260)$, but the same process of elimination applies to all of them. The
measured $Y$ masses don't match the peaks in the $D^{(*)}\bar{D}^{(*)}$ cross sections, and
there does not appear to be room for additional conventional $c\bar{c}$ states in this mass
region unless the predictions are way off.
It has been suggested that many of the $Y$-states are multiquark
states, either tetraquarks or molecules.
Molecules are generally believed to lie just below threshold
and are bound via $S$-wave rescattering and pion exchange. Few of the $Y$-states lie close
to thresholds, so at best this might explain special cases but cannot be a general explanation.
Other problems with the multiquark explanation are discussed below.
The final possibility considered is that some of the $Y$ states are charmonium hybrids. The
$Y(4260)$ is the most robust of all these states and is quite possibly a hybrid. Most of the
$Y$-states, however, need confirmation and
more detailed measurements of their properties.
\section{Summary}
During the past year there have been many new developments in hadron spectroscopy.
In some cases the new results reinforce our understanding in the context
of the constituent quark model
while in other cases they demonstrate that we still have much to learn.
Many hadrons with heavy quarks have been observed and their properties are in good agreement
with theory. The observation of the $\eta_b$ by BaBar in the electromagnetic transitions
$\Upsilon(3S)\to \gamma \eta_b$ and $\Upsilon(2S)\to \gamma \eta_b$ provides further
evidence that QCD-motivated quark models and lattice QCD calculations
are essentially correct. Likewise, the properties
of the ground state baryons with $b$ quarks are well described by the simplest of quark
model assumptions to the point that they can be used as a homework problem in a particle
physics course.
In contrast, it is not at all clear what most of the new charmonium-like $XYZ$ states
are.
There are now something like 16 charmonium-like $XYZ$ states, with new ones, seemingly,
discovered every other day.
A few can be identified as conventional states and a few more,
the $X(3872)$ and $Y(4260)$ for example, are strong candidates for hadronic molecule
and hybrid states.
These latter two are the best understood, having been confirmed by several experiments and
observed in different processes and channels.
It has been suggested that many of the $XYZ$ states are multiquark
states, either tetraquarks or molecules. The problem with the tetraquark explanation is that it
predicts multiplets with other charge states that have not been observed, and
larger widths than have been observed. The possibility that some of the $XYZ$ states
are molecules is likely intertwined with threshold effects that occur when channels are opened up.
Including
coupled-channel effects and the rescattering of charmed meson pairs in the mix can also
shift the masses of $c\bar{c}$ states and induce meson-meson binding, which could help explain the
observed spectrum \cite{Voloshin:2006pz,Close:2009ag,Danilkin:2009hr,vanBeveren:2009fb}.
In my view, a comprehensive study including coupled channels is a necessity if we are to
understand the charmonium spectrum above $D\bar{D}$ threshold.
Many of the $XYZ$ states need independent confirmation and to understand them will require
detailed studies of their properties.
With better
experimental and theoretical understanding of these states, we will be able to say with
more confidence whether any of these new states are non-conventional $c\bar{c}$ states
like molecules, tetraquarks, and hybrids.
Hadron spectroscopy continues to intrigue with a bright future.
There is the potential for many new measurements;
BaBar has considerable unanalyzed data that might hold evidence for new states, Belle and
BESIII have bright futures, and JLab, PANDA, and the LHC promise to produce exciting new physics
in the longer term.
\begin{acknowledgments}
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
\end{acknowledgments}
\subsection{A complete $\mu$-focused calculus} \label{sec:mufoc}
In this section,
we call \emph{asynchronous} (resp.~ \emph{synchronous})
the negative (resp.~ positive) connectives of Definition~\ref{def:connectives}
and the formulas whose top-level connective is asynchronous (resp.~
synchronous).
Moreover, we classify non-negated atoms as synchronous and negated
ones as asynchronous. As with Andreoli's original system, this latter choice
is arbitrary and can easily be changed for a case-by-case
assignment~\cite{miller07cslb,chaudhuri08jar}.
We present the system in Figure~\ref{fig:focused} as a good
candidate for a focused proof system for $\mmumall$.
In addition to asynchronous and synchronous formulas as defined above,
focused sequents can contain \emph{frozen formulas} $P^*$
where $P$ is an asynchronous atom or fixed point.
Frozen formulas may only be found at toplevel in sequents.
We use explicit annotations of the sequents in the style of Andreoli:
in the synchronous phase, sequents have the form
$\;\vdash \Gamma \Downarrow P$;
in the asynchronous phase, they have the form
$\;\vdash \Gamma \Uparrow \Delta$.
In both cases,
$\Gamma$ and $\Delta$ are sets of formulas of disjoint locations,
and $\Gamma$ contains only synchronous or frozen formulas.
The convention on $\Delta$ is a slight departure from Andreoli's
original proof system where $\Delta$ is a list: we shall emphasize
the irrelevance of the order of asynchronous rules without
forcing a particular, arbitrary ordering.
Although we use an explicit freezing annotation,
our treatment of atoms is really the same one as Andreoli's;
the notion of freezing is introduced here as a technical device for
dealing precisely with fixed points,
and we also use it for atoms for a more uniform presentation.
\begin{figure}[htpb]
\begin{center}
$\begin{array}{c}
\mbox{Asynchronous phase}
\\[6pt]
\infer{\vdash\Gamma\Uparrow P\parr Q,\Delta}{\vdash\Gamma\Uparrow P,Q,\Delta}
\quad
\infer{\vdash\Gamma\Uparrow P\with Q, \Delta}{
\vdash\Gamma\Uparrow P,\Delta & \vdash\Gamma\Uparrow Q,\Delta
}
\quad
\infer{\vdash\Gamma\Uparrow a^\perp\t, \Delta}{
\vdash\Gamma,(a^\perp\t)^*\Uparrow\Delta}
\\[6pt]
\infer{\vdash \Gamma \Uparrow \bot, \Delta}{\vdash \Gamma\Uparrow\Delta}
\quad
\infer{\vdash \Gamma \Uparrow \top, \Delta}{}
\quad
\infer{\vdash \Gamma \Uparrow s\neq t, \Delta}{
\{ \vdash \Gamma\theta \Uparrow \Delta\theta :
\theta\in csu(s\stackrel{.}{=} t) \} }
\\[6pt]
\infer{\vdash\Gamma\Uparrow\forall{}x. P x,\Delta}{
\vdash\Gamma\Uparrow P c,\Delta}
\\[6pt]
\infer{\vdash\Gamma\Uparrow \nu{}B\t,\Delta}{
\vdash\Gamma\Uparrow S\t,\Delta &
\vdash\Uparrow BS\vec{x}, S\vec{x}^\bot
}
\quad
\infer{\vdash\Gamma\Uparrow \nu{}B\t,\Delta}{
\vdash\Gamma,(\nu{}B\t)^*\Uparrow\Delta}
\end{array}
$
\vspace{10pt}
$
\begin{array}{c}
\mbox{Synchronous phase}
\\[6pt]
\infer{\vdash\Gamma,\Gamma'\Downarrow P\mathrel{\otimes} Q}{
\vdash\Gamma\Downarrow P &
\vdash \Gamma'\Downarrow Q
}
\quad
\infer{\vdash \Gamma\Downarrow P_0\mathrel{\oplus} P_1}{\vdash\Gamma\Downarrow P_i}
\quad
\infer{\vdash (a^\perp\t)^*\Downarrow a\t}{}
\\[6pt]
\infer{\vdash \Downarrow \mathbf{1}}{} \quad \infer{\vdash \Downarrow t=t}{}
\\[6pt]
\infer{\vdash \Gamma\Downarrow\exists{}x. P x}{\vdash\Gamma\Downarrow P t}
\\[6pt]
\infer{\vdash\Gamma\Downarrow \mu{}B\t}{\vdash\Gamma\Downarrow B(\mu{}B)\t}
\quad
\infer{\vdash (\nu{}\overline{B}\t)^*\Downarrow \mu{}B\t}{}
\end{array}$
\vspace{10pt}
Switching rules (where $P$ is synchronous, $Q$ asynchronous)
\[
\infer{\vdash\Gamma\Uparrow P,\Delta}{\vdash\Gamma,P\Uparrow\Delta}
\quad
\infer{\vdash \Gamma, P \Uparrow}{\vdash \Gamma \Downarrow P}
\quad
\infer{\vdash \Gamma \Downarrow Q}{\vdash \Gamma \Uparrow Q}
\]
\end{center}
\caption{The $\mu$-focused proof-system for $\mmumall$}
\label{fig:focused}
\end{figure}
The $\mu$-focused system extends the usual focused system for MALL.
The rules for equality are not surprising,
the main novelty here is the treatment of fixed points.
Each of the fixed point connectives has two rules in the focused
system: one treats it ``as an atom'' and the other one as an expression with
internal logical structure.
In accordance with Definition~\ref{def:connectives},
$\mu$ is treated during the synchronous phase
and $\nu$ during the asynchronous phase.
Roughly, what the focused system implies is that
if a proof involving a $\nu$-expression
proceeds by coinduction on it, then this coinduction can be done at the
beginning;
otherwise that formula can be ignored in the whole derivation,
except for the $init$ rule.
The latter case is expressed by the rule which moves the greatest fixed
point to the left zone, freezing it.
Focusing on a $\mu$-expression yields two choices: unfolding or applying the
initial rule for fixed points.
If the considered operator is fully synchronous, the focus will never be lost.
For example, if $nat$ is the (fully synchronous) expression
$\mu N. \lambda{}x.~ x=0 \mathrel{\oplus} \exists{}y.~ x=s~y \mathrel{\otimes} N~y$,
then focusing puts a lot of structure on a proof of
$\vdash \Gamma\Downarrow nat~t$:
either $t$ is a closed term representing a natural number and $\Gamma$ is empty,
or $t = s^n t'$ for some $n\geq 0$ and $\Gamma$ only contains $(nat~t')^\bot$.
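To make this concrete, the following sketch (ours, assembled from the rules of
Fig.~\ref{fig:focused}) shows the focused derivation of $\;\vdash \Downarrow nat~(s~0)$;
the focus is never released, every synchronous step is dictated by the term structure,
and the double line abbreviates the analogous $\mu$, $\mathrel{\oplus}$, ${=}$ steps for
$nat~0$:
\[
\infer[\mu]{\vdash \Downarrow nat~(s~0)}{
 \infer[\mathrel{\oplus}]{\vdash \Downarrow s~0=0 \mathrel{\oplus} \exists y.~ s~0=s~y \mathrel{\otimes} nat~y}{
  \infer[\exists]{\vdash \Downarrow \exists y.~ s~0=s~y \mathrel{\otimes} nat~y}{
   \infer[\mathrel{\otimes}]{\vdash \Downarrow s~0=s~0 \mathrel{\otimes} nat~0}{
    \infer[=]{\vdash \Downarrow s~0=s~0}{} &
    \infer=[\mu,\mathrel{\oplus},{=}]{\vdash \Downarrow nat~0}{}}}}}
\]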
We shall now establish the completeness of our focused proof system:
If the unfocused sequent $\;\vdash\Gamma$ is provable then so is
$\;\vdash\Uparrow\Gamma$, and the order of application of asynchronous
rules does not affect provability.
From the perspective of proofs rather than provability,
we are actually going to provide transformations from unfocused to focused
derivations (and back) which can reorder asynchronous rules arbitrarily.
However, this result cannot hold without a simple condition
avoiding pathological uses of infinite branching, as illustrated with
the following counter-example.
The unification problem $s~(f~0)\stackrel{.}{=} f~(s~0)$, where $s$ and $0$ are constants,
has infinitely many solutions $[(\lambda x.~ s^n x) / f]$.
Using this, we build a derivation $\Pi_\omega$
with infinitely many branches, each $\Pi_n$
unfolding a greatest fixed point $n$ times:
\[
\Pi_0 \stackrel{def}{=} \infer[\top]{\vdash \nu p. p, \top}{} \quad\quad
\Pi_{n+1} \stackrel{def}{=} \infer[\nu]{\vdash \nu p. p, \top}{
\infer{\vdash \nu p. p, \top}{\Pi_n} &
\infer[init]{\vdash \mu p. p, \nu p. p}{}} \]
\[ \Pi_\omega \stackrel{def}{=} \infer[\neq]{
f; \vdash s~(f~0) \neq f~(s~0), \nu p. p, \top}{
\Pi_0 & \Pi_1 & \ldots & \Pi_n & \ldots} \]
Although this proof happens to be already in a focused form,
in the sense that focusing annotations can be added in a straightforward
way,
the focusing transformation must also provide a way to change the order
of application of asynchronous rules.
In particular, it must make it possible to permute down the introduction of
the first $\nu p. p$. The only reasonable way to do so is as follows,
expanding $\Pi_0$ into $\Pi_1$ and then pulling down the $\nu$ rule
from each subderivation, changing $\Pi_{n+1}$ into $\Pi_n$:
\[ \Pi_\omega \quad\rightsquigarrow\quad
\infer[\nu]{f; \vdash s~(f~0) \neq f~(s~0), \nu p. p, \top}{
\infer{f; \vdash s~(f~0) \neq f~(s~0), \nu p. p, \top}{\Pi_\omega} &
\infer[init]{\vdash \mu p. p, \nu p. p}{}} \]
This leads to a focusing transformation that may not terminate.
The fundamental problem here is that although each additive
branch only finitely explores the asynchronous formula $\nu p.p$,
the overall use is infinite.
A solution would be to admit infinitely deep derivations,
with which such an infinite balancing process may have a limit.
But our goal here is to develop finite proof representations
(this is the whole point of (co)induction rules)
so we take an opposite approach and require a minimum amount
of finiteness in our proofs.
\begin{definition}[Quasi-finite derivation]
A derivation is said to be quasi-finite if
it is cut-free,
has a finite height
and only uses a finite number of different coinvariants.
\end{definition}
Note that the derivation $\Pi_\omega$ above is not quasi-finite, since the heights of
its subderivations $\Pi_n$ grow without bound.
This condition may seem unfortunate, but it appears to be essential
when dealing with transfinite proof systems involving fixed points. More
precisely, it is related to the choice regarding the introduction of
asynchronous fixed points, be they greatest fixed points in $\mu$-focusing
or least fixed points in $\nu$-focusing.
Note that quasi-finiteness is trivially satisfied
for any cut-free derivation that is finitely branching,
and that any derivation which does not involve the $\neq$ rule
can be normalized into a quasi-finite one.
Moreover,
quasi-finiteness is a natural condition from a practical perspective,
for example in the context of automated or interactive theorem proving,
where $\neq$ is restricted to finitely branching instances anyway.
However, it would be desirable to refine the notion of quasi-finite
derivation in a way that allows cuts and is preserved by cut elimination,
so that quasi-finite proofs could be considered a proper proof fragment.
Indeed, the essential idea behind quasi-finiteness is that
only a finite number of locations are explored in a proof,
and the cut-free condition is only added because cut reductions
do not obviously preserve this.
We conjecture that a proper, self-contained notion of quasi-finite
derivation can be attained,
but leave this technical development to further work.
The core of the completeness proof follows~\cite{miller07cslb}.
This proof technique proceeds by transforming standard derivations
into a form where focused annotations can be added to obtain a focused
derivation. Conceptually, focused proofs are simply special cases
of standard proofs, the annotated sequents of the focused proof system
being a concise way of describing their shape.
The proof transformation proceeds by iterating two lemmas which
perform rule permutations: the first lemma expresses that
asynchronous rules can always be applied first, while the second one
expresses that synchronous rules can be applied in a hereditary fashion
once the focus has been chosen.
The key ingredient of \cite{miller07cslb} is the notion of focalization
graph, analyzing dependencies in a proof and showing that there is always
at least one possible focus.
In order to ease the proof, we shall consider an intermediate
proof system whose rules enjoy a one-to-one correspondence with
the focused rules.
This involves getting rid of the cut, non-atomic axioms,
and also explicitly performing freezing.
\begin{definition}[Freezing-annotated derivation]
The freezing-annotated variant of $\mmumall$\ is obtained by
removing the cut rule,
enriching the sequent structure with an annotation for frozen fixed points
or atoms,
restricting the initial rule to be applied only on frozen asynchronous
formulas,
and adding explicit annotation rules:
\[
\infer{\vdash \freeze{a^\perp\t}, a\t}{}
\quad\quad
\infer{\vdash \freeze{\nu\overline{B}\t}, \mu B \t}{}
\quad\quad
\infer{\vdash \Gamma, \nu B \t}{\vdash \Gamma, \freeze{\nu B \t}}
\quad\quad
\infer{\vdash \Gamma, a^\perp\t}{\vdash \Gamma, \freeze{a^\perp\t}}
\]
Atomic instances of $init$ can be translated into freezing-annotated
derivations:
\[ \infer{\vdash \nu B\t, \mu \overline{B}\t}{}
\quad \longrightarrow \quad
\infer{\vdash \nu B\t , \mu \overline{B}\t }{
\infer{\vdash \freeze{\nu B\t}, \mu \overline{B}\t }{}}
\quad\quad\quad\quad
\infer{\vdash a^\perp\t, a\t}{}
\quad \longrightarrow \quad
\infer{\vdash a^\perp\t , a\t }{
\infer{\vdash \freeze{a^\perp\t}, a\t }{}} \]
Arbitrary instances of $init$ can also be obtained by first expanding them
to rely only on atomic $init$, using Proposition~\ref{def:atomicinit},
and then translating atomic $init$ as shown above.
We shall denote by $init*$ this derived generalized axiom.
Any $\mmumall$\ derivation can be transformed into a freezing-annotated one
by normalizing it and translating $init$ into $init*$.
\end{definition}
The asynchronous freezing-annotated rules (that is,
those whose principal formula is asynchronous) correspond naturally
to asynchronous rules of the $\mu$-focused system.
Similarly, synchronous freezing-annotated rules correspond to
synchronous focused rules, which includes the axiom rule.
The switching rules of the $\mu$-focused system
do not have a freezing-annotated equivalent:
they are just book-keeping devices marking phase transitions.
From now on we shall work on freezing-annotated derivations,
simply calling them derivations.
\subsubsection{Balanced derivations}
In order to ensure that the focalization process terminates, we have to
guarantee that the permutation steps preserve some measure over derivations.
The main problem here comes from the treatment of fixed points,
and more precisely from the fact that there is a choice in the asynchronous
phase regarding greatest fixed points.
We must ensure that a given greatest fixed point formula is always used in
the same way in all additive branches of a proof:
if a greatest fixed point is copied by an additive conjunction or $\neq$,
then it should either be used for coinduction in all branches,
or frozen and used for axiom in all branches.
Otherwise it would not be possible to permute the treatment of the
$\nu$ under that of the $\with$ or $\neq$ while controlling the size of
the transformed derivation.
\begin{definition}[Balanced derivation]
A greatest fixed point occurrence is \emph{used in a balanced way}
if all of its principal occurrences are used consistently:
either they are all frozen or they are all used for coinduction,
with the same coinvariant.
We say that a derivation is \emph{balanced} if it is quasi-finite
and all greatest fixed points occurring in it are used in a balanced way.
\end{definition}
\begin{lemma}\label{lem:invwith}
If $S_0$ and $S_1$ are both coinvariants for $B$
then so is $S_0\mathrel{\oplus} S_1$.
\end{lemma}
\begin{proof}
Let $\Pi_i$ be the derivation of coinvariance for $S_i$.
The proof of coinvariance of $S_0\mathrel{\oplus} S_1$ is as follows:
\[ \infer[\with]{\vdash S_0^\perp\vec{x} \with S_1^\perp\vec{x},
B (S_0\mathrel{\oplus} S_1) \vec{x}}{
\infer{\vdash S_0^\perp\vec{x}, B (S_0\mathrel{\oplus} S_1)\vec{x}}{
\phi_0(\Pi_0)}
&
\infer{\vdash S_1^\perp\vec{x}, B (S_0\mathrel{\oplus} S_1)\vec{x}}{
\phi_1(\Pi_1)}
} \]
The transformed derivations $\phi_i(\Pi_i)$ are obtained by functoriality:
\[ \phi_i(\Pi_i) = \infer[cut]{\vdash S_i^\perp\vec{x}, B (S_0\mathrel{\oplus} S_1)\vec{x}}{
\infer{\vdash S_i^\perp\vec{x}, B S_i \vec{x}}{\Pi_i} &
\infer[B]{\vdash \overline{B} S_i^\perp \vec{x},
B (S_0\mathrel{\oplus} S_1) \vec{x}}{
\infer[\mathrel{\oplus}]{\vdash S_i^\perp \vec{y}, S_0\vec{y}\mathrel{\oplus} S_1\vec{y}}{
\infer[init]{\vdash S_i^\perp\vec{y}, S_i\vec{y}}{}}}} \]
Notice that after the elimination of cuts, the proof of coinvariance
that we built can be larger than the original ones:
this is why this transformation cannot be done as part of
the rule permutation process.
\end{proof}
\begin{lemma} \label{lem:balance}
Any quasi-finite derivation of $\;\vdash\Gamma$
can be transformed into a balanced
derivation of $\;\vdash\Gamma$.
\end{lemma}
\begin{proof}
We first ensure that all coinvariants used for the same (locatively identical)
greatest fixed point are the same.
For each $\nu B$ on which at least one coinduction is performed
in the proof, this is achieved by taking the union of all coinvariants
used in the derivation,
thanks to Lemma~\ref{lem:invwith},
adding to this union the unfolding coinvariant $B (\nu B)$.
Note that quasi-finiteness is needed here to
ensure that we are only combining finitely many coinvariants.
Let $S_{\nu B}$ be the resulting coinvariant,
of the form $S_0 \mathrel{\oplus} \ldots \mathrel{\oplus} S_n \mathrel{\oplus} B (\nu B)$,
and $\Theta_{\nu B}$ be the proof of its coinvariance.
We adapt our derivation by
changing every instance of the $\nu$ rule as follows:
\[ \infer{\vdash \Gamma, \nu B \t}{
\vdash \Gamma, S_i \t &
\infer{\vdash S_i^\perp \vec{x}, B S_i \vec{x}}{\Theta_i}}
\quad\longrightarrow\quad
\infer{\vdash \Gamma, \nu B \t}{
\infer=[\mathrel{\oplus}]{\vdash
\Gamma, S_{\nu B}\t}{
\vdash \Gamma, S_i\t}
&
\infer{\vdash S_{\nu B}^\perp\vec{x}, BS_{\nu B}\vec{x}}{\Theta_{\nu B}}
} \]
It remains to ensure that a given fixed point is either always coinducted on
or always frozen in the derivation.
We shall balance greatest fixed points,
starting with unbalanced fixed points closest to the root,
and potentially unbalancing deeper fixed points in that process,
but without ever introducing unbalanced fixed points that were not initially
occurring in the proof.
Let $\Pi_0$ be the derivation obtained at this point.
We define the degree of a greatest fixed point
to be the maximum distance in the sublocation ordering
to a greatest fixed point sublocation occurring in $\Pi_0$,
$0$ if there is none.
Quasi-finiteness ensures that degrees are finite,
since there are only finitely many locations occurring at toplevel
in the sequents of a quasi-finite derivation.
We shall only consider derivations in which greatest fixed points that
are coinducted on are also coinducted on with the same coinvariant
in $\Pi_0$, and maintain this condition
while transforming any such derivation into a balanced one.
We proceed by induction on
the multiset of the degrees of unbalanced fixed points in the derivation,
ordered using the standard multiset ordering;
note that degrees are well defined for all unbalanced fixed points since they
must also occur in $\Pi_0$.
If there is no unbalanced fixed point, we have a balanced proof.
Otherwise, pick an unbalanced fixed point of maximal degree.
It is frozen in some branches
and coinducted on in others.
We remove all applications of freezing on that fixed point,
which requires adapting axioms\footnote{
Note that instead of the unfolding coinvariant $B(\nu B)$ we could have
used the coinvariant $\nu B$. This would yield a simpler proof,
but that would not be so easy to adapt for $\nu$-focusing in
Section~\ref{sec:nufoc}.
}:
\[ \infer{\vdash \freeze{\nu B \t}, \mu \overline{B} \t}{}
\quad\longrightarrow\quad
\infer[\nu]{\vdash \nu B \t, \mu\overline{B}\t}{
\infer=[\mathrel{\oplus}]{\vdash S_{\nu B}\t,\mu\overline{B}\t}{
\infer[\mu]{\vdash B (\nu B) \t, \mu\overline{B}\t}{
\infer[init*]{\vdash B (\nu B) \t, \overline{B}(\mu\overline{B})\t}{}}}
& \infer{\vdash S^\perp_{\nu B}\vec{x}, BS_{\nu B}\vec{x}}{\Theta_{\nu B}}} \]
The fixed point $\nu B$ is used in a balanced way in the resulting derivation.
Our use of the derived rule $init*$ might have introduced
some new freezing rules on greatest fixed point
sublocations of $B(\nu B)$ or $\overline{B} (\mu \overline{B})$.
Such sublocations, if already present in the proof,
may become unbalanced, but have a smaller degree.
Some new sublocations may also be introduced,
but they are only frozen as required.
The new derivation has a smaller multiset
of unbalanced fixed points, and we can conclude by induction hypothesis.
\end{proof}
Balancing is the most novel part of our focalization process.
This preprocessing is a technical device ensuring
termination in the proof of completeness,
whatever rule permutations are performed.
It should be noted that balancing is often too strong,
and that many focused proofs are indeed not balanced.
For example,
it is possible to obtain unbalanced focused proofs
by introducing an additive conjunction before treating a greatest
fixed point differently in each branch.
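A minimal sketch of such an unbalanced proof (our own illustration, written in the
freezing-annotated style) is the following, where the left additive branch freezes
$\nu p.\, p$ while the right branch coinducts on it with the coinvariant $S=\bot$,
whose coinvariance premise $\;\vdash S^\perp, BS\;$ is $\;\vdash \mathbf{1}, \bot$:
\[
\infer[\with]{\vdash \nu p.\, p, (\mu p.\, p) \with \top}{
 \infer{\vdash \nu p.\, p, \mu p.\, p}{
  \infer[init]{\vdash \freeze{\nu p.\, p}, \mu p.\, p}{}}
 &
 \infer[\nu]{\vdash \nu p.\, p, \top}{
  \infer[\top]{\vdash \bot, \top}{} &
  \infer=[\bot,\mathbf{1}]{\vdash \mathbf{1}, \bot}{}}}
\]
Balancing would instead coinduct in both branches on a combined coinvariant,
as in the proof of Lemma~\ref{lem:balance}.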
\subsubsection{Focalization graph}
We shall now present the notion of focalization graph
and its main properties~\cite{miller07cslb}.
As we shall see, their adaptation to $\mmumall${}
is trivial\footnote{
Note that we do not use the same notations:
in \cite{miller07cslb}, ${\prec}$ denotes the subformula relation
while it represents accessibility in the focalization graph in our case.
}.
\begin{definition}
The \emph{synchronous trunk} of a derivation is its largest prefix
containing only applications of synchronous rules.
It is a potentially open subderivation having the same conclusion sequent.
The open sequents of the synchronous trunk (which are conclusions
of asynchronous rules in the full derivation) and its initial sequents
(which are conclusions of $init$, $\mathbf{1}$ or ${=}$)
are called \emph{leaf sequents} of the trunk.
\end{definition}
\begin{definition}
We define the relation $\prec$ on the formulas of
the base sequent of a derivation $\Pi$:
$P\prec Q$ if and only if
there exists $P'$, asynchronous subformula\footnote{
This does mean subformula in the locative sense,
in particular with (co)invariants being subformulas of
the associated fixed points.
} of $P$,
and $Q'$, synchronous subformula of $Q$,
such that $P'$ and $Q'$ occur in the same
leaf sequent of the synchronous trunk of $\Pi$.
\end{definition}
The intended meaning of $P\prec Q$ is that we must focus on $P$ before $Q$.
Therefore, the natural question is the existence of minimal elements for that
relation, equivalent to its acyclicity.
\begin{proposition} \label{prop:mini_subform}
If $\Pi$ starts with a synchronous rule,
and $P$ is minimal for $\prec$ in $\Pi$,
then so are its subformulas in their respective subderivations.
\end{proposition}
\begin{proof}
There is nothing to do
if $\Pi$ simply consists of an initial rule.
In all other cases
($\mathrel{\otimes}$, $\mathrel{\oplus}$, $\exists$ and $\mu$)
let us consider any subderivation $\Pi'$ in which
the minimal element $P$ or one of its subformulas $P'$ occurs
| there will be exactly one such $\Pi'$, except in the case of a tensor
applied on $P$.
The other formulas occurring in the conclusion of $\Pi'$
either occur in the conclusion of $\Pi$ or are subformulas
of the principal formula occurring in it.
This implies that a $Q\prec P$ or $Q\prec P'$ in $\Pi'$
would yield a $Q'\prec P$ in $\Pi$,
which contradicts the minimality hypothesis.
\end{proof}
\begin{lemma}\label{lem:mini}
The relation $\prec$ is acyclic.
\end{lemma}
\begin{proof}
We proceed by induction on the derivation $\Pi$.
If it starts with an asynchronous rule or an initial synchronous rule,
\emph{i.e.,}\ its conclusion sequent is a leaf of its synchronous trunk,
acyclicity is obvious since $P\prec Q$ iff $P$ is asynchronous
and $Q$ is synchronous.
If $\Pi$ starts with $\mathrel{\oplus}$, $\exists$ or $\mu$,
the relations $\prec$ in $\Pi$ and its subderivation
are isomorphic (only the principal formula changes)
and we conclude by induction hypothesis.
In the case of $\mathrel{\otimes}$,
say $\Pi$ derives $\,\vdash\Gamma,\Gamma',P\mathrel{\otimes} P'$,
only the principal formula $P\mathrel{\otimes} P'$ has subformulas in both premises
$\,\vdash\Gamma,P$ and $\,\vdash\Gamma',P'$.
Hence there cannot be any $\prec$ relation between a formula of $\Gamma$
and one of $\Gamma'$.
In fact, the graph of $\prec$ in the conclusion is obtained by taking
the union of the graphs in the premises
and merging $P$ and $P'$ into $P\mathrel{\otimes} P'$.
Suppose, \emph{ab absurdo}, that $\prec$ has cycles in $\Pi$,
and consider a cycle of minimal length.
It cannot involve nodes from both $\Gamma$ and $\Gamma'$:
since only $P\mathrel{\otimes} P'$ connects those two components,
the cycle would have to go twice through it,
which contradicts the minimality of the cycle's length.
Hence the cycle must lie within
$(\Gamma,P\mathrel{\otimes} P')$ or $(\Gamma',P\mathrel{\otimes} P')$
but then there would also be a cycle in the corresponding premise
(obtained by replacing $P\mathrel{\otimes} P'$ by its subformula)
which is absurd by induction hypothesis.
\end{proof}
\subsubsection{Permutation lemmas and completeness}
We are now ready to describe the transformation of a balanced derivation
into a $\mu$-focused derivation.
\begin{definition}
We define the \emph{reachable locations} of a balanced
derivation $\Pi$, denoted by $|\Pi|$, by taking
the finitely many locations occurring at toplevel in sequents of $\Pi$,
ignoring coinvariance subderivations,
and saturating this set by adding the sublocations of
locations that do not correspond to fixed point expressions.
\end{definition}
It is easy to see that $|\Pi|$ is a finite set.
Hence $|\Pi|$, ordered by strict inclusion, is a well-founded measure
on balanced derivations.
Let us illustrate the role of reachable locations with the following
derivations:
\[ \infer[\nu]{\vdash \nu B \t, a \parr b, \top}{
\infer[\top]{\vdash S \t, a\parr b, \top}{} &
\infer{\vdash S^\perp\vec{x}, BS\vec{x}}{\vdots}}
\quad\quad\quad
\infer[\parr]{\vdash \nu B \t, a \parr b, \top}{
\infer[\top]{\vdash \nu B \t, a, b, \top}{}} \]
For the first derivation, the set of reachable locations is
$\{ \nu B \t, a \parr b, \top, S\t, a, b \}$.
For the second one, it is $\{ \nu B \t, a\parr b, \top, a, b \}$.
As we shall see, the focalization process may involve transforming
the first derivation into the second one, thus loosing reachable locations,
but it will never introduce new ones.
In that process, the asynchronous rule $\parr$ is ``permuted'' under
the $\top$, \emph{i.e.,}\ the application of $\top$ is delayed by the insertion
of a new $\parr$ rule.
This limited kind of proof expansion does not affect reachable locations.
A more subtle case is that of ``permuting'' a fixed point rule under $\top$.
This will never happen for $\mu$. For $\nu$, the permutation
will be guided by the existing reachable locations:
if $\nu$ currently has no reachable sublocation it will be frozen,
otherwise it will be coinducted on,
leaving reachable sublocations unchanged in both cases.
The set of reachable locations is therefore
a skeleton that guides the focusing process,
and a measure which ensures its termination.
\begin{lemma} \label{lem:inst}
For any balanced derivation $\Pi$,
$\Pi\theta$ is balanced and $|\Pi\theta|\subseteq|\Pi|$.
\end{lemma}
\begin{proof}
By induction on $\Pi$, following the definition of $\Pi\theta$.
The preservation of balancing and reachable locations is obvious since
the rule applications in $\Pi\theta$ are the same as in $\Pi$,
except for branches that are erased by $\theta$
(which can lead to a strict inclusion of reachable locations).
\end{proof}
\begin{lemma}[Asynchronous permutability] \label{lem:async}
Let $P$ be an asynchronous formula.
If $\;\vdash \Gamma, P$ has a balanced derivation $\Pi$,
then it also has a balanced derivation $\Pi'$ where $P$ is principal in the
conclusion sequent, and such that $|\Pi'|\subseteq|\Pi|$.
\end{lemma}
\begin{proof}
Let $\Pi_0$ be the initial derivation.
We proceed by induction on its subderivations,
transforming them while respecting the balanced use of fixed points
in $\Pi_0$.
If $P$ is already principal in the conclusion, there is nothing to do.
Otherwise, by induction hypothesis
we make $P$ principal in the immediate subderivations where it occurs,
and we shall then permute the first two rules.
If the first rule ${\cal R}$
is $\top$ or a non-unifiable instance of $\neq$, there is no subderivation,
and \emph{a fortiori} no subderivation where $P$ occurs.
In that case we apply an introduction rule for $P$,
followed by ${\cal R}$ in each subderivation.
This is obvious in the case of $\parr$, $\with$, $\forall$, $\bot$,
$\neq$ and $\top$ (note that there may not be any subderivation in the last
two cases, in which case the introduction of $P$ replaces ${\cal R}$).
If $P$ is a greatest fixed point that is coinducted on in $\Pi_0$,
we apply the coinduction rule with the coinvariance premise taken in $\Pi_0$,
followed by ${\cal R}$.
Otherwise, we freeze $P$ and apply ${\cal R}$.
By construction, the resulting derivation is balanced in the same way as
$\Pi_0$, and its reachable locations are contained in $|\Pi_0|$.
In all other cases we permute the introduction of $P$ under the first rule.
The permutations of MALL rules are simple. We shall not detail them, but
note that if $P$ is $\top$ or a non-unifiable $u\neq v$, permuting
its introduction under the first rule erases that rule.
The permutations involving freezing rules are obvious,
and most of the ones involving fixed points, such as ${\mathrel{\otimes}}/\nu$,
are not surprising:
\[\infer{\vdash \Gamma,\Gamma', P\mathrel{\otimes} P', \nu{}B\t}{
\infer{\vdash\Gamma,P,\nu{}B\t}{
\vdash\Gamma,P,S\t &
\vdash BS\vec{x} ,S\vec{x}^\bot
} &
\vdash\Gamma',P'
}
\quad \longrightarrow \quad
\infer{\vdash \Gamma,\Gamma',P\mathrel{\otimes} P', \nu{}B\t}{
\infer{\vdash \Gamma,\Gamma',P\mathrel{\otimes} P', S\t}{
\vdash \Gamma,P,S\t &
\vdash \Gamma',P'}
&
\vdash BS\vec{x}, S\vec{x}^\bot
}
\]
The ${\with}/\nu$ and ${\neq}/\nu$ permutations
rely on the fact that the subderivations obtained by induction hypothesis
are balanced in the same way,
with one case for freezing in all additive branches
and one case for coinduction in all branches:
\disp{
\infer{\vdash\Gamma,P\with P', \nu{}B\t}{
\infer{\vdash\Gamma,P,\nu{}B\t}{
\infer{\vdash\Gamma,P,S\t}{\Pi} &
\infer{\vdash BS\vec{x}, S\vec{x}^\bot\vphantom{\t}}{\Theta}
} &
\infer{\vdash\Gamma,P',\nu{}B\t}{
\infer{\vdash\Gamma,P',S\t}{\Pi'} &
\infer{\vdash BS\vec{x}, S\vec{x}^\bot\vphantom{\t}}{\Theta}
}
}
}{
\infer{\vdash\Gamma,P\with P', \nu{}B\t}{
\infer{\vdash\Gamma,P\with P', S\t}{
\infer{\vdash \Gamma, P, S\t}{\Pi} &
\infer{\vdash \Gamma, P', S\t}{\Pi'}}
&
\infer{\vdash BS\vec{x}, (S\vec{x})^\bot\vphantom{\t}}{\Theta}}
}
Another non-trivial case is ${\mathrel{\otimes}}/{\neq}$ which makes use of
Lemma~\ref{lem:inst}: %
\disp{
\infer{\vdash \Gamma,\Gamma',P\mathrel{\otimes} Q, u\neq v}{
\infer{\vdash \Gamma,P,u\neq v}{
\Set{\infer{\vdash (\Gamma,P)\sigma}{\Pi_\sigma}}{\sigma\in csu(u\stackrel{.}{=} v)}} &
\infer{\vdash \Gamma',Q}{\Pi'}}
}{
\infer{\vdash \Gamma,\Gamma',P\mathrel{\otimes} Q, u\neq v}{
\Set{
\infer{\vdash (\Gamma,\Gamma',P\mathrel{\otimes} Q)\sigma}{
\infer{\vdash (\Gamma,P)\sigma}{\Pi_\sigma} &
\infer{\vdash (\Gamma',Q)\sigma}{\Pi'\sigma}}
}{\sigma\in csu(u\stackrel{.}{=} v)}
}
} %
A simple inspection shows that in each case,
the resulting derivation is balanced in the same way as $\Pi_0$,
and does not have any new reachable location;
the set of reachable locations may strictly decrease only
upon proof instantiation in ${\mathrel{\otimes}}/{\neq}$,
or when permuting $\top$ and trivial instances of $\neq$ under
other rules.
\end{proof}
\begin{lemma}[Synchronous permutability] \label{lem:sync}
Let $\Gamma$ be a sequent of synchronous and frozen formulas.
If $\;\vdash\Gamma$
has a balanced derivation $\Pi$ in which $P$ is minimal for $\prec$
then it also has a balanced derivation $\Pi'$ such that
$P$ is minimal and principal in the conclusion sequent of $\Pi'$,
and $|\Pi'|=|\Pi|$.
\end{lemma}
\begin{proof}
We proceed by induction on the derivation.
If $P$ is already principal, there is nothing to do.
Otherwise, since the first rule must be synchronous,
$P$ occurs in a single subderivation.
We can apply our induction hypothesis on that subderivation:
its conclusion sequent still cannot contain any asynchronous formula by
minimality of $P$ and,
by Proposition~\ref{prop:mini_subform}, $P$ is still minimal in it.
We shall now permute the first two rules, which are both synchronous.
The permutations of synchronous MALL rules are simple.
As for $\mathbf{1}$, there is no permutation involving $=$.
The permutations for $\mu$ follow the same geometry as those for $\exists$
or $\mathrel{\oplus}$. For instance, ${\mathrel{\otimes}}/{\mu}$ is as follows:
\disp{
\infer[\mathrel{\otimes}]{\vdash \Gamma, \Gamma', P\mathrel{\otimes} P', \mu B \t}{
\vdash \Gamma,P &
\infer[\mu]{\vdash \Gamma',P',\mu B\t}{\vdash \Gamma',P', B(\mu B)\t}}
}{
\infer[\mu]{\vdash \Gamma, \Gamma', P\mathrel{\otimes} P', \mu B \t}{
\infer[\mathrel{\otimes}]{\vdash \Gamma, \Gamma', P\mathrel{\otimes} P', B(\mu B) \t}{
\vdash \Gamma,P &
\vdash \Gamma',P', B(\mu B)\t}}
}
All those permutations preserve $|\Pi|$.
Balancing and minimality are obviously preserved, respectively
because asynchronous rule applications and
the leaf sequents of the synchronous trunk are left unchanged.
\end{proof}
\begin{theorem}
The $\mu$-focused system is sound and complete with respect to $\mmumall$:
If $\;\vdash\Uparrow\Gamma$ is provable, then $\;\vdash\Gamma$
is provable in $\mmumall$.
If $\;\vdash\Gamma$ has a quasi-finite $\mmumall$\ derivation,
then $\;\vdash\Uparrow\Gamma$ has a (focused) derivation.
\end{theorem}
\begin{proof}
For soundness, we observe that an unfocused derivation can be obtained
simply from a focused one by erasing focusing annotations
and removing switching rules
($\;\vdash\Delta\Uparrow\Gamma$ gives $\;\vdash\Delta,\Gamma$ and
$\;\vdash\Gamma\Downarrow P$ gives $\;\vdash \Gamma,P$).
To prove completeness, we first obtain a balanced derivation using
Lemma~\ref{lem:balance}. Then, we use permutation lemmas to reorder rules
in the freezing-annotated derivation so that we
can translate it to a $\mu$-focused derivation.
Formally, we first use an induction on the height of the derivation.
This allows us to assume that coinvariance proofs can be focused,
which will be preserved since those subderivations are left untouched
by the following transformations.
Then, we prove simultaneously the following two statements:
\begin{enumerate}
\item
If $\;\vdash\Gamma,\Delta$ has a balanced derivation $\Pi$,
where $\Gamma$ contains only synchronous and frozen formulas,
then $\;\vdash\Gamma\Uparrow\Delta$ has a derivation.
\item
If $\vdash\Gamma,P$ has a balanced derivation $\Pi$
in which $P$ is minimal for ${\prec}$,
and there is no asynchronous formula in its conclusion,
then there is a focused derivation of $\vdash\Gamma\Downarrow P$.
\end{enumerate}
We proceed by well-founded induction on $|\Pi|$
with a sub-induction on the number of non-frozen formulas in the
conclusion of $\Pi$.
Note that (1) can rely on (2) for the same $|\Pi|$ but
(2) only relies on strictly smaller instances of (1) and (2).
\begin{enumerate}
\item
If there is an asynchronous formula, pick one \emph{arbitrarily}, say $P$,
and apply Lemma \ref{lem:async} to make it principal in the first rule.
The subderivations of the obtained proof can be focused,
either by the outer induction in the case of coinvariance proofs,
or by induction hypothesis (1) for the other subderivations:
if the first rule is a freezing, then the reachable locations of the
subderivation and the full derivation are the same, but there is one
less non-frozen formula;
with all other rules, the principal location is consumed
and reachable locations strictly decrease.
Finally, we obtain the full focused derivation by composing those
subderivations using the focused equivalent of the rule applied on $P$.
When there is no asynchronous formula left, we have shown
in Lemma~\ref{lem:mini} that there is a minimal synchronous formula $P$
in $\Gamma,\Delta$.
Let $\Gamma'$ denote $\Gamma,\Delta$ without $P$.
Using switching rules,
we build the derivation of $\vdash\Gamma\Uparrow\Delta$
from $\vdash\Gamma'\Downarrow P$,
the latter derivation being obtained by (2) with $\Pi$ unchanged.
\item
Given such a derivation,
we apply Lemma~\ref{lem:sync} to make the formula $P$ principal.
Each of its subderivations has strictly less reachable locations,
and a conclusion of the form $\;\vdash\Gamma'', P'$
where $P'$ is a subformula of $P$
that is still minimal by Proposition~\ref{prop:mini_subform}.
For each of those we build a focused derivation of
$\;\vdash\Gamma''\Downarrow P'$:
if the subderivation still has no asynchronous formula in its conclusion,
we can apply induction hypothesis (2);
otherwise $P'$ is asynchronous by minimality
and we use the switching rule releasing focus on $P'$,
followed by a derivation of $\vdash\Gamma''\Uparrow P'$
obtained by induction hypothesis (1).
Finally, we build the expected focused derivation from those
subderivations by using the focused equivalent of the
synchronous freezing-annotated rule applied on $P$.
\end{enumerate}
\vspace{-0.6cm}\end{proof}
In addition to a proof of completeness, we have actually defined
a transformation that turns any unfocused proof into a focused one.
This process is in three parts:
first, balancing a quasi-finite unfocused derivation;
then, applying rule permutations on unfocused balanced derivations;
finally, adding focusing annotations to obtain a focused proof.
The core permutation process allows asynchronous rules to be reordered
arbitrarily, establishing that, from the proof-search viewpoint,
this phase consists of inessential non-determinism as usual,
except for the choice concerning greatest fixed points.
In the absence of fixed points, balancing disappears,
and the core permutation process is known to preserve the essence of
proofs, \emph{i.e.,}\ the resulting derivation behaves the same as the original
one with respect to cut elimination.
A natural question is whether our process enjoys the same property.
This is not a trivial question,
because of the merging of coinvariants performed during balancing
and, to a lesser extent, because of the unfoldings also performed
in that process.
We conjecture that those new transformations, which are essentially
loop fusions and unrollings, also preserve the
cut elimination behavior of proofs.
A different proof technique for establishing completeness
consists in focusing a proof by cutting it against focused
identities~\cite{laurent04unp,chaudhuri08jar}.
The preservation of the essence of proofs is thus an immediate
corollary of that method.
However, the merging of coinvariants cannot be performed through
cut elimination, so this proof technique (alone) cannot be used
in our case.
\subsection{The $\nu$-focused system}
\label{sec:nufoc}
While the classification of $\mu$ as synchronous and $\nu$ as
asynchronous is rather satisfying and coincides with several other observations,
that choice does not seem to be forced from the focusing point of view alone.
After all, the $\mu$ rule also commutes with all other rules.
It turns out that one can design a $\nu$-focused system
treating $\mu$ as asynchronous and $\nu$ as synchronous,
and still obtain completeness.
That system is obtained from the previous one by changing only
the rules working on fixed points:
\[ \renewcommand\arraystretch{2}\begin{array}{cp{0.3cm}c}
\infer{\vdash\Gamma\Uparrow\mu{}B\t,\Delta}{
\vdash\Gamma\Uparrow B(\mu{}B)\t,\Delta}
& &
\infer{\vdash\Gamma\Uparrow\mu{}B\t,\Delta}{
\vdash\Gamma,(\mu{}B\t)^*\Uparrow\Delta}
\\
\infer{\vdash\Gamma\Downarrow\nu{}B\t}{
\vdash\Gamma\Downarrow S\t & \vdash\Uparrow BS\vec{x}, (S\vec{x})^\bot}
& &
\infer{\vdash(\mu\overline{B}\t)^*\Downarrow\nu{}B\t}{}
\end{array} \]
Note that a new asynchronous phase must start in the coinvariance premise:
asynchronous connectives in $BS\vec{x}$ or $(S\vec{x})^\perp$ might have to be
introduced before a focus can be picked.
For example, if $B$ is $(\lambda p.~ a^\perp \parr \bot)$ and
$S$ is $a^\perp$, one cannot focus on $S^\perp$ immediately
since $a^\perp$ is not yet available for applying the $init$;
conversely, if $B$ is $(\lambda p.~ a)$ and $S$ is
$a\mathrel{\otimes}\mathbf{1}$, one cannot focus on $BS$ immediately.
\begin{theorem}
The $\nu$-focused system is sound and complete with respect to $\mmumall$:
If $\;\vdash\Uparrow\Gamma$ is provable, then $\;\vdash\Gamma$
is provable in $\mmumall$.
If $\;\vdash\Gamma$ has a quasi-finite $\mmumall$\ derivation,
then $\;\vdash\Uparrow\Gamma$ has a (focused) derivation.
\end{theorem}
\begin{proof}[Proof sketch]
The proof follows the same argument as for the $\mu$-focused system.
We place ourselves in a logic with explicit freezing annotations
for atoms and least fixed points,
and define balanced annotated derivations, requiring
that any instance of a least fixed point is used consistently throughout
a derivation, either always frozen or always unfolded;
together with the constraint on its sublocations, this means that
a least fixed point has to be unfolded the same number of times in
all (additive) branches of a derivation.
We then show that any quasi-finite annotated derivation can be balanced;
the proof of Lemma~\ref{lem:balance} can be adapted easily.
Finally, balanced derivations can be transformed into focused
derivations using permutations: the focalization graph technique
extends trivially, the new asynchronous permutations involving the
$\mu$ rule are simple thanks to balancing, and the new synchronous
permutations involving the $\nu$ rule are trivial.
\end{proof}
This flexibility in the design of a focusing system is unusual.
It is not of the same nature as the arbitrary bias assignment that
can be used in Andreoli's system: atoms are non-canonical, and the bias
can be seen as a way of indicating the synchrony of the formula
that a given atom might be instantiated with. But our fixed points
have a fully defined logical meaning: they are canonical.
The flexibility highlights the fact that focusing is a somewhat shallow
property, accounting for local rule permutability
independently of deeper properties such as positivity.
Although we do not see any practical use for such flexibility,
it is not excluded that one will be discovered in the future,
as happened with the arbitrary bias assignment on atoms in Andreoli's
original system.
It is not possible to treat both least and greatest fixed points
as asynchronous. Besides creating an unclear situation regarding $init$,
this would require balancing both kinds of fixed points, which is
impossible. In $\mu$-focusing, balancing greatest fixed points unfolds
least fixed points as a side effect, which is harmless since there is
no balancing constraint on those. The situation is symmetric in
$\nu$-focusing. But if both least and greatest fixed points
have to be balanced, the two unfolding processes interfere
and may not terminate anymore.
It is nevertheless possible to consider mixed bias assignments
for fixed point formulas, if the $init$ rule is restricted accordingly.
We would consider two logically identical variants
of each fixed point: $\mu^+$ and $\nu^+$ being treated synchronously,
$\mu^-$ and $\nu^-$ asynchronously, and the axiom rule would be
restricted to dual fixed points of opposite bias:
\[ \infer{\vdash (\mu B\t)^+, (\nu \overline{B} \t)^-}{} \quad
\infer{\vdash (\nu B\t)^+, (\mu \overline{B} \t)^-}{} \]
This restriction allows the balancing of $\nu^-$ and $\mu^-$ to be
performed simultaneously, without interference. Further, we conjecture
that a sound and complete focused proof system for that logic would be
obtained by superposing
the $\mu$-focused system for $\mu^+$, $\nu^-$ and the $\nu$-focused
system for $\mu^-$, $\nu^+$.
\subsection{Application to $\mmuLJL$} \label{sec:foc_mulj}
The examples of Section~\ref{sec:mumall_examples} showed that despite
its simplicity and linearity, $\mmumall$\ can be related to a more
conventional logic.
In particular we are interested in drawing some connections with
$\mmuLJ$~\cite{baelde08phd},
the extension of LJ with least and greatest fixed points.
In the following, we show a simple first step to this program,
in which we capture a rich fragment of $\mmuLJ${}
even though $\mmumall$\ does not have exponentials.
In this section, we make use of the properties of negative formulas
(Definition~\ref{def:connectives}), which has two important consequences:
we shall use the $\mu$-focused system, not the
alternative $\nu$-focused one, since the latter does not agree with this
classification;
moreover, we shall work in a fragment of $\mmumall$\ without atoms,
since atoms do not have any polarity.
We have observed (Proposition~\ref{prop:struct}) that structural rules are
admissible for negative formulas of $\mmumall$.
This property allows us to obtain a
faithful encoding of a fragment of $\mmuLJ$\ in $\mmumall${}
despite the absence of exponentials.
The encoding must be organized so that formulas appearing on
the left-hand side of intuitionistic sequents can be encoded positively
in $\mmumall$.
The only connectives allowed to appear negatively shall thus be
$\wedge$, $\vee$, $=$, $\mu{}$ and $\exists$.
Moreover, the encoding must commute with negation,
in order to translate the (co)induction rules correctly.
This leaves no choice in the following design.
\begin{definition}[$\H$, ${\cal G}$, $\mmuLJL$]
The fragments $\H$ and ${\cal G}$ are given by the following grammar:
{\allowdisplaybreaks\begin{eqnarray*}
{\cal G} &::=& {\cal G}\wedge {\cal G} \| {\cal G} \vee {\cal G} \| s=t \| \exists x. {\cal G} x \|
\mu{}(\lambda{}p\lambda\vec{x}.{\cal G} p \vec{x})\t \| p \t
\\
&\|& \forall{}x. {\cal G} x \| \H \supset {\cal G} \|
\nu{}(\lambda{}p\lambda\vec{x}.{\cal G} p \vec{x})\t \\
\H &::=& \H\wedge \H \| \H \vee \H \| s=t \|
\exists{}x. \H x \|
\mu{}(\lambda{}p\lambda\vec{x}.\H p \vec{x})\t \| p \t
\end{eqnarray*}}
The logic $\mmuLJL$\ is the restriction of $\mmuLJ$\ to sequents
where all hypotheses are in the fragment $\H$,
and the goal is in the fragment ${\cal G}$.
This implies a restriction of induction and coinduction rules to
(co)invariants in $\H$.
Formulas in $\H$ and ${\cal G}$ are translated in $\mmumall$\ as follows:
\[ \begin{array}{rcl}
\enc{P\wedge Q} &\stackrel{def}{=}& \enc{P}\mathrel{\otimes}\enc{Q} \\
\enc{P\vee Q} &\stackrel{def}{=}& \enc{P}\mathrel{\oplus}\enc{Q} \\
\enc{s=t} &\stackrel{def}{=}& s=t \\
\enc{\exists x.Px} &\stackrel{def}{=}& \exists x. \enc{Px} \\
\enc{\mu{}B\t} &\stackrel{def}{=}& \mu\enc{B}\t \\
\end{array} \qquad
\begin{array}{rcl}
\enc{\forall x.Px} &\stackrel{def}{=}& \forall x. \enc{Px} \\
\enc{\nu{}B\t} &\stackrel{def}{=}& \nu\enc{B}\t \\
\enc{P\supset Q} &\stackrel{def}{=}& \enc{P} \multimap \enc{Q} \\
\enc{\lambda p\lambda\vec{x}. B p \vec{x}} &\stackrel{def}{=}&
\lambda p\lambda\vec{x}.\enc{B p \vec{x}} \\
\enc{p\t} &\stackrel{def}{=}& p\t
\end{array} \]
\end{definition}
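For instance, writing the natural numbers in $\H$ with intuitionistic
connectives, the encoding yields exactly the fixed point $nat$ of
Example~\ref{example:nat}:
\[ \enc{\mu(\lambda{}N\lambda{}x.~ x=0 \vee \exists{}y.~ x=s~y \wedge N~y)}
\;=\;
\mu(\lambda{}N\lambda{}x.~ x=0 \mathrel{\oplus} \exists{}y.~ x=s~y \mathrel{\otimes} N~y) \]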
For reference, the rules of $\mmuLJL$\ can be obtained simply from
the rules of the focused system presented in Figure~\ref{fig:muLJL},
by translating $\Gamma;\Gamma'\vdash P$ into $\Gamma,\Gamma'\vdash P$,
allowing both contexts to contain any $\H$ formula
and reading them as sets to allow contraction and weakening.
\begin{proposition} \label{prop:01}
Let $P$ be a ${\cal G}$ formula, and $\Gamma$ a context of $\H$ formulas.
Then $\Gamma\vdash P$ has a quasi-finite $\mmuLJL$\ derivation if and only if
$\;\vdash [\Gamma]^\perp, [P]$ has a quasi-finite $\mmumall$\ derivation,
under the restrictions that (co)invariants
in $\mmumall$\ are of the form $\lambda\vec{x}.~ [S\vec{x}]$ for $S\vec{x}\in\H$.
\end{proposition}
\begin{proof}
The proof transformations are simple and compositional.
The induction rule corresponds to the $\nu$ rule for $(\mu{}[B]\t)^\bot$,
the proviso on invariants allowing the translations:
\[ \infer{\Gamma, \mu B \t \vdash G}{
\Gamma,S\t\vdash G &
B S \vec{x} \vdash S\vec{x}}
\quad \longleftrightarrow \quad
\infer{\vdash [\Gamma]^\perp, \nu \overline{[B]} \t, [G]}{
\vdash [\Gamma]^\perp, [S]^\perp\t, [G] &
\vdash \overline{[B]}[S]^\perp \vec{x}, [S]\vec{x}}
\]
Here, $[S]$ stands for $\lambda\vec{x}.~ [S\vec{x}]$,
and the validity of the translation relies on the fact that
$\overline{[B]}[S]^\perp\vec{x}$ is the same as $[BS\vec{x}]^\perp$.
Note that $BS$ belongs to $\H$ whenever both $S$ and $B$ are in $\H$,
meaning that for any $p$ and $\vec{x}$, $Bp\vec{x} \in \H$.
The coinduction rule is treated symmetrically,
except that in this case $B$ can be in ${\cal G}$:
\[ \infer{\Gamma \vdash \nu B \t}{\Gamma \vdash S \t &
S\vec{x}\vdash BS\vec{x}}
\quad\longleftrightarrow\quad
\infer{\vdash \enc{\Gamma}^\perp, \nu\enc{B}\t}{
\vdash \enc{\Gamma}^\perp, \enc{S}\t &
\vdash \enc{S}^\perp\vec{x}, \enc{B}\enc{S}\vec{x}} \]
In order to restore the additive behavior of some intuitionistic
rules (\emph{e.g.}, $\wedge{}R$) and translate the structural rules,
we can contract and weaken the negative $\mmumall$\ formulas dual
to the encodings of $\H$ formulas.
\end{proof}
Linear logic provides an appealing proof theoretic setting because of
its emphasis on dualities and of its clear separation of concepts
(additive vs.~{}multiplicative, asynchronous vs.~{}synchronous). Our experience
is that $\mmumall${} is a good place to study focusing in the presence of
least and greatest fixed point connectives. To get similar results for
$\mmuLJ$, one can either work from scratch entirely
within the intuitionistic framework or use an encoding into linear logic.
Given a mapping from intuitionistic to linear logic, and a complete focused
proof system for linear logic, one can often build a complete
focused proof system for intuitionistic logic.
\[
\xymatrix
{
\vdash F \ar@{.>}[d]\ar[r] & \ar[d] \vdash [F] \\
\vdash \Uparrow F & \ar[l]\vdash \Uparrow [F] \\
}
\]
The usual encoding of intuitionistic logic into linear logic involves
exponentials, which can damage focusing structures by causing both
synchronous and asynchronous phases to end.
Hence, a careful study of the polarity of linear connectives must
be done (cf. \cite{danos93kgc,liang07csl}) in order to minimize the
role played by the exponentials in such encodings. Here, as a result of
Proposition~\ref{prop:01}, it is possible to get a complete focused
system for $\mmuLJL$\ that inherits exactly the strong structure of linear
$\mu$-focused derivations.
This system is presented in Figure~\ref{fig:muLJL}.
Its sequents have the form $\Gamma;\Gamma'\vdash P$ where
$\Gamma'$ is a multiset of synchronous formulas (fragment $\H$)
and the set $\Gamma$ contains frozen least fixed points
in $\H$.
First, notice that, in accordance with the absence of exponentials
in the encoding into linear logic, there is no structural rule.
The asynchronous phase takes place on sequents where $\Gamma'$ is not empty.
The synchronous phase processes sequents of the form
$\Gamma ; \vdash P$, where the focus is without any ambiguity on $P$.
It is impossible to introduce any connective on the right when
$\Gamma'$ is not empty.
As will be visible in the following proof of completeness,
the synchronous phase in $\mmuLJL$\ does not correspond exactly to
a synchronous phase in $\mmumall$: it contains rules that are translated
into asynchronous $\mmumall$\ rules, namely implication, universal
quantification and coinduction.
We introduced this simplification to streamline the presentation;
it is harmless since there is no choice in refocusing afterwards.
\begin{figure}[htpb]
\begin{center}
\[
\begin{array}{c}
\mbox{Asynchronous phase}
\\[6pt]
\infer{\Gamma;\Gamma', P\wedge Q \vdash R}{
\Gamma;\Gamma', P, Q \vdash R}
\quad
\infer{\Gamma;\Gamma', P\vee Q \vdash R}{
\Gamma;\Gamma', P\vdash R &
\Gamma;\Gamma', Q\vdash R}
\\[6pt]
\infer{
\Gamma;\Gamma', \exists{}x. P x \vdash Q}{
\Gamma;\Gamma', P x \vdash Q}
\\[6pt]
\infer{\Gamma;\Gamma', s = t \vdash P}{
\{ (\Gamma;\Gamma' \vdash P)\theta :
\theta\in csu(s\stackrel{.}{=} t) \} }
\\[6pt]
\infer{\Gamma;\Gamma',\mu{}B\t \vdash P}{
\Gamma, \mu{}B\t;\Gamma'\vdash P}
\quad
\infer{\Gamma;\Gamma',\mu{}B\t \vdash P}{
S \in \H &
\Gamma;\Gamma', S\t \vdash P &
;BS\vec{x} \vdash S\vec{x}
}
\\ ~
\\
\mbox{Synchronous phase}
\\[6pt]
\infer{\Gamma;\vdash A\wedge B}{\Gamma;\vdash A & \Gamma;\vdash B}
\quad
\infer{\Gamma;\vdash A_0\vee A_1}{\Gamma;\vdash A_i}
\quad
\infer{\Gamma;\vdash A\supset B}{\Gamma; A \vdash B}
\\[6pt]
\infer{\Gamma;\vdash t=t}{}
\quad
\infer{\Gamma;\vdash \exists{}x. P x}{\Gamma;\vdash P t}
\quad
\infer{\Gamma;\vdash \forall{}x. P x}{\Gamma;\vdash P x}
\\[6pt]
\infer{\Gamma, \mu{}B\t;\vdash \mu{}B\t}{}
\quad
\infer{\Gamma;\vdash \mu{}B\t}{\Gamma;\vdash B(\mu{}B)\t}
\\[6pt]
\infer{\Gamma;\vdash \nu{}B\t}{
S \in \H &
\Gamma;\vdash S\t &
;S\vec{x} \vdash BS\vec{x}
}
\end{array}
\]
\end{center}
\caption{Focused proof system for $\mmuLJL$} \label{fig:muLJL}
\end{figure}
\begin{proposition}[Soundness and completeness]
The focused proof system for $\mmuLJL$\ is sound and complete
with respect to $\mmuLJL$:
any focused $\mmuLJL$\ derivation of $\Gamma';\Gamma\vdash P$
can be transformed into a $\mmuLJL$\ derivation of $\,\Gamma',\Gamma\vdash P$;
any quasi-finite $\mmuLJL$\ derivation of $\,\Gamma\vdash P$
can be transformed into a $\mmuLJL$\ derivation of $\cdot\;;\Gamma\vdash P$.
\end{proposition}
\begin{proof}
The soundness part is trivial: unfocused $\mmuLJL$\ derivations can be
obtained from focused derivations by removing focusing annotations.
Completeness is established using the translation to linear logic
as outlined above.
Given a $\mmuLJL$\ derivation of $\Gamma\vdash P$,
we obtain a $\mmumall$\ derivation of $\;\vdash [\Gamma]^\perp, [P]$ using
Proposition~\ref{prop:01}.
This derivation inherits quasi-finiteness,
so we can obtain a $\mu$-focused $\mmumall$\ derivation
of $\vdash \Uparrow [\Gamma]^\perp, [P]$.
All sequents of this derivation correspond to encodings of $\mmuLJL$\ sequents,
always containing a formula that corresponds to the right-hand side
of $\mmuLJL$\ sequents.
By permutability of asynchronous rules,
we can require that asynchronous rules are applied on right-hand side
formulas only after any other asynchronous rule in our $\mu$-focused derivation.
Finally, we translate that focused derivation into a focused
$\mmuLJL$\ derivation.
Let $\Gamma$ be a multiset of least fixed points in $\H$,
$\Gamma'$ be a multiset of $\H$ formulas,
and $P$ be a formula in ${\cal G}$.
\begin{enumerate}
\item
If there is a $\mu$-focused derivation of
$\vdash ([\Gamma]^\perp)^* \Uparrow [\Gamma']^\perp, [P]$
or $\vdash ([\Gamma]^\perp)^*, [P] \Uparrow [\Gamma']^\perp$
then there is a focused $\mmuLJL$\ derivation of
$\Gamma;\Gamma'\vdash P$.
\item
If there is a $\mu$-focused derivation of
$\vdash ([\Gamma]^\perp)^* \Downarrow [P]$
then there is a focused $\mmuLJL$\ derivation of $\Gamma;\vdash P$.
\end{enumerate}
We proceed by a simultaneous induction on the $\mu$-focused derivation.
\begin{enumerate}
\item
Since $[P]$ is the only formula that may be synchronous,
the $\mu$-focused derivation can only start with two switching rules:
either $[P]$ is moved to the left of the arrow, in which case
we conclude by induction hypothesis (1),
or $\Gamma'$ is empty and $[P]$ is focused on, in which case
we conclude by induction hypothesis (2).
If the $\mu$-focused derivation starts with a logical rule,
we translate it into a $\mmuLJL$\ focused rule before concluding by
induction hypothesis.
For instance, the $\with$ and $\neq$ rules, which can only be
applied to a formula in $[\Gamma']^\perp$, correspond respectively
to the left disjunction and left equality rules.
Other asynchronous $\mmumall$\ rules translate differently depending
on whether they are applied on $[\Gamma]^\perp$ or $[P]$:
$\parr$ can correspond to left conjunction or right implication;
$\nu$ to left $\mu$ (induction) or right $\nu$ (coinduction);
$\forall$ to left $\exists$ or right $\forall$.
Note that in the case where $[P]$ is principal, the constraint on
the order of asynchronous rules means that $\Gamma'$ is empty,
as required by the synchronous $\mmuLJL$\ rules.
Finally, freezing is translated by the $\mmuLJL$\ rule
moving a least fixed point from $\Gamma'$ to $\Gamma$.
\item
If the $\mu$-focused derivation starts with the switching rule releasing
focus from $[P]$ we conclude by induction hypothesis (1).
Otherwise it is straightforward to translate the first rule and
conclude by induction hypothesis (2):
$\mathrel{\otimes}$, $\mathrel{\oplus}$, ${=}$, $\exists$ and $\mu$
respectively map to the right rules for
$\wedge$, $\vee$, ${=}$, $\exists$ and $\mu$.
Note, however, that the tensor rule splits frozen formulas
in $([\Gamma]^\perp)^*$, while the right conjunction rule of $\mmuLJL${}
does not. This is harmless because weakening is obviously admissible
for the frozen context of $\mmuLJL$\ focused derivations.
This slight mismatch means that we would still have a complete
focused proof system for $\mmuLJL$\ if we enforced a linear use of
the frozen context. We chose to relax this constraint as it does not
make a better system for proof search.
\end{enumerate}
\vspace{-0.5cm}\end{proof}
Although $\mmuLJL$\ is only a small fragment of $\mmuLJ$,
it captures many interesting and useful problems. For example,
any Horn-clause specification can be expressed in $\H$ as a least
fixed point, and theorems that state properties such as totality or
functionality of predicates defined in this manner are in ${\cal G}$.
Theorems that state model-checking-style properties, of the form
$\forall x.~ P~x\supset Q~x$, are in ${\cal G}$ provided
that $P$ and $Q$ are in $\H$.
Further, implications can be chained through a greatest fixed point
construction, which makes it possible to specify various relations on
process behaviors.
For example, provided that one-step transitions $u\rightarrow v$ are specified
in $\H$, simulation is naturally expressed in ${\cal G}$ as follows:
\[ \nu S \lambda x \lambda y.~
\forall x'.~ x \rightarrow x' \supset \exists y'.~ y \rightarrow y' \wedge S~x'~y' \]
Finally, the theorems about natural numbers presented in
Section~\ref{sec:mumall_examples} are also in ${\cal G}$.
Although a formula in ${\cal G}$ can \emph{a priori} be a theorem in $\mmuLJ${}
but not in $\mmuLJL$,
we have shown~\cite{baelde09tableaux} that $\mmuLJL${}
is complete for inclusions of non-deterministic finite automata,
${\cal A}\subseteq {\cal B}$ being expressed naturally as
$\forall w.~ [{\cal A}]w \supset [{\cal B}]w$.
Interestingly, the $\mmuLJL$\ fragment has already been identified in
LINC~\cite{tiu05eshol} and the Bedwyr system~\cite{baelde07cade}
implements a proof-search strategy for it that is complete for finite
behaviors, \emph{i.e.,}\ proofs without (co)induction or axiom rules,
where a fixed point has to be treated in a finite number of unfoldings.
This strategy coincides with the focused
system for $\mmuLJL$, where the finite behavior restriction corresponds
to dropping the freezing rule;
this yields a system where proof search consists of
eagerly eliminating any left-hand side (asynchronous) formula
before working on the goal (right-hand side),
without ever performing any contraction or weakening.
The logic $\mmuLJ$\ is closely related to LINC, the main difference being
the generic quantifier $\nabla$, which makes it possible to specify and
reason about systems involving variable binding,
such as the $\pi$-calculus~\cite{tiu05concur}.
But we have shown~\cite{baelde08lfmtp} that $\nabla$ can be added
in an orthogonal fashion to $\mmuLJ$\ (or $\mmumall$)
without affecting focusing results.
\section{Introduction}
\input{intro}
\section{$\mmumall$} \label{sec:mumall}
\input{mumall}
\section{Normalization} \label{sec:norm}
\input{norm}
\section{Focusing} \label{sec:focusing}
\input{focus}
\section{Conclusion}
\input{conclu}
\begin{acks}
This paper owes a lot to the anonymous reviewers of an earlier version,
and I thank them for that.
I also wish to thank Dale Miller with whom I started this work,
Olivier Laurent and Alexis Saurin for their insights on focusing,
and Pierre Clairambault, St\'ephane Gimenez, Colin Riba
and especially Alwen Tiu for
helpful discussions on normalization proofs.
\end{acks}
\bibliographystyle{acmtrans}
\input{main.bbl}
\begin{received}
Received October 2009;
revised July 2010;
accepted September 2010
\end{received}
\end{document}
\subsection{Equality}
The treatment of equality dates back
to~\cite{girard92mail,schroeder-Heister93lics}, originating from
logic programming.
In the disequality rule, which is a case analysis on all unifiers,
$csu$ stands for \emph{complete set of unifiers}, that is, a set ${\cal S}$
of unifiers of $u\stackrel{.}{=} v$ such that any unifier $\sigma$ can be written
as $\theta\sigma'$ for some $\theta\in{\cal S}$ and some substitution $\sigma'$.
For determinacy reasons,
we assume a fixed mapping from unification problems to complete
sets of unifiers, always taking $\{id\}$ for $csu(u\stackrel{.}{=} u)$.
Similarly, we shall need a fixed mapping from each unifier
$\sigma' \in csu(u\theta\stackrel{.}{=} v\theta)$ to a $\sigma \in csu(u\stackrel{.}{=} v)$
such that $\theta\sigma' = \sigma\theta'$ for some $\theta'$;
existence is guaranteed since
$\theta\sigma'$ is a unifier of $u\stackrel{.}{=} v$.
In the first-order case, and in general when most general unifiers exist,
the $csu$ can be restricted to having at most one element.
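For instance, assuming a signature with distinct constants $a, b$ and a
binary function symbol $f$, one may take
$csu(f(x,a) \stackrel{.}{=} f(b,y))$ to be the singleton $\{[b/x,\ a/y]\}$,
so that the corresponding instance of the disequality rule has exactly
one premise; and since $csu(a \stackrel{.}{=} b)$ is empty, any sequent
$\vdash \Gamma, a\neq b$ is derivable with no premise at all.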
But we do not rule out higher-order terms,
for which unification is undecidable and complete sets of unifiers
can be infinite~\cite{huet75tcs};
in implementations, we restrict ourselves to well-behaved fragments
such as higher-order patterns~\cite{miller92jsc}.
Hence, the left equality rule might be infinitely branching. But
derivations remain inductive structures (they do not have infinite
branches) and are handled naturally in our proofs by means
of (transfinite) structural induction.
Again, the use of higher-order terms, and even the presence of the equality
connectives, are not essential to this work. All the results presented below
hold in the logic without equality, and make few assumptions about
the language of terms.
It should be noted that our ``free'' equality is more powerful than
the more usual Leibniz equality. Indeed, it implies the injectivity
of constants: one can prove for example that $\forall x.~ 0 = s~x \multimap
\mathbf{0}$ since there is no unifier for $0\stackrel{.}{=} s~x$.
This example also highlights that constants and universal variables
are two different things,
since only universal variables are subject to unification;
this is why we avoid calling them eigenvariables.
It is also important to stress that the disequality rule does not
and must not embody any assumption about the signature,
just like the universal quantifier.
That rule enumerates substitutions over open terms,
not instantiations by closed terms.
Otherwise, with an empty domain we would prove
$\forall x.~ x=x \multimap \mathbf{0}$ (no possible instantiation for $x$)
and $\forall x.~ x=x$, but not (without cut) $\forall x.~ \mathbf{0}$.
Similarly,
by considering a signature with a single constant $c:\tau_2$,
so that $\tau_1$ is empty while $\tau_1\rightarrow\tau_2$ contains only $\lambda x.~
c$, we would indeed be able to prove $\forall x.~ x=x$ and
$\forall x.~ x=x \multimap \exists y.~ x=\lambda a.~ y$
but not (without cut) $\forall x \exists y.~ x=\lambda a.~ y$.
\begin{example}
Units can be represented by means of $=$ and $\neq$.
Assuming that $2$ and $3$ are two distinct constants, then we have
$2=2 \multimapboth \mathbf{1}$ and $2=3 \multimapboth \mathbf{0}$
(and hence $2\neq 2 \multimapboth \bot$ and $2\neq 3 \multimapboth \top$).
\end{example}
\subsection{Fixed points}
Our treatment of fixed points follows from a line of work on definitions
\cite{girard92mail,schroeder-Heister93lics,mcdowell00tcs,momigliano03types}.
In order to make that lineage explicit and help the understanding of our rules,
let us consider for a moment an intuitionistic framework (linear or not).
In such a framework, the rules associated with least fixed points can be derived
from Knaster-Tarski's characterization of an operator's least fixed point in
complete lattices: it is the least of its pre-fixed points\footnote{
Pre-fixed points of $\phi$ are those $x$ such that
$\phi(x) \leq x$.
}.
\[ \infer{\Sigma; \mu B \t \vdash S \t}{ \vec{x}; BS\vec{x}\vdash S\vec{x}} \quad\quad
\infer{\Sigma; \Gamma\vdash \mu B \t}{\Sigma; \Gamma \vdash B(\mu B)\t} \]
As we shall see,
the computational interpretation of the left rule is recursion.
Obviously, that computation cannot be performed without knowing
the inductive structure on which it iterates. In other words,
a cut on $S\t$ cannot be reduced until a cut on $\mu B\t$ is performed.
As a result, a more complex left introduction rule is usually considered
(\emph{e.g.,}\ in \cite{momigliano03types})
which can be seen as embedding this suspended cut:
\[ \infer{\Sigma; \Gamma, \mu B\t \vdash P}{
\Sigma; \Gamma, S\t \vdash P & \vec{x}; BS\vec{x}\vdash S\vec{x}} \quad\quad
\infer{\Sigma; \Gamma\vdash \mu B \t}{\Sigma; \Gamma \vdash B(\mu B)\t} \]
Notice, by the way, how the problem of suspended cuts (in the first set of
rules) and the loss of the subformula property (in the second one) relate
to the arbitrariness of $S$, or in other words to the difficulty of
finding an invariant for proving $\Gamma, \mu B\t \vdash P$.
Greatest fixed points can be described similarly as the greatest
of the post-fixed points:
\[ \infer{\Sigma;\Gamma, \nu B\t\vdash P}{\Sigma;\Gamma, B(\nu B)\t\vdash P}
\quad\quad
\infer{\Sigma;\Gamma\vdash\nu B\t}{\Sigma;\Gamma\vdash S\t &
\vec{x};S\vec{x}\vdash BS\vec{x}}
\]
\begin{example} \label{example:nat}
Let $B_{nat}$ be the operator
$(\lambda{}N\lambda{}x.~ x=0
\mathrel{\oplus} \exists{}y.~ x=s~y \mathrel{\otimes} N~y)$
and $nat$ be its least fixed point $\mu B_{nat}$.
Then the following inferences can be derived from the above rules:
\[ \infer{\Sigma; \Gamma, nat~t \vdash P}{
\Sigma; \Gamma, S~t \vdash P &
\vdash S~0 &
y; S~y \vdash S~(s~y)}
\quad\quad
\infer{\Sigma; \Gamma \vdash nat~0}{}
\quad\quad
\infer{\Sigma; \Gamma \vdash nat~(s~t)}{\Sigma; \Gamma\vdash nat~t}
\]
\end{example}
Let us now consider the translation of those rules to classical linear logic,
using the usual reading of $\Gamma\vdash P$ as
$\vdash \Gamma^\perp, P$ where $(P_1,\ldots,P_n)^\perp$ is
$(P_1^\perp,\ldots,P_n^\perp)$.
It is easy to see that the above right introduction rule for $\mu$
(resp.~\ $\nu$) becomes the $\mu$ (resp.~\ $\nu$) rule of
Figure~\ref{fig:mumall}, by taking $\Gamma^\perp$ for $\Gamma$.
Because of the duality between least and greatest fixed points
(\emph{i.e.,}\ $(\mu B)^\perp \equiv \nu \overline{B}$) the other rules collapse.
The translation of the above left introduction rule for $\nu$
corresponds to an application of the $\mu$ rule of $\mmumall$\ on
$(\nu B\t)^\perp \equiv \mu \overline{B}\t$.
The translation of the left introduction rule for $\mu$ is as follows:
\[ \infer{\vdash \Gamma^\perp, (\mu B \t)^\bot, P}{
\vdash \Gamma^\perp, S^\perp\t, P &
\vdash (B S \vec{x})^\perp, S \vec{x}
} \]
Without loss of generality, we can write $S$ as $S'^\perp$.
Then $(B S\vec{x})^\perp$ is simply $\overline{B} S'\vec{x}$
and we obtain exactly the $\nu$ rule of $\mmumall$\ on $\nu\overline{B}$:
\[ \infer[\nu]{\vdash \Gamma^\perp, \nu \overline{B} \t, P}{
\vdash \Gamma^\perp, S' \t, P &
\vdash \overline{B} S' \vec{x}, S'^\perp \vec{x}
} \]
In other words, by internalizing syntactically the
duality between least and greatest fixed points that exists in
complemented lattices, we have also obtained the identification
of induction and coinduction principles.
\begin{example}
As expected from the intended meaning of $\mu$ and $\nu$,
$\nu{}(\lambda{}p.p)$ is provable
(take any provable formula as the coinvariant)
and its dual $\mu{}(\lambda{}p.p)$ is not provable.
More precisely, $\mu{}(\lambda{}p.p) \multimapboth \mathbf{0}$
and $\nu{}(\lambda{}p.p) \multimapboth \top$.
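Concretely, for the first claim one can take $S := \mathbf{1}$ as the
coinvariant: the two premises of the $\nu$ rule are then
$\vdash \mathbf{1}$ and $\vdash \mathbf{1}, \bot$, both immediate.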
\end{example}
\subsection{Comparison with other extensions of MALL}
The logic $\mmumall$\ extends MALL
with first-order structure ($\forall$, $\exists$, $=$ and $\neq$)
and fixed points ($\mu$ and $\nu$).
A natural question is whether fixed points can be compared
with other features that bring infinite behavior,
namely exponentials and second-order quantification.
In~\cite{baelde07lpar}, we showed that $\mmumall$\ can be encoded
into full second-order linear logic (LL2), \emph{i.e.,}\ MALL with exponentials
and second-order quantifiers,
by using the well-known second-order encoding:
\[ [\mu B\t] \equiv \forall S.~ !(\forall x.~ [B]S\vec{x}\multimap S\vec{x}) \multimap S\t \]
This translation highlights the fact that fixed points combine
second-order aspects (the introduction of an arbitrary (co)invariant)
and exponentials (the iterative behavior of the $\nu$ rule in
cut elimination).
The corresponding translation of $\mmumall$\ derivations into LL2
is very natural;
anticipating the presentation of cut elimination
for $\mmumall$, cut reductions in the original and encoded derivations
should even correspond quite closely.
We also provided a translation from LL2 proofs of encodings
to $\mmumall$\ proofs, under natural constraints on second-order instantiations;
interestingly, focusing is used to ease this reverse translation.
It is also possible to encode exponentials using fixed points,
as follows:
\[ [?P] \equiv \mu (\lambda p.~ \perp \mathrel{\oplus} (p\parr p) \mathrel{\oplus} [P])
\quad\quad [!P] \equiv [?P^\perp]^\perp \]
This translation makes it straightforward to simulate the rules of weakening ($W$),
contraction ($C$) and dereliction ($D$) for $[?P]$ in $\mmumall$:
each one is obtained by applying the $\mu$ rule and choosing the corresponding
additive disjunct.
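For instance, dereliction is derived by unfolding $[?P]$ with the $\mu$
rule and then selecting the third disjunct (weakening and contraction
use the first and second disjuncts in the same way):
\[ \infer[\mu]{\vdash \Gamma, [?P]}{
   \infer=[\mathrel{\oplus}]{\vdash \Gamma,
     \perp \mathrel{\oplus} ([?P] \parr [?P]) \mathrel{\oplus} [P]}{
     \vdash \Gamma, [P]}} \]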
Then, the promotion rule can be obtained for the dual of the encoding.
Let $\Gamma$ be a sequent containing only formulas of the form $[?Q]$,
and let $\Gamma^\perp$ denote the tensor of the duals of those formulas;
we derive $\vdash\Gamma,[!P]$ from $\vdash\Gamma,[P]$
using $\Gamma^\perp$ as a coinvariant for $[!P]$:
\[ \infer[\nu]{\vdash \Gamma,
\nu (\lambda p.~ \mathbf{1} \with (p \mathrel{\otimes} p) \with [P])}{
\infer=[\mathrel{\otimes},init]{\vdash \Gamma,\Gamma^\perp}{} &
\infer{\vdash \Gamma, \mathbf{1} \with (\Gamma^\perp\mathrel{\otimes}\Gamma^\perp)
\with [P]}{
\infer=[W]{\vdash \Gamma, \mathbf{1}}{\infer{\vdash\mathbf{1}}{}} &
\infer=[C]{\vdash \Gamma, \Gamma^\perp\mathrel{\otimes}\Gamma^\perp}{
\infer{\vdash \Gamma, \Gamma, \Gamma^\perp\mathrel{\otimes}\Gamma^\perp}{
\infer=[\mathrel{\otimes},init]{\vdash \Gamma, \Gamma^\perp}{} &
\infer=[\mathrel{\otimes},init]{\vdash \Gamma, \Gamma^\perp}{}}} &
\vdash \Gamma, [P]}} \]
Those constructions imply that the encoding of provable statements
involving exponentials is also provable in $\mmumall$.
But the converse is more problematic:
not all derivations of the encoding can be translated
into a derivation using exponentials. Indeed, the encoding
$[!P]$ unfolds to an infinite tree of copies of $[P]$, and there is nothing
that prevents it from containing different proofs of $[P]$,
while $!P$ must be uniform, always providing the same proof of $P$.
Finally, in accordance with these different meanings, cut reductions
differ in the two systems.
It seems unlikely that second-order quantification can be encoded in
$\mmumall$, or that fixed points could be encoded using only second-order
quantifiers or only exponentials. In any case, if such encodings existed
they would certainly be as shallow as the encoding of exponentials,
\emph{i.e.,}\ at the level of provability,
and not reveal a connection at the level of proofs and cut elimination
like the encoding of fixed points in LL2.
\subsection{Basic meta-theory}
\begin{definition} \label{def:inst}
If $\theta$ is a term substitution, and $\Pi$ a derivation of
$\Sigma;\vdash\Gamma$, then we define $\Pi\theta$, a derivation of
$\Sigma\theta;\vdash \Gamma\theta$:
$\Pi\theta$ always starts with the same rule as $\Pi$,
its premises being obtained naturally by applying
$\theta$ to the premises of $\Pi$.
The only non-trivial case is the $\neq$ rule.
Assuming that we have a derivation $\Pi$ where $u\neq v$ is principal,
with a subderivation $\Pi_\sigma$ for each $\sigma\in csu(u\stackrel{.}{=} v)$,
we build a subderivation of $\Pi\theta$ for each
$\sigma'\in csu(u\theta\stackrel{.}{=} v\theta)$.
Since $\theta\sigma'$ is a unifier for $u\stackrel{.}{=} v$,
it can be written as $\sigma\theta'$ for some $\sigma\in csu(u\stackrel{.}{=} v)$.
Hence, $\Pi_\sigma\theta'$ is a suitable derivation for $\sigma'$.
Note that some $\Pi_\sigma$ might be unused in that process,
if $\sigma$ is incompatible with $\theta$,
while others might be used infinitely many times\footnote{
Starting with a $\neq$ rule on $x\neq y~z$, which admits the most
general unifier $[(y~z) / x]$, and applying the substitution
$\theta = [u~v / x]$, we obtain $u~v \neq y~z$ which has no
finite $csu$. In such a case, the infinitely many subderivations
of $\Pi\theta$ would be instances of the only subderivation of $\Pi$.
}.
\end{definition}
Note that the previous definition encompasses common signature manipulations
such as permutation and extension, since it is possible for a substitution
to only perform a renaming, or to translate a signature to an extended one.
We now define functoriality, a proof construction that is used to
derive the following rule:
\[ \infer[B]{\Sigma ; \vdash B P, \overline{B} Q}{\vec{x}; \vdash P\vec{x}, Q\vec{x}} \]
In functional programming terms, it corresponds to a $map$ function:
its type is $(Q\multimap P) \multimap (BQ\multimap BP)$
(taking $Q^\perp$ as $Q$ in the above inference).
Functoriality is particularly useful for dealing with fixed points:
it is how we propagate reasoning/computation underneath $B$
\cite{matthes98csl}.
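To make the $map$ analogy concrete, here is a minimal
functional-programming sketch (ours; terms are elided and linear
connectives are approximated by an ordinary datatype). The operator
$B_{nat}$ of Example~\ref{example:nat} roughly corresponds to the type
constructor \texttt{B} below, and $F_B$ to its map function:
\begin{verbatim}
-- Sketch: B p ~ (x = 0) (+) (exists y. x = s y (x) p y),
-- with the term-level content elided.
data B p = Zero | Succ p

-- F_B as a map function, of type (q -> p) -> B q -> B p:
-- the vacuous case is closed as in the init case of F_B,
-- the other case propagates the given function underneath B.
mapB :: (q -> p) -> B q -> B p
mapB _ Zero     = Zero
mapB f (Succ x) = Succ (f x)
\end{verbatim}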
\begin{definition}[Functoriality, $F_B(\Pi)$]
Let $\Pi$ be a proof of $\vec{x};\vdash P\vec{x}, Q\vec{x}$
and $B$ be a monotonic operator such
that $\Sigma\vdash B : (\vec{\gamma}\rightarrow o)\rightarrow o$.
We define $F_B(\Pi)$, a derivation of $\Sigma;\vdash BP, \overline{B} Q$,
by induction on the maximum depth of occurrences of $p$ in $B p$:
\begin{longitem}
\item When $B = \lambda p.~ P'$, $F_B(\Pi)$ is an instance of $init$ on $P'$.
\item When $B = \lambda p.~ p\t$, $F_B(\Pi)$ is $\Pi[\t/\vec{x}]$.
\item Otherwise,
we perform an $\eta$-expansion based on the toplevel connective of $B$
and conclude by induction hypothesis.
We only show half of the connectives, because dual connectives
are treated symmetrically.
There is no case for units, equality and disequality
since they are treated as part of the vacuous abstraction case.
When $B = \lambda p.~ B_1 p \mathrel{\otimes} B_2 p$:
\[ \infer[\parr]{\Sigma; \vdash B_1 P \mathrel{\otimes} B_2 P, \overline{B}_1 Q\parr \overline{B}_2 Q}{
\infer[\mathrel{\otimes}]{\Sigma; \vdash B_1 P \mathrel{\otimes} B_2 P, \overline{B}_1 Q, \overline{B}_2 Q}{
\infer{\Sigma; \vdash B_1 P, \overline{B}_1 Q}{F_{B_1}(\Pi)} &
\infer{\Sigma; \vdash B_2 P, \overline{B}_2 Q}{F_{B_2}(\Pi)}}}
\]
When $B = \lambda p.~ B_1 p \mathrel{\oplus} B_2 p$:
\[ \infer[\with]{\Sigma; \vdash B_1 P \mathrel{\oplus} B_2 P, \overline{B}_1 Q\with \overline{B}_2 Q}{
\infer[\mathrel{\oplus}]{\Sigma; \vdash B_1 P \mathrel{\oplus} B_2 P, \overline{B}_1 Q}{
\infer{\Sigma; \vdash B_1 P, \overline{B}_1 Q}{F_{B_1}(\Pi)}} &
\infer[\mathrel{\oplus}]{\Sigma; \vdash B_1 P \mathrel{\oplus} B_2 P, \overline{B}_2 Q}{
\infer{\Sigma; \vdash B_2 P, \overline{B}_2 Q}{F_{B_2}(\Pi)}}}
\]
When $B = \lambda p.~ \exists x.~ B' p x$:
\[ \infer[\forall]{\Sigma; \vdash \exists x.~ B' P x, \forall x.~ \overline{B}' Q x}{
\infer[\exists]{\Sigma,x; \vdash \exists x.~ B' P x, \overline{B}' Q x}{
\infer{\Sigma,x; \vdash B' P x, \overline{B}' Q x}{F_{B'{\bullet}x}(\Pi)}
}} \]
When $B = \lambda p.~ \mu (B' p) \t$,
we show that $\nu (\overline{B'} P^\perp)$ is a coinvariant of
$\nu (\overline{B'} Q)$:
\[ \infer[\nu]{\Sigma; \vdash \mu (B' P)\t, \nu (\overline{B'} Q)\t}{
\infer[init]{
\Sigma; \vdash \mu (B' P)\t, \nu (\overline{B'} P^\perp)\t}{} &
\infer[\mu]{\vec{x}; \vdash \mu (B' P)\vec{x},
(\overline{B'} Q) (\nu (\overline{B'} P^\perp))\vec{x}}{
\infer{\vec{x}; \vdash (B' P) (\mu (B' P))\vec{x},
(\overline{B'} Q) (\nu (\overline{B'} P^\perp))\vec{x}}{
F_{(\lambda p. B' p (\mu (B' P))\vec{x})}(\Pi)}}}
\]
\end{longitem}
\end{definition}
\begin{proposition}[Atomic initial rule] \label{def:atomicinit}
We call \emph{atomic} the $init$ rules acting on atoms or fixed points.
The general rule $init$ is derivable from atomic initial rules.
\end{proposition}
\begin{proof}
By induction on $P$, we build a derivation of $\;\vdash P^\perp, P$
using only atomic axioms.
If $P$ is not an atom or a fixed point expression,
we perform an $\eta$-expansion as in the previous definition
and conclude by induction hypothesis.
Note that although the identity on fixed points can be expanded,
it can never be eliminated: repeated expansions do not terminate
in general.
\end{proof}
The constructions above can also be used to establish the \emph{canonicity} of
all our logical connectives: if a connective is duplicated into, say, red and
blue variants equipped with the same logical rules, then those two versions
are equivalent.
Intuitively, it means that our connectives define a unique
logical concept.
This is a known property of the connectives of first-order
MALL, we show it for $\mu$ and its copy $\hat{\mu}$ by using our color-blind
expansion:
\[ \infer[\nu]{\vdash \nu \overline{B} \t, \hat{\mu} B \t}{
\infer[init]{\vdash \hat{\nu} \overline{B} \t, \hat{\mu} B \t}{} &
\infer[\hat{\mu}]{\vdash \overline{B} (\hat{\nu} \overline{B}) \vec{x}, \hat{\mu} B \vec{x}}{
\infer[init]{\vdash \overline{B} (\hat{\nu} \overline{B}) \vec{x}, B (\hat{\mu} B) \vec{x}}{}
}
} \]
\begin{proposition} \label{prop:unfold}
The following inference rule is derivable:
\[
\infer[\nu{}R]{\vdash \Gamma, \nu{}B\t}{\vdash \Gamma, B(\nu{}B)\t}
\]
\end{proposition}
\begin{proof}
The unfolding $\nu{}R$ is derivable from $\nu$,
using $B(\nu{}B)$ as the coinvariant $S$.
The proof of coinvariance $\,\vdash B(B(\nu B))\vec{x}, \overline{B}(\mu \overline{B})\vec{x}$ is obtained
by functoriality on $\,\vdash B(\nu B)\vec{x}, \mu\overline{B}\vec{x}$, itself obtained from
$\mu$ and $init$.
\end{proof}
\begin{example}
In general the least fixed point entails the greatest. The
following is a proof of $\mu{}B\t \multimap \nu{}B\t$,
showing that $\mu B$ is a coinvariant of $\nu B$:
\[ \infer[\mbox{$\nu$ on $\nu{}B\t$ with $S:=\mu{}B$}]{
\vdash \nu{}\overline{B}\t, \nu{}B\t}{
\infer[init]{\vdash \nu\overline{B}\t, \mu{}B\t}{}
&
\infer[\nu{}R]{\vdash B(\mu{}B)\vec{x}, \nu\overline{B}\vec{x}}{
\infer[init]{\vdash B(\mu{}B)\vec{x}, \overline{B}(\nu\overline{B})\vec{x}}{}
}
} \]
The greatest fixed point entails the least fixed point when the fixed
points are \emph{noetherian}, \emph{i.e.}, predicate operators have
vacuous second-order abstractions.
Finally, the $\nu{}R$ rule makes it possible to derive
$\mu B \t \multimapboth B (\mu B)\t$,
or equivalently $\nu B\t \multimapboth B (\nu B)\t$.
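For instance, the forward direction of the first equivalence is obtained
(after introducing the linear implication) as follows, the converse
being a direct application of the $\mu$ rule:
\[ \infer[\nu{}R]{\vdash \nu\overline{B}\t, B(\mu{}B)\t}{
     \infer[init]{\vdash \overline{B}(\nu\overline{B})\t, B(\mu{}B)\t}{}} \]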
\end{example}
\subsection{Polarities of connectives}
\label{sec:mumall_positivity}
It is common to classify inference rules between invertible and non-invertible
ones. In linear logic, we can use the refined notions of \emph{positivity}
and \emph{negativity}.
A formula $P$ is said to be positive (resp.~ $Q$ is said to be
negative) when $P\multimapboth \oc P$ (resp.~ $Q\multimapboth \wn Q$).
A logical connective is said to be positive (resp.~ negative) when
it preserves positivity (resp.~ negativity). For example, $\mathrel{\otimes}$ is
positive since $P\mathrel{\otimes} P'$ is positive whenever $P$ and $P'$ are.
This notion is more semantical than invertibility, and has the advantage
of actually saying something about non-invertible connectives/rules.
Although it does not seem at first sight to be related to proof-search,
positivity turns out to play an important role in the understanding
and design of focused
systems~\cite{liang07csl,laurent02phd,laurent05apal,danos93kgc,danos93wll}.
Since $\mmumall$\ does not have exponentials, it is not possible
to talk about positivity as defined above.
Instead, we are going to take a backwards approach: we shall first define
which connectives are negative, and then check that the obtained negative
formulas have a property close to the original negativity.
This does not trivialize the question at all: it turns out that only
one classification makes it possible to derive the expected property.
We refer the interested reader to \cite{baelde08phd} for the extension
of that proof to $\mmuLL$, \emph{i.e.,}\ $\mmumall$\ with exponentials,
where we follow the traditional approach.
\begin{definition} \label{def:connectives}
We classify as \emph{negative} the following connectives:
$\parr$, $\bot$, $\with$, $\top$, $\forall$, $\neq$, $\nu$.
Their duals are called \emph{positive}.
A formula is said to be negative (resp.~ positive) when
all of its connectives are negative (resp.~ positive).
Finally, an operator $\lambda{}p\lambda\vec{x}.Bp\vec{x}$ is said to be negative
(resp.~ positive) when the formula $Bp\vec{x}$ is negative (resp.~ positive).
\end{definition}
Notice, for example, that $\lambda{}p\lambda\vec{x}.p\vec{x}$
is both positive and negative. But $\mu p. p$ is only positive
while $\nu p. p$ is only negative.
Atoms (and formulas containing atoms)
are neither negative nor positive: indeed, they offer no structure\footnote{
This essential aspect of atoms makes them often less interesting or
even undesirable. For example, in our work on minimal generic
quantification~\cite{baelde08lfmtp} we show and exploit the fact
that this third quantifier can be defined in $\mmuLJ$\ \emph{without atoms}.
} from which the following fundamental property could be derived.
\begin{proposition} \label{prop:struct}
The following structural rules are admissible for any negative formula $P$:
\[
\infer[C]{\Sigma; \vdash \Gamma, P}{\Sigma; \vdash \Gamma, P, P}
\quad
\infer[W]{\Sigma; \vdash \Gamma, P}{\Sigma; \vdash \Gamma}
\]
\end{proposition}
We can already note that this proposition could not hold if $\mu$ were
negative, since $\mu (\lambda p. p)$ cannot be weakened (there is obviously
no cut-free proof of $\,\vdash \mu (\lambda p. p), \mathbf{1}$).
\begin{proof}
We first prove the admissibility of $W$.
This rule can be obtained by cutting a derivation of $\Sigma; \vdash P, \mathbf{1}$.
We show more generally that
for any collection of negative formulas $(P_i)_i$,
there is a derivation of $\;\vdash (P_i)_i, \mathbf{1}$.
This is done by induction on the total size of $(P_i)_i$,
counting one for each connective, unit, atom or predicate variable but
ignoring terms.
The proof is trivial if the collection is empty.
Otherwise,
if $P_0$ is a disequality we conclude by induction with one less formula,
and the size of the others unaffected by the first-order instantiation;
if it is $\top$ our proof is done;
if it is $\bot$ then $P_0$ disappears and we conclude by induction hypothesis.
The $\parr$ case is done by induction hypothesis:
the resulting collection has one more formula but is smaller overall;
the $\with$ case makes use of two instances of the induction hypothesis;
the $\forall$ case makes use of the induction hypothesis with an extended
signature but a smaller formula.
Finally, the $\nu$ case is done by applying the $\nu$ rule with $\bot$ as the
invariant:
\[ \infer{\vdash \nu B \t, (P_i)_i, \mathbf{1}}{
\infer{\vdash \bot, (P_i)_i, \mathbf{1}}{
\vdash (P_i)_i, \mathbf{1}
} &
\vdash B (\lambda \vec{x}. \bot) \vec{x}, \mathbf{1}
}
\]
The two subderivations are obtained by induction hypothesis.
For the second one there is only one formula, namely
$B (\lambda \vec{x}. \bot) \vec{x}$, which is indeed negative (by monotonicity of $B$)
and smaller than $\nu B$.
We also derive contraction ($C$) using a cut, this time against
a derivation of $\vdash (P\parr P)^\perp, P$.
A generalization is needed for the greatest fixed point case,
and we derive the following for any negative $n$-ary operator $A$:
\[
\vdash
(A(\nu{}B_1)\ldots(\nu{}B_n) \parr A(\nu{}B_1)\ldots(\nu{}B_n))^\perp,
A(\nu{}B_1\parr\nu{}B_1)\ldots(\nu{}B_n\parr\nu{}B_n)
\]
We prove this by induction on $A$:
\begin{longitem}
\item It is trivial if $A$ is a disequality, $\top$ or $\bot$.
\item If $A$ is a projection $\lambda\vec{p}.~ p_i\t$,
we have to derive
$\vdash (\nu{}B_i\t\parr\nu{}B_i\t)^\perp, \nu{}B_i\t\parr\nu{}B_i\t$,
which is an instance of $init$.
\item If $A$ is $\lambda \vec{p}.~ A_1\vec{p}\parr A_2\vec{p}$,
we can combine our two induction hypotheses to derive the following:
\[ \vdash ((A_1(\nu{}B_i)_i\parr A_1(\nu{}B_i)_i)\parr
(A_2(\nu{}B_i)_i\parr A_2(\nu{}B_i)_i))^\perp,
A_1(\nu{}B_i)_i\parr A_2(\nu{}B_i)_i \]
We conclude by associativity-commutativity of the tensor,
which amounts to using a cut against an easily obtained derivation of
$\vdash ((P_1\parr P_2)\parr(P_1\parr P_2)),
((P_1\parr P_1)\parr(P_2\parr P_2))^\perp$
for $P_j := A_j(\nu{}B_i)_i$.
\item If $A$ is $\lambda\vec{p}.~ A_1\vec{p}\with A_2\vec{p}$
we introduce the additive conjunction
and have to derive two similar premises:
\[ \vdash
((A_1\with A_2)(\nu B_i)_i \parr (A_1\with A_2)(\nu B_i)_i)^\perp,
A_j (\nu B_i \parr \nu B_i)_i \mbox{~ for $j\in\{1,2\}$} \]
To conclude by induction hypothesis, we have to choose the correct
projections for the negated $\with$.
Since the $\with$ is under the $\parr$, we have to use a cut:
one can derive in general
$\;\vdash ((P_1\with P_2)\parr(P_1\with P_2))^\perp, P_j\parr P_j$
for $j\in\{1,2\}$.
\item When $A$ is $\lambda \vec{p}.~ \forall{}x.~ A'\vec{p}x$,
the same scheme applies:
we introduce the universal variable and instantiate the two existential
quantifiers under the $\parr$ thanks to
a cut.
\item Finally, we treat the greatest fixed point case:
$A$ is $\lambda\vec{p}.~ \nu{}(A'\vec{p})\t$.
Let $B_{n+1}$ be $A'(\nu{}B_i)_{i\leq n}$.
We have to build a derivation of
\[ \vdash (\nu B_{n+1} \t\parr \nu B_{n+1}\t)^\perp,
\nu (A'(\nu B_i\parr \nu B_i)_i)\t \]
We use the $\nu$ rule, showing
that $\nu B_{n+1}\parr \nu B_{n+1}$ is a coinvariant of
$\nu (A'(\nu B_i\parr \nu B_i)_i)$.
The left subderivation of the $\nu$ rule is thus an instance of $init$,
and the coinvariance derivation is as follows:
\[ \small
\infer[cut]{
\vdash
(\nu{}B_{n+1}\vec{x}\parr\nu{}B_{n+1}\vec{x})^\perp,
A'(\nu{}B_i\parr\nu{}B_i)_i(\nu{}B_{n+1}\parr\nu{}B_{n+1})\vec{x}
}{
\vdash
(A'(\nu{}B_i)_i(\nu{}B_{n+1})\vec{x} \parr
A'(\nu{}B_i)_i(\nu{}B_{n+1})\vec{x})^\perp,
A'(\nu{}B_i\parr\nu{}B_i)_i(\nu{}B_{n+1}\parr\nu{}B_{n+1})\vec{x}
& \Pi' }
\]
Here, $\Pi'$ derives
$\vdash (\nu B_{n+1}\vec{x} \parr \nu B_{n+1} \vec{x})^\perp,
A'(\nu B_i)_i(\nu B_{n+1})\vec{x}\parr A'(\nu B_i)_i(\nu B_{n+1})\vec{x}$,
unfolding $\nu B_{n+1}$ under the tensor.
We complete our derivation by induction hypothesis,
with the smaller operator expression $A'$
and $B_{n+1}$ added to the $(B_i)_i$.
\end{longitem}
\vspace{-0.5cm}\end{proof}
The previous property yields some interesting remarks about
the expressiveness of $\mmumall$.
It is easy to see that provability is undecidable in $\mmumall$,
by encoding (terminating) executions of a Turing machine as a least fixed
point.
But this kind of observation does not say anything about what
theorems can be derived, \emph{i.e.,}\ the complexity of reasoning/computation
allowed in $\mmumall$.
Here, the negative structural rules derived in Proposition~\ref{prop:struct}
come into play.
Although our logic is linear, it enjoys those derived structural rules
for a rich class of formulas: for example, $nat$ is positive, so its dual
is negative and hypotheses of the form $nat~t$ can be contracted and
weakened, just like in an intuitionistic setting.
Although the precise complexity of the normalization of $\mmumall${}
is unknown, we have adapted some remarks
from~\cite{burroni86,girard87tcs,alves06csl} to build an encoding
of primitive recursive functions in $\mmumall${}~\cite{baelde08phd};
in other words, all primitive recursive functions can be proved
total in $\mmumall$.
\subsection{Examples} \label{sec:mumall_examples}
We shall now give a few theorems derivable in $\mmumall$. Although we do not
provide their derivations here but only brief descriptions of how to obtain
them, we stress that all of these examples are proved naturally.
The reader will note that although $\mmumall$\ is linear,
these derivations are intuitive
and their structure resembles that of proofs in intuitionistic logic.
We also invite the reader to check that the $\mu$-focused system
presented in Section~\ref{sec:foc_mumall} is a useful guide
when deriving these examples, as it leaves open only the important choices.
It should be noted that atoms are not used in this section;
in fact, atoms are rarely useful in $\mmumall$, as its main application
is to reason about (fully defined) fixed points.
Following the definition of $nat$ from Example~\ref{example:nat},
we define a few least fixed points expressing basic properties of natural
numbers. Note that all these definitions are positive.
\[ \begin{array}{lcl}
even &\stackrel{def}{=}&
\mu(\lambda{}E\lambda{}x.~ x=0
\mathrel{\oplus} \exists{}y.~ x=s~(s~y) \mathrel{\otimes} E~y)
\\
plus &\stackrel{def}{=}&
\mu(\lambda{}P\lambda{}a\lambda{}b\lambda{}c.~ a=0 \mathrel{\otimes} b=c \\
& & \phantom{\mu\lambda{}P\lambda{}a\lambda{}b\lambda{}c.}
\mathrel{\oplus} \exists{}a'\exists{}c'. a=s~a' \mathrel{\otimes} c=s~c' \mathrel{\otimes} P~a'~b~c')
\\
leq &\stackrel{def}{=}&
\mu(\lambda{}L\lambda{}x\lambda{}y.~ x=y
\mathrel{\oplus} \exists{}y'.~ y=s~y' \mathrel{\otimes} L~x~y')
\\
\hbox{\em half} &\stackrel{def}{=}&
\mu(\lambda{}H\lambda{}x\lambda{}h.~ (x=0 \mathrel{\oplus} x=s~0) \mathrel{\otimes} h=0 \\
& & \phantom{\mu\lambda{}H\lambda{}x\lambda{}h.}
\mathrel{\oplus} \exists{}x'\exists{}h'.~ x=s~(s~x') \mathrel{\otimes} h=s~h'
\mathrel{\otimes} H~x'~h')
\\
ack & \stackrel{def}{=} &
\mu (\lambda A \lambda m \lambda n \lambda a.~
m = 0 \mathrel{\otimes} a = s~n \\
& & \phantom{\mu\lambda \lambda m \lambda n \lambda a.~}
\mathrel{\oplus} (\exists p.~ m = s~p \mathrel{\otimes} n = 0 \mathrel{\otimes} A~p~(s~0)~a) \\
& & \phantom{\mu\lambda \lambda m \lambda n \lambda a.~}
\mathrel{\oplus} (\exists p \exists q \exists b.~ m = s~p \mathrel{\otimes} n = s~q
\mathrel{\otimes} A~m~q~b \mathrel{\otimes} A~p~b~a))
\end{array} \]
The following statements are theorems in $\mmumall$.
The main insight required for proving these theorems is
deciding which fixed point expression should be introduced by
induction; finding the proper invariant is not the difficult part here,
since the context itself is adequate in these cases.
\[ \begin{array}{l}
\vdash \forall{}x.~ nat~x \multimap even~x \mathrel{\oplus} even~(s~x) \\
\vdash \forall{}x.~ nat~x \multimap \forall{}y\exists{}z.~ plus~x~y~z \\
\vdash \forall{}x.~ nat~x \multimap plus~x~0~x \\
\vdash \forall{}x.~ nat~x \multimap \forall{}y.~ nat~y \multimap
\forall{}z.~ plus~x~y~z \multimap nat~z \\
\end{array} \]
In the last theorem, the assumption $(nat~x)$ is not needed and can
be weakened, thanks to Proposition~\ref{prop:struct}.
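As an aside, the second theorem is a totality statement whose informal
computational reading (our sketch, not a formal result) is the usual
recursive addition, the induction on $nat~x$ becoming recursion on the
first argument:
\begin{verbatim}
data Nat = Z | S Nat

-- Informal reading of: forall x. nat x -o forall y exists z. plus x y z
-- The two clauses mirror the two disjuncts of plus.
add :: Nat -> Nat -> Nat
add Z     b = b            -- disjunct: a = 0, b = c
add (S a) b = S (add a b)  -- disjunct: a = s a', c = s c'
\end{verbatim}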
In order to prove
$(\forall{}x.~ nat~x \multimap \exists{}h.~ \hbox{\em half}~x~h)$,
the context does not provide a strong enough invariant.
A typical solution is to use complete induction,
\emph{i.e.,}\ use the strengthened invariant
$(\lambda{}x.~ nat~x \mathrel{\otimes}
\forall{}y.~ leq~y~x \multimap \exists{}h.~ \hbox{\em half}~y~h)$.
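On the functional side (reusing the \texttt{Nat} type above), the
difficulty is that the recursion underlying {\em half} descends two
constructors at a time, which plain induction on $nat$ does not directly
provide; hence the strengthening:
\begin{verbatim}
-- Recursion on x - 2: not an instance of plain structural
-- induction on nat, whence the complete-induction invariant.
half :: Nat -> Nat
half Z         = Z
half (S Z)     = Z
half (S (S x)) = S (half x)
\end{verbatim}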
We do not know of any proof of totality for a non-primitive
recursive function in $\mmumall$.
In particular, we have no proof of
$\forall x \forall y.~ nat~x \multimap nat~y \multimap \exists z.~ ack~x~y~z$.
The corresponding intuitionistic theorem can be proved using nested
inductions, but it does not lead to a linear proof since it requires
contracting an implication hypothesis (in $\mmumall$, the dual of an implication
is a tensor, which is not negative and thus cannot \emph{a priori}
be contracted).
A typical example of co-induction involves the simulation
relation. Assume that $step : state \rightarrow label \rightarrow state \rightarrow o$ is an
inductively defined relation encoding a labeled transition system.
Simulation can then be defined as the greatest fixed point
\[
sim \stackrel{def}{=}
\nu(\lambda{}S\lambda{}p\lambda{}q.~
\forall{}a\forall{}p'.~ step~p~a~p'
\multimap \exists{}q'.~ step~q~a~q' \mathrel{\otimes} S~p'~q').
\]
Reflexivity of simulation ($\forall{}p.~ sim~p~p$) is proved easily
by co-induction with the co-invariant $(\lambda{}p\lambda{}q.~ p=q)$.
Instances of $step$ are not subject to induction but
are treated ``as atoms''.
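For instance, after introducing the universal quantifier, the
co-induction step of the reflexivity proof can be sketched as follows:
\[ \infer[\nu]{\vdash sim~p~p}{
 \infer[=]{\vdash p=p}{} &
 \infer[\neq]{\vdash x\neq y,~
    \forall{}a\forall{}p'.~ step~x~a~p' \multimap
    \exists{}q'.~ step~y~a~q' \mathrel{\otimes} p'=q'}{
  \vdash \forall{}a\forall{}p'.~ step~x~a~p' \multimap
    \exists{}q'.~ step~x~a~q' \mathrel{\otimes} p'=q'}} \]
In the remaining premise, where the disequality rule has unified $x$
and $y$, one introduces $\forall$ and $\parr$, chooses the witness
$q' := p'$ for $\exists$, and concludes with an axiom between the two
$step$ atoms and the $=$ rule.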
Proving transitivity, that is,
$$\forall{}p\forall{}q\forall{}r.~ sim~p~q \multimap sim~q~r \multimap sim~p~r$$
is done by co-induction on $(sim~p~r)$ with the co-invariant
$(\lambda{}p\lambda{}r.~ \exists{}q.~ sim~p~q \mathrel{\otimes} sim~q~r)$.
The focus is first put on $(sim~p~q)^\bot$, then on $(sim~q~r)^\bot$.
The fixed points $(sim~p'~q')$ and $(sim~q'~r')$ appearing later in the proof
are treated ``as atoms'', as are all instances of $step$.
Notice that
these two examples are also cases where the context provides the co-invariant.
\subsection{Reduction rules}
The reduction rules transform instances of the cut rule,
and are separated into auxiliary and main rules.
Most of the rules are the same as for MALL.
For readability, we do not show the signatures $\Sigma$ when they
are not modified by reductions,
leaving to the reader the simple task of inferring them.
\subsubsection{Auxiliary cases}
If a subderivation does not start with a logical rule in which the cut
formula is principal,
its first rule is permuted with the cut.
We only present the commutations for the left subderivation,
the situation being perfectly symmetric.
\begin{longitem}
\item If the subderivation starts with a cut, splitting
$\Gamma$ into $\Gamma',\Gamma''$, we reduce as follows:
\disp{\infer[cut]{\vdash \Gamma',\Gamma'',\Delta}{
\infer[cut]{\vdash \Gamma',\Gamma'',P^\perp}{
\vdash \Gamma',P^\perp,Q^\perp &
\vdash \Gamma'',Q}
& \vdash P,\Delta}
}{
\infer[cut]{\vdash \Gamma',\Gamma'',\Delta}{
\infer[cut]{\vdash \Gamma',\Delta,Q^\perp}{
\vdash \Gamma',Q^\perp,P^\perp &
\vdash P,\Delta} &
\vdash Q,\Gamma''}
}
Note that this reduction alone leads to cycles,
hence our system is trivially not strongly normalizing.
This is only a minor issue, which could be solved, for example,
by using proof nets or a classical multi-cut rule (which amounts to
incorporating the required amount of proof net flexibility into
the sequent calculus).
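Concretely, writing the derivation above as $cut(cut(\Pi_1;\Pi_2);\Pi_3)$,
with the inner cut on $Q$ and the outer one on $P$, two successive
applications of this reduction produce a cycle:
\[ cut(cut(\Pi_1;\Pi_2);\Pi_3) \rightarrow cut(cut(\Pi_1;\Pi_3);\Pi_2)
   \rightarrow cut(cut(\Pi_1;\Pi_2);\Pi_3) \]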
\item
Identity between a cut formula and a formula from the conclusion:
$\Gamma$ is restricted to the formula $P$ and
the left subderivation is an axiom.
The cut is deleted and the right subderivation is now directly connected
to the conclusion instead of the cut formula:
\disp{\infer[cut]{\vdash P,\Delta}{
\infer[init]{\vdash P,P^\perp}{} &
\infer{\vdash P\vphantom{^\perp},\Delta}{\Pi}}
}{
\infer{\vdash P,\Delta}{\Pi}
}
\item
When permuting a cut and a $\mathrel{\otimes}$,
the cut is dispatched according to the splitting of the cut formula.
When permuting a cut and a $\with$, the cut is duplicated.
The rules $\parr$ and $\mathrel{\oplus}$ are easily commuted down the cut.
\item The commutations of $\top$ and $\perp$ are simple,
and there is none for $\mathbf{1}$ or $\mathbf{0}$.
\item When $\forall$ is introduced, it is permuted down and
the signature of the other derivation is extended.
The $\exists$ rule is permuted down without any problem.
\item There is no commutation for equality ($=$).
When a disequality (${\neq}$) is permuted down, the other premise is
duplicated and instantiated:
\disp{ \infer[cut]{\Sigma; \vdash \Gamma',u\neq v,\Delta}{
\infer[\neq]{\Sigma; \vdash \Gamma',u\neq v,P^\perp}{
\All{
\infer{\Sigma\theta; \vdash \Gamma'\theta,P^\perp\theta}{
\Pi_\theta}}
} &
\infer{\Sigma; \vdash P\vphantom{^\perp},\Delta}{\Pi'}}
}{
\infer[\neq]{\Sigma; \vdash \Gamma',u\neq v,\Delta}{
\All{
\infer[cut]{\Sigma\theta; \vdash \Gamma'\theta,\Delta\theta}{
\infer{\Sigma\theta; \vdash \Gamma'\theta, P^\perp\theta}{\Pi_\theta}
&
\infer{\Sigma\theta; \vdash P\vphantom{^\perp}\theta,\Delta\theta}{
\Pi'\theta}
}
}
}
}
\item
$\Gamma=\Gamma',\mu B\t$ and that least fixed point is introduced:
\disp{ \infer[cut]{\vdash \Gamma',\mu B\t,\Delta}{
\infer[\mu]{\vdash \Gamma',\mu B\t,P^\perp}{
\vdash \Gamma',B(\mu B)\t,P^\perp}
& \vdash P,\Delta}
}{
\infer[\mu]{\vdash \Gamma',\mu B\t,\Delta}{
\infer[cut]{\vdash \Gamma',B(\mu B)\t,\Delta}{
\vdash \Gamma',B(\mu B)\t,P^\perp
& \vdash P,\Delta}}
}
\item
$\Gamma=\Gamma',\nu B\t$ and that greatest fixed point is introduced:
\disp{
\infer[cut]{\vdash \Gamma',\nu B\t,\Delta}{
\infer[\nu]{\vdash \Gamma',\nu B\t,P^\perp}{
\vdash \Gamma',S\t,P^\perp & \vdash S\vec{x}^\perp, BS\vec{x}}
& \vdash P,\Delta
}
}{
\infer[\nu]{\vdash \Gamma',\nu B\t,\Delta}{
\infer[cut]{\vdash \Gamma',S\t,\Delta}{
\vdash \Gamma',S\t,P^\perp & \vdash P,\Delta}
& \vdash S\vec{x}^\perp, BS\vec{x}}
}
\end{longitem}
\subsubsection{Main cases}
When a logical rule is applied on the cut formula on both sides,
one of the following reductions applies.
\begin{longitem}
\item
In the multiplicative case,
$\Gamma$ is split into $(\Gamma',\Gamma'')$
and we cut the subformulas.
\disp{
\infer[cut]{\vdash \Gamma',\Gamma'',\Delta}{
\infer[\mathrel{\otimes}]{\vdash \Gamma', \Gamma'',P'\mathrel{\otimes} P''\vphantom{^\perp}}{
\vdash \Gamma', P' &
\vdash \Gamma'', P''} &
\infer[\parr]{\vdash P'^\perp\parr P''^\perp,\Delta}{
\vdash P'^\perp, P''^\perp, \Delta}}
}{
\infer[cut]{\vdash \Gamma',\Gamma'',\Delta}{
\vdash \Gamma', P' &
\infer[cut]{\vdash P'^\perp, \Gamma'',\Delta}{
\vdash \Gamma'', P'' &
\vdash P'^\perp, P''^\perp, \Delta}}
}
\item
In the additive case, we select the appropriate premise of $\with$.
\disp{
\infer[cut]{\vdash \Gamma,\Delta}{
\infer[\mathrel{\oplus}]{\vdash \Gamma, P_0 \mathrel{\oplus} P_1}{\vdash \Gamma, P_i} &
\infer[\with]{\vdash \Delta, P_0^\perp \with P_1^\perp}{
\vdash \Delta, P_0^\perp &
\vdash \Delta, P_1^\perp}}
}{
\infer[cut]{\vdash \Gamma,\Delta}{
\vdash \Gamma, P_i &
\vdash \Delta, P_i^\perp}
}
\item The $\mathbf{1}/\bot$ case reduces to
the subderivation of $\bot$.
There is no case for $\top/\mathbf{0}$.
\item In the first-order quantification case,
we perform a proof instantiation:
\disp{ \infer[cut]{\Sigma; \vdash \Gamma,\Delta}{
\infer[\exists]{\Sigma; \vdash \Gamma,\exists x.~ P x^\perp}{
\infer{\Sigma; \vdash \Gamma,P t^\perp}{\Pi_l}} &
\infer[\forall]{\Sigma; \vdash \forall x.~ P x\vphantom{^\perp}, \Delta}{
\infer{\Sigma,x; \vdash P x\vphantom{^\perp}, \Delta}{\Pi_r}}}
}{
\infer[cut]{\Sigma; \vdash \Gamma,\Delta}{
\infer{\Sigma; \vdash \Gamma,P t^\perp}{\Pi_l} &
\infer{\Sigma; \vdash P t, \Delta\vphantom{^\perp}}{\Pi_r[t/x]}}
}
\item The equality case is trivial; the interesting part concerning
this connective lies in the proof instantiations triggered by other
reductions. Since we are considering two terms that are already
equal, we have $csu(u\stackrel{.}{=} u)=\{id\}$ and
we can simply reduce to the subderivation corresponding to the
identity substitution:
\disp{ \infer[cut]{\Sigma; \vdash \Delta}{
\infer[=]{\Sigma; \vdash u=u}{} &
\infer[\neq]{\Sigma; \vdash u\neq u, \Delta}{
\infer{\Sigma; \vdash \Delta}{\Pi_{id}}
}}
}{
\infer{\Sigma; \vdash \Delta}{\Pi_{id}}
}
\item Finally in the fixed point case,
we make use of the functoriality transformation for propagating
the coinduction/recursion under $B$:
\disp{
\infer[cut]{\Sigma; \vdash \Gamma,\Delta}{
\infer[\mu]{\Sigma; \vdash \Gamma, \mu B\t}{
\infer{\Sigma; \vdash \Gamma, B(\mu B)\t}{\Pi'_l}} &
\infer[\nu]{\Sigma; \vdash \Delta, \nu \overline{B}\t}{
\infer{\Sigma; \vdash \Delta, S\t}{\Pi'_r} &
\infer{\vec{x}; \vdash S\vec{x}^\perp, \overline{B} S\vec{x}\vphantom{\t}}{\Theta}
}}
}{
\infer[cut]{\Sigma; \vdash \Gamma,\Delta}{
\infer{\Sigma; \vdash \Delta,S\t}{\Pi'_r} &
\infer[cut]{\Sigma; \vdash S\t^\perp,\Gamma}{
\infer{\Sigma; \vdash S\t^\perp, \overline{B} S\t}{\Theta[\t/\vec{x}]} &
\infer[cut]{\Sigma; \vdash B S^\perp \t, \Gamma}{
\infer{\Sigma; \vdash B S^\perp \t, \overline{B}(\nu\overline{B})\t}{
F_{B{\bullet}\t}(\nu(Id,\Theta))} &
\infer{\Sigma; \vdash B (\mu B)\t, \Gamma}{\Pi'_l}
}}}
}
\end{longitem}
One-step reduction $\Pi\rightarrow\Pi'$ is defined as the congruence
generated by the above rules.
We now seek to establish that such reductions can be applied
to transform any derivation into a cut-free one.
However, since we are dealing with transfinite (infinitely branching)
proof objects,
there are trivially derivations which cannot be reduced into
a cut-free form in a finite number of steps.
A possibility would be to consider transfinite reduction sequences,
relying on a notion of convergence for defining limits.
A simpler solution, enabled by the fact that our infinity
only happens ``in parallel'', is to define inductively
the transfinite reflexive transitive closure of one-step reduction.
\newcommand{\ra}{\rightarrow}
\begin{definition}[Reflexive transitive closure, ${\cal WN}$]
We define inductively $\Pi\ra^*\Xi$ to hold when
(1) $\Pi\rightarrow\Xi$,
(2) $\Pi\ra^*\Pi'$ and $\Pi'\ra^*\Xi$,
or
(3) $\Pi$ and $\Xi$ start with the same rule and their premises are in relation
(\emph{i.e.,}\ for some rule ${\cal R}$,
$\Pi = {\cal R}(\Pi_i)_i$, $\Xi = {\cal R}(\Xi_i)_i$
and each $\Pi_i\ra^*\Xi_i$).
We say that $\Pi$ \emph{normalizes} when there exists a cut-free
derivation $\Pi'$ such that $\Pi\ra^*\Pi'$.
We denote by ${\cal WN}$ the set of all normalizing derivations.
\end{definition}
From (1) and (2), it follows that if $\Pi$ reduces to $\Xi$ in $n>0$ steps,
then $\Pi\ra^*\Xi$.
From (3) it follows that $\Pi\ra^*\Pi$ for any $\Pi$.
In the finitely branching case, \emph{i.e.,}\ if the $\neq$ connective was
removed or the system ensured finite $csu$, the role of (3) is only
to ensure reflexivity.
In the presence of infinitely branching rules, however,
it also plays the important role of packaging an infinite number of reductions.
In the finitely branching case, one can show that $\Pi\ra^*\Xi$ implies
that there is a finite reduction sequence from $\Pi$ to $\Xi$
(by induction on $\Pi\ra^*\Xi$),
and so our definition of normalization corresponds to the usual notion
of weak normalization in that case.
\begin{proposition}
If $\Pi\rightarrow\Xi$ then $\Pi\theta\ra^*\Xi\theta$.
\end{proposition}
\begin{proof}
By induction on $\Pi$.
If the redex is not at toplevel but in an immediate subderivation $\Pi'$,
then the corresponding subderivations in $\Pi\theta$ shall be reduced.
If the first rule of $\Pi$ is disequality, there may be zero, several
or infinitely many subderivations of $\Pi\theta$ of the form $\Pi'\theta'$.
Otherwise there is only one such subderivation.
In both cases,
we show $\Pi\theta\ra^*\Xi\theta$ by (3),
using the induction hypothesis for the subderivations where the redex is,
and reflexivity of $\ra^*$ for the others.
If the redex is at toplevel, then $\Pi\theta\rightarrow\Xi\theta$.
The only non-trivial cases are the two reductions involving ${\neq}$.
In the auxiliary case, we have:
\[ \xymatrix{
cut({\neq}(\Pi_\sigma)_{\sigma\in csu(u\stackrel{.}{=} v)};\Pi_r)
\ar[r] \ar[d]^{\theta}
& {\neq}(cut(\Pi_\sigma;\Pi_r\sigma))_\sigma \ar[d]^{\theta} \\
cut({\neq}(\Pi'_{\sigma'})_{\sigma'\in csu(u\theta\stackrel{.}{=} v\theta)};
\Pi_r\theta)
\ar@{.>}[r]
& {\neq}(cut(\Pi'_{\sigma'};(\Pi_r\theta)\sigma'))_{\sigma'}
} \]
By Definition~\ref{def:inst}, $\Pi'_{\sigma'} = \Pi_{\sigma}\sigma''$ for
$\theta\sigma' = \sigma\sigma''$, $\sigma\in csu(u\stackrel{.}{=} v)$.
Applying $\theta$ on the reduct of $\Pi$, we obtain for each $\sigma'$
the subderivation
$cut(\Pi_\sigma;\Pi_r\sigma)\sigma'' =
cut(\Pi_\sigma\sigma'';\Pi_r\sigma\sigma'') =
cut(\Pi'_{\sigma'};\Pi_r\theta\sigma')$.
In the main case, $\Pi = cut({\neq}(\Pi_{id});u=u) \rightarrow \Pi_{id}$
and $\Pi\theta = cut({\neq}(\Pi'_{id});u\theta=u\theta)
\rightarrow \Pi'_{id} = \Pi_{id}\theta$.
\end{proof}
\begin{proposition} \label{prop:theta_wn}
If $\Pi$ is normalizing then so is $\Pi\theta$.
\end{proposition}
\begin{proof}
Given a cut-free derivation $\Pi'$ such that $\Pi\ra^*\Pi'$,
we show that $\Pi\theta\ra^*\Pi'\theta$ by a simple induction on $\Pi\ra^*\Pi'$,
making use of the previous proposition.
\end{proof}
\begin{proposition} \label{prop:id_wn}
We say that $\Xi$ is an $Id$-simplification of $\Pi$
if it is obtained from $\Pi$ by reducing an arbitrary,
potentially infinite number of redexes $cut(\Theta;Id)$ into $\Theta$.
If $\Xi$ is an $Id$-simplification of $\Pi$
and $\Pi$ is normalizing, then so is $\Xi$.
\end{proposition}
\begin{proof}
We show more generally
that if $\Xi$ is a simplification of $\Pi$ and $\Pi\ra^*\Pi'$
then $\Xi\ra^*\Xi'$ for some simplification $\Xi'$ of $\Pi'$.
This is easily done by induction on $\Pi\ra^*\Pi'$, once we have
established the following fact:
\emph{
If $\Xi$ is a simplification of $\Pi$
and $\Pi\rightarrow\Pi'$,
then $\Xi\ra^*\Xi'$ for a simplification $\Xi'$ of $\Pi'$.}
If the redex in $\Pi$
does not involve simplified cuts, the same reduction can be
performed in $\Xi$, and the result is a simplification of $\Pi'$
(note that this could erase or duplicate some simplifications).
If the reduction is one of the simplifications,
then $\Xi$ itself is a simplification of $\Pi'$.
If a simplified cut is permuted with another cut (simplified or not)
$\Xi$ is also a simplification of $\Pi'$.
Finally, other auxiliary reductions on a simplified cut
also yield reducts of which $\Xi$ is already a simplification
(again, simplifications may be erased or duplicated).
\end{proof}
\subsection{Reducibility candidates}
\begin{definition}[Type]
A proof of type $P$ is a proof with a distinguished formula $P$
in its conclusion sequent.
We denote by $Id_P$ the axiom rule between $P$ and $P^\perp$,
of type $P$.
\end{definition}
In full detail, a type should contain a signature under which the formula
is closed and well typed.
That extra level of information would be heavy, and since no real
difficulty lies in dealing with it, we prefer to leave it implicit.
If $X$ is a set of proofs,
we shall write $\Pi : P \in X$
as a shortcut for ``$\Pi\in X$ and $\Pi$ has type $P$''.
We say that $\Pi$ and $\Pi'$ are \emph{compatible} if their types are dual
of each other.
\begin{definition}[Orthogonality]
For $\Pi, \Pi' \in {\cal WN}$,
we say that $\Pi\Perp\Pi'$ when
for any $\theta$ and $\theta'$ such that $\Pi\theta$ and $\Pi'\theta'$
are compatible,
$cut(\Pi\theta;\Pi'\theta')\in{\cal WN}$.
For $\Pi\in{\cal WN}$ and $X\subseteq{\cal WN}$,
$\Pi\Perp X$ iff $\Pi\Perp\Pi'$ for any $\Pi'\in X$,
and $X^\perp$ is $\set{\Pi\in{\cal WN}}{\Pi\Perp X}$.
Finally, for $X,Y\subseteq{\cal WN}$, $X\Perp Y$ iff
$\Pi\Perp\Pi'$ for any $\Pi\in X$, $\Pi'\in Y$.
\end{definition}
\begin{definition}[Reducibility candidate]
A \emph{reducibility candidate} $X$ is a set of normalizing proofs
that is equal to its bi-orthogonal, \emph{i.e.,}\ $X=X^{\perp\perp}$.
\end{definition}
\newcommand{\mathrm{lfp}}{\mathrm{lfp}}
That kind of construction has some well-known properties\footnote{
This so-called \emph{polar} construction is used independently
for reducibility candidates and phase semantics in \cite{girard87tcs},
but also, for example,
to define behaviors in ludics \cite{girard01mscs}.
},
which do not rely on the definition of the relation $\Perp$.
For any sets of normalizable derivations $X$ and $Y$,
$X\subseteq Y$ implies $Y^\perp \subseteq X^\perp$
and $(X\cup Y)^\perp=X^\perp \cap Y^\perp$;
moreover, the symmetry of $\Perp$ implies that $X\subseteq X^{\perp\perp}$,
and hence $X^\perp=X^{\perp\perp\perp}$
(in other words, $X^\perp$ is always a candidate).
Reducibility candidates, ordered by inclusion, form a complete lattice:
given an arbitrary collection of candidates $S$,
it is easy to check that
$({\bigcup} S)^{\perp\perp}$ is its least upper bound in the lattice,
and ${\bigcap} S$ its greatest lower bound.
We check the minimality of $({\bigcup} S)^{\perp\perp}$:
any upper bound $Y$ satisfies ${\bigcup} S \subseteq Y$,
and hence $({\bigcup} S)^{\perp\perp} \subseteq Y^{\perp\perp} = Y$.
Concerning the greatest lower bound, the only non-trivial thing
is that it is a candidate, but it suffices to observe that
${\bigcap} S = {\bigcap}_{X\in S} X^{\perp\perp} =
({\bigcup}_{X\in S} X^\perp)^\perp$.
The least candidate is $\emptyset^{\perp\perp}$
and the greatest is ${\cal WN}$.
Having a complete lattice, we can use the Knaster-Tarski theorem:
any monotonic operator $\phi$ on reducibility candidates
admits a least fixed point $\mathrm{lfp}(\phi)$ in the lattice of candidates.
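Recall that the least fixed point provided by the Knaster-Tarski theorem
can be described explicitly as the intersection of all prefixed points:
\[ \mathrm{lfp}(\phi) ~=~ {\bigcap}\,
   \set{X}{X \mbox{ is a reducibility candidate and } \phi(X)\subseteq X} \]
which is indeed a candidate by the above description of greatest lower bounds.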
Our definition of $\Perp$ yields some basic observations about candidates.
They are closed under substitution,
\emph{i.e.,}\ $\Pi\in X$ implies $\Pi\theta\in X$ for any $\theta$.
Indeed, $\Pi\in X$ is equivalent to $\Pi\Perp X^\perp$
which implies $\Pi\theta\Perp X^\perp$ by definition of $\Perp$
and Proposition \ref{prop:theta_wn}.
Hence, $Id_P$ belongs to any candidate, since
for any $\Pi\in X^\perp$,
$cut(Id_{P\theta};\Pi\theta')\rightarrow\Pi\theta'\in X^\perp \subseteq {\cal WN}$.
Candidates are also closed under expansion, \emph{i.e.,}\
$\Pi'\rightarrow\Pi$ and $\Pi\in X$ imply that $\Pi'\in X$.
Indeed,
for any $\Xi\in X^\perp$,
$cut(\Pi'\theta;\Xi\theta')\ra^* cut(\Pi\theta;\Xi\theta')$
by Proposition~\ref{prop:theta_wn},
and the latter derivation normalizes.
A useful simplification follows from those properties:
for a candidate $X$, $\Pi\Perp X$ if for any $\theta$
and compatible $\Pi'\in X$, $cut(\Pi\theta;\Pi')$ normalizes; there
is no need to explicitly consider instantiations of members of $X$,
and since $Id\in X$, there is no need to show that $\Pi$ normalizes
by Proposition~\ref{prop:id_wn}.
The generalization over all substitutions is the only
novelty in our definitions. It is there to internalize the
fact that proof behaviors are essentially independent of their
first-order structure. By taking this into account from the beginning
in the definition of orthogonality, we obtain bi-orthogonals (behaviors)
that are closed under inessential transformations like substitution.
As a result,
unlike in most reducibility-candidate arguments, our candidates are
untyped. In fact, we could type them up to first-order details,
\emph{i.e.,}\ restrict to sets of proofs whose types have the same
propositional structure. Although that might look more familiar,
we prefer to avoid those unnecessary details.
\begin{definition}[Reducibility]
Let $\Pi$ be a proof of $\vdash P_1,\ldots,P_n$,
and $(X_i)_{i=1\ldots n}$ a collection of reducibility candidates.
We say that $\Pi$ is $(X_1,\ldots,X_n)$-reducible if for any $\theta$
and any derivations $(\Pi'_i : P_i\theta^\perp \in X_i^\perp)_{i=1\ldots n}$,
the derivation
$cut(\Pi\theta;\Pi'_1,\ldots,\Pi'_n)$ normalizes.
\end{definition}
From this definition, it immediately follows that
if $\Pi$ is $(X_1,\ldots,X_n)$-reducible
then so is $\Pi\theta$.
Also observe that $Id_P$ is $(X,X^\perp)$-reducible for any candidate $X$,
since for any $\Pi\in X$ and $\Pi'\in X^\perp$
$cut(Id_{P\theta};\Pi,\Pi')$ reduces to $cut(\Pi;\Pi')$ which normalizes.
Finally, any $(X_1,\ldots,X_n)$-reducible derivation $\Pi$ normalizes,
by Proposition~\ref{prop:id_wn} and the fact that
$cut(\Pi;Id,\ldots,Id)$ normalizes.
\begin{proposition} \label{prop:red-interp}
Let $\Pi$ be a proof of $\vdash P_1,\ldots,P_n$,
let $(X_i)_{i=1\ldots n}$ be a family of candidates,
and let $j$ be an index in $1\ldots n$.
The two following statements are equivalent:
\emph{(1)} $\Pi$ is $(X_1,\ldots,X_n)$-reducible;
\emph{(2)}
for any $\theta$ and $(\Pi'_i : P_i\theta^\perp \in X_i^\perp)_{i\neq j}$,
$cut(\Pi\theta;(\Pi'_i)_{i\neq j})\in X_j$.
\end{proposition}
\begin{proof}
{(1) $\Rightarrow$ (2)}:
Given such $\theta$ and $(\Pi'_i)_{i\neq j}$,
we show that the derivation
$cut(\Pi\theta;(\Pi'_i)_{i\neq j}) \in X_j$.
Since $X_j=X_j^{\perp\perp}$,
it is equivalent to show that our derivation is in the orthogonal
of $X_j^\perp$.
For each $\sigma$ and $\Pi'': P_j\theta\sigma^\perp \in X_j^\perp$,
we have to show that
$cut(cut(\Pi\theta;(\Pi'_i)_{i\neq j})\sigma;\Pi'')$ normalizes.
Using cut permutation reductions, we reduce it into
$cut(\Pi\theta\sigma;
\Pi'_1\sigma,\ldots,\Pi'',\ldots, \Pi'_n\sigma)$,
which normalizes by reducibility of $\Pi$.
{(2) $\Rightarrow$ (1)} is similar:
we have to show that
$cut(\Pi\theta;\Pi'_1,\ldots,\Pi'_n)$ normalizes,
we reduce it into
$cut(cut(\Pi\theta;(\Pi'_i)_{i\neq j});\Pi'_j)$
which normalizes since $\Pi'_j\in X_j^\perp$
and the left subderivation belongs to $X_j$ by hypothesis.
\end{proof}
\subsection{Interpretation} \label{sec:interpmu}
We interpret formulas as reducibility candidates,
extending Girard's interpretation of MALL connectives~\cite{girard87tcs}.
\begin{definition}[Interpretation] \label{def:interp}
Let $P$ be a formula
and ${\cal E}$ an environment
mapping each $n$-ary predicate variable $p$ occurring in $P$ to a candidate.
We define by induction on $P$ a candidate
called \emph{interpretation of $P$ under ${\cal E}$} and
denoted by $\interp{P}^{\cal E}$.
\[
\interp{p\t}^{\cal E} = {\cal E}(p)
\quad
\interp{a\vec{u}}^{\cal E} =
\All{\infer{\vdash a\vec{v}^\perp, a\vec{v}}{}}^{\perp\perp}
\quad
\interp{\mathbf{0}}^{\cal E} = \emptyset^{\perp\perp}
\quad
\interp{\mathbf{1}}^{\cal E} = \All{\infer{\vdash \mathbf{1}}{}}^{\perp\perp}
\]
\vspace{-0.5cm}{\allowdisplaybreaks
\begin{eqnarray*}
\interp{P \mathrel{\otimes} P'}^{\cal E} &=&
\Set{
\infer{\vdash \Delta,\Delta',Q\mathrel{\otimes} Q'}{
\infer{\vdash \Delta,Q\vphantom{'}}{\Pi} &
\infer{\vdash \Delta',Q'}{\Pi'}}
}{
\Pi:Q \in\interp{P}^{\cal E}, \Pi':Q' \in\interp{P'}^{\cal E}
}^{\perp\perp} \\
\interp{P_0 \mathrel{\oplus} P_1}^{\cal E} &=&
\Set{
\infer{\vdash \Delta,Q_0\mathrel{\oplus} Q_1}{\infer{\vdash \Delta,Q_i}{\Pi}}
}{
i \in \{0,1\}, \Pi : Q_i \in \interp{P_i}^{\cal E}
}^{\perp\perp} \\
\interp{\exists x.~ P x}^{\cal E} &=&
\Set{
\infer{\vdash \Gamma, \exists x.~ Q x}{
\infer{\vdash \Gamma, Q t}{\Pi}}
}{
\Pi : Q t \in \interp{P t}^{\cal E}
}^{\perp\perp} \\
\interp{u=v}^{\cal E} &=& \All{\infer{\vdash t=t}{}}^{\perp\perp}
\\
\interp{\mu B\t}^{\cal E} &=&
\mathrm{lfp}(
X \mapsto
\set{\mu \Pi}{
\Pi : B (\mu B) \vec{t'} \in [Bp\t]^{{\cal E},p\mapsto X}
}^{\perp\perp}
) \\
\interp{P}^{\cal E} &=& (\interp{P^\perp}^{\cal E})^\perp
\mbox{~ for all other cases}
\end{eqnarray*}}
The validity of that definition relies on a few observations.
It is easy to check that we do only form (bi-)orthogonals of
sets of proofs that are normalizing.
More importantly, the existence of least fixed point
candidates relies on the monotonicity of interpretations,
inherited from that of operators.
More generally,
$\interp{P}^{\cal E}$ is monotonic in ${\cal E}(p)$ if $p$ occurs only positively
in $P$, and antimonotonic in ${\cal E}(p)$ if $p$ occurs only negatively.
The two statements are proved simultaneously, following the
definition by induction on $P$.
Except for the least fixed point case,
it is trivial to check that (anti)monotonicity is preserved by the
first clauses of Definition~\ref{def:interp}, and in the case
of the last clause $\interp{P}^{\cal E} = (\interp{P^\perp}^{\cal E})^\perp$
each of our two statements is derived from the other.
Let us now consider the definition of $[\mu B \t]^{\cal E}$,
written $\mathrm{lfp}(\phi_{\cal E})$ for short.
First, the construction is well-defined:
by induction hypothesis and monotonicity of $B$,
$\interp{B q \t}^{{\cal E},q\mapsto X}$ is monotonic in $X$,
and hence $\phi_{\cal E}$ is also monotonic and admits a least fixed point.
We then show that $\mathrm{lfp}(\phi_{\cal E})$ is monotonic in ${\cal E}(p)$ when
$p$ occurs only positively in $B$; antimonotonicity would be obtained
in a symmetric way.
If ${\cal E}$ and ${\cal E}'$ differ only on $p$ and ${\cal E}(p)\subseteq {\cal E}'(p)$,
we obtain by induction hypothesis that
$\phi_{\cal E}(X)\subseteq \phi_{{\cal E}'}(X)$ for any candidate $X$, and in particular
$\phi_{\cal E}(\mathrm{lfp}(\phi_{{\cal E}'})) \subseteq \phi_{{\cal E}'}(\mathrm{lfp}(\phi_{{\cal E}'})) =
\mathrm{lfp}(\phi_{{\cal E}'})$,
\emph{i.e.,}\ $\mathrm{lfp}(\phi_{{\cal E}'})$ is a prefixed point of $\phi_{\cal E}$,
and thus $\mathrm{lfp}(\phi_{\cal E})\subseteq \mathrm{lfp}(\phi_{{\cal E}'})$, that is to say
$[\mu B \t]^{\cal E}$ is monotonic in ${\cal E}(p)$.
\end{definition}
\begin{proposition}
For any $P$ and ${\cal E}$, $([P]^{\cal E})^\perp = [P^\perp]^{\cal E}$.
\end{proposition}
\begin{proposition} \label{prop:interp_subst}
For any $P$, $\theta$ and ${\cal E}$, $\interp{P}^{\cal E} = \interp{P\theta}^{\cal E}$.
\end{proposition}
\begin{proposition}
For any ${\cal E}$, monotonic $B$ and $S$,
$\interp{B S}^{\cal E} =
\interp{B p}^{{\cal E}{}, p\mapsto \interp{S}^{\cal E}}$.
\end{proposition}
Those three propositions are easy to prove,
the first one immediately following from Definition~\ref{def:interp}
by involutivity of both negations (on formulas and on candidates),
the other two by induction (respectively on $P$ and $B$).
Proposition~\ref{prop:interp_subst} has an important
consequence: $\Pi\in[P]$ implies $\Pi\theta\in [P\theta]$,
\emph{i.e.,}\ our interpretation is independent of first-order aspects.
This explains some perhaps surprising parts of the definition
such as the interpretation of least fixed points, where it
seems that we are not allowing the parameter of the fixed point
to change from one instance to its recursive occurrences.
In the following,
when the term structure is irrelevant or confusing, we shall write
$\interp{S}^{\cal E}$ for $\interp{S\t}^{\cal E}$.
For a predicate operator expression
$(\lambda \vec{p}.~ B\vec{p})$ of first-order arity $0$,
we shall write $\interp{B}^{\cal E}$ for
$\vec{X} \mapsto \interp{B\vec{p}}^{{\cal E},(p_i\mapsto X_i)_i}$.
When even more concision is desirable,
we may also write $\interp{B\vec{X}}^{\cal E}$ for
$\interp{B}^{\cal E}\vec{X}$.
Finally, we simply write $[P]$ and $[B]$ when ${\cal E}$ is empty.
\begin{lemma} \label{lem:fun}
Let $X$ and $Y$ be two reducibility candidates,
and $\Pi$ be a proof of $\vdash P\vec{x},Q\vec{x}$ that is $(X,Y)$-reducible.
Then $F_B(\Pi)$ is $([B]X,[\overline{B}]Y)$-reducible.
\end{lemma}
\begin{lemma} \label{lem:nu}
Let $X$ be a candidate
and $\Theta$ a derivation of $\vdash S\vec{x}^\perp, \overline{B} S \vec{x}$ that
is $(X^\perp,[\overline{B}]X)$-reducible.
Then
$\nu(Id_{S\t},\Theta)$ is $(X^\perp,[\nu \overline{B}\t])$-reducible for any $\t$.
\end{lemma}
\begin{proof}[of Lemmas~\ref{lem:fun} and \ref{lem:nu}]
We prove them simultaneously, generalized as follows
for any monotonic operator $B$ of second-order arity $n+1$,
and any predicates $\vec{A}$ and candidates $\vec{Z}$:
\begin{enumerate}
\item For any $(X,Y)$-reducible $\Pi$,
$F_{B\vec{A}}(\Pi)$ is
$([B]\vec{Z}X,[\overline{B}]\vec{Z}^\perp Y)$-reducible.
\item For any $(X^\perp,[\overline{B}]\vec{Z}^\perp X)$-reducible $\Theta$,
$\nu(Id_{S\t},\Theta)$ is
$(X^\perp,[\nu(\overline{B}\vec{Z}^\perp)\t])$-reducible.
\end{enumerate}
We proceed by induction on $B$: we first establish (1),
relying on strictly smaller instances of both (1) and (2);
then we prove (2) by relying on (1) for the same $B$
(modulo size-preserving first-order details).
The purpose of the generalization is to separate the main part of $B$
from auxiliary parts $\vec{A}$, which may be large
and whose interpretations $\vec{Z}$ may depend on $X$ and $Y$,
but play a trivial role.
\begin{enumerate}
\item
If $B$ is of the form $(\lambda \vec{p} \lambda q.~ B'\vec{p})$,
then $F_{B\vec{A}}(\Pi)$ is simply $Id_{B'\vec{A}}$,
which is trivially $([B'\vec{Z}],[\overline{B'}\vec{Z}^\perp])$-reducible
since $[\overline{B'}\vec{Z}^\perp] = [B'\vec{Z}]^\perp$.
If $B$ is of the form
$(\lambda\vec{p}\lambda q.~ q\t)$,
then $F_{B\vec{A}}(\Pi)$
is $\Pi[\t/\vec{x}]$ which is $(X,Y)$-reducible.
Otherwise, $B$ starts with a logical connective.
Following the definition of $F_B$, dual connectives are treated
in a symmetric way.
The tensor case essentially consists in showing that
if $\Pi'\vdash~P',Q'$ is $([P'],[Q'])$-reducible and
$\Pi''\vdash~P'',Q''$ is $([P''],[Q''])$-reducible then
the following derivation is $([P'\mathrel{\otimes} P''], [Q'\parr Q''])$-reducible:
\[ \infer[\parr]{\vdash P'\mathrel{\otimes} P'', Q'\parr Q''}{
\infer[\mathrel{\otimes}]{\vdash P'\mathrel{\otimes} P'',Q',Q''}{
\infer{\vdash P',Q'}{\Pi'} &
\infer{\vdash P'',Q''}{\Pi''}}} \]
\begin{longitem}
\item
The subderivation $\Pi'\mathrel{\otimes}\Pi''$ is
$([P'\mathrel{\otimes} P''],[Q'],[Q''])$-reducible: By
Proposition~\ref{prop:red-interp} it suffices to show that
for any $\theta$ and compatible $\Xi'\in[Q']^\perp$ and $\Xi''\in[Q'']^\perp$,
$cut((\Pi'\mathrel{\otimes}\Pi'')\theta;\Xi',\Xi'')$ belongs to $[P'\mathrel{\otimes} P'']$.
This follows from: the fact that it reduces to
$cut(\Pi'\theta;\Xi')\mathrel{\otimes} cut(\Pi''\theta;\Xi'')$;
that those two conjuncts are respectively in $[P']$ and $[P'']$ by
hypothesis;
and that $\set{u\mathrel{\otimes} v}{u\in[P'], v\in[P'']}$ is a subset
of $[P'\mathrel{\otimes} P'']$ by definition of the interpretation.
\item
We then prove that the full derivation, instantiated by $\theta$
and cut against any compatible $\Xi\in[P'\mathrel{\otimes} P'']^\bot$,
is in $[Q'\parr Q'']$.
Since the interpretation of ${\parr}$
is $\set{u\mathrel{\otimes} v}{u\in[Q']^\perp, v\in[Q'']^\perp}^\perp$,
it suffices to show that
$cut((\parr(\Pi'\mathrel{\otimes}\Pi''))\theta;\Xi)$ normalizes (which
follows from the reducibility of $\Pi'\mathrel{\otimes}\Pi''$) and that
for any substitutions $\sigma$ and $\sigma'$,
$cut((\parr(\Pi'\mathrel{\otimes}\Pi''))\theta;\Xi)\sigma$ normalizes when cut against
any such compatible $(u\mathrel{\otimes} v)\sigma'$.
Indeed, that cut reduces, using cut permutations and
the main multiplicative reduction, into
$cut(cut((\Pi'\mathrel{\otimes}\Pi'')\theta\sigma;\Xi\sigma);u\sigma',v\sigma')$
which normalizes by reducibility of $\Pi'\mathrel{\otimes}\Pi''$.
\end{longitem}
The additive case follows the same outline.
There is no case for units, including $=$ and $\neq$,
since they are treated with all formulas where $p$ does not occur.
In the case of first-order quantifiers,
say $B = \lambda \vec{p} \lambda q.~ \exists x.~ B' \vec{p} q x$,
we essentially have to show that,
assuming that $\Pi$ is $([P x], [Q x])$-reducible,
the following derivation is
$([\exists x.~ P x], [\forall x.~ Q x])$-reducible:
\[
\infer[\forall]{\Sigma;\vdash \exists x.~ P x, \forall x.~ Q x}{
\infer[\exists]{\Sigma,x;\vdash \exists x.~ P x, Q x}{
\infer{\Sigma,x;\vdash P x, Q x}{\Pi}}}
\]
\begin{longitem}
\item
We first establish that the immediate subderivation $\exists(\Pi)$
is reducible, by considering $cut(\exists(\Pi)\theta;\Xi)$
for any $\theta$ and compatible $\Xi\in[Q x]^\perp$.
We reduce that derivation into
$\exists(cut(\Pi\theta;\Xi))$ and conclude by
definition of $[\exists x.~ P x]$ and the fact that
$cut(\Pi\theta;\Xi)\in [P x]$.
\item
To prove that $\forall(\exists(\Pi))$ is reducible,
we show that $cut(\forall(\exists(\Pi))\theta;\Xi)$ belongs to
$[\forall x.~ Q x]$ for any $\theta$
and compatible $\Xi \in [\exists x.~ P x]^\perp$.
Since $[\forall x.~ Q x] = \set{\exists \Xi'}{\Xi'\in [Q t]^\perp}^\perp$,
this amounts to showing that our derivation normalizes
(which follows from the reducibility of $\exists(\Pi)$)
and that
$cut(cut(\forall(\exists(\Pi))\theta;\Xi)\sigma;(\exists\Xi')\sigma')$
normalizes for any $\sigma$, $\sigma'$ and compatible $\Xi' \in [Q t]^\perp$.
Indeed, this derivation reduces,
by permuting the cuts and performing the main $\forall/\exists$ reduction,
into $cut(\exists(\Pi)\theta\sigma[t\sigma'/x];\Xi'\sigma',\Xi\sigma)$,
which normalizes by reducibility of $\exists(\Pi)$.
\end{longitem}
Finally, we show the fixed point case in full details since this
is where the generalization is really useful.
When $B$ is of the form $\lambda \vec{p} \lambda q.~ \mu(B'\vec{p}q)\t$,
we are considering the following derivation:
\[ \hspace{-0.3cm}\infer[\nu]{
\vdash \mu(B'\vec{A}P)\t, \nu(\overline{B'}\vec{A}^\perp Q) \t}{
\infer[init]{
\vdash \mu(B'\vec{A}P)\t, \nu(\overline{B'}\vec{A}^\perp P^\perp)\t}{} &
\infer[\mu]{
\vdash \mu(B'\vec{A} P)\vec{x},
\overline{B'}\vec{A}^\perp Q
(\nu(\overline{B'}\vec{A}^\perp P^\perp))\vec{x}}{
\infer{\vdash B'\vec{A} P (\mu (B'\vec{A} P))\vec{x},
\overline{B'}\vec{A}^\perp Q
(\nu(\overline{B'}\vec{A}^\perp P^\perp))\vec{x}}{
F_{B'\vec{A}{\bullet}(\mu(B'\vec{A}P))\vec{x}}(\Pi)}}} \]
We apply induction hypothesis (1) on
$B'' := (\lambda \vec{p} \lambda p_{n+1} \lambda q.~ B' \vec{p} q p_{n+1} \vec{x})$,
with
$A_{n+1} := \mu (B'\vec{A}P)$ and $Z_{n+1} := [\mu (B'\vec{Z}X)]$,
obtaining that the subderivation $F_{\ldots}(\Pi)$ is
$([B'']\vec{Z} Z_{n+1} X,
[\overline{B''}]\vec{Z}^\perp Z_{n+1}^\perp Y)$-reducible.
Then, we establish that $\mu(F_{\ldots}(\Pi))$
is reducible: for any $\theta$ and compatible
$\Xi\in [B'']\vec{Z}Z_{n+1}Y^\perp$,
$cut(\mu(F_{\ldots}(\Pi))\theta;\Xi)$ reduces to
$\mu(cut(F_{\ldots}(\Pi)\theta;\Xi))$ which
belongs to
$ [\mu (B'\vec{Z}X)\vec{x}] =
\set{\mu \Pi'}{
\Pi' \in [B'\vec{Z}X(\mu(B'\vec{Z}X))\vec{x}]}^{\perp\perp} $
by reducibility of $F_{\ldots}(\Pi)$.
We finally obtain the reducibility of the whole derivation by
applying induction hypothesis (2) on $B'$ with $A_{n+1}:=Q^\perp$,
$Z_{n+1} := Y^\perp$ and $X := \interp{\mu (B'\vec{Z}X) \vec{x}}^\perp$.
\item
Here we have to show that for any $\theta$ and any compatible $\Xi\in X$,
the derivation
$cut(\nu(Id_{S\t},\Theta)\theta;\Xi)$ belongs to $[\mu(B\vec{Z})\t]^\perp$.
Since only $\t$ is affected by $\theta$ in such derivations,
we generalize on it directly, and consider the following set:
\[ Y := \set{cut(\nu(Id_{S\vec{t'}},\Theta);\Xi)}{\Xi: S\vec{t'}\in X}^\perp \]
Note that we can form the orthogonal to obtain $Y$, since we are indeed
considering a subset of ${\cal WN}$: any $cut(\nu(Id;\Theta);\Xi)$ reduces
to $\nu(\Xi;\Theta)$, and $\Xi$ and $\Theta$ normalize.
We shall establish that $Y$ is a pre-fixed point of the operator $\phi$
such that $[\mu(B\vec{Z})\t]$ has been defined as $\mathrm{lfp}(\phi)$,
from which it follows that $[\mu(B\vec{Z})\t]\subseteq Y$,
which entails our goal;
note that this is essentially a proof by induction on $[\mu(B\vec{Z})]$.
So we prove the pre-fixed point property:
\[ \set{\mu\Pi}{\Pi: B\vec{A}(\mu (B\vec{A}))\vec{t''}
\in[B\vec{Z} Y \t']}^{\perp\perp} \subseteq Y \]
Observing that, for any $A,B\subseteq{\cal WN}$, we have
$A^{\perp\perp} \subseteq B^\perp \Leftrightarrow
A^{\perp\perp} \Perp B \Leftrightarrow
B \subseteq A^\perp \Leftrightarrow
B \Perp A$,
our property can be rephrased equivalently:
\[ \set{cut(\nu(Id_{S\vec{t'}},\Theta);\Xi)}{\Xi:S\vec{t'}\in X}
\Perp \set{\mu \Pi}{\Pi\in [B\vec{Z}Y\t']} \]
Since both sides are stable by substitution, there is no need
to consider compatibility substitutions here, and it suffices to consider cuts
between any compatible left and right-hand side derivations:
$cut(cut(\nu(Id,\Theta);\Xi);\mu \Pi)$.
It reduces, using cut exchange, the main fixed point reduction
and finally the identity reduction, into:
\[ \hspace{-0.5cm}\infer[cut]{\vdash \Gamma,\Delta}{
\infer{\vdash \Gamma, S\vec{t'}}{\Xi} &
\infer[cut]{\vdash S^\perp\vec{t'}, \Delta}{
\infer{\vdash S^\perp\vec{t'}, \overline{B}\vec{A}^\perp S\vec{t'}}{
\Theta[\vec{t'}/\vec{x}]} &
\infer[cut]{\vdash B\vec{A} S^\perp \vec{t'}, \Delta}{
\infer{\vdash B\vec{A} S^\perp \vec{t'},
\overline{B}\vec{A}^\perp(\nu (\overline{B}\vec{A}^\perp))\vec{t'}}{
F_{B\vec{A}{\bullet}\vec{t'}}(\nu(Id_{S\vec{x}},\Theta))} &
\infer{\vdash B\vec{A}(\mu (B\vec{A}))\vec{t'},\Delta}{\Pi}
}
}
} \]
By hypothesis, $\Xi\in X$,
$\Pi\in[B\vec{Z}Y\vec{t'}]$
and $\Theta[\vec{t'}/\vec{x}]$ is $(X^\perp,[\overline{B}\vec{Z}^\perp X \t'])$-reducible.
Moreover,
$\nu(Id_{S\vec{x}},\Theta)$ is $(X^\perp,Y^\perp)$-reducible by definition
of $Y$,
and thus, by applying (1) on the operator
$\lambda \vec{p} \lambda q.~ B \vec{p} q \vec{t'}$,
which has the same size as $B$, we obtain that
$F_{B\vec{A}{\bullet}\vec{t'}}(\nu(Id_{S\vec{x}},\Theta))$ is
$([B\vec{Z} X^\perp \vec{t'}],
[\overline{B}\vec{Z}^\perp Y^\perp \vec{t'}])$-reducible\footnote{
This use of (1) involving $Y$ is the reason why our two lemmas
need to deal with arbitrary candidates
and not only interpretations of formulas.
}.
We can finally compose all that to conclude that our derivation normalizes.
\end{enumerate}
\vspace{-0.5cm}\end{proof}
\subsection{Normalization}
\begin{lemma} \label{lem:mainbis}
Any proof of $\;\vdash P_1,\ldots,P_n$ is $([P_1],\ldots,[P_n])$-reducible.
\end{lemma}
\begin{proof}
By induction on the height of the derivation $\Pi$,
with a case analysis on the first rule.
We are establishing that for any $\theta$ and compatible
$(\gamma_i\in [P_i]^\perp)_{i=1\ldots n}$,
$cut(\Pi\theta;\vec{\gamma})$ normalizes.
If $\Pi\theta$ is an axiom on $P \equiv P_1\theta \equiv P_2^\perp\theta$,
the cut against a proof of $[P]$ and a proof of $[P]^\perp$
reduces into a cut between those two proofs, which normalizes.
If $\Pi\theta = cut(\Pi'\theta;\Pi''\theta)$ is a cut on the formula $P$,
$cut(\Pi\theta;\vec{\gamma})$ reduces to
$cut(cut(\Pi'\theta;\vec{\gamma}');cut(\Pi''\theta;\vec{\gamma}''))$
and the two subderivations belong to dual
candidates $[P]$ and $[P]^\perp$ by induction hypothesis
and Proposition~\ref{prop:red-interp}.
Otherwise, $\Pi$ starts with a rule from the logical group,
the end sequent is of the form $\vdash\Gamma,P$ where $P$ is the principal
formula, and we shall prove that $cut(\Pi\theta;\vec{\gamma}) \in [P]$
when $\vec{\gamma}$ is taken
in the duals of the interpretations of $\Gamma\theta$,
which allows us to conclude, again using Proposition~\ref{prop:red-interp}.
\begin{longitem}
\item
The rules $\mathbf{1}$, $\mathrel{\otimes}$, $\mathrel{\oplus}$, $\exists$, $=$
and $\mu$ are treated similarly,
the result coming directly from the definition of the interpretation.
Let us consider, for example, the fixed point case: $\Pi = \mu \Pi'$.
By induction hypothesis, $cut(\Pi'\theta;\vec{\gamma})\in [B(\mu B)\t]$.
By definition,
$[\mu B\t] = \mathrm{lfp}(\phi) = \phi(\mathrm{lfp}(\phi)) = X^{\perp\perp}$
where $X:=\set{\mu \Xi}{\Xi\in[B{\bullet}\t][\mu B]}$.
Since $[B(\mu B)\t] = [B{\bullet}\t][\mu B]$,
we obtain that $\mu (cut(\Pi'\theta;\vec{\gamma})) \in X$ and thus also
in $X^{\perp\perp}$.
Hence
$cut(\Pi\theta;\vec{\gamma})$, which reduces to the former, is
also in $[\mu B\t]$.
\item
The rules $\perp$, $\parr$, $\top$, $\with$, $\forall$, ${\neq}$,
and $\nu$ are treated similarly:
we establish that $cut(\Pi\theta;\vec{\gamma})\Perp X$ for some $X$
such that $[P] = X^\perp$.
First, we have to show that our derivation normalizes, which comes
by permuting up the cuts and concluding by induction hypothesis;
this requires that after the permutation the derivations $\vec{\gamma}$
are still in the right candidates, which relies on closure under
substitution, and hence under signature extension in the cases of
disequality and $\forall$.
Then we have to show that for any $\sigma$ and $\sigma'$, and any
compatible $\Xi \in X$, the derivation
$cut(cut(\Pi\theta;\vec{\gamma})\sigma;\Xi\sigma')$ normalizes too.
We detail this last step for two key cases.
In the $\forall$ case we have
$[\forall x.~ P x] = \set{\exists\Xi'}{\Xi'\in [P t^\perp]}^\perp$,
so we consider
$cut(cut((\forall\Pi')\theta;\vec{\gamma})\sigma;(\exists\Xi')\sigma')$,
which reduces to $cut(\Pi'\theta[t/x];\vec{\gamma}\sigma,\Xi'\sigma')$.
This normalizes by induction hypothesis on
$\Pi'[t/x]$, which remains smaller than $\Pi$.
The case of $\nu$ is the most complex, but is similar to the
argument developed for Lemma~\ref{lem:nu}.
If $\Pi$ is of the form $\nu(\Pi',\Theta)$ and $P \equiv \nu B \t$ then
$cut(\Pi;\vec{\gamma})\theta$ has type $\nu B \vec{u}$ for $\vec{u} := \t\theta$.
Since $[\nu B\vec{u}] =
\set{\mu \Xi}{\Xi\in[\overline{B}{\bullet}\vec{u}][\mu \overline{B}]}^\perp$,
we show that for any $\sigma$, $\sigma'$ and compatible
$\Xi\in[\overline{B}(\mu\overline{B})\vec{u}]$,
the derivation
$cut(cut(\nu(\Pi',\Theta)\theta;\vec{\gamma})\sigma;(\mu \Xi)\sigma')$
normalizes.
Let $\vec{v}$ be $\vec{u}\sigma$,
the derivation reduces to:
\[ cut(cut(\Pi'\theta\sigma;\vec{\gamma}\sigma);
cut(\Theta[\vec{v}/\vec{x}];
cut(F_{\overline{B}{\bullet}\vec{v}}(\nu(Id,\Theta));\Xi\sigma'))) \]
By induction hypothesis,
$cut(\Pi'\theta\sigma;\vec{\gamma}\sigma)\in [S\vec{v}]$,
and $\Theta$ is $([S\vec{x}]^\perp,[BS\vec{x}])$-reducible.
By Lemmas~\ref{lem:fun} and \ref{lem:nu} we obtain
that $F_{\overline{B}{\bullet}\vec{v}}(\nu(Id,\Theta))$ is
$([\overline{B}{}S^\perp\vec{v}],[B(\nu B)\vec{v}])$-reducible.
Finally, $\Xi\in[\overline{B}(\mu\overline{B})\vec{v}]$.
We conclude by composing all these reducibilities
using Proposition~\ref{prop:red-interp}.
\end{longitem}
\vspace{-0.5cm}\end{proof}
\begin{theorem}[Cut elimination]
Any derivation can be reduced into a cut-free derivation.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:mainbis}, any derivation is reducible, and hence normalizes.
\end{proof}
The usual immediate corollary of the cut elimination result is that
$\mmumall$\ is consistent, since there is obviously no cut-free derivation
of the empty sequent.
However, note that unlike in simpler logics, cut-free derivations
do not enjoy the subformula property,
because of the $\mu$ and $\nu$ rules.
While it is easy to characterize the new formulas that can arise from $\mu$,
nothing really useful can be said for $\nu$,
for which no non-trivial restriction is known.
Hence, $\mmumall$\ only enjoys restricted forms of the subformula property,
applying only to (parts of) derivations that do not involve coinductions.
\section{Introduction}\label{intro}
All topological spaces mentioned here are Hausdorff. As is traditional
in general Banach space theory, the Banach spaces we mention
are considered as real Banach spaces, even though nothing essential changes in the
context of complex spaces. Throughout, let $K$ stand for an infinite compact topological space.
Let $X$ be a Banach space and $C$ a closed
convex subset of $X$.
A point $x_0\in C$ is a {\em point of support for} $C$ if there
is a functional $\varphi\in X^\ast$ such that $\varphi(x_0)\le\varphi(x)$
for all $x\in C$, and $\varphi(x_0)<\varphi(x')$ for some $x'\in C$.
Rolewicz \cite{Ro} proved in 1978 that every separable closed convex
subset $Y$ of a Banach space contains a point
which is not a point of support for $Y$, and asked if every
non-separable Banach space must contain a closed convex set
containing only points of support. In fact, this topic was already considered
by Klee \cite{Klee} in 1955 and the above theorem follows from 2.6
in that paper, by the same proof and taking the $x_i$'s to form
a dense set in $C$\footnote{We thank Libor Vesely for pointing
out this not widely recognised connection.}. However, it was Rolewicz's paper
which started a whole series of articles on this topic, and his question
has not yet been settled completely. It is known that the answer to
Rolewicz's question is independent of ZFC, and it is still not known if
the negative answer follows from $CH$. In \S\ref{semi} we
construct a $CH$ example of a nonseparable Banach space of the form $C(K)$ which violates
a strengthening of the requirements in Rolewicz's question.
The proof in \S\ref{semi} uses certain systems of pairs of points
of $K$, whose structure seems to us to be of independent interest.
They appear implicitly in many proofs about biorthogonal systems
in spaces of the form $C(K)$, see \cite{Hajekbook}, but their
existence is in fact entirely a property of the compact space $K$.
We call such systems {\em bidiscrete systems}. They are studied in
\S \ref{bidiscr}. Specifically, we prove in Theorem \ref{lemma2}
that if $K$ is an infinite compact Hausdorff space then $K$ has a
bidiscrete system of size $d(K)$, the density of $K$. This theorem
has not been stated in this form before, but we note that an
argument by Todor\v cevi\'c in \cite{StevoMM} can be easily
extended to give this result.
We now give some historical background. Mathematical background will be presented in
Section \ref{background}.
Borwein and Vanderwerff \cite{Bv} proved in 1996 that, in a Banach
space $X$, the existence of a closed convex set all of whose
points are support points is equivalent to the existence of an
uncountable semi-biorthogonal sequence for $X$, where
semi-biorthogonal sequences are defined as follows:
\begin{Definition}\label{biorthognal} Let $X$ be a Banach space. A sequence
$\langle (f_\alpha, \varphi_\alpha):\,\alpha<\alpha^\ast\rangle$ in $X\times X^\ast$ is said
to be
a {\em semi-biorthogonal sequence} if for all $\alpha,\beta<\alpha^\ast$ we have:
\begin{itemize}
\item $\varphi_\alpha(f_\alpha)=1$,
\item $\varphi_\alpha (f_\beta)=0$ if $\beta<\alpha$,
\item $\varphi_\alpha (f_\beta)\ge 0$ if $\beta>\alpha$.
\end{itemize}
\end{Definition}
We remind the reader of the better known notion of {\em a biorthogonal system}
$\{(f_\alpha, \varphi_\alpha):\,\alpha<\alpha^\ast\}$ in $X\times X^\ast$, which is defined to satisfy the first
item of Definition \ref{biorthognal}, with the last two items strengthened to
\begin{itemize}
\item $\varphi_\alpha (f_\beta)=0$ if $\beta\neq\alpha$.
\end{itemize}
Notice that the requirements of a semi-biorthogonal sequence make it clear that we really need a well ordering
of the sequence
in the definition, but that the definition of a biorthogonal system does not require an underlying well-ordering.
There is nothing special about the values 0 and 1 in the above definitions, of course, and we could
replace them by any pair $(a,b)$ of distinct values in $\mathbb R$ and even let $b=b_\alpha$ vary with
$\alpha$. Equally, we could require the range of all $f_\alpha$ to be in $[0,1]$ or some other fixed nonempty
closed interval.
Obviously, any well-ordering of a biorthogonal system gives a
semi-biorthogonal sequence. On the other hand, there is an example
by Kunen under $CH$ of a nonmetrizable compact scattered space $K$
for which $X=C(K)$ does not have an uncountable biorthogonal
system, as proved by Borwein and Vanderwerff in \cite{Bv}. Since
$K$ is scattered, it is known that $X$ must have an uncountable
semi-biorthogonal sequence (see \cite{Hajekbook} for a
presentation of a similar example under $\clubsuit$ and a further
discussion). Let us say that a Banach space is a {\em Rolewicz
space} if it is nonseparable but does not have an uncountable
semi-biorthogonal sequence.
In his 2006 paper \cite{StevoMM}, Todor\v cevi\'c proved that
under Martin's Maximum (MM) every non-separable Banach space has
an uncountable biorthogonal system, so certainly it has an
uncountable semi-biorthogonal sequence. Hence, under MM there are
no Rolewicz spaces. On the other hand, Todor\v cevi\'c informed us
that he realized in 2004 that a forcing construction in \cite{bgt}
does give a consistent example of a Rolewicz space. Independently,
also in 2004 (published in 2009), Koszmider gave a similar forcing construction in
\cite{Kosz}. It is still not known if there has to be a Rolewicz
space under CH.
Our motivation was to construct a Rolewicz space of the form
$X=C(K)$ under CH. Unfortunately, we are not able to do so, but we
obtain in Theorem \ref{CH} a space for which we can at least show
that it satisfies most of the known necessary conditions for a
Rolewicz space and that it has no uncountable semi-bidiscrete
sequences of the kind that are present in the known failed
candidates for such a space, for example in $C(S)$ where $S$ is
the split interval.
Specifically, it is known that if $K$ has a non-separable Radon
measure or if it is scattered then $C(K)$ cannot be Rolewicz
(\cite{GJM}, \cite{Hajekbook}) and our space does not have either
of these properties. Further, it is known that a compact space $K$
for which $C(K)$ is a Rolewicz space must be both hereditarily separable (HS) and hereditarily Lindel\"of (HL)
(\cite{Lazar}, \cite{Bv}) while not being metrizable, and our
space has these properties, as well. It follows from the
celebrated structural results on Rosenthal compacta by Todor{\v
c}evi\'c in \cite{StevoRos} that a Rosenthal compactum cannot be a
Rolewicz space, and our space is not Rosenthal compact. Finally,
our space is not metric but it is a 2-to-1 continuous preimage of
a metric space. This is a property possessed by the forcing
example in \cite{Kosz} and it is interesting because of a theorem
from \cite{StevoRos} which states that every non-metric Rosenthal
compact space which does not contain an uncountable discrete
subspace is a 2-to-1 continuous preimage of a metric space. Hence
the example in \cite{Kosz} is a space which is not Rosenthal
compact and yet it satisfies these properties, and so is our
space.
\section{Background}\label{background}
\begin{Definition}\label{nice} Let $X=C(K)$ be the Banach space of continuous functions
on a compact space $K$. We say that a sequence $\langle
(f_\alpha,\phi_\alpha):\,\alpha<\alpha^\ast\rangle$ in $X\times
X^\ast$ is a {\em nice} semi-biorthogonal sequence if it is a
semi-biorthogonal sequence and there are points $\langle
x^l_\alpha:\,l=0,1,\alpha<\alpha^\ast\rangle$ in $K$ such that
$\phi_\alpha=\delta_{x^1_\alpha}-\delta_{x^0_\alpha}$, where
$\delta$ denotes the Dirac measure. We similarly define nice
biorthogonal systems.
\end{Definition}
As Definition \ref{nice} mentions points of $K$ and $C(K)$ does
not uniquely determine $K$~\footnote{see e.g. Miljutin's theorem
\cite{Mil1}, \cite{Mil2} which states that for $K$, $L$ both
uncountable compact metrizable, the spaces $C(K)$ and $C(L)$ are
isomorphic.}, the definition is actually topological rather than
analytic. We shall observe below that the existence of a nice
semi-biorthogonal sequence of a given length or of a nice
biorthogonal system of a given size in $C(K)$ is equivalent to the
existence of objects which can be defined in terms which do not
involve the dual $C(K)^\ast$.
\begin{Definition}\label{nicetop} (1) A system
$\{(x_\alpha^0, x_\alpha^1):\,\alpha<\kappa\}$ of pairs of points
in $K$ (i.e. a subfamily of $K^2$) is called {\em a bidiscrete
system in} $K$ if there exist functions $\{
f_\alpha:\,\alpha<\kappa\} \subseteq C(K)$ satisfying that for every
$\alpha,\beta<\kappa$:
\begin{itemize}
\item $f_\alpha(x_\alpha^l)=l$ for $l\in \{0,1\}$,
\item if $\alpha\neq\beta$ then $f_\alpha(x_\beta^0)=f_\alpha(x_\beta^1)$.
\end{itemize}
(2) We similarly define semi-bidiscrete sequences in $K$ as
sequences $\langle (x_\alpha^0,
x_\alpha^1):\,\alpha<\alpha^\ast\rangle$ of points in $K^2$ that
satisfy the first requirement of (1) but, instead of the second, the
following two requirements:
\begin{itemize}
\item if $\alpha>\beta$ then $f_\alpha(x_\beta^0)=f_\alpha(x_\beta^1)$,
\item if $\alpha<\beta$ then $f_\alpha(x_\beta^0)=1\implies f_\alpha(x_\beta^1)=1$.
\end{itemize}
\end{Definition}
\begin{Observation}\label{nice=bidiscrete} For a compact space $K$,
$\{(x_\alpha^0, x_\alpha^1):\,\alpha<\alpha^*\} \subseteq K^2$ is a
bidiscrete system iff there are $\{ f_\alpha:\,\alpha<\alpha^*\}
\subseteq C(K)$ such that $\{(f_\alpha,
\delta_{x^\alpha_1}-\delta_{x^\alpha_0}):\,\alpha<\alpha^*\}$ is a
nice biorthogonal system for the Banach space $X=C(K)$. The
analogous statement holds for nice semi-bidiscrete sequences.
\end{Observation}
\begin{Proof} We only prove the statement for nice biorthogonal systems; the proof for nice
semi-biorthogonal sequences is the same. If we are given a system
exemplifying (1), then
$\delta_{x^1_\alpha}(f_\beta)-\delta_{x^0_\alpha}(f_\beta)=
f_\beta(x^1_\alpha)-f_\beta(x^0_\alpha)$ has the values as
required. On the other hand, if we are given a nice biorthogonal
system of pairs $\{(f_\alpha,
\delta_{x^1_\alpha}-\delta_{x^0_\alpha}) : \alpha < \alpha^*\}$
for $X$, define for $\alpha<\alpha^*$ the function $g_\alpha \in
C(K)$ by $g_\alpha(x)=f_\alpha(x)-f_\alpha(x^0_\alpha)$. Then $\{
(x_\alpha^0, x_\alpha^1):\,\alpha<\alpha^\ast\}$ satisfies (1), as
witnessed by $\{ g_\alpha:\,\alpha<\alpha^\ast\}$.
$\eop_{\ref{nice=bidiscrete}}$
\end{Proof}
In the case of a 0-dimensional space $K$ we are often able to make
a further simplification by requiring that the functions
$f_\alpha$ exemplifying the bidiscreteness of $(x_\alpha^0,
x_\alpha^1)$ take only the values 0 and 1. This is clearly
equivalent to asking for the existence of a family $\{
H_\alpha:\,\alpha<\alpha^\ast\}$ of clopen sets in $K$ such that
each $H_\alpha$ separates $x^0_\alpha$ and $x^1_\alpha$ but not
$x^0_\beta$ and $x^1_\beta$ for $\beta\neq\alpha$. We call such
bidiscrete systems {\em very nice}. We can analogously define a
{\em very nice} semi-bidiscrete sequence, where the requirements
on the clopen sets become $x^l_\alpha\in H_\alpha\iff l=1$,
$\beta<\alpha\implies [x^0_\beta\in H_\alpha\iff x^{1}_\beta\in
H_\alpha]$ and $[\beta>\alpha\, \wedge \, x^0_\beta\in H_\alpha]
\implies x^1_\beta\in H_\alpha$.
We shall use the expression {\em very nice (semi-)biorthogonal
system (sequence)} in $C(K)$ to refer to a nice
(semi-)biorthogonal system (sequence) obtained as in the proof of
Claim \ref{nice=bidiscrete} from a very nice (semi-)bidiscrete
system (sequence) in $K$.
\begin{Example}
\label{splitinterval} (1) Let $K$ be the split interval (or double
arrow) space, namely the ordered space $K=[0,1]\times\{0,1\}$,
ordered lexicographically. Then $$\{\big((x,0), (x,1)\big) : x
\in [0,1]\}$$ forms a very nice bidiscrete system in $K$. This is
exemplified by the two-valued continuous functions $\{f_x : x \in
[0,1]\}$ defined by $f_x(r)=0$ if $r\le (x, 0)$ and $f_x(r)=1$
otherwise.
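Explicitly, the clopen sets witnessing very niceness here are
$H_x=f_x^{-1}(1)=\{r\in K:\,r\ge (x,1)\}$.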
(2) Suppose that $\kappa$ is an infinite cardinal and
$K=2^\kappa$. For $l\in \{0,1\}$ and $\alpha<\kappa$ we define
$x^l_\alpha\in K$ by letting $x^l_\alpha(\beta)=1$ if
$\beta<\alpha$, $\,x^l_\alpha(\beta)=0$ if $\beta> \alpha$, and
$x^l_\alpha(\alpha)=l$. The clopen sets $H_\alpha=\{f\in
K:\,f(\alpha)=1\}$ show that the pairs $\{(x^0_\alpha,x^1_\alpha)
: {\alpha<\kappa}\}$ form a very nice bidiscrete system in the
Cantor cube $K = 2^\kappa$.
\end{Example}
In \cite{StevoMM}, Theorem 10, it is proved under
$MA_{\omega_1}\,$ that every Banach space of the kind $X=C(K)$ for
a nonmetrizable compact $K$ admits an uncountable nice
biorthogonal system. Moreover, at the end of the proof it is
stated that for a 0-dimensional $K$ this biorthogonal system can
even be assumed to be very nice (in our terminology).
As nice semi-biorthogonal sequences may be defined using only $K$
and $X=C(K)$ and do not involve the dual $X^\ast$, in
constructions where an enumerative tool such as $CH$ is used it is
easier to control nice systems than the general ones. In our CH
construction below of a closed subspace $K$ of $2^{\omega_1}$ we
would at least like to destroy all uncountable nice
semi-biorthogonal sequences by controlling semi-bidiscrete
sequences in $K$. We are only able to do this for semi-bidiscrete
sequences which are not already determined by the first
$\omega$-coordinates, in the sense of the following Definition
\ref{supernice}: in our space $K$ any uncountable nice
semi-biorthogonal sequence must be $\omega$-determined.
\begin{Definition}\label{supernice}
A family $\{(x^0_\alpha, x^1_\alpha):\,\alpha<\alpha^*\} \subseteq
2^{\omega_1} \times 2^{\omega_1}$ is said to be {\em
$\omega$-determined} if
\[
(\forall s\in
2^\omega)\,\{\alpha:\,x^0_\alpha\rest\omega=x^1_\alpha\rest\omega=s\}\mbox{
is countable}.
\]
For $K \subseteq 2^{\omega_1}$ we define an {\em $\omega$-determined
semi-biorthogonal sequence in $C(K)$} to be any nice
semi-biorthogonal sequence $\langle
(f_\alpha,\delta_{x^1_\alpha}-\delta_{x^0_\alpha}):\,\alpha<\alpha^\ast\rangle$
for which the associated semi-bidiscrete sequence $\langle
(x^0_\alpha, x^1_\alpha):\alpha<\alpha^\ast \rangle$ forms an
$\omega$-determined family.
\end{Definition}
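Note that, for instance, the very nice bidiscrete system of
Example~\ref{splitinterval}(2) with $\kappa=\omega_1$ is not
$\omega$-determined: for every $\alpha\ge\omega$ both
$x^0_\alpha\rest\omega$ and $x^1_\alpha\rest\omega$ are the constantly
$1$ function, so uncountably many pairs share the same restriction to
$\omega$.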
\section{The $CH$ construction}\label{semi}
\begin{Theorem}\label{CH} Under $CH$, there is a compact space $K \subseteq 2^{\omega_1}$ with the following properties:
\begin{itemize}
\item $K$ is not metrizable, but is a 2-to-1 continuous preimage of a metric space,
\item $K$ is HS and HL,
\item\label{treci} every Radon measure on $K$ is separable,
\item $K$ has no isolated points,
\item $K$ is not Rosenthal compact,
\item any uncountable nice semi-biorthogonal sequence in $C(K)$ is $\omega$-determined.
\end{itemize}
\end{Theorem}
\begin{proof} We divide the proof into two parts. In the first we give various requirements on the construction,
and show that if these requirements are satisfied the space meeting the claim of the theorem can be constructed. In the second part we show that
these requirements can be met.
\subsubsection{The requirements}
Our space will be a closed subspace of $2^{\omega_1}$. Every such space can be viewed as the limit of an inverse system of spaces, as we now explain.
\begin{Definition}
For $\alpha\leq\beta\le\omega_1$, define $\piba : 2^{\beta}\rightarrow
2^{\alpha}$ by $\piba (f) = f\rest\alpha $.
\end{Definition}
Suppose that $K$ is a closed subspace of $2^{\omega_1}$; then
for $\alpha \le \omega_1$ we let $K_\alpha=\pi^{\omega_1}_\alpha(K)$. So, if $\alpha \le \beta$ then
$K_\alpha$ is the $\pi^\beta_\alpha$-projection of $K_\beta$. For $\alpha<\omega_1$ let
\[
A_\alpha=\pi^{\alpha+1}_\alpha(\{x\in K_{\alpha+1}:x(\alpha)=0\}), B_\alpha=\pi^{\alpha+1}_\alpha(\{x\in K_{\alpha+1}:x(\alpha)=1\}).
\]
The following statements are then true:
\begin{description}
\item{\bf R1}.\label{1.1} $K_\alpha$ is a closed subset of $2^\alpha$, and
$\pi_\alpha^\beta(K_\beta) = K_\alpha$ whenever
$\alpha \le \beta \le \omega_1$.
\item{{\bf R2}.}\label{1.2} For $\alpha < \omega_1$,
$A_\alpha$ and $B_\alpha$ are closed in $K_\alpha$,
$A_\alpha \cup B_\alpha = K_\alpha$, and
$K_{\alpha + 1} = A_\alpha \times \{0\} \cup B_\alpha \times \{1\}$.
\end{description}
Now $K$ can be viewed as the limit of the inverse system $\mathcal
K=\{K_\alpha:\,\alpha<\omega_1, \piba\rest K_\beta:\,\alpha\le
\beta<\omega_1\}$. Therefore to construct the space $K$ it is
sufficient to specify the system ${\cal K}$, and as long as the
requirements {\bf R1} and {\bf R2} are satisfied, the resulting
space $K$ will be a compact subspace of $2^{\omega_1}$. This will
be our approach to constructing $K$, that is we define $K_\alpha$
by induction on $\alpha$ to satisfy various requirements that we
list as {\bf Rx}.
The property HS+HL will be guaranteed by a use of irreducible
maps, as in \cite{DzK}. Recall that for spaces $X,Y$, a map
$f:\,X\rightarrow Y$ is called {\em irreducible} on $A\subseteq X$ iff
for any proper closed subspace $F$ of $A$ we have that $f(F)$ is a
proper subset of $f(A)$. We shall have a special requirement to
let us deal with HS+HL, but we can already quote Lemma 4.2 from
\cite{DzK}, which will be used in the proof. It applies to any
space $K$ of the above form.
\begin{Lemma}\label{Lemma4.2} Assume that $K$ and $K_\alpha$ satisfy {\bf R1} and {\bf R2} above.
Then $K$ is HL+HS iff for all closed $H \subseteq K$,
there is an $\alpha <\omega_1$ for which
$\pi^{\omega_1}_\alpha$ is irreducible on $(\pi^{\omega_1}_\alpha )^{-1} (\pi^{\omega_1}_\alpha (H))$.
\end{Lemma}
In addition to the requirements given above
we add the following basic requirement {\bf R3} which assures that $K$ has no isolated points.
\begin{description}
\item{{\bf R3}.}\label{1.3} For $n < \omega$,
$K_n = A_n = B_n = 2^n$. For $\alpha \ge \omega$, $A_\alpha$ and
$B_\alpha$ have no isolated points.
\end{description}
Note that the requirement {\bf R3} implies that for each $\alpha
\ge \omega$, $K_\alpha$ has no isolated points; so it is easy to
see that the requirements guarantee that $K$ is a compact subspace
of $2^{\omega_1}$ and that it has no isolated points. Further,
$K_\omega = 2^\omega$ by {\bf R1} and {\bf R3}. The space $K$ is
called {\em simplistic} if for all $\alpha$ large enough
$A_\alpha\cap B_\alpha$ is a singleton. For us `large enough' will
mean `infinite', i.e. during the construction we shall obey the
following:
\begin{description}
\item{{\bf R4}.}\label{1.4} For all $\alpha\in [\omega,\omega_1)$ we
have $A_\alpha\cap B_\alpha=\{s_\alpha\}$ for some $s_\alpha\in K_\alpha$.
\end{description}
By {\bf R4} we can make the following observation which will be useful later:
\begin{Observation}\label{delta} Suppose that $x\in K_\alpha, y\in K_\beta$ for some $\omega\le\alpha\le\beta$ and $x\nsubseteq y$, $y\nsubseteq x$ with $\Delta(x,y)\ge\omega$. Then $x\rest\Delta(x,y)=y\rest\Delta(x,y)=s_{\Delta(x,y)}$.
\end{Observation}
As usual, we used here the notation $\Delta(x,y) = \min \{\alpha :
x(\alpha) \ne y(\alpha)\}$.
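Indeed, write $\delta=\Delta(x,y)$; since $x\nsubseteq y$ and $y\nsubseteq x$
we have $\omega\le\delta<\alpha$, and by {\bf R1} both $x\rest(\delta+1)$ and
$y\rest(\delta+1)$ belong to $K_{\delta+1}$. They agree below $\delta$ and
differ at $\delta$, so by {\bf R2} the common restriction
$x\rest\delta=y\rest\delta$ lies in $A_\delta\cap B_\delta$, which equals
$\{s_\delta\}$ by {\bf R4}.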
Requirement {\bf R4} implies that $K$ is not 2nd countable, hence not
metrizable.
The following is folklore in the subject, but one can also see \cite{Piotr} for a detailed explanation
and stronger theorems:
\begin{Fact} Every Radon measure on a simplistic space is separable.
\end{Fact}
Now we come back to the property HS+HL. To assure this we shall
construct an auxiliary Radon measure $\mu$ on $K$. This measure
will be used, similarly as in the proof in \S 4 of
\cite{DzK}, to assure that for every closed subset $H$ of $K$ we
have $H = (\pi^{\omega_1}_\alpha )^{-1} (\pi^{\omega_1}_\alpha (H))$ for some countable coordinate
$\alpha$. In fact, what we need for our construction is not the
measure $\mu$ itself but a sequence $\langle
\mu_\alpha:\,\alpha<\omega_1 \rangle$ where each $\mu_\alpha$ is a
Borel measure on $K_\alpha$ and these measures satisfy that for
each $\alpha\le\beta<\omega_1$ and Borel set $B\subseteq K_\alpha$,
we have $\mu_\alpha(B)=\mu_\beta\big((\piba)^{-1}(B)\big)$. As a side remark,
the sequence $\langle \mu_\alpha:\,\alpha<\omega_1 \rangle$ will
uniquely determine a Radon measure $\mu = \mu_{\omega_1}$ on $K$.
To uniquely determine each Borel (=Baire)
measure $\mu_\alpha$ it is sufficient to decide its values on the
clopen subsets of $K_\alpha$. We formulate a requirement to
encapsulate this discussion:
\begin{description}
\item{{\bf R5}.} For
$\alpha\le\omega_1$, $\mu_\alpha$ is a finitely additive probability
measure on the clopen subsets of $K_\alpha$, and
$\mu_\alpha = \mu_\beta (\piba )^{-1}$ whenever
$\omega\le\alpha \le \beta \le \omega_1$. For $\alpha \le \omega$,
$\mu_\alpha$ is the usual product measure on the clopen subsets of $K_\alpha=2^\alpha$.
\end{description}
Let $\widehat{\mu}_\alpha$ be the Borel measure on $K_\alpha$
generated by $\mu_\alpha$. It is easy to verify that {\bf R1}-{\bf
R5} imply that for $\alpha \le \omega$, $\widehat {\mu}_\alpha$ is
the usual product measure on $K_\alpha=2^\alpha$, and that for any
$\alpha$, $\widehat {\mu}_\alpha$ gives each non-empty clopen set
positive measure and measure 0 to each point in $K_\alpha$. We
shall abuse notation and use $\mu_\alpha$ for both
$\widehat{\mu}_\alpha$ and its restriction to the clopen sets.
Note that by the usual Cantor tree argument these properties
assure that in every set of positive measure there is an
uncountable set of measure 0; this observation will be useful
later on.
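For instance, one may argue as follows: given a Borel set of positive
measure, pass first to a closed subset $F$ of positive measure; since
$\widehat{\mu}_\alpha$ is nonatomic, $F$ contains two disjoint closed subsets
of positive measure, each of measure at most $\frac13\mu_\alpha(F)$, and
iterating this splitting yields a binary tree of closed sets whose level-$n$
union has measure at most $(2/3)^n\mu_\alpha(F)$. By compactness every branch
has non-empty intersection, and choosing a point on each of the
$2^{\aleph_0}$ branches produces an uncountable set contained in every level,
hence of measure $0$.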
The following requirements will help us both to obtain HS+HL and
to assure that $K$ is not Rosenthal compact. To formulate these
requirements we use $CH$ to enumerate the set of pairs $\{(\gamma,
J):\,\gamma<\omega_1\,,\, J \subseteq 2^\gamma \mbox{ is Borel}
\}$ as $\{(\delta_\alpha, J_\alpha):\,\omega\le \alpha
<\omega_1\}$ so that $\delta_\alpha \le\alpha$ for all $\alpha$
and each pair appears unboundedly often.
Suppose that $\omega \le \alpha < \omega_1$ and $K_\alpha$ and $\mu_\alpha$ are defined.
We define the following subsets of $K_\alpha$:
\begin{itemize}
\item $C_\alpha = (\pi^\alpha_{\delta_\alpha} )^{-1} (J_\alpha )$ if $J_\alpha\subseteq
K_{\delta_\alpha}$; $C_\alpha = \emptyset$ otherwise.
\item $L_\alpha = C_\alpha$ if $C_\alpha$ is closed;
$L_\alpha = K_\alpha$ otherwise.
\item $Q_\alpha = L_\alpha \setminus\bigcup\{ O:
O \hbox{\ is open and \ } \mu_\alpha (L_\alpha \cap O) = 0\}$.
\item $N_\alpha =(L_\alpha\setminus Q_\alpha)\cup C_\alpha$ if
$\mu_\alpha(C_\alpha ) =0$;
$N_\alpha =L_\alpha\setminus Q_\alpha$ otherwise.
\end{itemize}
Let us note that $L_\alpha$ is a closed subset of $K_\alpha$ and
that $Q_\alpha\subseteq L_\alpha$ is also closed and satisfies
$\mu_\alpha(Q_\alpha)=\mu_\alpha(L_\alpha)$, and hence
$\mu_\alpha(N_\alpha)=0$. Also observe that $Q_\alpha$ has no
isolated points, as points have $\mu_\alpha$ measure 0.
We now recall from \cite{DzK} what is meant by $A$ and $B$ being
{\em complementary regular closed subsets} of a space $X$: this
means that $A$ and $B$ are both regular closed with $A\cup B=X$,
while $A\cap B$ is nowhere dense in $X$. Finally, we state the
following requirements:
\begin{description}
\item{\bf R6}.\label{4.1}
For any $\beta\geq\alpha\geq\omega$, $s_\beta\notin
(\piba)^{-1}( N_\alpha)$;
\item{\bf R7}.\label{2.3} For any $\beta\geq\alpha\geq\omega$, $A_\beta\cap (\piba)^{-1}(Q_\alpha)$ and
$B_\beta\cap (\piba)^{-1} (Q_\alpha)$ are complementary regular closed
subsets of $(\piba )^{-1} (Q_\alpha)$.
\end{description}
The following claim and lemma explain our use of irreducible maps,
and the use of measure as a tool to achieve the HS+HL properties
of the space. The proof is basically the same as in \cite{DzK} but
we give it here since it explains the main point and also to show
how our situation actually simplifies the proof from \cite{DzK}.
For any $\alpha$ and any finite partial
function $s$ from $\alpha$ to 2, we use the notation $[s]$ to denote the basic clopen set
$\{f\in 2^{\alpha}:\,s\subseteq f\}$, or its relativization to a
subspace of $2^\alpha$, as it is clear from the
context.~\footnote{The notation also does not specify $\alpha$ but
again following the tradition, we shall rely on $\alpha$ being
clear from the context.}
\begin{Claim}\label{induction} Assume the requirements {\bf R1}-{\bf R5} and {\bf R7}. Then
for each $\beta\in [\alpha,\omega_1]$ the projection $\piba$ is irreducible on
$(\piba )^{-1} (Q_\alpha )$.
\end{Claim}
\begin{Proof of the Claim}
We use induction on $\beta\ge\alpha$.
The step $\beta=\alpha$ is clear. Assume that we know that the projection $\piba$ is irreducible on
$(\piba )^{-1} (Q_\alpha )$ and let us prove that $\pi^{\beta+1}_\alpha$ is irreducible on
$(\pi^{\beta+1}_\alpha )^{-1} (Q_\alpha )$. Suppose that $F$ is a proper closed subset of $(\pi^{\beta+1}_\alpha )^{-1}
(Q_\alpha )$ satisfying $\pi^{\beta+1}_\alpha(F)=Q_\alpha$. Then by the inductive assumption
$\pi^{\beta+1}_\beta(F)=(\piba )^{-1} (Q_\alpha )$. Let $x\in (\pi^{\beta+1}_\alpha )^{-1}
(Q_\alpha )\setminus F$, so we must have that $x\rest\beta=s_\beta$. Assume $x(\beta)=0$, the case
$x(\beta)=1$ is symmetric. Because $F$ is closed, we can find a basic clopen set $[t]$ in $K_{\beta+1}$
containing $x$ such that $[t]\cap F=\emptyset$. Let $s=t\rest\beta$.
Therefore $s_\beta\in [s]$ holds in $K_\beta$, and by {\bf R7} we can find $y\in {\rm int}(A_\beta
\cap (\piba )^{-1} (Q_\alpha ))\cap [s]$. Using the inductive assumption we conclude
$y\in {\rm int}(A_\beta\cap (\pi^{\beta+1}_\beta ) (F ))\cap [s]$, so there is a basic clopen
set $[v]\subseteq [s]$ in $K_\beta$ such that $y\in [v]$ and $[v]\subseteq A_\beta\cap (\pi^{\beta+1}_\beta ) (F )$.
But then $[v]$ viewed as a clopen set in $K_{\beta+1}$ satisfies $[v]\subseteq [t]$ and yet
$[v]\cap F\neq \emptyset$.
The limit case of the induction is easy by the definition of inverse limits.
$\eop_{\ref{induction}}$
\end{Proof of the Claim}
\begin{Lemma}\label{repeat} Assume the requirements {\bf R1}-{\bf R7}
and let $H$ be a closed subset of $K$. Then
there is an $\alpha <\omega_1$ such that
$\pi^{\omega_1}_\alpha$ is irreducible on $(\pi^{\omega_1}_\alpha )^{-1} (\pi^{\omega_1}_\alpha (H))$.
\end{Lemma}
\begin{Proof of the Lemma}
For each $\gamma < \omega_1$, let $H_\gamma = \pi^{\omega_1}_\gamma (H)$.
Then the $\mu_\gamma (H_\gamma )$ form a non-increasing sequence
of real numbers, so we may fix a $\gamma <\omega_1$
such that for all $\alpha\geq\gamma$, $\mu_\alpha(H_\alpha)=
\mu_\gamma(H_\gamma)$.
Next fix an $\alpha \geq\gamma$ such that $\delta_\alpha = \gamma$ and
$J_\alpha = H_\gamma$.
Then $L_\alpha = C_\alpha = (\pi^\alpha_\gamma)^{-1} (H_\gamma)$.
Hence $H_\alpha$ is a closed
subset of $L_\alpha$ with the same measure as $L_\alpha$,
so $Q_\alpha \subseteq H_\alpha \subseteq L_\alpha$, by the definition of $Q_\alpha$. Recall that by Claim \ref{induction}
we have that $\pi^{\omega_1}_\alpha$ is irreducible on $(\pi^{\omega_1}_\alpha)^{-1}(Q_\alpha)$.
Now we claim that
$\pi^{\omega_1}_\alpha$ is 1-1 on $(\pi^{\omega_1}_\alpha )^{-1}(H_\alpha\setminus Q_\alpha)$. Otherwise, there would be $x\neq y\in
(\pi^{\omega_1}_\alpha )^{-1}(H_\alpha\setminus Q_\alpha)$ with $x\rest\alpha=y\rest\alpha$. Therefore for some
$\beta\ge\alpha$ we have $x\rest\beta=y\rest \beta=s_\beta$, as otherwise $\{x\}=(\pi^{\omega_1}_\alpha )^{-1}(\{x\rest\alpha\})$. In
particular $s_\beta\in (\piba)^{-1}(H_\alpha)\subseteq (\piba)^{-1}(L_\alpha)$. On the other hand, if
$s_\beta\in (\piba)^{-1}(Q_\alpha)$ then $\{x,y\}\subseteq (\pi^{\omega_1}_\alpha )^{-1}(Q_\alpha)$, a contradiction; so
$s_\beta\notin (\piba)^{-1}(Q_\alpha)$.
This means
$s_\beta\in (\piba)^{-1}(N_\alpha)$, in contradiction with {\bf R6}.
Thus, $\pi^{\omega_1}_\alpha$ must be irreducible on $(\pi^{\omega_1}_\alpha )^{-1} (H_\alpha )$ as well, and the Lemma is proved.
$\eop_{\ref{repeat}}$
\end{Proof of the Lemma}
Now we comment on how to assure that $K$ is not Rosenthal compact.
A remarkable theorem of Todor\v cevi\'c from \cite{StevoRos}
states that every non-metric Rosenthal compactum contains either
an uncountable discrete subspace or a homeomorphic copy of the
split interval. As our $K$, being HS+HL, cannot have an
uncountable discrete subspace, it will suffice to show that it
does not contain a homeomorphic copy of the split interval.
\begin{Claim}\label{Rosenthal} Suppose that the requirements {\bf R1}-{\bf R7} are met. Then
\begin{description}
\item{(1)} all $\mu$-measure 0 sets in $K$ are second countable and
\item{(2)} $K$ does not contain a homeomorphic copy of the split interval.
\end{description}
\end{Claim}
\begin{Proof of the Claim}
(1) Suppose that $M$ is a $\mu$-measure 0 Borel set in $K$ and let
$N=\pi_\omega^{\omega_1}(M)$, hence $N$ is of measure 0 in
$2^\omega$. Let $\alpha\in[\omega,\omega_1)$ be such that
$\delta_\alpha= \omega$ and $J_\alpha=N$. Then
$C_\alpha=(\pi_\omega^\alpha)^{-1}(N)$ and hence
$\mu_\alpha(C_\alpha)=0$ and so $C_\alpha\subseteq N_\alpha$.
Requirement {\bf R6} implies that for $\beta\ge\alpha$,
$(\pi_\beta^{\omega_1})^{-1}(s_\beta)\cap M= \emptyset$, so the
topology on $M$ is generated by the basic clopen sets of the form
$[s]$ for ${\rm dom}(s)\subseteq\alpha$. So $M$ is 2nd countable.
(2) Suppose that $H\subseteq K$ is homeomorphic to the split interval. Therefore $H$ is compact
and therefore closed in $K$. In particular $\mu(H)$ is defined.
If $\mu(H)=0$ then by (1), $H$ is 2nd countable, a contradiction.
If $\mu(H)>0$ then there is an uncountable set $N\subseteq H$ with
$\mu(N)=0$. Then $N$ is uncountable and 2nd countable,
contradicting the fact that all 2nd countable subspaces of the
split interval are countable. $\eop_{\ref{Rosenthal}}$
\end{Proof of the Claim}
Now we comment on how we assure that any uncountable nice semi-biorthogonal
system in $C(K)$ is $\omega$-determined, i.e. any uncountable semi-bidiscrete sequence in $K$
forms an $\omega$-determined family of pairs of points. For this we make one further
requirement:
\begin{description}
\item{\bf R8}. If $\alpha\,,\beta\in[\omega,\omega_1)$ with $\alpha < \beta$ then $s_\beta\rest\alpha\neq s_\alpha$.
\end{description}
\begin{claim}
\label{nosupernice} Requirements {\bf R1}-{\bf R8} guarantee that
any uncountable semi-bidiscrete sequence in $K$ is
$\omega$-determined.\end{claim}
\begin{Proof of the Claim}
Suppose that $\langle
(x^0_\alpha,x^1_\alpha):\,\alpha<\omega_1\rangle$ forms an
uncountable semi-bidiscrete sequence in $K$ that is not
$\omega$-determined. By the definition of a semi-bidiscrete
sequence, the $ (x^0_\alpha,x^1_\alpha)$'s are distinct pairs of
distinct points. Therefore there must be $s\in 2^\omega$ such that
$A=\{\alpha:\,x^0_\alpha\rest\omega=x^1_\alpha\rest\omega=s\}$ is
uncountable. We have at least one $l < 2$ such that
$\{x^l_\alpha:\,\alpha\in A\}$ is uncountable, so assume, without
loss of generality, that this is true for $l=0$.
Let $\alpha, \beta,\gamma$ be three distinct members of $A$. Then
by Observation \ref{delta} we have $$x^0_\alpha\rest
\Delta(x^0_\alpha, x^0_\beta)=x^0_\beta\rest \Delta(x^0_\alpha,
x^0_\beta) =s_{ \Delta(x^0_\alpha, x^0_\beta)}$$ and similarly
$$x^0_\alpha\rest \Delta(x^0_\alpha, x^0_\gamma)=x^0_\gamma\rest
\Delta(x^0_\alpha, x^0_\gamma)=s_{ \Delta(x^0_\alpha,
x^0_\gamma)}.$$ By {\bf R8} we conclude that $\Delta(x^0_\alpha,
x^0_\beta)$ is the same for all $\beta \in A\setminus\{\alpha\}$
and we denote this common value by $\Delta_\alpha$. Thus for
$\beta \in A\setminus\{\alpha\}$ we have $x^0_\beta\rest
\Delta_\alpha=s_{\Delta_\alpha}$, but applying the same reasoning
to $\beta$ we obtain $x^0_\alpha\rest
\Delta_\beta=s_{\Delta_\beta}$ and hence by {\bf R8} again we have
$\Delta_\alpha=\Delta_\beta$. Let $\delta^\ast$ denote the common
value of $\Delta_\alpha$ for $\alpha \in A$.
Again, taking distinct $\alpha,\beta,\gamma\in A$ we have
$x^0_\alpha\rest\delta^\ast=
x^0_\beta\rest\delta^\ast=x^0_\gamma\rest\delta^\ast$ and that
$x^0_\alpha(\delta^\ast), x^0_\beta(\delta^\ast)$ and
$x^0_\gamma(\delta^\ast)$ are pairwise distinct. This is, however,
impossible as the latter have values in $\{0,1\}$.
$\eop_{\ref{nosupernice}}$
\end{Proof of the Claim}
Finally we show that the space $K$ is a 2-to-1 continuous preimage
of a compact metric space. We simply define
$\varphi:\,K\into 2^\omega$ by $\varphi(x)=x\rest\omega$. This is
clearly continuous. To show that it is 2-to-1 we first prove the
following:
\begin{Claim}\label{ultra} In the space $K$ above, for any $\alpha\neq\beta$ we have
$s_\alpha\rest\omega\neq s_\beta\rest\omega$.
\end{Claim}
\begin{Proof of the Claim} Otherwise suppose that $\alpha<\beta$ and yet
$s_\alpha\rest\omega= s_\beta\rest\omega$. By {\bf R8} we have
$s_\alpha\nsubseteq s_\beta$, so
$\omega\le\delta=\Delta(s_\alpha,s_\beta)<\beta$. By Observation
\ref{delta} applied to any $x\supseteq s_\alpha$ and $y\supseteq
s_\beta$ from $K$, we have
$s_\alpha\rest\delta=x\rest\delta=y\rest\delta=s_\beta\rest\delta=s_\delta$.
But this would imply $s_\delta\subseteq s_\beta$, contradicting
{\bf R8}. $\eop_{\ref{ultra}}$
\end{Proof of the Claim}
Now suppose that $\varphi$ is not 2-to-1, that is, there are three
elements $x,y,z \in K$ such that $x\rest\omega=y\rest\omega=
z\rest\omega$. Let $\alpha=\Delta(x,y)$ and $\beta=\Delta(x,z)$,
so $\alpha, \beta \ge\omega$. By Observation \ref{delta} we have
$x\rest\alpha=y\rest\alpha=s_\alpha$,
$x\rest\beta=z\rest\beta=s_\beta$, so by requirement {\bf R8} we
conclude $\alpha=\beta$. Note that then $y(\alpha)=z(\alpha)$ and
so $\delta=\Delta(y,z)>\alpha$ and $y\rest\delta=
s_\delta\supseteq s_\beta$, in contradiction with {\bf R8}.
Therefore $\varphi$ is really 2-to-1.
\subsubsection{Meeting the requirements}
Now we show how to meet all these requirements. It suffices to
show what to do at any successor stage $\alpha+1$ for $\alpha\in
[\omega,\omega_1)$, assuming all the requirements have been met at
previous stages.
First we choose $s_\alpha$. By {\bf R5} for any $\gamma<\alpha$ we
have $\mu_\gamma(\{s_\gamma\})=0$ and
$\mu_\alpha((\pi_\gamma^\alpha)^{-1}(s_\gamma))=0$. Hence the set
of points $s\in K_\alpha$ for which $s\rest \gamma=s_\gamma$ for
some $\gamma<\alpha$ has measure 0, so we simply choose $s_\alpha$
outside of
$\bigcup_{\gamma<\alpha}(\pi^\alpha_\gamma)^{-1}(s_\gamma)$, as
well as outside of
$\bigcup_{\gamma<\alpha}(\pi^\alpha_\gamma)^{-1}(N_\gamma)$ (to
meet {\bf R6}), which is possible as the $\mu_\alpha$ measure of
the latter set is also 0.
Now we shall use an idea from \cite{DzK}. We fix a strictly
decreasing sequence $\langle V_n:\,n\in\omega\rangle$ of clopen
sets in $K_\alpha$ such that $V_0=K_\alpha$ and
$\bigcap_{n<\omega}V_n=\{s_\alpha\}$. We shall choose a function
$f:\,\omega\to\omega$ such that, letting
$$A_\alpha=\bigcup_{n<\omega} (V_{f(2n)}\setminus
V_{f(2n+1)})\cup\{s_\alpha\}$$ and $$B_\alpha=\bigcup_{n<\omega}
(V_{f(2n+1)}\setminus V_{f(2n+2)})\cup\{s_\alpha\},$$ all
the requirements are met. Once we have chosen $A_\alpha$ and $B_\alpha$,
we let $$K_{\alpha+1}=A_\alpha\times \{0\}\cup B_\alpha\times
\{1\}.$$ For a basic clopen set $[s]=\{g\in
K_{\alpha+1}:\,g\supseteq s\}$, where $s$ is a finite partial
function from $\alpha+1$ to 2 and $\alpha\in{\rm dom}(s)$, we let
$\mu_{\alpha+1}([s])=1/2\cdot \mu_\alpha([s\rest\alpha])$. We
prove below that this extends uniquely to a Baire measure on
$K_{\alpha+1}$.
The following is basically the same (in fact simpler) argument
which appears in \cite{DzK}. We state and prove it here for the
convenience of the reader.
\begin{Claim}\label{fastfunction} The above choices of $A_\alpha$, $B_\alpha$, and
$\mu_{\alpha+1}$, with the choice of any function $f$ which is
increasing fast enough, will satisfy all the requirements {\bf
R1}-{\bf R8}.
\end{Claim}
\begin{Proof of the Claim} Requirements {\bf R1}-{\bf R4} are clearly met with any choice of $f$.
To see that {\bf R5} is met, let us first prove that
$\mu_{\alpha+1}$ as defined above indeed extends uniquely to a
Baire measure on $K_{\alpha+1}$. We have already defined
$\mu_{\alpha+1}([s])$ for $s$ satisfying $\alpha\in{\rm dom}(s)$. If
$\alpha\notin{\rm dom}(s)$ then we let
$\mu_{\alpha+1}([s])=\mu_{\alpha}(\pi^{\alpha+1}_\alpha [s])$. It
is easily seen that this is a finitely additive measure on the
basic clopen sets, which then extends uniquely to a Baire measure
on $K_{\alpha+1}$. It is also clear that this extension satisfies
{\bf R5}.
Requirements {\bf R6} and {\bf R8} are met by the choice of
$s_\alpha$, so it remains to see that we can meet {\bf R7}. For
each $\gamma\in[\omega,\alpha]$, if $s_\alpha\in
(\pi_\gamma^\alpha)^{-1}(Q_\gamma)$, fix an $\omega$-sequence
$\bar{t}_\gamma$ of distinct points in
$(\pi_\gamma^\alpha)^{-1}(Q_\gamma)$ converging to $s_\alpha$.
Suppose that $\bar{t}_\gamma$ is defined and that both
$A_\alpha\setminus B_\alpha$ and $B_\alpha\setminus A_\alpha$
contain infinitely many points from $\bar{t}_\gamma$. Then we
claim that $A_\alpha\cap (\pi_\gamma^\alpha)^{-1}(Q_\gamma)$ and
$B_\alpha\cap (\pi_\gamma^\alpha)^{-1} (Q_\gamma)$ are
complementary regular closed subsets of $(\pi_\gamma^\alpha )^{-1}
(Q_\gamma)$. Note that we have already observed that $Q_\gamma$
does not have isolated points, so neither does
$(\pi_\gamma^\alpha)^{-1} (Q_\gamma)$. Hence, since
$\{s_\alpha\}\supseteq A_\alpha\cap
(\pi_\gamma^\alpha)^{-1}(Q_\gamma)\cap B_\alpha\cap
(\pi_\gamma^\alpha)^{-1} (Q_\gamma)$, we may conclude that this
intersection is nowhere dense in both $A_\alpha\cap
(\pi_\gamma^\alpha)^{-1}(Q_\gamma)$ and $B_\alpha\cap
(\pi_\gamma^\alpha)^{-1} (Q_\gamma)$. Finally, $A_\alpha\cap
(\pi_\gamma^\alpha)^{-1}(Q_\gamma)$ and $B_\alpha\cap
(\pi_\gamma^\alpha)^{-1} (Q_\gamma)$ are regular closed because we
have assured that $s_\alpha$ is in the closure of both.
Therefore we need to choose $f$ so that for every relevant
$\gamma$, both $A_\alpha\setminus B_\alpha$ and $B_\alpha\setminus
A_\alpha$ contain infinitely many points of $\bar{t}_\gamma$.
Enumerate all the relevant sequences $\bar{t}_\gamma$ as
$\{\bar{z}^k\}_{k<\omega}$. Our aim will be achieved by choosing
$f$ in such a way that, for every $n$, both sets
$V_{f(2n)}\setminus V_{f(2n+1)}$ and $V_{f(2n+1)}\setminus
V_{f(2n+2)}$ contain a point of each $\bar{z}^k$ for $k\le n$.
$\eop_{\ref{fastfunction}}$
\end{Proof of the Claim}
This finishes the proof of the theorem.
$\eop_{\ref{CH}}$
\end{proof}
\section{Bidiscrete systems}\label{bidiscr} The main result of this section is Theorem \ref{lemma2}
below. In the course of proving Theorem 10 in \S7 of
\cite{StevoMM}, Todor\v cevi\'c actually proved that if $K$ is not
hereditarily separable then it has an uncountable bidiscrete
system. Thus his proof yields Theorem \ref{lemma2} for
$d(K)=\aleph_1$, and the same argument can easily be extended to a
full proof of Theorem \ref{lemma2}.
Let us first state some general observations about bidiscrete systems.
\begin{observation}
\label{closedsubspace}
Suppose that $K$ is a compact Hausdorff space and $H\subseteq K$ is closed, while
$\{ (x_\alpha^0, x_\alpha^1):\,\alpha<\kappa\}$
is a bidiscrete system in $H$, as exemplified by functions $f_\alpha\,(\alpha<\kappa)$.
Then there are functions $g_\alpha\,(\alpha<\kappa)$ in $C(K)$
such that $f_\alpha\subseteq g_\alpha$ and $g_\alpha\,(\alpha<\kappa)$ exemplify that
$\{ (x_\alpha^0, x_\alpha^1):\,\alpha<\kappa\}$
is a bidiscrete system in $K$.
\end{observation}
\begin{Proof} Since $H$ is closed we can, by Tietze's Extension Theorem,
extend each $f_\alpha$ continuously to
a function $g_\alpha$ on $K$. The conclusion follows
from the definition of a bidiscrete system.
$\eop_{\ref{closedsubspace}}$
\end{Proof}
\begin{Claim}\label{generaldiscrete} Suppose that $K$ is a compact
space and $F_i\subseteq G_i \subseteq K\,$ for $i\in I$ are such
that the $G_i$'s are disjoint open, the $F_i$'s are closed and in
each $F_i$ we have a bidiscrete system $S_i$. Then $\bigcup_{i\in
I}S_i$ is a bidiscrete system in $K$.
\end{Claim}
\begin{proof} For $i\in I$ let the bidiscreteness of $S_i$ be witnessed
by $\{g^i_{\alpha}\,:\,\alpha<\kappa_i\} \subseteq C(F_i)$. We can,
as in Observation \ref{closedsubspace}, extend each $g^i_{\alpha}$
to $h^i_{\alpha} \in C(K)$ which exemplify that $S_i$ is a
bidiscrete system in $K$.
Now we would like to put all these bidiscrete systems together, for
which we need to find appropriate witnessing functions. For any
$i\in I$ we can apply Urysohn's Lemma to find functions $f_i \in
C(K)$ such that $f_i$ is 1 on $F_i$ and 0 on the complement of
$G_i$. Let us then put, for any $\alpha$ and $i$,
$f_{\alpha}^i=h^i_{\alpha}\cdot f_i$. Now, it is easy to verify
that the functions $\{f^i_{\alpha}:\,\alpha<\kappa_i, i\in I\}$
witness that $\bigcup_{i\in I}S_i$ is a bidiscrete system in $K$.
$\eop_{\ref{generaldiscrete}}$
\end{proof}
Clearly, Observation \ref{closedsubspace} is the special case of
Claim \ref{generaldiscrete} when $I$ is a singleton and $G_i = K$.
\begin{Claim}\label{splitintervalsplit}
If the compact space $K$ has a discrete subspace of size $\kappa
\ge \omega$ then it has a bidiscrete system of size $\kappa$, as
well.
\end{Claim}
\begin{Proof}
Suppose that $D = \{x_\alpha : \alpha < \kappa\}$ (enumerated in a
one-to-one manner) is discrete in $K$ with open sets $U_\alpha$
witnessing this, i.e. $D \cap U_\alpha = \{x_\alpha\}$ for all
$\alpha < \kappa$. For any $\alpha < \kappa$ we may fix a function
$f_\alpha \in C(K)$ such that $f_\alpha(x_{2\alpha+1}) = 1$ and
$f_\alpha(x) = 0$ for all $x \notin U_{2\alpha+1}$. Obviously,
then $\{f_\alpha : \alpha < \kappa\}$ exemplifies that
$\{(x_{2\alpha},x_{2\alpha+1}) : \alpha < \kappa \}$ is a
bidiscrete system in $K$.
\end{Proof}
The converse of Claim \ref{splitintervalsplit} is false, however
the following is true.
\begin{claim}
\label{discbidisc} Suppose that $B = \{ (x^0_\alpha,
x^1_\alpha):\,\alpha<\kappa\}$ is a bidiscrete system in $K$. Then
$B$ is a discrete subspace of $K^2$.
\end{claim}
\begin{Proof} Assume that the functions $\{f_\alpha:\,\alpha<\kappa\}
\subseteq C(K)$ exemplify the bidiscreteness of $B$. Then
$O_\alpha=f_\alpha^{-1}((-\infty,1/2))\times
f_\alpha^{-1}((1/2,\infty))$ is an open set in $K^2$ containing
$(x^0_\alpha, x^1_\alpha)$. Also, if $\beta\neq\alpha$ then
$(x^0_\beta, x^1_\beta) \notin O_\alpha$, hence $B$ is a discrete
subspace of $K^2$. $\eop_{\ref{discbidisc}}$
\end{Proof}
Now we turn to formulating and proving the main result of this
section.
\begin{Theorem}\label{lemma2} If $K$ is an infinite compact Hausdorff space
then $K$ has a bidiscrete system of size $d(K)$. If $K$ is moreover 0-dimensional
then there is a very nice bidiscrete system in $K$ of size $d(K)$.
\end{Theorem}
\begin{Proof} The proofs of the two parts of the theorem are the same, except that
in the case of a 0-dimensional space every time
that we take functions witnessing bidiscreteness, we need to observe that these functions
can be assumed to take values only in $\{0,1\}$. We leave it to the reader to check that this is indeed
the case.
The case $d(K)=\aleph_0$ is very easy, as it is well known that
every infinite Hausdorff space has an infinite discrete subspace
and so we can apply Claim \ref{splitintervalsplit}. So, from now
on we assume that $d(K)>\aleph_0$.
Recall that a Hausdorff space $(Y,\sigma)$ is said to be {\em
minimal Hausdorff} provided that there does not exist another
Hausdorff topology $\rho$ on $Y$ such that $\rho\subsetneq
\sigma$, i.e. $\rho$ is strictly coarser than $\sigma$. The
following fact is well known and easy to prove, and it will
provide a key part of our argument:
\begin{Fact}\label{coarse} Any compact Hausdorff space is minimal Hausdorff.
\end{Fact}
\begin{Lemma}\label{lemma1} Suppose that $X$ is a compact Hausdorff space with $d(X)\ge\kappa>\aleph_0$
in which every non-empty open (equivalently: regular closed)
subspace has weight $\ge\kappa$.
Then $X$ has a bidiscrete system of size $\kappa$.
\end{Lemma}
\begin{Proof of the Lemma} We shall choose $x_\alpha^0, x_\alpha^1, f_\alpha$ by induction on $\alpha<\kappa$
so that the pairs $(x_\alpha^0, x_\alpha^1)$ form a bidiscrete
system, as exemplified by the functions $f_\alpha$. Suppose that
$x_\beta^0, x_\beta^1, f_\beta$ have been chosen for
$\beta<\alpha<\kappa$.
Let $C_\alpha$ be the closure of the set $\{x_\beta^0,
x_\beta^1:\,\beta<\alpha\}$. Therefore $d(C_\alpha)<\kappa$ and,
in particular, $C_\alpha\neq X$. Let $F_\alpha\subseteq X\setminus
C_\alpha$ be non-empty regular closed, hence
$w(F_\alpha)\ge\kappa$.
Let $\tau_\alpha$ be the topology on $F_\alpha$ generated by the
family
\[
\mathcal{F}_\alpha = \{f_\beta^{-1}(-\infty,q)\cap
F_\alpha\,,\,f_\beta^{-1}(q,\infty)\cap F_\alpha\,
:\,\beta<\alpha,\, q\in {\mathbf Q}\},
\]
where $\mathbf Q$ denotes the set of rational numbers. Then
$|\mathcal{F}_\alpha|<\kappa$ (as $\kappa>\aleph_0$), hence the
weight of $\tau_\alpha$ is less than $\kappa$, consequently
$\tau_\alpha$ is strictly coarser than the subspace topology on
$F_\alpha$. Fact \ref{coarse} implies that $\tau_\alpha$ is not a
Hausdorff topology on $F_\alpha$, hence we can find two distinct
points $x^0_\alpha, x^1_\alpha\in F_\alpha$ which are not
$T_2$-separated by any two disjoint sets in $\tau_\alpha$ and, in
particular, in $\mathcal{F}_\alpha$. This clearly implies that
$f_\beta(x^0_\alpha)=f_\beta(x^1_\alpha) $ for all $\beta<\alpha$.
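Indeed, if we had $f_\beta(x^0_\alpha)\neq f_\beta(x^1_\alpha)$ for some
$\beta<\alpha$, then any rational $q$ strictly between these two values would
give disjoint sets $f_\beta^{-1}(-\infty,q)\cap F_\alpha$ and
$f_\beta^{-1}(q,\infty)\cap F_\alpha$ from $\mathcal{F}_\alpha$ that
$T_2$-separate $x^0_\alpha$ and $x^1_\alpha$.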
Now we use the complete regularity of $X$ to find $f_\alpha\in
C(X)$ such that $f_\alpha$ is identically 0 on the closed set
$C_\alpha\cup\{x^0_\alpha\}$ and $f_\alpha(x^1_\alpha)=1$. It is
straight-forward to check that $\{f_\alpha : \alpha < \kappa\}$
indeed witnesses the bidiscreteness of $\{(x_\alpha^0, x_\alpha^1)
: \alpha < \kappa\}$. $\eop_{\ref{lemma1}}$
\end{Proof of the Lemma}
Let us now continue the proof of the theorem. We let $\kappa$
stand for $d(K)$ and let
\[
{\mathcal P}=\{\emptyset\neq O\subseteq K:\,O\mbox{ open such that }[\emptyset\neq U
\mbox{ open}\subseteq O\implies d(U)= d(O)]\}.
\]
We claim that ${\mathcal P}$ is a $\pi$-base for $K$, i.e. that
every non-empty open set includes an element of ${\mathcal P}$.
Indeed, suppose this is not the case, as witnessed by a non-empty open
set $U_0$. Then $U_0\notin{\cal P}$, so there is a non-empty open set
$\emptyset\neq U_1\subseteq U_0$ with $d(U_1)<d(U_0)$ (the case
$d(U_1)>d(U_0)$ cannot occur, since a dense subset of $U_0$ meets $U_1$ in a dense subset of $U_1$). Then $U_1$ itself is not a member
of ${\cal P}$ and therefore we can find a non-empty open set
$\emptyset\neq U_2\subseteq U_1$ with $d(U_2)<d(U_1)$, etc. In
this way we would obtain an infinite decreasing sequence of
cardinals, a contradiction.
Let now $\mathcal O$ be a maximal disjoint family of members of
${\mathcal P}$. Since ${\mathcal P}$ is a $\pi$-base for $K$ the
union of $\mathcal{O}$ is clearly dense in $K$. This implies that
if we fix any dense subset $D_O$ of $O$ for all $O\in {\mathcal
O}$ then $\bigcup \{ D_O : O \in \mathcal{ O} \}$ is dense in $K$,
as well. This, in turn, implies that $\sum \{d(O) : O \in \mathcal
{O}\} \ge d(K) = \kappa$.
If $|\mathcal{O}| = \kappa$ then we can select a discrete subspace
of $K$ of size $\kappa$ by choosing a point in each $O\in \mathcal
$O$, so the conclusion of our theorem follows by Claim
\ref{splitintervalsplit}.
So now we may assume that $|\mathcal O|<\kappa$. In this case,
since $\kappa>\aleph_0$, letting ${\mathcal O}'=\{O\in
{\mathcal O}:\,d(O)>\aleph_0\}$, we still have $\sum \{d(O) : O \in
\mathcal {O'}\} \ge \kappa$. Next, for each $O\in{\mathcal O}'$ we choose a
non-empty open set $G_O$ such that its closure
$\overline{G}_O\subseteq O$. Then we have, by the definition of
${\cal P}$, that $d(\overline{G}_O)=d(G_O)=d(O)$. By the same token,
every non-empty open subspace of the compact space
$\overline{G}_O$ has density $d(O)$, and hence weight $\ge d(O)$.
Therefore we may apply Lemma \ref{lemma1} to produce a bidiscrete
system $S_O$ of size $d(O)$ in $\overline{G}_O$. But then Claim
\ref{generaldiscrete} enables us to put these systems together to
obtain the bidiscrete system $S = \bigcup \{S_O : O \in
\mathcal{O}'\}$ in $K$ of size $\sum \{d(O) : O \in \mathcal
{O'}\} \ge\kappa$. $\eop_{\ref{lemma2}}$
\end{Proof}
It is immediate from Theorem \ref{lemma2} and Observation
\ref{closedsubspace} that if $C$ is a closed subspace of the
compactum $K$ with $d(C) = \kappa$ then $K$ has a bidiscrete
system of size $\kappa$. We recall that the hereditary density
${\rm hd}(X)$ of a space $X$ is defined as the supremum of the
densities of all subspaces of $X$.
\begin{Fact}\label{hd} For any compact Hausdorff space $K$,
${\rm hd}(K)=\sup\{d(C):\,C\mbox{ closed}\subseteq K\}$.
\end{Fact}
From this fact and what we said above we immediately obtain the
following corollary of Theorem \ref{lemma2}.
\begin{Corollary}\label{theorem2} If $K$ is a compact Hausdorff space with
${\rm hd}(K)\ge\lambda^+$ for some $\lambda\ge\omega$, then $K$
has a bidiscrete system of size $\lambda^+$.
\end{Corollary}
\bigskip
We finish by listing some open questions.
\begin{Question} (1) Does every compact space $K$ admit a
bidiscrete system of size ${\rm hd}(K)$?
{\noindent (2)} Define
\[
{\rm bd}(K)=\sup\{|S|\,:\, S \mbox{ is a bidiscrete system in
}K\}.
\]
Is there always a bidiscrete system in $K$ of size ${\rm bd}(K)$?
{\noindent (3)} Suppose that $K$ is a 0-dimensional compact space
which has a bidiscrete system of size $\kappa$. Does then $K$ also
have a very nice bidiscrete system of size $\kappa$ (i.e. such
that the witnessing functions take values only in $\{0,1\}$)? Is
it true that any bidiscrete system in a 0-dimensional compact
space is very nice?
{\noindent (4)} (This is Problem 4 from \cite{JuSz}): Is there a
$ZFC$ example of a compact space $K$ that has no discrete subspace
of size $d(K)$?
{\noindent (5)} If the square $K^2$ of a compact space $K$
contains a discrete subspace of size $\kappa$, does then $K$ admit
a bidiscrete system of size $\kappa$ (or does at least $C(K)$ have
a biorthogonal system of size $\kappa$)? This question is of
especial interest for $\kappa = \omega_1$.
\end{Question}
\section{Introduction}
The standard model (SM) of particle physics successfully
explains almost all of the experimental results
around the electroweak scale.
Nevertheless, the SM suffers from several problems and
this fact strongly motivates us to explore physics beyond the SM.
One of them is the so-called hierarchy problem
originating from the ultraviolet sensitivity
of the SM Higgs doublet mass, and another one
is the absence of candidates for the dark matter particle.
In this paper we propose an extra-dimensional scenario
which can provide a possible solution to these two problems.
Among many models proposed to solve the hierarchy problem,
we concentrate on the gauge-Higgs unification
scenario~\cite{Manton:1979kb,YH}.
In this scenario, the SM Higgs doublet field is identified
with an extra-dimensional component of the gauge field
in higher-dimensional gauge theories
where the extra spacial dimensions are compactified to
realize four-dimensional effective theory at low energies.
The higher-dimensional gauge symmetry protects the Higgs
doublet mass from ultraviolet divergences~\cite{YH,finiteness},
and hence the hierarchy problem can be solved.
In the context of the gauge-Higgs unification scenario,
many models have been considered
in both the flat~\cite{Csaki}-\cite{GGHU} and
the warped~\cite{RS} background geometries~\cite{GHUinRS}-\cite{pNG}.
However, the latter problem has not been investigated in this scenario,
except in a few works~\cite{DMinGHU,Carena:2009yt,hosotani},
and in this paper, we propose a dark matter candidate
which can be naturally incorporated
in the gauge-Higgs unification scenario.
In the next section, we show a simple way to introduce
a candidate for the dark matter particle
in general higher-dimensional models.
In sharp contrast to the usual Kaluza-Klein (KK) dark matter
in the universal extra dimension scenario~\cite{KKDM},
our procedure is independent of the background
space-time metric.
In section 3, we apply this to the gauge-Higgs unification
scenario and show that a dark matter candidate as a
weakly-interacting-massive-particle (WIMP) emerges.
For our explicit analysis, we consider a gauge-Higgs unification model
based on the gauge group SO(5)$\times$U(1)$_X$
in five-dimensional warped background metric
with the fifth dimension compactified on the $S^1/Z_2$ orbifold.
In section 4, we evaluate the relic abundance
of the dark matter particle and its detection rates
in the direct dark matter detection experiments.
Section 5 is devoted to summary.
\section{A new candidate for the dark matter}
\label{Sec:APDM}
A stable and electric charge neutral WIMP is
a suitable candidate for the dark matter.
In general, a certain symmetry (parity) is necessary
to ensure the stability of a dark matter particle.
Such a symmetry can be imposed by hand in some models
or it can be accidentally realized
such as the KK parity~\cite{KKDM}.
The KK parity is actually an interesting possibility
for introducing a dark matter candidate
in higher-dimensional models.
However, we need to elaborate a model in order to
realize the KK parity in general warped
background geometry~\cite{Agashe:2007jb}.
In a simple setup, the KK parity is explicitly broken
by a warped background metric and
the KK dark matter is no longer stable~\cite{OY}.
So, here is an interesting question:
Is it possible in extra-dimensional models to introduce
a stable particle independently of the background
space-time metric, without imposing any symmetries by hand?
In the following we address our positive answer to this question.
In fact, when we impose the anti-periodic (AP) boundary condition
on bulk fields, the lightest AP field turns out to be stable.
In models with the toroidal compactification, no matter what
further orbifoldings are, the Lagrangian ${\mathcal L}$
should be invariant under a discrete shift of the coordinate
of the compactified direction,
\begin{equation}
{\cal L}(x,y+2\pi R) = {\cal L}(x,y),
\end{equation}
where $x$ and $y$ denote the non-compact four dimensional coordinate
and the compact fifth-dimensional one with a radius $R$, respectively.
When we introduce some fields which have the AP boundary condition as
\begin{equation}
\Phi(x,y+2\pi R) = -\Phi(x,y),
\end{equation}
these fields never appear alone but always do in pairs in the Lagrangian,
since the Lagrangian must be periodic.
Thus, there exists an accidental $Z_2$ parity, under which
the AP (periodic) fields transform as odd (even) fields.
This concludes that the lightest AP field is stable\footnote{
Similarly to the KK parity,
Lagrangian on the boundaries must be restricted
to respect the $Z_2$ parity.
}
and can be a good candidate for the dark matter
if it is colorless and electric-charge neutral.
In this way, a dark matter candidate can be generally incorporated
as the lightest AP field in higher-dimensional models.
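For illustration, consider the flat case: an AP field on a circle of radius
$R$ has the mode expansion
\begin{equation}
\Phi(x,y) = \sum_{n\in{\mathbb Z}} \Phi_n(x)\, e^{i\left(n+\frac12\right)y/R},
\end{equation}
so that all its KK modes are massive, with masses $\left|n+\frac12\right|/R$,
and any single-valued (periodic) operator must contain an even number of AP
factors; this is just the accidental $Z_2$ parity described above.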
However, except for providing the dark matter candidate,
there may be no strong motivation for introducing such AP fields.
In fact, AP fields often play a crucial role
in the gauge-Higgs unification scenario
to make a model phenomenologically viable,
and therefore a dark matter candidate is simultaneously
introduced in such a model.
\section{Gauge-Higgs Dark Matter}
\label{Sec:GHDM}
We show a model of the gauge-Higgs unification,
which naturally has a dark matter candidate.
The dark matter particle originates from an AP field which is introduced
in a model for a phenomenological reason as will be discussed below.
It is well known that it is difficult,
in simple gauge-Higgs unification models with the flat metric,
to obtain a realistic top quark mass and a Higgs boson mass
above the current experimental lower bound.
This difficulty originates from the fact that
the effective Higgs potential in the gauge-Higgs unification model
results in a Wilson line phase of order one.
When we consider the gauge-Higgs unification scenario
in the warped metric of the extra dimension,
this problem can be solved by the effect of
the warped metric, even though a Wilson line phase
of order one is still obtained from the effective Higgs potential.
However, as is claimed in Ref.~\cite{EWPMinGHU},
a small Wilson line phase is again required
in order for the scenario to be consistent with
the electroweak precision measurements.
Therefore, it is an important issue
in the gauge-Higgs unification scenario
how to naturally obtain a small Wilson line phase.
A simple way is to introduce AP fermions in a model.
It has been shown in Ref.~\cite{APinGHU} that
a small Wilson line phase is actually obtained
by introducing AP fermions.
This is the motivation we mentioned above\footnote{
In Ref.~\cite{DMinGHU},
with a {\it similar} purpose, a {\it similar} $Z_2$ symmetry
is imposed {\it but}
by hand.}.
An AP fermion, once introduced, not only reduces
unwanted new particle effects to the precisely measured
SM parameters but also provides a dark matter candidate
as its lightest electric-charge neutral component.
We call the dark matter candidate in the AP fermion
``gauge-Higgs dark matter'' in this paper.
The interactions between the dark matter and the Higgs field are
largely controlled by the gauge symmetry,
since the Higgs field is a part of the gauge field
in the gauge-Higgs unification scenario.
This fact leads to a strong predictive power of the model
for the dark matter phenomenology.
\subsection{A model}
\label{Sec:model}
Here we explicitly examine a 5D gauge-Higgs unification model
with a dark matter particle.
The model is based on the gauge symmetry
SO(5)$\times$U(1)$_X$~\cite{Carena:2009yt,hosotani}
compactified on the simplest orbifold $S^1/Z_2$
with the warped metric~\cite{RS}
\begin{eqnarray}
{\rm d} s^2
=
G_{M N} {\rm d} x^M {\rm d} x^N
=
e^{-2\sigma(y)} \eta_{\mu \nu} {\rm d} x^\mu {\rm d} x^\nu
-
{\rm d} y^2,
\end{eqnarray}
where $M=0,1,2,3,5$, $\mu=0,1,2,3$,
$\sigma(y) = k |y|$ at $-\pi R \leq y \leq \pi R$,
$\sigma(y) = \sigma(y + 2 \pi R)$, and
$\eta_{\mu \nu} = {\rm diag}(1, -1, -1, -1)$ is
the 4D flat metric.
We define the warp factor $a= \exp(-\pi k R)$
and as a reference value, we set the curvature $k$ and
the radius $R$ to give the warp factor $a = 10^{-15}$.
The bulk SO(5) gauge symmetry is broken down to
SO(4)$\simeq$SU(2)$_L\times$SU(2)$_R$
by the boundary conditions~\cite{Kawamura}.
Concretely, the gauge field and its 5th component transform
around the two fixed points $y_0=0$ and $y_L=\pi R$ as
\begin{eqnarray}
A_\mu(x,\,y_i-y) &=& P_i A_\mu(x,\,y_i+y) P_i^\dagger, \\
A_5(x,\,y_i-y) &=& -P_i A_5(x,\,y_i+y) P_i^\dagger,
\end{eqnarray}
under the $Z_2$ parity,
where
$P_0=P_L={\rm diag.}(-1,-1,-1,-1,+1)$ for the five-by-five anti-symmetric
matrix representation of the generators acting on the vector
representation, $\bf5$.
As for the remaining SO(4)$\times$U(1)$_X$ gauge symmetry,
the SU(2)$_R\times$U(1)$_X$ is assumed to be broken down to the hypercharge
symmetry U(1)$_Y$
by a VEV of an elementary Higgs field\footnote{
Note that introducing the elementary Higgs field
at the $y=0$ orbifold fixed point has no contradiction
against the motivation of the gauge-Higgs unification scenario
since the mass of the Higgs fields and their VEVs are of
the order of the Planck scale.
In this case, they decouple from TeV scale physics.
}
put on the $y=0$ orbifold fixed point.
Now the remaining gauge symmetry is the same as the SM,
where there exists the zero-mode of $A_5$
which is identified as the SM Higgs doublet
(possessing the right quantum numbers).
When the zero mode of $A_5$ develops a non-trivial VEV,
the SO(4) symmetry is broken down to SO(3)$\simeq$SU(2)$_D$ which
is the diagonal part of SU(2)$_L\times$SU(2)$_R\simeq$SO(4).
Taking the boundary Higgs VEV into account,
the electromagnetic U(1)$_{\rm EM}$ is left unbroken.
Thanks to the custodial symmetry which is violated
only at the $y=0$ fixed point, that is, a superheavy energy scale,
the correction to the $\rho$-parameter is naturally
suppressed~\cite{EWPMinGHU}.
This allows the KK scale as low as a few TeV
without any contradictions against current experiments.
The components of gauge field are explicitly written as
\begin{equation}
A_M=
\left(
\begin{array}{cccc|c}
0 & A_V^3 & -A_V^2 & A_A^1 & A_H^1 \\
& 0 & A_V^1 & A_A^2 & A_H^2 \\
& & 0 & A_A^3 & A_H^3 \\
& & & 0 & A_H^4 \\ \hline
& & & & 0
\end{array}
\right)_M,
\end{equation}
where
\begin{eqnarray}
A_{V,A}^i &=& \frac1{\sqrt2}(A_L^i\pm A_R^i),\qquad (i=1,\,2,\,3), \\
A_F^\pm &=& \frac1{\sqrt2}(A_F^1\mp i A_F^2),\qquad (F=V,\,A,\,H).
\end{eqnarray}
The zero-modes of $A_5$ exist on $A_H$ and its VEV
can be rotated into only $(A_H^4)_5$ component
by the SO(4) symmetry, by which
the Wilson line phase $\theta_W$ is defined as
\begin{eqnarray}
W \equiv e^{i \theta_W}
=
P \exp\left( {-i g \int^{\pi R}_{-\pi R} {\rm d} y ~G^{55} (A_H^4)_5} \right),
\label{WilsonLinePase}
\end{eqnarray}
where $P$ denotes the path ordered integral.
For vanishing $\theta_W$,
the SM gauge bosons are included
in $A_L$ and $B_X$ (which is
the gauge boson of the U(1)$_X$ symmetry),
while the $A_H$ component is mixed into
the mass eigenstates of weak bosons for non-vanishing $\theta_W$.
We do not specify the fermion sector of the model
but just assume it works well, since this sector
is not strongly limited by the gauge symmetry
and has a lot of model-dependent degrees of freedom.
Thus, in our following analysis we leave the Higgs boson mass $m_h$ and
the Wilson line phase $\theta_W$ as free parameters,
which should be calculated through the loop induced effective
potential~\cite{EffPot}-\cite{EffPotRS}
once the fermion sector of the model is completely fixed.
Let us now consider an AP fermion, $\psi$,
as a ${\bf5}_0$-multiplet under SO(5)$\times$U(1)$_X$,
in which the dark matter particle is contained.
A parity odd bulk mass parameter $c$ of this multiplet
is involved as an additional parameter~\cite{GherghettaPomarol}.
The wave function profile along the compactified direction
is written by the Bessel functions with the index
$\alpha=\left|\gamma_5 c+1/2\right|$~\cite{GherghettaPomarol}
and the localization of the bulk fermion is controlled
by the bulk mass parameter.
We choose the boundary conditions of this multiplet
so that the singlet component
of the SO(4) is lighter than the vector one for small $\theta_W$
with $ c > 0$.
After the electroweak symmetry breaking,
the fourth and fifth components are mixed with each other
through the non-vanishing Wilson line phase in $(4,5)$ component,
while the first, second and third ones are not.
The combinations of the fourth and fifth components make up
two mass eigenstates:
The lighter one is nothing but the dark matter particle, $\psi_{\rm DM}$,
and we denote the heavier state as $\psi_S$.
The first, second and third components denoted
as $\psi_i$\,$(i=1,\,2,\,3)$ have nothing to do
with the electroweak symmetry breaking, and thus
degenerate up to small radiative corrections.
They are heavier than $\psi_S$.
Note that the dark matter particle has no diagonal couplings
to the weak gauge bosons,
which are linear combinations of $A_V$, $A_A$ and $A_H$ (and $B_X$);
its couplings to the weak gauge bosons
always involve a transition from/to the heavier partners,
$\psi_i$.
On the other hand,
both types of couplings exist among
the dark matter particle and the Higgs boson.
At the energy scale below the 1st KK mode mass,
the effective Lagrangian is expressed as
\begin{eqnarray}
{\cal L}^{\rm 4D}_{\rm DM}&=&
\sum_{i=1,2,3,S,{\rm DM}}
\bar\psi_i (i\partial\hspace{-2.3mm}/ -m_i)\psi_i
+y_{\rm DM} \bar\psi_{\rm DM} H \psi_{\rm DM}
\nonumber\\
&& +\bar\psi_S H
\left(y_S+y_P\gamma_5\right)
\psi_{\rm DM}
+\bar\psi_{\rm DM} H
\left(y_S-y_P\gamma_5\right)
\psi_S
\nonumber\\
&& +\sum_{i=1,2,3}\bar\psi_i
W_i\hspace{-4mm}/\,\,\,
\left(g^V_i+g^A_i\gamma_5\right)
\psi_{\rm DM}
+\bar\psi_{\rm DM}
W_i\hspace{-4mm}/\,\,
\left(g^V_i+g^A_i\gamma_5\right)
\psi_i,
\label{effL}
\end{eqnarray}
where we denote $Z$ as $W^3$, and set $g_1^{V,A}=g_2^{V,A}$ due to the remaining
U(1)$_{\rm EM}$ symmetry.
Once we fix the free parameters $\theta_W$ and $c$ (also the warp factor),
we can solve the bulk equations of motion for $A_M$ and $\psi$
(see for example Ref.~\cite{sakamura-unitariy})
and obtain the mass spectra of all the states
and effective couplings in Eq.(\ref{effL}) among AP fields,
the gauge bosons and the Higgs boson,
independently of the Higgs boson mass $m_h$
(which is another free parameter of the model as mentioned above).
Using calculated spectra and the effective couplings,
we investigate phenomenology of the gauge-Higgs dark matter
in the next section.
Since we have only three parameters
(or four if we count also the warp factor),
the model has a strong predictive power.
\subsection{Constraints}
\label{Sec:constraints}
Before investigating the gauge-Higgs dark matter phenomenology,
we examine an
experimental constraint on the Wilson line phase $\theta_W$~\footnote{
Constraints in the case with the flat metric are discussed
in Ref.~\cite{ConstraintInFlatGHU}.
}.
In Ref.~\cite{EWPMinGHU},
it is claimed that $\theta_W$ should be smaller than $0.3$
or the KK gauge boson mass larger than 3 TeV
in order to be consistent with the electroweak precision measurements.
Using the relation between $m_W$ and $m_{KK}\equiv\pi k a$
(see for example Ref.~\cite{sakamura-unitariy}),
\begin{equation}
m_W \simeq \frac{\theta_W}{\sqrt{\ln(a^{-1})}}\frac{m_{KK}}\pi,
\end{equation}
and the formula for the first KK gauge boson mass $m_1=0.78m_{KK}$,
the latter constraint is translated as $\theta_W\lesssim0.4$.
According to these bounds, we restrict our analysis
to the range of a small Wilson line phase,
namely, $\theta_W\leq\pi/10$.
We expect that AP fields not only provide the dark matter particle
but are also helpful in realizing such a small value of $\theta_W$.
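For a rough numerical orientation, with $m_W\simeq80.4$ GeV and our reference
warp factor $a=10^{-15}$ (so that $\sqrt{\ln(a^{-1})}=\sqrt{15\ln10}\simeq5.9$),
the reference point $\theta_W=\pi/10$ corresponds to
\begin{equation}
m_{KK}=\frac{\pi\, m_W\sqrt{\ln(a^{-1})}}{\theta_W}\simeq4.7 \mbox{ TeV},
\qquad
m_1=0.78\,m_{KK}\simeq3.7 \mbox{ TeV},
\end{equation}
which is indeed compatible with the 3 TeV bound quoted above.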
\section{Phenomenology of gauge-Higgs dark matter}
Now we are in a position to investigate
the gauge-Higgs dark matter phenomenology.
We first estimate the relic abundance of the dark matter
and identify the allowed region of the model parameter space
which predicts the dark matter relic density consistent with
the current cosmological observations.
Furthermore, we calculate the cross section of the elastic
scattering between the dark matter particle and nucleon
to show implications of the gauge-Higgs dark matter scenario
for the current and future direct dark matter detection experiments.
\subsection{Relic abundance}
In the early universe, the gauge-Higgs dark matter is
in thermal equilibrium through the interactions
with the SM particles.
As the universe expands,
its temperature goes down and the dark matter eventually
decouples from thermal plasma of the SM particles
in its non-relativistic regime.
The thermal relic abundance of the dark matter can be evaluated
by solving the Boltzmann equation,
\begin{eqnarray}
\frac{d Y}{dx} =
-\frac{s \langle \sigma v \rangle}{x H}
\left( 1- \frac{x}{3 } \frac{d \log g_{*s}}{d x } \right)
\left( Y^2 -Y_{EQ}^2 \right),
\end{eqnarray}
where $x=m_{\rm DM}/T$,
$\langle \sigma v \rangle$
is the thermally averaged product of
the dark matter annihilation cross section ($\sigma $)
and the relative velocity of annihilating dark matter particles ($v$),
$Y(\equiv n/s)$ is the yield defined as the ratio
of the dark matter number density $(n)$ to the entropy density
of the universe $(s)$, and the Hubble parameter $H$
is described as $H= \sqrt{(8 \pi/3) G_N \rho}$
with Newton's gravitational constant
$G_N=6.708 \times 10^{-39}$ GeV$^{-2}$
and the energy density of the universe ($\rho$).
The explicit formulas for the number density of the dark matter particle,
the energy density, and the entropy density are given,
in the Maxwell-Boltzmann approximation, by
\begin{equation}
n = \frac{g_{DM}}{2 \pi^2} \frac{K_2(x)}{x} m^3, \quad
\rho = \frac{\pi^2}{30} g_* T^4, \quad
s = \frac{2 \pi^2}{45} g_{*s} T^3,
\end{equation}
where $K_2$ is the modified Bessel function of the second kind,
$g_{DM}=4$ is the spin degrees of freedom
for the gauge-Higgs dark matter, and
$g_*$ ($g_{*s}$) is the effective massless degrees of freedom
in the energy (entropy) density, respectively.
In non-relativistic limit, the annihilation cross section
can be expanded with respect to a small relative velocity as
\begin{eqnarray}
\sigma v = \sigma_0 + \frac{1}{4} \sigma_1 v^2 +\cdots,
\end{eqnarray}
where $v \simeq 2 \sqrt{1-4 m^2/s}$ in the center-of-mass frame
of annihilating dark matter particles.
The first term corresponds to the dark matter annihilations
via $S$-wave, while the second is contributed by the $S$-
and $P$-wave processes.
In the Maxwell-Boltzmann approximation,
the thermal average of the annihilation cross section
is evaluated as
\begin{eqnarray}
\left< \sigma v \right>
&\equiv&\frac1{8x^4 K_2(x)^2}
\int_{4x^2}^\infty ds\,
\sqrt{s}(s-4x^2)K_1(\sqrt{s})\sigma_{ann} \\
&=& \sigma_0 +\frac32\sigma_1 x^{-1}+\cdots,
\label{left< v}
\end{eqnarray}
where a unit $T=1$ is used in the first line.
There are several dark matter annihilation modes
in both $S$-wave and $P$-wave processes
(see Eq.~(\ref{effL})),
such as
$ \bar{\psi}_{\rm DM} \psi_{\rm DM} \to W^+ W^-, ZZ, HH$
through $\psi_i$, $\psi_S$ and $\psi_{\rm DM}$ exchanges in the $t$-channel
and $ \bar{\psi}_{\rm DM} \psi_{\rm DM} \to f \bar{f}, W^+ W^-, ZZ, HH$
through the Higgs boson exchange in the $s$-channel,
where $f$ stands for quarks and leptons.
Once the model parameters, $\theta_W$, $c$ and $m_h$, are fixed,
magnitudes of $\sigma_0$ and $\sigma_1$ are calculated.
With a given annihilation cross section,
the Boltzmann equation can be numerically solved.
The relic density of the dark matter is obtained as
$\Omega_{\rm DM} h^2 = m_{\rm DM} s_0 Y(\infty)/(\rho_c/h^2)$
with $s_0=2889$ cm$^{-3}$ and
$\rho_c/h^2=1.054\times 10^{-5}$ GeV cm$^{-3}$.
Here, we use an approximate formula~\cite{DMabundanceApp}
for the solution of the Boltzmann equation:
\begin{equation}
\Omega_{\rm DM}h^2 =
8.766\times10^{-11}({\rm GeV}^{-2})
\left(\frac{T_0}{2.75 {\rm K}}\right)^3
\frac{x_f}{\sqrt{g_{*}(T_f)}}
\left(\frac12\sigma_0+\frac38\sigma_1 x_f^{-1}\right)^{-1},
\label{Omegah2App}
\end{equation}
where $x_f=m_{\rm DM}/T_f$ is the freeze-out temperature
normalized by the dark matter mass, and $T_0=2.725$ K is the present
temperature of the universe.
The freeze-out temperature is approximately determined by~\cite{DMabundanceApp}
\begin{eqnarray}
\sqrt{\frac\pi{45G_N}}\frac{45g_{\rm DM}}{8\pi^4}
\frac{\pi^{1/2}e^{-x_f}}{g_{\rm *s}(T_f)\,x_f^{1/2}\sqrt{g_{\rm *}(T_f)}}\,
m_{\rm DM}
\left(\frac12\sigma_0+\frac34\sigma_1 x_f^{-1}\right)
\delta(\delta+2)=1.
\label{xfApp}
\end{eqnarray}
Here the parameter $\delta$ defines $T_f$ through a relation
between the yield $Y$ and its value in thermal equilibrium,
$Y - Y_{\rm EQ} = \delta Y_{\rm EQ}$,
whose value is chosen so as to keep this approximation good.
We set $\delta=1.5$ according to Ref.~\cite{DMabundanceApp}.
In these approximations, we include the factor $1/2$
due to the Dirac nature of the gauge-Higgs dark matter
(see the discussion below Eq.~(2.16) of Ref.~\cite{DMabundanceApp}).
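As a rough illustration of these formulas, for representative values
$x_f\simeq22$ and $g_*(T_f)\simeq90$, Eq.~(\ref{Omegah2App}) reproduces
$\Omega_{\rm DM}h^2\simeq0.11$ for
\begin{equation}
\frac12\sigma_0\simeq1.8\times10^{-9}\mbox{ GeV}^{-2},
\qquad\mbox{i.e.}\qquad
\sigma_0\simeq3.6\times10^{-9}\mbox{ GeV}^{-2},
\end{equation}
which corresponds to the familiar weak-scale annihilation cross section of
order $10^{-26}$ cm$^3\,$s$^{-1}$ in physical units.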
Let us now compare the resultant dark matter relic density
for various $ \theta_W $ and $c$
with the observed value~\cite{DMabundance}:
\begin{equation}
\Omega_{\rm DM}h^2=0.1143\pm 0.0034.
\end{equation}
\begin{figure}[t]
\begin{center}
\vspace{4mm}
\hspace{-8mm}
\includegraphics[width=0.58\textwidth]{Fig1.eps}
\end{center}
\caption{{\it The relic abundance}:
The relic abundances consistent with the observations are obtained in the two red regions.
In the upper-left corner outside the red region, the relic abundance is predicted to be too small,
while over-abundance of the dark matter relic density is obtained in the other regions.
The contours corresponding to fixed dark matter masses are also shown.
Here, the Higgs mass has been taken to be $m_h=120$ GeV.
}
\label{fig:abundance}
\end{figure}
The result is depicted in Figure~\ref{fig:abundance},
where the Higgs boson mass is set to $m_h=120$ GeV.
The regions consistent with the observations are indicated in red;
the abundance is too small in the region above the red band in the
upper-left, and too large in the other regions.
There are two allowed regions.
One is the very narrow region in the upper-right,
where the correct relic abundance is achieved
by the enhancement of the annihilation cross section
through the $s$-channel Higgs boson resonance,
so that the dark matter mass there is $m_{\rm DM} \simeq m_h/2 = 60$ GeV.
The other appears in the upper-left, with
a dark matter mass of around a few TeV, where dark matter particles
can efficiently annihilate into the weak gauge bosons
and the Higgs bosons through processes
with heavy fermions in the $t$-channel.
\subsection{Direct detection}
Next we investigate the implications of the gauge-Higgs dark matter
scenario for direct detection experiments~\cite{TrAnom}.
A variety of experiments are underway to directly detect
dark matter particles through their elastic scatterings off nuclei.
The most stringent limits on the (spin-independent) elastic
scattering cross section have been reported by the recent
XENON10~\cite{XENON10} and CDMS II~\cite{CDMS} experiments:
$\sigma_{el}({\rm cm}^2) \lesssim
7 \times 10^{-44}$--$5 \times 10^{-43}$,
for a dark matter mass in the range
$100~{\rm GeV} \lesssim m_{\rm DM} \lesssim 1~{\rm TeV}$.
Since the gauge-Higgs dark matter particle can scatter off a nucleon
through processes mediated by the Higgs boson
in the $t$-channel, a parameter region of our model
is constrained by this current experimental bound.
The elastic scattering cross section between
the dark matter and nucleon mediated by the Higgs boson is given as
\begin{equation}
\sigma_{el}(DM+N\to DM+N)=\frac{y_{\rm DM}^2 m_N^2 m_{\rm DM}^2}
{\pi v_h^2 m_h^4 (m_{\rm DM}+m_N)^2}
\left|f_N \right|^2,
\end{equation}
where $m_N=0.931~{\rm GeV}$ is the nucleon mass~\cite{PDG},
and $v_h=246~{\rm GeV}$ is the VEV of the Higgs doublet.
The parameter $f_N$ is defined as
\begin{eqnarray}
f_N = \langle N \left| \sum_q m_q \bar{q}q
- \frac{\alpha_s}{4 \pi} G_{\mu \nu} G^{\mu \nu}
\right| N \rangle
= m_N \left(\frac{2}{9} f_{T_G}+f_{T_u}+f_{T_d}+f_{T_s}\right),
\end{eqnarray}
where $q$ represents light quarks ($u$, $d$, and $s$)
and $G_{\mu \nu}$ is the gluon field strength.
Contributions from the light quarks to the hadron matrix
element are evaluated by lattice QCD simulations~\cite{fbyLattice},
\begin{eqnarray}
f_{T_u}+f_{T_d} \simeq 0.056, \; \; f_{T_s} < 0.038,
\end{eqnarray}
while the contribution by gluon $f_{T_G}$ is determined
from the trace anomaly condition~\cite{TrAnom}:
\begin{equation}
f_{T_G}+f_{T_u}+f_{T_d}+f_{T_s}=1.
\end{equation}
In our analysis, we use the conservative value, $f_{T_s}=0$.
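As an illustration of this formula, the sketch below evaluates
$\sigma_{el}$ for a hypothetical coupling $y_{\rm DM}$ and dark matter
mass; in the model both are fixed by $\theta_W$ and $c$, which we do
not reproduce here. The factor
$(\hbar c)^2 = 3.894\times10^{-28}~{\rm cm}^2\,{\rm GeV}^2$
converts the result to cm$^2$.
\begin{verbatim}
import math

GEV2_TO_CM2 = 3.894e-28   # (hbar c)^2 [cm^2 GeV^2]

m_N  = 0.931     # nucleon mass [GeV]
v_h  = 246.0     # Higgs VEV [GeV]
m_h  = 120.0     # Higgs mass [GeV], as in the text
m_dm = 2000.0    # dark matter mass [GeV] (assumed)
y_dm = 0.1       # DM-Higgs coupling (assumed placeholder)

# Hadronic matrix element: f_Ts = 0 (conservative), f_Tu + f_Td = 0.056,
# and f_TG from the trace anomaly condition.
f_Tud = 0.056
f_Ts  = 0.0
f_TG  = 1.0 - f_Tud - f_Ts
f_N   = m_N * (2.0 / 9.0 * f_TG + f_Tud + f_Ts)

sigma_el = (y_dm**2 * m_N**2 * m_dm**2
            / (math.pi * v_h**2 * m_h**4 * (m_dm + m_N)**2)
            * f_N**2) * GEV2_TO_CM2
print(f"sigma_el = {sigma_el:.2e} cm^2")   # ~5e-45 for these inputs
\end{verbatim}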
For various values of parameters, $\theta_W$ and $c$,
with $m_h=120$ GeV fixed,
we evaluate the elastic scattering cross sections
between the dark matter particle and nucleon.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig2.eps}
\end{center}
\caption{{\it The direct detection}:
The red regions correspond to parameter sets that predict the right abundance.
The parameter sets with $\theta_W>\pi/10$ are indicated in gray, and those
excluded by the current bound from the direct detection experiments
are in brown.
The expected search limits by future experiments are also shown.}
\label{fig:DD}
\end{figure}
The result is shown in Figure~\ref{fig:DD}.
The parameter sets in red regions lead to the appropriate
dark matter abundances.
The gray region corresponds to $\theta_W>\pi/10$, which we do not consider
as discussed in section~\ref{Sec:constraints}.
The already excluded region
from XENON10~\cite{XENON10} and CDMS II~\cite{CDMS} experiments
is shown in brown,
by which a part of the red region with $m_{\rm DM} = 2$--$3$~TeV
is excluded.
Here, we naively extrapolate the exclusion limit beyond 1 TeV,
although the experimental bounds shown in the original papers
are depicted in the range $m_{\rm DM} \leq 1$ TeV~\footnote{
We would like to thank Yoshitaka Itow for his advice on the
current experimental bounds for the dark matter mass beyond 1 TeV.
}.
The other three lines indicate the expected future limits
from XMASS~\cite{XMASS}, SCDMS~\cite{SCDMS} and XENON100~\cite{XENON100},
from top to bottom respectively.
The allowed region with the dark matter mass around TeV is fully
covered by the future experiments.
On the other hand, most of the narrow region consistent
with the observed dark matter abundance
is outside the reach of the future experiments.
\section{Summary}
In extradimensional theories,
the AP boundary condition for a bulk fermion
can be imposed in general.
We have shown that the lightest mode of the AP fields
can be stable, and hence becomes a candidate
for the dark matter in the effective 4D theory,
due to the remaining accidental discrete symmetry.
This mechanism works even with a general non-flat metric,
in contrast to the KK parity, which does not work
in a simple warped model.
Although the AP fields can be introduced in various phenomenological
extradimensional models, they are usually not strongly motivated
beyond providing a dark matter candidate.
In contrast, it is worth noting that in the
gauge-Higgs unification scenario, AP fields often play
a crucial role to realize a phenomenologically viable model.
Thus, we examine the possibility of the dark matter
in the gauge-Higgs unification scenario.
We find that due to the structure of the gauge-Higgs unification,
the interactions of the dark matter particle
with the SM particles, especially with the Higgs boson,
are largely controlled by the gauge symmetry
and the model has a strong predictive power
for the dark matter phenomenology.
Because of this feature, we call this scenario
the gauge-Higgs dark matter scenario.
We have investigated this scenario based on
a five-dimensional SO(5)$\times$U(1)$_X$
gauge-Higgs unification model compactified
on a warped background as an example.
This model is favorable because it contains the bulk
custodial symmetry, and thus a KK scale of a few TeV
can be consistent with the electroweak precision measurements.
We have evaluated the relic abundance of the dark matter particle
and identified the parameter region of the model
to be consistent with the observed dark matter relic density.
We have found two allowed regions:
One is a quite narrow region where the right dark matter relic
density is achieved by the dark matter annihilation
through the Higgs boson resonance,
so that the dark matter mass is close to half
of the Higgs boson mass.
In the other region, the dark matter annihilation process
is efficient and the dark matter particle with a few TeV mass
is consistent with the observations.
Furthermore, we have calculated the cross section
of the elastic scattering between the dark matter particle
and nucleon and shown the implication of the gauge-Higgs
dark matter scenario for the current and future direct
dark matter detection experiments.
It turns out that the region with a few TeV dark matter mass
is partly excluded by the current experiments
and the whole region can be explored by future experiments.
On the other hand, most of the narrow region is outside the
experimental reach.
\vspace{1cm}
\leftline{\bf Acknowledgments}
This work is supported in part by a Grant-in-Aid for Science Research from the
Ministry of Education, Culture, Sports, Science and Technology, Japan
(Nos.~16540258 and 17740146 for N.~H., No.~21740174 for S.~M. and
No.~18740170 for N.~O.),
and by the Japan Society for the Promotion of Science (T.Y.).
\section{Introduction}
As possible hosts of the Square Kilometre Array (SKA), South Africa
and Australia are building SKA Precursor arrays: MeerKAT and ASKAP,
respectively. The two telescopes will complement each other well:
ASKAP will have a wider field of view but a smaller frequency range
and lower sensitivity, while MeerKAT will be more sensitive, have a
larger frequency range, but with a smaller field of view. MeerKAT
will have additional shorter and longer baselines, giving it enhanced
surface brightness sensitivity as well as astrometric capability. It
is also envisaged that MeerKAT will have the capability of phasing-up
array elements and will, from time to time, participate in
the European, Australian and global VLBI networks. This document gives a
short overview of the expected scientific capabilities as well as the
technical specifications of the MeerKAT telescope and invites the
community to submit Large Project proposals that take
advantage of the unique capabilities of the instrument. In this
document we give a short description of the MeerKAT array in
Sect.~2. A brief summary of MeerKAT key science is given in Sect.~3.
More detailed technical specifications and the array configuration are
given in Sect.~4, with the proposal format and policies listed in
Sect.~5. A summary is given in Sect.~6.
\section{MeerKAT}
The Karoo Array Telescope MeerKAT will be the most sensitive
centimetre wavelength instrument in the Southern Hemisphere; it will
provide high-dynamic range and high-fidelity imaging over almost an
order of magnitude in resolution ($\sim 1$ arcsec to $\sim 1$ arcmin at 1420
MHz). The array will be optimized for deep and high fidelity imaging
of extended low-brightness emission, the detection of micro-Jansky
radio sources, the measurement of polarization, and the monitoring of
radio transient sources. It will be ideal for extragalactic HI
science, with the possibility of detecting extremely low column
density gas, but high resolution observations of individual galaxies
are also possible. Its sensitivity, combined with excellent
polarisation purity, will also make it well suited for studies of
magnetic fields and their evolution, while its time domain capability
will be ideal for studying transient events. Planned high frequency
capabilities will give access to Galactic Centre pulsars, and make
possible measurements of CO in the early Universe at redshifts $z\sim
7$ or more.
MeerKAT is being built in the Karoo, a part of South Africa's Northern
Cape region which has a particularly low population density. Part of
the Northern Cape, through an Act of Parliament, is being declared a
Radio Astronomy Reserve. The approximate geographical coordinates of
the array are longitude 21$^{\circ}$23$'$E and latitude
30$^{\circ}$42$'$S. MeerKAT will be an array of 80 antennas of 12 m
diameter, mostly in a compact two-dimensional configuration with 70\%
of the dishes within a diameter of 1 km and the rest in a more
extended two-dimensional distribution out to baselines of 8 km. An
additional seven antennas will be placed further out, giving E--W
baselines out to about 60 km. These will give a sub-arcsecond
astrometric capability for position measurements of detected sources
and enable their cross-identification with other instruments. The
extra resolution will also drive down the confusion limit for
surveys. Finally, it will be possible to phase the central core as a
single dish for VLBI observations with the European and Australian
networks. The initial frequency range of the instrument, in 2013,
will be from 900 MHz to approximately 1.75 GHz. This will be extended
with an 8--14.5 GHz high frequency mode in 2014. The lower frequency range
will be further extended to 580 MHz--2.5 GHz in 2016.
\section{Science with MeerKAT}
We envisage a range of scientific projects for which MeerKAT will have
unique capabilities. These include extremely sensitive studies of
neutral hydrogen in emission --- possibly out to $z = 1.4$ using
stacking and gravitational lens amplification --- and highly
sensitive continuum surveys to $\mu$Jy levels, at frequencies as low
as 580~MHz. The good polarisation properties will also enable
sensitive studies of magnetic fields and Faraday rotation to be
conducted. MeerKAT will be capable of sensitive measurements of
pulsars and transient sources. The high frequency capability will
facilitate such measurements even towards the centre of the
Galaxy. MeerKAT will be sensitive enough to conduct molecular line
surveys over a wide frequency range: not only will Galactic Surveys of
hydroxyl and methanol masers be possible, but at longer wavelengths
(pre-biotic) molecules can also be detected. At the highest
frequencies, CO at $z > 7$ may be detectable in its $J=1-0$ ground
state transition.
Many of the applications of the Precursor instruments are driven by
the SKA scientific programme, which has been described in a special
volume of ``New Astronomy Reviews'' (vol.\ 48, 2004) as well as in the
description of the SKA Precursor ASKAP science in volume 22 of
``Experimental Astronomy'' (2008). We do not intend to repeat the full
scientific motivation here, but present a brief outline of the
particular scientific programmes in which we believe MeerKAT will
excel, and which we hope will excite collaborations from among
astronomers world-wide. The location and science goals of MeerKAT lend
themselves to intensive collaborations and joint projects with the
many facilities at other wavelengths available in the southern
hemisphere. Combinations with large mm-arrays like ALMA, but also
SALT, VISTA, VST, APEX, VLT and Gemini South, to name but a few of the
many instruments available, should prove to be fruitful.
\subsection{Low frequency bands (580 MHz - 2.5 GHz)}
\subsubsection{Extragalactic HI science and the evolution of galaxies}
Deep HI observations are a prime science objective for MeerKAT. In the
general SKA Precursor environment, initial indications are that
MeerKAT will be the pre-eminent southern hemisphere HI observation
facility for regions $\sim 10$ deg$^2$ or less and for individually
significant HI detections out $z \sim 0.4$. For surveys of $\sim 30$
deg$^2$ or more, ASKAP will likely be the instrument of choice. Where
exactly the ideal balance point lies between these facilities will
continue to evolve as our understanding of both telescopes and their
survey capabilities improve. Together, these facilities offer the
opportunity to create a comprehensive tiered HI program covering all
epochs to redshift unity and beyond.
\paragraph{Deep HI surveys}
The formation of stars and galaxies since the epoch of re-ionisation
is one of today's fundamental astrophysical problems. Determining the
evolution of the baryons and the dark matter therefore forms one of
the basic motivations for the SKA and MeerKAT. A one-year deep HI
survey with MeerKAT would give direct detections of HI in emission out
to $z \sim 0.4$, and using the stacking technique and gravitational
lensing would enable statistical measurements of the total amount of
HI out to even higher redshifts up to $z \sim 1.4$. The advantage of
the stacking technique is that high signal-to-noise detections of
individual galaxies are not necessarily required. Using previously
obtained (optical and near-IR) redshifts, one can shift even very low
signal-to-noise spectra (which would not on their own constitute a
reasonable detection) such that all the spectral lines fall into a
common channel and then stack the spectra to produce an average
spectrum. Since spectroscopic redshifts are required, the HI survey
will need to overlap with an existing or near-future redshift survey
field. A further sensitivity enhancement involving gravitational lens
amplification may be exploited in appropriate fields.
\paragraph{Studies of the Low Column Density Universe}
Galaxies are believed to be embedded in a ``cosmic web'', a
three-dimensional large scale structure of filaments containing the
galaxy groups and clusters. It is now reasonably certain that most of
the baryons do not, in fact, reside in galaxies, but are found outside
galaxies spread along this ``web''. The material is, however, tenuous
and the neutral fraction is small. It has possibly been seen in a few
lines of sight as absorption features against background sources but a
direct detection of the cosmic web would significantly improve our
understanding of the baryon content of the universe. The cosmic web
may be the source of the HI seen around galaxies taking part in the
so-called cold accretion process. The material is expected to have
column densities around $10^{17-18}$ cm$^{-2}$. Surveys for this low
column density HI would likely be conducted by targeting a number of
nearby galaxies. Assuming a 20 km s$^{-1}$ channel spacing (the
expected FWHM line-width of an HI line), one would need to integrate
with MeerKAT for about 150 hours for a $5\sigma$ detection of a
$10^{18}$ cm$^{-2}$ signal at a resolution of $\sim 90''$. Assuming only
night-time observing, this means that a direct detection of the low
column density gas around galaxies can be done for a different galaxy
every two weeks, thus rapidly enabling comparisons of morphology and
properties of the low column density gas for a wide range in Hubble
type. Depending on the flexibility of the correlator and the presence
of background sources these observations could also be used to probe
the low column density universe at higher redshifts using HI
absorption.
\paragraph{A high-resolution survey of the HI distribution in 1000 nearby galaxies}
Detailed, high-resolution (sub-kpc) observations of the interstellar
medium in nearby galaxies are crucial for understanding the internal
dynamics of galaxies as well as the conversion from gas into
stars. Recent high-resolution HI surveys (such as The HI Nearby Galaxy
Survey THINGS performed at the VLA) clearly showed the power of
obtaining detailed 21-cm observations and combining them with
multi-wavelength (particularly infrared and UV) data to probe galaxy
evolution and physical processes in the interstellar medium. A more
extensive sensitive high-resolution survey in the southern hemisphere
will provide important data on star formation and dark matter in a
large range of galaxy types in a wide range of environments. A single
8-hour observation with MeerKAT rivals the THINGS VLA observations in
terms of resolution and column density sensitivity. This is
particularly relevant with the advent of sensitive surveys and
observations of the molecular and dust component of the ISM by
Herschel and, in the future, ALMA. These combined studies will provide
the local calibration point against which higher redshift studies can
be gauged. The presence of major optical, IR and sub-mm telescopes in
the southern hemisphere make such a multi-wavelength approach
desirable.
\paragraph{An HI absorption line survey (and OH mega-masers)}
Most HI absorption measurements have been made at optical wavelengths
in damped Lyman-$\alpha$ systems. Such systems are prone to biases, as
from the ground it is only possible to observe the line red-shifted to
$z \simeq 1.7$. Furthermore, dust obscuration probably causes the
observations to be biased against systems with a high
metallicity. Such biases are not a problem for the HI line. As radio
continuum sources span a large range of redshift, MeerKAT observations
should detect absorption over the low frequency band to $z = 1.4$. The
VLBI capability of the array should enable high-resolution follow up
with either the EVN or the Australian array, depending on declination
and redshift. A judicious choice of frequency bands for the HI
absorption line survey will also pick up narrow band emission from
hydroxyl, OH. The extragalactic OH emission, especially at 1667 MHz,
will delineate mega-masers, maser emission associated mainly with
interacting or starburst galaxies, some of which will show
polarisation (Zeeman-)patterns from which line of sight magnetic
fields may be inferred.
\subsubsection{Continuum measurements}
\paragraph{Ultra-deep, narrow-field continuum surveys with full polarisation measurements}
The MeerKAT-ASKAP complementarity discussed in relation to deep HI
surveys applies also to surveys in radio continuum and
polarisation. ASKAP will survey the entire southern sky to an rms
noise limit of 50 $\mu$Jy per beam in 1 year of observing time, while
we envisage that MeerKAT will, in the first instance, make a number of
deep pointings in fields that are already being studied at other
wavelengths (e.g., Herschel ATLAS, Herschel HerMES, SXDS, GOODS and
COSMOS). Within the 1 deg$^{2}$ MeerKAT field at 1400 MHz, a
conservative $(5\sigma)$ estimate of the sensitivity is 7 $\mu$Jy per
beam in 24 hours with 500 MHz bandwidth. This scales to 0.7 $\mu$Jy in
100 days. Dealing with confusion at this level will require judicious
use of the long baselines (using the 60 km E--W spur). This exciting
work will study radio-galaxy evolution, the AGN-starburst galaxy
populations and their relationship, perhaps through AGN feedback, in
unprecedented detail, so addressing the evolution of black holes with
cosmic time. It may even reveal a new population of radio sources and
address the enigmatic far-IR-radio correlation at high redshifts.
\paragraph{Magnetic Fields}
Polarisation studies will, in the first instance, use the full low
frequency band in determining rotation measures for the brighter
fraction of sources. The intra-cluster medium has been shown to be
magnetised and it would seem that magnetic fields play a critical role
in the formation and evolution of clusters. A rotation measure survey
of several clusters would be feasible with MeerKAT through
observations, in several low frequency bands, of sources within and
behind the cluster.
\paragraph{Galactic studies and the Magellanic Clouds}
As well as important extragalactic science, we envisage much interest
in Galactic and Magellanic surveys with MeerKAT, both in HI, for
measurements of dynamics, together with measurements of the Zeeman
effect and determinations of the line of sight component of the
magnetic field. Similar measurements may be made in the 4 ground state
OH lines. There are relatively few measurements of all 4 ground state
OH lines. MeerKAT's wide band and high spectral resolution will
enable such measurements, which give valuable information on the
excitation temperature and the column density of the cold gas
component. Very few known interstellar molecules have lines in bands
below L-band. An interesting exception is methanol, whose transition at
830 MHz was the first to be measured and used to identify the
species. While a Galactic survey in this line would be interesting, it
will also be exciting to perform a census of low frequency transitions
in directions towards the Galactic Centre and Sgr B. Molecules with
transitions at the lower frequencies tend to be larger and could be
important as pre-biological molecules, with potential relevance
for the origin of life.
\subsection{High frequency science (8 - 14.5 GHz)}
\subsubsection{Pulsars and transients}
The high-frequency capability of MeerKAT will be particularly useful
for studies of the inner Galaxy. We expect the population of pulsars
in the Galactic Centre to be large, but it is obscured by
interstellar scattering, which cannot be removed by instrumental
means. By observing at sufficiently high frequencies ($\sim$ 10 GHz or
higher), the pulsar population can be revealed. A number of these
pulsars will be orbiting the central supermassive black hole. The
orbital motion of these pulsars will be affected by the spin and
quadrupole moment of the black hole. By measuring the effects of
classical and relativistic spin-orbit coupling on the pulsar's orbital
motion in terms of precession, traced with pulsar timing, we can test
the cosmic censorship conjecture and the no-hair theorem. The
technical requirements in terms of data acquisition and software
involved in this application are, however, demanding, and a large
community effort will be needed to succeed in this challenging
but exciting science goal.
\subsubsection{High-$z$ CO}
While HI emission has been difficult to detect at even moderate
redshifts, CO has been detected with the VLA in the $J = 3-2$
rotational transition at $z = 6.4$. The new large millimetre array,
ALMA, will detect higher CO rotational transitions at $z > 6$, and it
may be instructive to measure the ground state rotational transition
for comparison. The new EVLA will open up CO$(1-0)$ surveying,
particularly in the northern sky. In the southern sky, MeerKAT will be
its counterpart, but with a larger field of view and sky
coverage. MeerKAT at 14.5 GHz will facilitate the detection of CO$(1-0)$
emission at $z > 6.7$, and the ground state transition of HCO$^+$ at
$z > 4.9$. It will be important to exploit such commensality with
ALMA, and to compare the atomic and molecular content of galaxies as a
function of redshift, since recent studies show that the molecular
hydrogen proportion may increase with redshift.
\subsection{VLBI science}
The availability of the phased central MeerKAT antennas, as the
equivalent of an $\sim 85$ m diameter single-dish antenna, will have a
profound effect on the highest resolution measurements, made with
VLBI. A phased MeerKAT will both increase the $uv$-coverage available
in the South, and provide great sensitivity on the longest baselines,
where visibility amplitude is often low as sources are becoming
resolved. This and the recently demonstrated e-VLBI capacity with the
Hartebeesthoek Radio Telescope will create great demand for the phased
MeerKAT in the VLBI networks. A particular application of
significance with the European VLBI network will be wide field imaging
VLBI of sources in Deep Fields. High sensitivity VLBI studies of the
Hubble Deep Field revealed $\mu$Jy sources, many of which were
starburst galaxies. In one case a radio-loud AGN was detected in a
dust obscured, $z = 4.4$ starburst system, suggesting that at least
some fraction of the optically faint radio source population harbour
hidden AGN.
Wide-field imaging developments with the EVN and MeerKAT will not only
produce more interesting fine detail on galaxies in the early
universe, they will also be a test bed for the SKA. Furthermore, the
presence of the HESS high-energy telescope and its successor in close
proximity to South Africa will enhance the importance of the southern
VLBI arrays for studying the radio component of high-energy gamma ray
sources.
Another field of interest for VLBI arrays including MeerKAT is that of
(narrow band) masers. Trigonometric parallaxes of maser spots are
refining distance measurements in the Milky Way and improving our
knowledge of the structure and dynamics of the Galaxy. Both hydroxyl
(1.6 GHz) and methanol (12 GHz lines) still reveal new and interesting
properties, like alignments and discs, in regions of star
formation. Studies of OH masers as well as the radio continuum in
starburst galaxies like Arp 220 are revealing strings of supernovae
and strange point sources whose spectra have a high frequency
turnover, and might even be indicative of `hypernovae'.
\subsection{Other Science}
We have described some of the exciting science that will be done with
MeerKAT, but the new instrument will have the potential to do much
more. All-sky surveys at 600 MHz and 8 GHz are possible, as well as
Galactic polarisation measurements and deep studies of magnetic
fields, and science requiring high brightness sensitivity at high
frequency (e.g., of the Sunyaev-Zel'dovich effect). There are
possibilities for pulsar surveys and much more. While initially
MeerKAT science will focus on the key science areas described here, we
also welcome inventive proposals beyond the ones suggested in this
document that make use of unique scientific capabilities of MeerKAT.
\section{MeerKAT: specifications and configuration}
MeerKAT will consist of 80 dishes of 12 m each, and it will be capable
of high-resolution and high fidelity imaging over a wide range in
frequency. The minimum baseline will be 20~m, the maximum 8 km. An
additional spur of 7 dishes will be added later to provide longer
(8--60 km) baselines. It is intended that the final array will have 2
frequency ranges: 0.58--2.5 GHz and 8--14.5 GHz, with the full frequency
range gradually phased in during the first years of the array.
MeerKAT commissioning will take place in 2012 with the array coming
online for science operations in 2013. Table 1 summarizes the final
MeerKAT specifications. Table 2 gives an overview of the various
phases of the MeerKAT construction and commissioning leading up to
these final specifications.
MeerKAT will be preceded by a smaller prototype array of seven
antennas, called KAT-7. The commissioning of this science and
engineering prototype will start in 2010, with test science
observations expected later that year. KAT-7 will be used as a test
bed for MeerKAT, as well as for the data reduction pipelines etc. and
is more limited in its science scope, with smaller frequency coverage
(1.2-1.95 GHz), and longest and shortest baselines of 200m and 20m
respectively, as also indicated in Table 2.
The \emph{maximum} processed bandwidth on MeerKAT will initially be
850 MHz per polarization. This will gradually be increased to 4 GHz.
There will be a fixed number of channels, initially 16384, though this
may be increased if demand requires. By choosing smaller processed
bandwidths (down to 8 MHz) the velocity resolution may be increased.
Note that not all combinations of specifications will be realized. For
example, fully correlating the long 60 km baselines with the full
array at the minimum sample time with the maximum number of channels
will result in prohibitive data rates.
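As a rough illustration of this point, the raw visibility rate in the
extreme corner of the specifications can be estimated as follows; the
8-byte complex visibility size is an assumption.
\begin{verbatim}
# Back-of-the-envelope visibility data rate for the full array.
n_ant, n_chan, n_pol = 87, 16384, 4
t_sample = 1e-4                     # minimum sample time [s]
bytes_per_vis = 8                   # complex, 2 x float32 (assumed)

n_baselines = n_ant * (n_ant - 1) // 2
rate = n_baselines * n_chan * n_pol * bytes_per_vis / t_sample
print(f"{rate / 1e12:.0f} TB/s")    # ~20 TB/s, clearly prohibitive
\end{verbatim}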
\begin{table}
\begin{center}
\caption{MeerKAT final system properties}
\begin{tabular*}{0.76\textwidth}{@{\extracolsep{\fill}}|l|l|}
\hline
Number of dishes$^a$ & 80 (central array)\\
& + 7 (spur)\\
Dish diameter & 12 m \\
Aperture efficiency & 0.7\\
System temperature & 30 K\\
Low frequency range$^a$ & 0.58--2.5 GHz\\
High frequency range$^a$ & 8--14.5 GHz\\
Field of view & 1 deg$^2$ at 1.4 GHz \\
& 6 deg$^2$ at 580 MHz \\
& 0.5 deg$^2$ at 2 GHz \\
$A_e/T_{\rm sys}$ & 200 m$^2$/K\\
Continuum imaging dynamic range$^b$ & 1:10$^5$\\
Spectral dynamic range$^b$ & 1:10$^5$\\
Instrumental linear & \\
\ \ polarisation purity & $-25$ dB across field\\
Minimum and maximum bandwidth & \\
\ \ per polarization$^a$ & 8 MHz--4 GHz\\
Number of channels & 16384\\
Minimum sample time & 0.1 ms\\
Minimum baseline & 20 m\\
Maximum baseline & 8 km (without spur)\\
& 60 km (with spur)\\
\hline
\end{tabular*}\\
\vspace{2pt}
\emph{Notes:} $a$: \emph{Final values. See Table 2 for roll-out schedule.}\\ $b$: \emph{Dynamic range defined as rms/maximum.}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{MeerKAT Phasing Schedule}
\begin{tabular}{|l|r|r|r|r|r|}
\hline
& KAT-7 & Phase 1 & Phase 2 & Phase 3 & Phase 4\\
& 2010 & 2013 & 2014 & 2015 & 2016 \\
\hline
Number of dishes & 7 & 80 & 80 & 87 & 87 \\
Low freq.\ range (GHz) & 1.2--1.95 & 0.9--1.75 & 0.9--1.75 & 0.9--1.75 & 0.58--2.5\\
High freq.\ range (GHz) & --- & --- & 8--14.5 & 8--14.5 & 8--14.5 \\
Maximum processed & & & & & \\
\ \ bandwidth (GHz) & 0.256 & 0.850 & 2 & 2 & 4 \\
Min.\ baseline (m) & 20 & 20 & 20 & 20 & 20 \\
Max.\ baseline (km) & 0.2 & 8 & 8 & 60 & 60 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Configuration}
The MeerKAT array will be constructed in multiple phases (see Table 2). The first
phase will consist of 80 dishes distributed over two components.
\begin{itemize}
\item 1. A dense inner component containing 70\% of the dishes. These are
distributed in a two-dimensional fashion with a Gaussian $uv$-distribution with a
dispersion of 300 m, a shortest baseline of 20 m and a longest baseline of 1 km.
\item 2. An outer component containing 30\% of the dishes. These are also
distributed resulting in a two-dimensional Gaussian $uv$-distribution with a
dispersion of 2500 m and a longest baseline of 8 km.
\end{itemize}
This will be followed by a second phase which will involve the
addition of a number of longer baselines.
\begin{itemize}
\item 3. A spur of an additional 7 antennas will be distributed along
the road from the MeerKAT site to the Klerefontein support base,
approximately 90 km SE from the site. This will result in E--W
baselines of up to 60 km. The positions of these antennas will be
chosen to optimize the high-resolution performance of the array to
enable deep continuum imaging and source localisation.
\end{itemize}
Figure 1 shows a concept configuration of components 1 and 2 listed
above. Positions of individual antennas may still change pending
completion of geological measurements, but will remain consistent with
the concept of a 70/30 division between a 1 km maximum baseline core
and an 8 km maximum baseline outer component. X and Y positions of the
antennas with respect to the centre of the array are given in the
Appendix. Representative $uv$-distributions for observations of
different duration towards a declination of $-30^{\circ}$ are given in
Fig.\ 2. A histogram of the total baseline distribution for an 8h
observation towards $-30^{\circ}$ is given in Fig.~3.
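For readers who wish to experiment with alternative layouts, the
sketch below draws antenna positions from the two Gaussian components
described above. It ignores the terrain and minimum-spacing constraints
that shape the real layout, and uses the fact that a position
dispersion of $\sigma/\sqrt{2}$ yields a $uv$-dispersion of $\sigma$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def draw(n, uv_sigma_m, max_baseline_m):
    """n positions from a 2D Gaussian, capped to the maximum baseline."""
    pos = np.empty((0, 2))
    while len(pos) < n:
        xy = rng.normal(scale=uv_sigma_m / np.sqrt(2), size=(4 * n, 2))
        xy = xy[np.hypot(xy[:, 0], xy[:, 1]) < max_baseline_m / 2]
        pos = np.vstack([pos, xy])[:n]
    return pos

core  = draw(56, 300.0, 1000.0)   # 70% of 80 dishes, baselines <= 1 km
outer = draw(24, 2500.0, 8000.0)  # 30% of 80 dishes, baselines <= 8 km
ants  = np.vstack([core, outer])  # (80, 2) positions in metres
\end{verbatim}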
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figure1}
\caption{\emph{Overview of the MeerKAT configuration. The inner component contains 70\% of the dishes,
using a two-dimensional Gaussian uv-distribution with a dispersion of 300 m and a longest baseline
of 1 km. The outer component contains 30\% of the dishes, and is distributed as a two-dimensional
Gaussian uv-distribution with a dispersion of 2.5 km and a longest baseline of 8 km. The shortest
baseline is 20 m. The three circles have diameters of 1, 5 and 8 km. The inset on the right shows a
more detailed view of the inner core.}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{figure2a}
\includegraphics[width=0.45\textwidth]{figure2b}
\caption{\emph{Left panel: uv distribution of the MeerKAT array for observations towards declination $-30^{\circ}$,
with the observing time indicated in the sub-panels. Right panel: density of uv-samples for the
corresponding observations in the left panel.}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure3}
\caption{\emph{Histogram of the uv-distance for an 8h observation towards -30$^{\circ}$. The histogram numbers
assume 5 min sample integration times.}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure4}
\caption{\emph{Scale factor by which the naturally-weighted noise will be increased
when using tapering to obtain resolution shown on the horizontal axis,
assuming a frequency of 1420 MHz.}}
\end{figure}
\subsection{Sensitivity}
The multi-resolution configuration of the telescope means a taper or
similar kind of weighting of the $uv$-samples needs to be used to
produce a synthesized beam of the desired resolution. Tapering will
increase the noise with respect to natural weighting (as it reduces
the effective number of $uv$ samples). Figure 4 and Table 3 list the
correction factors to be applied to the expected noise for tapering to
a desired resolution with respect to the untapered, naturally-weighted
noise, assuming a frequency of 1420 MHz.
As a guideline, for an 8 hr spectral line observation, assuming a
channel width of 5 km s$^{-1}$ (23.5 kHz at 1420 MHz), a system
temperature of 30K at 1420 MHz, two polarizations, and a system
efficiency of 0.7, the expected untapered, naturally-weighted
5$\sigma$-noise level is 1.8 mJy/beam. Similarly, a 24 hour continuum
observation with a similar setup, but a 500 MHz bandwidth gives a
5$\sigma$ naturally weighted noise level of 7.2 $\mu$Jy. These
are naturally-weighted, untapered noises and need to be scaled with the
factors listed in Table 3 and Figure 4 for the desired resolution.
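These estimates follow from the radiometer equation with
${\rm SEFD} = 2k/(A_e/T_{\rm sys})$; the sketch below reproduces the
quoted values to within a few per cent (small differences reflect the
exact efficiency factors assumed).
\begin{verbatim}
import math

k_B = 1.380649e-23                  # Boltzmann constant [J/K]
sefd_jy = 2 * k_B / 200.0 / 1e-26   # ~13.8 Jy for A_e/T_sys = 200 m^2/K

def noise_jy(bandwidth_hz, t_sec, n_pol=2):
    """Naturally-weighted point-source noise [Jy]."""
    return sefd_jy / math.sqrt(n_pol * bandwidth_hz * t_sec)

# 8 h line observation, 23.5 kHz channel: 5 sigma ~ 1.8 mJy/beam
print(5 * noise_jy(23.5e3, 8 * 3600) * 1e3, "mJy")
# 24 h continuum observation, 500 MHz: 5 sigma ~ 7 microJy/beam
print(5 * noise_jy(500e6, 24 * 3600) * 1e6, "microJy")
\end{verbatim}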
\begin{table}
\begin{center}
\caption{Scale factors as shown in Fig.\ 4}
\begin{tabular}{|c|c|}
\hline
Beam size at & Scale-\\
1420 MHz & factor\\
(arcsec) & \\
\hline
6 & 2.7 \\
8 & 1.9 \\
10 &1.8\\
20 &1.8\\
40 &1.5\\
60 &1.6\\
80 &1.9\\
100 &2.2\\
\hline
\end{tabular}\\
\vspace{2pt}
\end{center}
\end{table}
\section{Call for Large Project proposals}
\subsection{Key Science Areas}
MeerKAT observing time will be allocated for Large Project proposals
and shorter PI proposals, with the intention that 75\% of telescope time
will be made available for the Large Projects during the first 5
years. The current call for proposals only applies to the Large
Projects. A separate call for short proposals will be made at a later
stage. As described in Section 2, the MeerKAT Large Projects are
envisaged to cover the following Key Science areas, although well-motivated
proposals in other appropriate fields are welcome:
\begin{itemize}
\item \emph{Neutral hydrogen}
\begin{itemize}
\item Deep emission and absorption studies out to redshift $z = 1.4$.
\item Deep observations of selected groups or galaxies and detection
of the cosmic web.
\item A high-resolution survey of a substantial number of galaxies
in the nearby universe, coordinated with surveys at other
wavelengths.
\item Targeted observations of selected regions of the Galaxy and
the Magellanic Clouds in HI and OH.
\end{itemize}
\item \emph{Deep continuum observations}
\begin{itemize}
\item Deep observations, with polarization in selected regions.
\item Observations of polarized emission in cluster magnetic fields with a goal
of understanding the role of magnetism in the Universe.
\end{itemize}
\item \emph{Pulsars and transients}
\begin{itemize}
\item High frequency pulsar searches and monitoring in regions towards the
Galactic centre.
\item Long duration ``staring'' observations with high time resolution with
transient detection as a goal.
\end{itemize}
\item \emph{Deep field searches for high-z CO and HCO$^+$}
\begin{itemize}
\item Searches for high-$z$ emission from the ground state ($J=1-0$)
CO at 14.5 GHz in selected regions, perhaps in concert with ALMA
observations.
\end{itemize}
\end{itemize}
\subsection{Large Project proposals}
We invite applications for Large Projects from the South African,
African and international astronomical community. A Large Project is
defined as a project with duration $\geq 1000$ hours, which addresses
one of the Key Science Areas listed above in a coherent manner,
utilizing the resolution and/or sensitivity strengths of MeerKAT (this
includes projects with a large complementarity with other SKA
Precursor or Pathfinder projects). Large Projects should aim to
achieve their goals in a manner not possible by a series of small
projects (for which a separate call will be issued at a later stage).
This call for proposals is open to South-African, African and
international research teams. We encourage including South African
collaborators because of their familiarity with the instrument and to
facilitate communication with the engineering and commissioning
teams. There will be a limited number of projects per Key Science area
and proposers are therefore encouraged to coordinate and collaborate
prior to submission. The final decision on allocations of observing
time will be through an international peer-review process and
successful teams will be invited to further develop their projects by
mid-2010.
Initially, the MeerKAT operations team will provide calibrated
$uv$-data; however, as experience with the array grows, development
of other data products will be undertaken by the MeerKAT science and
engineering team, in collaboration with the research teams.
Further data reduction pipelines, simulations and preparations will be
the responsibility of the research teams. We envisage grants being
available for team members to spend time in South Africa to become
familiar with the MeerKAT instrument and its software and for training
local students and postdocs who want to be part of the proposal. We
also plan on making available bursaries for postdocs and students
(preferably based in South Africa) to support the projects. We will
also consider supporting team meetings. Teams will have reasonable
access to advice and support from the MeerKAT team when operations
commence.
Teams are asked to specify in their proposals what their final and
intermediate data products and deliverables will be. We also request
they indicate how their project will make optimal use of the MeerKAT
roll-out phases as specified in Table~2. We encourage teams to
indicate their publication policy and to also indicate the
possibilities for early results. There will be an 18 month
proprietary period on any data, counted from when the observation was
made. Teams can apply to the MeerKAT director for an extension,
which will be given only if justified by the nature of the
project and if there is evidence of progress towards
publication of substantive results.
\subsection{Proposal format}
Large Project proposals must be in PDF format, with a minimum font
size of 11 pt and reasonable page margins. Proposals must contain the
following chapters:
\begin{enumerate}
\item Abstract (maximum of half a page).
\item A list of team members and their roles and affiliations (2 pages maximum).
\item Scientific case --- a comprehensive discussion of the astrophysical importance of
the work proposed (6 pages maximum).
\item Observational strategy --- the most effective way to perform the
proposed observations. Teams should indicate milestones and
intermediate results and data products (2 pages maximum).
\item Complementarity --- coordination with surveys at other
wavelengths or with other SKA Precursor or Pathfinder instruments (2 pages maximum).
\item Strategy for data storage and analysis (2 pages maximum).
\item Organisation --- specify team leaders, their roles and their
sub-groups, with a clear overview of how the team will divide the
identified tasks (2 pages maximum).
\item The publication strategy --- specify intermediate and final data releases
and availability (1 page maximum).
\item Team budget --- teams should indicate and justify budgetary
needs that cannot be met as part of established observing and data
analysis procedures commonly in use at other observatories and research
institutes (1 page maximum).
\item Requirements for special software --- teams are asked to indicate
special computing needs. These must be communicated to
the project as early as possible, and teams will be
expected to contribute as much of that specialized software as possible
to the MeerKAT observing system; that effort will be
considered positively when awarding time to the proposal (1 page maximum).
\item Public outreach and popularization efforts (1 page maximum).
\end{enumerate}
\subsection{Proposal submission}
The deadline for submission of proposals is {\bf March 15, 2010}.
Proposals must be submitted as a PDF file. Proposal submission procedures will
be available on the MeerKAT website {\tt http://www.ska.ac.za/meerkat} from about one
month before the deadline for submission.
An evaluation of proposals will be made in mid-2010 by an
international peer review committee and successful teams will be
invited to further develop their proposals.
For questions regarding MeerKAT technical specifications and proposal
preparation please contact Justin Jonas, {\tt [email protected]}.
\section{Concluding remarks}
With this short discussion of the MeerKAT scientific goals and
applications, we invite Large Proposals from teams
prepared to help design observational programmes, and to contribute
to developing simulations, advanced observing methods and analysis
techniques.
MeerKAT will be capable of very exciting science. It will be a major
pathfinder to the SKA, giving insights into many of the technical
challenges of the SKA, but also giving a glimpse of the new
fundamental studies that the SKA will facilitate.
\bigskip \emph{We acknowledge the help and stimulation given by
members of the MeerKAT International Scientific Advisory Group:
Bruce Bassett, Mike Garrett, Michael Kramer, Robert Laing, Scott
Ransom, Steve Rawlings and Lister Staveley-Smith. We also thank
Michael Bietenholz, Bradley Frank and Richard Strom for their help and
contributions.}
\newpage
\section*{Appendix}
The following list contains the X and Y coordinates of the 80 antennas
as shown in Fig.~1. Coordinates are in meters, with positive Y to the
north and positive X to the east. The center of the array is at
longitude 21$^{\circ}$23$'$E and latitude 30$^{\circ}$42$'$S. The
antenna positions are not final and may still change slightly pending
geophysical investigations, but simulations using this layout will
allow investigation of all aspects of the MeerKAT array.
\begin{table}[h!]
\tt
\footnotesize
\begin{tabular}{r r | r r}
X (m) & Y (m) & X (m) & Y(m)\\
176.061 & 170.880 & -130.841 & 61.884 \\
-66.969 & -606.976 & 82.985 & -388.329 \\
242.654 & 132.243 & -248.388 & -130.752 \\
164.949 & -57.655 & -127.192 & 88.410 \\
91.389 & 103.574 & 110.910 & -158.480 \\
99.429 & -334.219 & -79.387 & 138.975 \\
-18.404 & 193.350 & 16.065 & -162.254 \\
-16.364 & 104.879 & 44.682 & 346.417 \\
25.054 & 25.450 & -134.745 & -221.536 \\
522.297 & -193.134 & 205.048 & 412.908 \\
436.562 & -373.858 & 311.991 & -72.972 \\
192.254 & 276.835 & -5.728 & -289.321 \\
-416.962 & 258.853 & -45.094 & 234.652 \\
71.097 & 174.981 & 511.999 & 92.634 \\
198.639 & -27.787 & -178.644 & 96.323 \\
-396.094 & 18.153 & -449.125 & -100.602 \\
103.972 & -39.268 & 1808.727 & -1849.094 \\
-226.167 & 36.690 & -1269.759 & 4222.136 \\
-318.506 & 125.535 & 1830.618 & -872.945 \\
-46.269 & -2.061 & 1042.711 & 910.623 \\
163.962 & 3.157 & 643.621 & -524.147 \\
-389.063 & -302.250 & 554.371 & 2372.854 \\
-229.144 & 345.759 & -335.516 & -2204.955 \\
-136.392 & 110.671 & -311.006 & -578.926\\
-54.749 & 45.694 & -85.148 & -141.128 \\
-213.017 & 282.157 & 4355.955 & 798.831 \\
418.688 & -84.697 & 3220.445 & 2676.747 \\
143.813 & -63.878 & 2175.319 & -2864.864 \\
-282.214 & 72.204 & -3753.578 & -1390.820 \\
-10.417 & -244.618 & 388.291 & -1460.652 \\
194.556 & 83.973 & 1307.771 & 580.398 \\
-81.701 & 13.926 & -3190.450 & 400.925 \\
-429.376 & -280.397 & 732.985 & 736.581 \\
234.769 & -234.833 & -1510.254 & 122.862 \\
-445.419 & -177.570 & -2303.276 & -209.614 \\
-61.951 & 157.988 & -766.562 & 474.935 \\
380.132 & 165.606 & 1027.818 & 200.245 \\
94.251 & 66.542 & -3373.788 & 2233.961 \\
-169.380 & -136.695 & -1208.553 & -3073.213 \\
169.497 & 242.840 & -980.743 & -560.740 \\
\end{tabular}
\end{table}
\end{document}
\section{Introduction}
Astrophysical studies can gain significantly by associating data from
different wavelength ranges of the electromagnetic spectrum.
Dedicated multi-wavelength surveys have been a strong focus of
observational astronomy in recent years, e.g. AEGIS
\citep{Davis_2007}, COSMOS \citep{Scoville_2007}, or GOODS
\citep{Dickinson_2003}. At redshifts lower than those probed by these
surveys, several surveys of NASA's Galaxy Evolution Explorer
\citep[GALEX;][]{Martin_2005} essentially provide the perfect
ultraviolet counterparts of the Sloan Digital Sky Survey
\citep[SDSS;][]{York_2000} optical data sets. These surveys, and the
combination of their datasets, provide invaluable insights
into the properties of stars and galaxies.
Naturally, these data are taken by the different detectors of separate
projects, so their information must be combined by associating the
independent detections.
Recent work by \citet{Budavari_2008} laid down the statistical
foundation of the cross-identification problem.
Their probabilistic approach assigns an objective Bayesian evidence
and subsequently a posterior probability to each potential
association, and can even consider physical information, such as
priors on the spectral energy distribution or redshift, in addition to
the positions on celestial sphere.
In this paper, we put the Bayesian formalism to work, and aim to
assess the benefit of using posterior probabilities over simple angular
separation cuts using mock catalogs of GALEX and SDSS.
In Section~\ref{sec_simulations}, we present a general procedure to
build mock catalogs that take into account source confusion and
selection functions.
Section~\ref{sec_xmatch} provides the details of
the cross-identification strategy, and defines the relevant
quality measures of the associations based on
angular separation and posterior probability.
In Section~\ref{sec_results}, we present the results for the GALEX-SDSS
cross-identification, and propose a set of criteria to build
reliable combined catalogs.
\section{Simulations}\label{sec_simulations}
The goal is to mimic as closely as possible the process of observation
and the creation of source lists.
First, a mock catalog of artificial objects is generated with known
clustering properties, using the method of \citet{Pons-Borderia_1999}.
We then complement this by adding observational effects that are not
included in this method. We generate simulated detections as
observations of the artificial objects with given astrometric
accuracy and selections. Hence the separate sets of simulated
detections, say for GALEX and SDSS, differ not only in the
positions, but also in being different subsets of the mock objects.
\subsection{The Mock Catalog}
We built the mock catalog as a combination of clustered sources (for
galaxies) and sources with a random distribution (for stars). To
simulate clustered sources, we generate a realization of a Cox point
process, following the method described by
\citet{Pons-Borderia_1999}. This point process has a known correlation
function which is similar to that observed for galaxies. We create
such a process within a cone of 1 Gpc depth; assuming the notation of
\citet{Pons-Borderia_1999}, we used $\lambda_s = 0.1$ and $l =
1h^{-1}$Mpc for the Cox process parameters. For our purpose, it is
sufficient that the distribution on the sky (i.e., the angular
correlation function) of the mock galaxies displays clustering up to
scales equal to the search radius used for the cross-identification
(5\arcsec~here) and that this distribution is similar to the actual
one. Figure \ref{fig_wtheta} shows the angular correlation function of
our mock galaxy sample (filled squares) along with the measurement
obtained by \citet{Connolly_2002} from SDSS galaxies with
$18<r^{\star}<22$. Note that the galaxy clustering is not well known
at small scales ($\theta < 10\arcsec$) because of the combination of
seeing, point spread function, etc. Hence there is no constraint in
this regime. There is nevertheless a good overall agreement between our
mock catalog and the observations at scales between 10 and 30\arcsec.
\begin{figure}[t]
\plotone{f1.eps}
\caption{\small Angular correlation function of mock galaxies
(filled squares) compared to the angular correlation function of
SDSS galaxies selected with $18<r^{\star}<22$, from
\citet{Connolly_2002} (filled circles).}
\label{fig_wtheta}
\end{figure}
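A simplified, two-dimensional sketch of such a segment Cox process is
given below; the actual mock catalog is generated in a
three-dimensional cone and projected onto the sky, and the parameters
here are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
box = 100.0        # box side [arbitrary units]
l = 1.0            # segment length (cf. l = 1/h Mpc in 3D)
n_seg = 5000       # number of segments (sets lambda_s)
mean_pts = 10      # mean number of points per segment

centres = rng.uniform(0, box, size=(n_seg, 2))
angles = rng.uniform(0, np.pi, size=n_seg)
points = []
for c, a in zip(centres, angles):
    t = rng.uniform(-l / 2, l / 2, size=rng.poisson(mean_pts))
    points.append(c + np.outer(t, [np.cos(a), np.sin(a)]))
points = np.vstack(points)   # clustered point set
\end{verbatim}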
In the case of both GALEX and SDSS, galaxies and stars show on average
similar densities over the sky. We create a mock catalog over 100
sqdeg with a total of 10$^7$ sources, half clustered and half
random. The minimum Galactic latitude at which this mock catalog is
representative is around 25$^{\circ}$. For this case study we do not
consider the variation of star density with Galactic latitude; we note
that several mock catalogs can be constructed with different star
densities, and prior probabilities (see sect. \ref{sec_xmatch})
varying accordingly.
\subsection{Simulated Detections}\label{sec_detections}
From our mock catalog we create two sets of simulated detections,
using the approximate astrometry errors of the surveys we consider. We
assume that the errors are Gaussian, and create two detections for
each mock object: a mock SDSS detection with $\sigma_{S}$, and a mock
GALEX one with $\sigma_{G}$. We consider constant errors for SDSS, and
variable errors for GALEX. For GALEX we focus here on the case of the
Medium Imaging Survey (MIS); we will consider two selections: all MIS
objects, or MIS objects with signal-to-noise ratio (S/N) larger than
3. We randomly assign to the mock sources errors from objects of the
GALEX datasets following the relevant selections and using the
position error in the NUV band (\texttt{nuv\_poserr}). The
distributions of these errors are shown on figure \ref{fig_sigmas}. In
the case of GALEX, the position errors are defined as the combination
of the Poisson error and the field error, added in quadrature. The
latter is assumed to be constant over the field (and equal to
0.42\arcsec~in NUV). For SDSS we assume that $\sigma_{S} = 0.1\arcsec$
for all objects. Our results are unchanged if we use variable SDSS
errors for our SDSS mock detections, as the SDSS position errors are
significantly smaller than the GALEX ones.
\begin{figure}[t]
\plotone{f2.eps}
\caption{\small Distribution of astrometry errors for simulated
detections. The solid line shows errors on NUV detections for the
selection of all GALEX MIS objects, and the dotted line for the MIS
objects with S/N $>3$. These distributions are normalized by their
integrals.}
\label{fig_sigmas}
\end{figure}
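Creating the two detection sets then amounts to scattering each mock
position with the appropriate per-object Gaussian error, as in the
sketch below; the lognormal draw is only a stand-in for the empirical
\texttt{nuv\_poserr} distribution of Fig.~\ref{fig_sigmas}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n_obj = 100000
ra = rng.uniform(0.0, 10.0, n_obj)          # degrees
dec = rng.uniform(-5.0, 5.0, n_obj)         # degrees

sigma_s = np.full(n_obj, 0.1 / 3600.0)      # SDSS errors [deg]
sigma_g = rng.lognormal(np.log(0.5), 0.4, n_obj) / 3600.0  # stand-in

def detect(ra, dec, sigma):
    """Scatter positions with per-object circular Gaussian errors."""
    return (ra + rng.normal(0.0, sigma) / np.cos(np.radians(dec)),
            dec + rng.normal(0.0, sigma))

sdss_ra, sdss_dec = detect(ra, dec, sigma_s)
galex_ra, galex_dec = detect(ra, dec, sigma_g)
\end{verbatim}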
\subsection{Selection function and confusion}\label{sec_merging}
Our goal is to use the mock catalog described above as a predictive
tool in order to assess the quality of the cross-identifications
between two datasets, here GALEX and SDSS. Hence our mock catalog has
to present similar properties than the data. In practice we need to
include two effects: the selection functions of both catalogs in order
to match the number density of the data, as well as the confusion of
detections caused by the combination of the seeing and point spread
functions.
To apply the selection function, we assign to each mock source a
random number $u$, drawn from a uniform distribution, which
represents the properties of the object. We use the values of $u$ to
select the simulated detections we further consider to study a given
case of cross-identification. The length of the interval in $u$ sets
the density for a given mock catalog. Using the notations of
\citet{Budavari_2008}, we computed the number of SDSS GR7 sources,
$N_{\rm{SDSS}}$ and GALEX GR5, $N_{\rm{}GALEX}$, and scaled them to
the area of our mock catalog. These numbers set the interval in $u$
for both detection sets. We then use the overlap between the intervals
in $u$ to set the density of common objects, as set by the prior
probability determined independently from the data (see sect.\
\ref{sec_xmatch}).
To simulate the confusion of the detections, we performed the
cross-identification of the SDSS and GALEX detection sets with
themselves, using search radii of 1.5\arcsec~and
5\arcsec~respectively. These values of search radius correspond to the
effective widths of the PSF in both surveys \citep{Stoughton_2002,
Morrissey_2007}\footnote{see also
\url{http://www.sdss.org/DR7/products/general/seeing.html}}. We then
consider only the detections that satisfy the selection function
criterion, and merge them. For SDSS, we keep one source chosen
randomly from the various identifications. For GALEX, we keep the
source with the largest position error.
This procedure is repeated for each cross-identification we consider,
as modifying the selection function naturally implies a change in the
number densities and priors.
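The following sketch illustrates the $u$-based selection with purely
illustrative interval fractions; in practice the interval lengths are
set by the observed $N_{\rm SDSS}$ and $N_{\rm GALEX}$, and the overlap
by the prior.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
u = rng.uniform(0.0, 1.0, 10_000_000)   # one mark per mock source

f_sdss, f_galex, f_common = 0.30, 0.10, 0.06   # illustrative fractions
in_sdss = u < f_sdss
in_galex = (u > f_sdss - f_common) & (u < f_sdss - f_common + f_galex)
print(in_sdss.sum(), in_galex.sum(), (in_sdss & in_galex).sum())
\end{verbatim}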
\section{Cross identification}\label{sec_xmatch}
We performed the cross-identification between the SDSS and GALEX
detection sets using a 5\arcsec~radius. For each association
\citep[see][]{Budavari_2008}, we compute the Bayes factor
\begin{equation}
B(\psi; \sigma_{S}, \sigma_{G}) = \frac{2}{\sigma^2_{S} +
\sigma^2_{G}}\exp\left[-\frac{\psi^2}{2(\sigma^2_{S} +
\sigma^2_{G})}\right]
\end{equation}
where $\psi$ is the angular separation between the two detections, and
is expressed here in radians, as $ \sigma_{S}$ and $\sigma_{G}$. We
also derive the posterior probability that the two detections are from
the same source
\begin{equation}\label{eq_posterior}
P = \left[1 + \frac{1-P_0}{B\,P_0} \right]^{-1}
\approx
\frac{B\,P_0}{1+B\,P_0}
\end{equation}
where $P_0$ is the prior probability, and the approximation is
for the usually small priors.
The Bayes factor, and hence the posterior probability depend on the
position errors from both surveys. As we use a constant prior $P_0$,
this implies that if all objects have the same position errors within
a survey, the posterior probability depends on the angular separation
only. In this case, there is no difference between using a criterion
based upon separation or probability.
We use the posterior probability rather than the Bayes factor as a
criterion. Under the assumption of a constant prior probability, the
posterior probability is a monotonic function of the Bayes factor.
However, while we consider here for our case study that the prior is
constant, in practice it may vary over the sky. Note also that, for
instance, two surveys with similar position error distributions can
have different priors, so that a criterion defined on the basis of the
Bayes factor for one survey cannot be applied directly to the other
one.
In order to set the overlap between our two detection sets as
described in sect. \ref{sec_merging} to match the selection functions
of the actual datasets, we need to compute the prior $P_0$ from the
data, using the actual cross-identification between GALEX GR5 and SDSS
DR7.
The prior is given by
\begin{equation}\label{eq_prior}
P_0 = \frac{N_{\star}}{N_{SDSS} N_{GALEX}}.
\end{equation}
$N_{\star}$ is the number of sources in the overlap between the
various selections (angular, radial, etc.) of the catalogs
considered for the cross identification, i.e. the number of sources in
the resulting catalog. We use the self-consistency argument discussed
by \citet{Budavari_2008}
\begin{equation}\label{eq_iter}
\sum P = N_{\star}
\end{equation}
to derive $P_0$. To choose the value of the prior, we use the
iterative process described in \citet{Budavari_2008}. We start the
process by setting $N_{\star} = \min(N_{SDSS},N_{GALEX})$. We then
compute the sum of the posterior probabilities derived using
eq. \ref{eq_posterior}. According to eq. \ref{eq_iter}, this sum gives
us a new value for $N_{\star}$. The same procedure is then repeated
using this updated value, yielding an updated value of the prior as
well. The chosen value for the prior is obtained after convergence;
we hereafter call this value the \textit{observed} prior.
Then we set the overlap between the two detection sets in our mock
catalog such that the prior value derived for the cross-identification
in the simulations matches the observed one. We use the same iterative
process as described above to determine $P_0$ in the
simulations. Figure \ref{fig_nstar_iter} shows this iteration process
starting from $N_{\star} = N_{\rm{}GALEX}$ for the case with all MIS
objects (filled circles) or MIS S/N $>$3 objects (open circles). The
procedure converges quickly in terms of the number of steps. Note also
that the query we use to compute the sum runs in roughly 1 second on
these simulations.
The benefit of the use of simulations is that, in this case, once we
set the overlap between the detection sets required to match the
observed prior, we know the input value of $N_{\star}$ (i.e. the
actual number of detections in the overlap between the two sets) and
hence we can derive the prior corresponding to this number directly
using eq. \ref{eq_prior}, which we call the \textit{true} prior. We
show this true prior on fig. \ref{fig_nstar_iter} as solid line for
the case of all MIS objects, and dashed line for MIS objects with S/N
$>$3. The true priors we are required to use in order to match the
data are slightly lower than the observed ones for both selections: 4\%
lower for all MIS objects and 2.5\% for MIS objects with S/N $>$3. In
other words, we need to use fewer objects in the overlap between our
detection sets than what we expect from the data.
A different prior value implies a change in the posterior probability;
however, the latter also depends on the value of the Bayes factor
$B$. Given the scaling of the relation between the posterior and prior
probabilities (eq. \ref{eq_posterior}), for low $B$ values ($B \ll
1$), a variation of 4\% in the prior yields a variation in posterior
probability of the same amount. For high $B$ values ($B \gg 1$), the
variation is about 0.5\%. Hence this difference between the true and
observed priors has a negligible impact on the values of the posterior
probabilities derived afterwards.
\begin{figure}[htbp]
\plotone{f3.eps}
\caption{\small Prior probability self-consistent estimation as a
function of iteration step. Filled circles show the iteration for
the case of all MIS objects, and open circles for MIS objects with
S/N $>$3. The solid (dashed) line shows the true prior for all MIS
objects (MIS objects with S/N $>$3).}
\label{fig_nstar_iter}
\end{figure}
To quantify the quality of the cross-identification, we define the
true positive rate, $T$, and the false positive contamination, $F$. We
can express these quantities as a function of the angular separation
of the association, or of the posterior probability. Let $n(x)$ be the
number of associations, where $x$ denotes separation or
probability. This number is the sum of the true and false positive
cross-identifications: $n(x) = n_T(x) + n_F(x)$. We define the true
positive rate and false positive contamination as a function of
angular separation as
\begin{eqnarray}
T(\psi) & = & \frac{\sum n_T(x<\psi)}{N_T}\\
F(\psi) & = & \frac{\sum n_F(x<\psi)}{\sum n(x<\psi)}
\end{eqnarray}
where $N_T$ is the total number of true associations.
Similar rates are defined as a function of the probability,
\begin{eqnarray}
T(P) & = & \frac{\sum n_T(x>P)}{N_T}\\
F(P) & = & \frac{\sum n_F(x>P)}{\sum n(x>P)}.
\end{eqnarray}
\begin{figure}[t
\plotone{f4.eps}
\caption{\small True positive rate (blue, solid
lines) and false positive contamination (red, dashed lines) as a
function of angular separation (left) and posterior probability
(right). GALEX position errors from the full MIS sample yield the
thick curves; the S/N $>3$ constraint yields the thin curves. We
also show the posterior probability thresholds defined as in
\citet{Budavari_2008} (vertical lines on right hand side plot).}
\label{fig_rates_sep_proba}
\end{figure}
We use the detection merging process to qualify the
cross-identifications as true or false. In our final mock catalog, a
detection represents a set of detections that have been merged. We
therefore count an association as a true cross-identification when the
two sets of merged detections have at least one detection in common.
Figure~\ref{fig_rates_sep_proba} represents the true positive rate and
the false contamination rate as a function of angular separation
(left) and posterior probability (right). These results suggest that
in the case of the SDSS--GALEX MIS cross-identification, a search
radius of 5\arcsec~is required to recover all the true
associations. In the case of all MIS objects, 90\% of the true matches
are recovered at 1.64\arcsec~with a 2.6\% contamination from false
positives. As expected, results are better using objects with high
signal-to-noise ratio (S/N $>$ 3), where 90\% of the true matches are
recovered at 1.15\arcsec~with a 1\% contamination. Turning to the
posterior probability, the trends are similar to the ones observed as
a function of separation. However, the false positive contamination
increases less rapidly with probability. For instance, a cut at
$P>0.89$ recovers 90\% of the true associations, with a slightly lower
contamination from false positives (2.3\%). We examine in detail the
benefits of using separation or probability as a criterion in
Section~\ref{sec_results}.
\section{Results}\label{sec_results}
\subsection{Performance analysis}\label{sec_roc}
\begin{figure}[t]
\plotone{f5.eps}
\caption{\small Cross identification diagnostic plot: 1-true
positive rate versus the false positive contamination. These
quantities are computed as a function of probability (blue, solid
lines) or separation (red, dashed lines). Thick lines show the
results for all GALEX MIS objects, and thin lines for GALEX MIS
objects with S/N $>3$. The dotted line represents the locus of
$1-T = F$.}
\label{fig_roc}
\end{figure}
Using the quantities defined above, we can build a diagnostic plot in
order to assess the overall quality of the cross-identification, and
define a criterion to select the objects to use in practice for
further analyses. We show on Fig.\ \ref{fig_roc} the true positive
rate against the false positive contamination, computed as a function
of probability or angular separation. For each of these parameters, we
can compare the false positive contamination obtained at a given true
positive rate threshold.
The results show that there are some differences between criteria
based on angular separation or posterior probability. Considering all
GALEX MIS objects (thick lines on fig. \ref{fig_roc}), for $1-T
>0.18$, the false contamination rate is slightly lower when using
angular separation as a criterion. This range of true positive rates
corresponds to angular separations smaller than 1.2\arcsec. As there
is a lower limit to the GALEX position errors, this translates into an
upper limit in terms of posterior probability at a given angular
separation. This in turn implies that the probability criterion does
not appear as efficient as separation for associations at small
angular distances in the SDSS-GALEX case.
At $1-T <0.18$, this trend reverses: considering a criterion based on
probability yields a lower false contamination rate.
We can characterize these diagnostic curves by the Bayes threshold,
where $1-T = F$, which minimizes the Bayes error. The location of this
threshold is represented on fig. \ref{fig_roc} by the intersection
between the diagnostic curves and the dotted line. Our results show
that this intersection occurs at a lower false positive contamination
rate when using the posterior probability as the criterion.
For all GALEX objects, the separation where $1-T = F$, $\psi_c$, is
equal to 2.307\arcsec~and the probability, $P_c$, is 0.613. Using the
angular separation as criterion, the Bayes error is then $P_e =
0.102$; using the posterior probability, $P_e = 0.091$. For GALEX
objects with S/N $>$3, $\psi_c = $1.882\arcsec, $P_c$ = 0.665; $P_e =
0.055$ using the angular separation and $P_e = 0.049$ using the
posterior probability.
These results show that a selection based on posterior probability
yields better results (i.e., a lower false contamination rate, and
lower Bayes error) than a selection based on angular separation.
\subsection{Associations}
\begin{figure}[t]
\plotone{f6.eps}
\caption{\small True positive rate (blue, thick lines) and false
contamination rate (red, thin lines) as a function of probability
for the one GALEX to one SDSS (solid lines), one GALEX to two SDSS
(dashed lines), one GALEX to many SDSS (dotted lines)
associations. The left panel shows these rates for all GALEX MIS
objects, and the right one for the GALEX MIS objects with S/N
$>$3. Note that the curves representing the one GALEX to many SDSS
associations can barely be seen as the values are too small.}
\label{fig_rate_proba_1tox}
\end{figure}
\begin{figure}[t]
\plotone{f7.eps}
\caption{\small 1-True positive rates as a function of the false
contamination rate for the one GALEX to one SDSS (thick lines) and
one GALEX to two SDSS (thin lines) associations. The rates are
computed as a function of probability (blue, solid lines) or
separation (red, dashed lines). The left panel show these rates
for all GALEX MIS objects, and the right one for the GALEX MIS
objects with S/N $>$3.}
\label{fig_roc_1tox}
\end{figure}
\input{tab1}
Beyond the confused objects, the cross-identification list contains
several types of associations, where a single detection in one catalog
is linked to possibly more than one detection in the other. We list
in table \ref{tab_xtox} the contingency table of the percentages of
these types in the mock catalog and, in brackets, for the SDSS DR7 to
GALEX GR5 cross-identifications.
The main contribution is from the one GALEX to one SDSS associations
(1G1S, 74\%), but there are also, among the most significant types,
cases of one GALEX to two SDSS (1G2S, 21\%) or one GALEX to many SDSS
(1GmS, 3\%). Compared with the data, our mock catalogs are slightly
pessimistic in the sense that the fraction of one-to-one matches is
lower than in the observations. However, these fractions match
reasonably well, which enables us to discuss these cases in the
context of our mock catalogs. We show on figure
\ref{fig_rate_proba_1tox} the true positive and false contamination
rates as a function of probability for the 1G1S (solid lines), 1G2S
(dashed lines), and 1GmS (dotted lines) associations. The 1G1S true
associations represent the bulk (up to 85\%) of the total
cross-identifications. There is also a significant fraction of true
associations within the 1G2S cases (up to nearly 13\%), while the
1GmS are around 1\%. For the 1G2S or 1GmS cases, we use two methods to
select one object among the various associations: the one
corresponding to the highest probability or the smallest
separation. We computed the true positive and false contamination
rates for these cases as a function of the quantity used for the
selection of the association. We compare the results from these two
methods on figure \ref{fig_roc_1tox}, which shows the diagnostic
curves for the 1G1S (thick lines), 1G2S (thin lines); we do not show
here the 1GmS as they represent only 1\% of the associations. The
diagnostic curves present the same trend as the global ones (see
Fig. \ref{fig_roc}): the posterior probability criterion yields a
lower false contamination rate than the angular separation criterion
above some true positive rate value (e.g., $1-T < 0.29$, for 1G1S
associations considering the cross-identification of all SDSS GALEX
objects).
This is, however, an artifact caused by the distribution of the GALEX
position errors (see sect. \ref{sec_roc}). For the 1G2S or 1GmS cases,
these results show that the true associations can be recovered by
selecting the association with maximal probability, with a low
contamination from false positives (up to around 1\%).
On Figs. \ref{fig_rate_proba_1tox} and \ref{fig_roc_1tox} we compare
the results from all GALEX MIS objects and GALEX MIS objects with S/N
$>$ 3. The quality of the cross-identifications is better for the
latter, for all types of associations.
\subsection{Alternative Error model}
\begin{figure}[t]
\epsscale{1.}
\plotone{f8.eps}
\caption{\small GALEX position error as a function of NUV
magnitude. Circles show the GALEX pipeline error, squares the
alternative errors (see text). The solid line shows the linear
model we use to modify the GALEX pipeline errors.}
\label{fig_errors_nuv_mag}
\end{figure}
The accuracy of the analysis of the quality of the
cross-identification strongly depends on the GALEX pipeline position
errors. We use the real data, namely the angular separation to the
SDSS sources measured during the cross-identification between GALEX
GR5 and SDSS DR7, to get an alternative estimation of realistic
errors. In principle the distribution of the angular separations of
the associations depends on the combination of the GALEX and SDSS
position errors. However, the latter are significantly smaller than
the former, so we consider the SDSS errors as negligible here. We
compare on figure \ref{fig_errors_nuv_mag} the dependence on the NUV
magnitude of the position error in the NUV band from the GALEX
pipeline (circles on fig. \ref{fig_errors_nuv_mag}) and the distance
to the SDSS sources (squares), considering only objects classified as
point sources in SDSS. The angular separations between the sources of
the two surveys are significantly larger than the quoted GALEX
pipeline errors. These latter errors are a combination of a constant
field error (equal in NUV to 0.42\arcsec) and a Poisson term. In the
range where both errors estimates are constant ($18<$NUV$<20$), this
comparison suggests that the GALEX field error might be slightly
underestimated. For fainter objects, our alternative errors increase
faster with magnitude than the GALEX pipeline errors, which might
indicate that this dependence is not well reproduced by the Poisson
term.
We fitted a linear relation to modify the GALEX errors in order to
match the angular separations to the SDSS sources
\begin{equation}
\textrm{NUV}^{mod}_{poserr} = 2.2\textrm{NUV}_{poserr} - 0.3
\end{equation}
where the position errors are in units of arcsec. This error model is
shown as a solid line on figure \ref{fig_errors_nuv_mag}. It
reproduces well the alternative errors for NUV $\lesssim 22.5$, which
is similar to the 5$\sigma$ limiting magnitude for the MIS in the NUV
band \citep[22.7;][]{Morrissey_2007}.
We followed the same steps as described in sect. \ref{sec_detections}
and \ref{sec_merging} with these new errors and performed the
cross-identification. The diagnostic curves we obtain are presented on
Fig.~\ref{fig_roc_alt_err}.
\begin{figure}[t]
\epsscale{1.}
\plotone{f9.eps}
\caption{\small Same as figure \ref{fig_roc} using alternative
position errors for GALEX sources (see text).}
\label{fig_roc_alt_err}
\end{figure}
The trends are similar to those observed using the GALEX pipeline
errors. The quality of the cross-identification is nevertheless worse
with the alternate errors. In this case, the values of angular
separation and probability where $1-T = F$ are $\psi_c =
3.126\arcsec$, $P_c = 0.711$ for all GALEX objects. Using the angular
separation as a criterion, $P_e = 0.144$ (0.102 with the GALEX
pipeline error), and $P_e = 0.127$ (0.091 with pipeline errors) with
the posterior probability. For GALEX objects with S/N $>$ 3, $\psi_c =
2.514\arcsec$, $P_c = 0.780$; $P_e = 0.0958$ (0.055, pipeline errors)
using angular separation, and $P_e = 0.0812$ (0.049, pipeline errors)
with the posterior probability.
In other words, the contamination from false positives is larger at a
given true positive rate. For instance, for all GALEX MIS objects,
with 90\% of the true associations and considering posterior
probability as a criterion, the contamination is 5\% compared to 2.3\%
using the GALEX pipeline errors. Note also that the difference between
the angular separation and the probability diagnostic curves is larger
with this alternate error model. This suggests that the probability is
a more efficient criterion than angular separation for selecting
cross-identifications in surveys with larger position errors.
\subsection{Building a GALEX-SDSS catalog}
\input{tab2}
The combination of the results we presented can be used to define a
set of criteria for constructing a reliable joint GALEX-SDSS
catalog. It is natural to have different selections for each type of
association. We will here focus on the 1G1S and 1G2S cases, as they
represent around 95\% of the associations.
In Table \ref{tab_crit} we propose a set of criteria, based on the
posterior probability, to get 90\% of the true cross-identifications,
consisting of 80\% of 1G1S and 10\% of 1G2S. We also list the
corresponding false positive contamination. These cuts enable us to
build catalogs with 1.8\% false positives when using all GALEX objects,
or 0.8\% when using GALEX objects with S/N $>3$.
\section{Conclusions}
We presented a general method using simple mock catalogs to assess the
quality of the cross-identification between two surveys which takes
into account the angular distribution and confusion of sources, and
the respective selection functions of the surveys. We applied this
method to the cross-identification of the SDSS and GALEX sources. We
used the probabilistic formalism of \citet{Budavari_2008} to study how
the quality of the associations can be quantified by the posterior
probability. Our results show that criteria based on posterior
probability yield lower contamination rates from false positives than
criteria based on angular separation. In particular, the posterior
probability is more efficient than angular separation for surveys with
larger position errors. Our study also suggests that the GALEX pipeline
position errors might be underestimated and we described an
alternative measure of these errors. We finally proposed a set of
selection criteria based on posterior probability to build reliable
SDSS-GALEX catalogs that yield 90\% of the true associations with less
than 2\% contamination from false positives.
\section{Acknowledgements}
The authors gratefully acknowledge support from the following
organisations: Gordon and Betty Moore Foundation (GMBF 554),
W. M. Keck Foundation (Keck D322197), NSF NVO (AST 01-22449), NASA
AISRP (NNG 05-GB01G), and NASA GALEX (44G1071483).
\section{Introduction}
The detection of a new particle having a sub-GeV mass would likely hint at the presence of
physics beyond the standard model.
This possibility has been raised recently by the observation of three events for the rare
decay mode \,$\Sigma^+\to p\mu^+\mu^-$\, with dimuon invariant masses narrowly
clustered around 214.3\,MeV by the HyperCP collaboration a few years ago~\cite{Park:2005ek}.
Although these events can be accounted for within the standard model (SM) when
long-distance contributions are properly included~\cite{Bergstrom:1987wr}, the probability
that the three events have the same dimuon mass in the SM is less than 1~percent.
This makes it reasonable to speculate that a light neutral particle, $X$,
is responsible for the observed dimuon-mass distribution via
the decay chain \,$\Sigma^+\to p X\to p\mu^+\mu^-$\,~\cite{Park:2005ek}.
The new-particle interpretation of the HyperCP result has been theoretically explored to
some extent in the literature~\cite{He:2005we,Deshpande:2005mb,Gorbunov:2005nu,
He:2006uu,He:2006fr,Zhu:2006zv,Tatischeff:2007dz,Chen:2006xja,Chen:2007uv}.
Various ideas that have been proposed include the possibility that $X$ is spinless or that
it has spin one.
In the spinless case, $X$ could be a sgoldstino in supersymmetric
models~\cite{Gorbunov:2005nu} or a $CP$-odd Higgs boson in the next-to-minimal
supersymmetric standard model (NMSSM)~\cite{He:2006fr,Zhu:2006zv}.
In the case of $X$ being a spin-1 particle, one possible candidate is the gauge ($U$) boson
of an extra U(1) gauge group in some extensions of the~SM~\cite{Chen:2007uv}.
The presence of $X$ in \,$\Sigma^+\to p\mu^+\mu^-$\, implies that it also contributes to other
\,$|\Delta S|=1$\, transitions, such as the kaon decays \,$K\to\pi\mu^+\mu^-$.\,
In general, the contributions of $X$ to \,$|\Delta S|=1$\, processes fall into two types.
The first one is induced by the flavor-changing (effective) couplings of $X$ to $d s$.
In addition to these two-quark contributions, there are so-called four-quark contributions
of $X$, which arise from the combined effects of the usual four-quark \,$|\Delta S|=1$\,
operators in the SM and the flavor-conserving couplings of $X$ to quarks, as well as its
interactions with the SM gauge fields~\cite{He:2006uu}.
Although the two-quark contributions are generally expected to dominate over the four-quark
ones, in some models the parameter space may have regions where the two types of
contributions are comparable in size and hence could interfere
destructively~\cite{He:2006uu,He:2006fr}.
Accordingly, to explore the $X$ hypothesis in detail and compare its predictions with
experimental results in a definite way,
it is necessary to work under some model-dependent assumptions.
There are a number of experiments that have recently been performed or are still ongoing
to test the $X$ hypothesis~\cite{Tung:2008gd,Love:2008hs,ktev,e391a,belle}.
Their results have begun to restrict some of the proposed ideas on $X$ in the literature.
In particular, as already mentioned, $X$ could be a light $CP$-odd Higgs boson in the NMSSM.
In the specific NMSSM scenario considered in Ref.~\cite{He:2006fr}, $X$ does not couple
to up-type quarks and has the same flavor-conserving coupling $l_d$ to all down-type
quarks, implying that the four-quark contributions of $X$ to \,$|\Delta S|=1$\, decays are
proportional to~$l_d$~\cite{He:2006fr}.
Recent searches for the radiative decays \,$\Upsilon({\rm1S,2S,3S})\to\gamma X\to\gamma\mu^+\mu^-$\,
by the CLEO and BaBar collaborations~\cite{Love:2008hs} have come back negative and
imposed sufficiently small upper-bounds on $l_d$
to make the four-quark contributions negligible compared to the two-quark ones.
With only the two-quark contributions being present, the scalar part of the $sdX$ coupling is
already constrained by \,$K\to\pi\mu\mu$\, data to be negligibly small, whereas its pseudoscalar
part can be probed by \,$K\to\pi\pi\mu\mu$\, measurements~\cite{He:2005we,Deshpande:2005mb}.
There are now preliminary results on the branching ratio $\cal B$ of
\,$K_L\to\pi^0\pi^0X\to\pi^0\pi^0\mu^+\mu^-$\, reported by the KTeV and E391a
collaborations~\cite{ktev,e391a}.
The KTeV preliminary measurement \,${\cal B}<9.44\times10^{-11}$\, at 90\%~C.L.~\cite{ktev}
is the much more stringent of the two and has an upper bound almost 20 times smaller than
the lower limit \,${\cal B}_{\rm lo}=1.7\times10^{-9}$\, predicted in Ref.~\cite{He:2005we}
under the assumption that the $sdX$ pseudoscalar coupling, $g_P^{}$, is purely real.
However, there is a possibility that $g_P^{}$ has an imaginary part, and in the case where
this coupling is mostly imaginary the predicted lower bound, ${\cal B}_{\rm lo}$, can be much
smaller.\footnote{We gratefully acknowledge D. Gorbunov for pointing this out to us.}
More precisely, one can find that \,${\cal B}_{\rm lo}<7\times10^{-11}$,\, which evades
the above bound from KTeV, if \,$|{\rm Im}\,g_P^{}|>0.98\,|g_P^{}|$\, and, moreover,
\,${\cal B}_{\rm lo}=1.7\times10^{-9}\,|\epsilon_K^{}|^2\sim8\times10^{-15}$\,
if $g_P^{}$ is purely imaginary, \,$\epsilon_K^{}\sim{\cal O}(0.002)$\, being the usual
$CP$-violation parameter in kaon mixing.
If the KTeV preliminary result stands in their final report, then it will have imposed
a significant constraint on~$g_P^{}$, restricting it to be almost purely imaginary, for
the scenario in which $X$ has spin zero and its four-quark contributions to flavor-changing
transitions are negligible.
To place stronger restrictions on~$g_P^{}$, it is important to look for the decays of particles
other than neutral kaons, such as \,$K^\pm\to\pi^\pm\pi^0X$\, and
\,$\Omega^-\to\Xi^-X$\,~\cite{Kaplan:2007nn}.
Although the $X$ couplings in the \,$|\Delta S|=1$\, sector are in general independent of those
in the \,$|\Delta B|=1$\, sector, there is also new information from the latter sector that
seems compatible with the results of the $K_L$ measurements.
Very recently the Belle collaboration has given a~preliminary report on their search for
a~spinless $X$ in \,$B\to\rho\mu^+\mu^-$\, and \,$B\to K^*\mu^+\mu^-$\, with
$m_{\mu\mu}^{}$ values restricted within a small region around \,$m_{\mu\mu}^{}=214.3$\,MeV.\,
They did not observe any event and provided stringent upper-bounds on the branching ratios of
\,$B\to\rho X$\, and \,$B\to K^*X$\,~\cite{belle}.
Unlike the spinless case, the scenario in which $X$ has spin one is not yet as strongly
challenged by experimental data, for it predicts that the lower limit of the branching ratio of
\,$K_L\to\pi^0\pi^0X\to\pi^0\pi^0\mu^+\mu^-$\, arising from the two-quark $dsX$ axial-vector
coupling, taken to be real, is \,$2\times10^{-11}$\,~\cite{He:2005we}.
This prediction is well below the preliminary upper-bound of \,$9.44\times10^{-11}$\, from
KTeV~\cite{ktev} and could get lower in the presence of an imaginary part of the $dsX$ coupling.
It is therefore interesting to explore the spin-1 case further, which we will do here.
In this paper we focus on the contributions of $X$ with spin~1 to a number of rare
processes involving mesons containing the $b$ quark.
We will not deal with specific models, but will instead adopt a model-independent
approach, assuming that $X$ has flavor-changing two-quark couplings to down-type quarks only
and that its four-quark contributions to flavor-changing transitions are negligible compared
to the two-quark ones.
Accordingly, since the $bdX$ and $bsX$ couplings generally are not related to the $sdX$
couplings, we further assume that the $b(d,s)X$ couplings each have both parity-even and
parity-odd parts, but we leave the parity of $X$ unspecified.
Specifically, we allow $X$ to have both vector and axial-vector couplings to $b(d,s)$.
The more limited case of $X$ being an axial-vector boson with only parity-even couplings
to $b(d,s)$ has been considered in Ref.~\cite{Chen:2006xja}.
Following earlier work~\cite{He:2005we}, to be consistent with HyperCP observations
we also assume that $X$ does not interact strongly and decays inside
the detector with \,${\cal B}(X\to\mu^+\mu^-)=1$.\,
In exploring the effect of $X$ with spin~1 on $B$ transitions, we will incorporate
the latest experimental information and obtain constraints on the flavor-changing
couplings of $X$ in order to predict upper bounds on the rates of a number of rare decays.
At this point it is worth pointing out that, since we let $X$ have vector couplings to $b(d,s)$,
the transitions in which we are interested include $B$ decays into $X$ and a pseudoscalar
meson, such as pion or kaon, which were not considered in Ref.~\cite{Chen:2006xja}.
As our numbers will show, most of the branching ratios of the decays we consider can be
large enough to be detected in near-future $B$ experiments.
This can serve to guide experimental searches for $X$ in order to help confirm or
rule out the spin-1 case.
\section{Interactions and amplitudes}
Assuming that $X$ has spin one and does not carry electric or color charge, we can express
the Lagrangian describing its effective couplings to a $b$~quark and
a light quark \,$q=d$ or $s$\, as
\begin{eqnarray} \label{LbqX}
{\cal L}_{bqX}^{} \,\,=\,\,
-\bar q\gamma_\mu^{}\bigl(g_{Vq}^{}-g_{Aq}^{}\gamma_5^{}\bigr)b\,X^\mu \,\,+\,\, {\rm H.c.}
\,\,=\,\,
-\bar q\gamma_\mu^{}\bigl(g_{{\rm L}q}^{}P_{\rm L}^{}+g_{{\rm R}q}^{}P_{\rm R}^{}\bigr)b\,X^\mu
\,\,+\,\, {\rm H.c.} ~,
\end{eqnarray}
where $g_{Vq}^{}$ and $g_{Aq}^{}$ parametrize the vector and axial-vector couplings, respectively,
\,$g_{{\rm L}q,{\rm R}q}^{}=g_{Vq}^{}\pm g_{Aq}^{}$,\, and
\,$P_{\rm L,R}^{}=\frac{1}{2}(1\mp\gamma_5^{})$.\,
Generally, the constants $g_{Vq,Aq}^{}$ can be complex.
In the following, we derive the contributions of these two-quark interactions of $X$ to
the amplitudes for several processes involving $b$-flavored mesons.
As mentioned above, we follow here the scenario in which the four-quark flavor-changing
contributions of $X$ are negligible compared to the effects induced by~${\cal L}_{bqX}$.
The first transition we will consider is $B_q^0$-$\bar B_q^0$ mixing, which is characterized
by the physical mass-difference $\Delta M_q$ between the heavy and light mass-eigenstates
in the $B_q^0$-$\bar B_q^0$ system.
This observable is related to the matrix element $M_{12}^q$ for the mixing by
\,$\Delta M_q=2\,|M_{12}^{q}|$,\, where \,$M_{12}^q=M_{12}^{q,\rm SM}+M_{12}^{q,X}$\,
is obtained from the effective Hamiltonian ${\cal H}_{b\bar q\to\bar b q}$ for the SM plus
$X$-mediated contributions using
\,$2m_{B_q} M_{12}^q=\bigl\langle B_q^0\bigr|{\cal H}_{b\bar q\to\bar b q}
\bigl|\bar B_q^0\bigr\rangle$\,~\cite{Buchalla:1995vs}.
The SM part of $M_{12}^q$ is dominated by the top loop and given by~\cite{Buchalla:1995vs}
\begin{eqnarray}
M_{12}^{q,\rm SM} \,\,\simeq\,\, \frac{G_{\rm F}^2 m_W^2}{12\pi^2}\,f_{B_q}^2 m_{B_q}^{}\,
\eta_B^{} B_{B_q}^{}\, \bigl(V_{tb}^{}V_{tq}^*\bigr)^2\, S_0^{}\bigl(m_t^2/m_W^2\bigr) ~,
\end{eqnarray}
where $G_{\rm F}$ is the usual Fermi constant, $f_{B_q}$ is the $B_q$ decay-constant,
$\eta_B^{}$ contains QCD corrections, $B_{B_q}$ is a~bag parameter, $V_{kl}$ are elements of
the Cabibbo-Kobayashi-Maskawa (CKM) matrix, and the loop function
\,$S_0^{}\bigl(m_t^2/m_W^2\bigr)\simeq2.4$.\,
To determine the $X$ contribution $M_{12}^{q,X}$, we derive the effective Hamiltonian
${\cal H}_{b\bar q\to \bar b q}^X$ from the amplitude for the tree-level transition
\,$b\bar q\to X^*\to\bar b q$\, calculated from~${\cal L}_{bqX}$.
Thus
\begin{eqnarray} \label{Hbq2bq}
{\cal H}_{b\bar q\to \bar b q}^X &=&
\frac{\bar q\gamma^\mu\bigl(g_{{\rm L}q}^{}P_{\rm L}^{}+g_{{\rm R}q}^{}P_{\rm R}^{}\bigr)b\,
\bar q\gamma_\mu^{}\bigl(g_{{\rm L}q}^{}P_{\rm L}^{}+g_{{\rm R}q}^{}P_{\rm R}^{}\bigr)b}
{2\Bigl(m_X^2-m_{B_q}^2\Bigr)}
\nonumber \\ && +\,\,
\frac{\Bigl\{\bar q\Bigl[\bigl(g_{{\rm L}q}^{}m_q^{}-g_{{\rm R}q}^{}m_b^{}\bigr)P_{\rm L}^{} +
\bigl(g_{{\rm R}q}^{}m_q^{}-g_{{\rm L}q}^{}m_b^{}\bigr)P_{\rm R}^{}\Bigr]b\Bigr\}^2}
{2\Bigl(m_X^2-m_{B_q}^2\Bigr)m_X^2} ~,
\end{eqnarray}
where we have used in the denominators the approximation \,$p_X^2=m_{B_q}^2$\, appropriate
for the $B_q$ rest-frame and included an overall factor of 1/2 to account for
the products of two identical operators.
In evaluating the matrix element of this Hamiltonian at energy scales \,$\mu\sim m_b^{}$,\,
one needs to include the effect of QCD running from high energy scales, which mixes
different operators. The resulting contribution of $X$ is
\begin{eqnarray}
M_{12}^{q,X} &=&
\frac{f_{B_q}^2\, m_{B_q}^{}}{3\bigl(m_X^2-m_{B_q}^2\bigr)} \Biggl[
\bigl(g_{Vq}^2+g_{Aq}^2\bigr) P_1^{\rm VLL} +
\frac{g_{Vq}^2\,\bigl(m_b^{}-m_q^{}\bigr)^2+g_{Aq}^2\,\bigl(m_b^{}+m_q^{}\bigr)^2}{m_X^2}\,
P_1^{\rm SLL}
\nonumber \\ && \hspace{16ex} +\,\, \bigl(g_{Vq}^2-g_{Aq}^2\bigr) P_1^{\rm LR}
+ \frac{g_{Vq}^2\,\bigl(m_b^{}-m_q^{}\bigr)^2-g_{Aq}^2\,\bigl(m_b^{}+m_q^{}\bigr)^2}{m_X^2}\,
P_2^{\rm LR} \Biggr] ~,
\end{eqnarray}
where \,$P_1^{\rm VLL}=\eta_1^{\rm VLL} B_1^{\rm VLL}$,\,
\,$P_1^{\rm SLL}=-\mbox{$\frac{5}{8}$}\, \eta_1^{\rm SLL} R_{B_q} B_1^{\rm SLL}$,\, and
\,$P_j^{\rm LR}=-\mbox{$\frac{1}{2}$}\, \eta_{1j}^{\rm LR} R_{B_q} B_1^{\rm LR}
+ \mbox{$\frac{3}{4}$}\, \eta_{2j}^{\rm LR} R_{B_q} B_2^{\rm LR}$,\,
\,$j\,=\,1,2$\,~\cite{Buras:2001ra},
with the $\eta$'s denoting QCD-correction factors, the $B$'s being bag parameters defined by
the matrix elements
$\bigl\langle B_q^0\bigr|\bar q\gamma^\mu P_{\rm L}^{}b\,
\bar q\gamma_\mu^{}P_{\rm L}^{}b \bigl|\bar B_q^0\bigr\rangle =
\bigl\langle B_q^0\bigr|\bar q\gamma^\mu P_{\rm R}^{}b\,
\bar q\gamma_\mu^{}P_{\rm R}^{}b \bigl|\bar B_q^0\bigr\rangle =
\mbox{$\frac{2}{3}$} f_{B_q}^2 m_{B_q}^2 B_1^{\rm VLL}$,\,
\,$\bigl\langle B_q^0\bigr|\bar q P_{\rm L}^{}b\,\bar q P_{\rm L}^{}b\bigl|\bar B_q^0\bigr\rangle
= \bigl\langle B_q^0\bigr|\bar qP_{\rm R}^{}b\,\bar qP_{\rm R}^{}b\bigl|\bar B_q^0\bigr\rangle
= -\mbox{$\frac{5}{12}$} f_{B_q}^2 m_{B_q}^2 R_{B_q} B_1^{\rm SLL}$,\,
\,$\bigl\langle B_q^0\bigr|\bar q\gamma^\mu P_{\rm L}^{}b\,
\bar q\gamma_\mu^{}P_{\rm R}^{}b\bigl|\bar B_q^0\bigr\rangle =
-\mbox{$\frac{1}{3}$} f_{B_q}^2 m_{B_q}^2 R_{B_q} B_1^{\rm LR}$,\, and
\,$\bigl\langle B_q^0\bigr|\bar q P_{\rm L}^{}b\,\bar q P_{\rm R}^{}b\bigl|\bar B_q^0\bigr\rangle
=\mbox{$\frac{1}{2}$} f_{B_q}^2 m_{B_q}^2 R_{B_q} B_2^{\rm LR}$,\,
and \,$R_{B_q}=m_{B_q}^2/\bigl(m_b^{}+m_q^{}\bigr){}^2$.\,
Bounds on $g_{Vq}^{}$ and $g_{Aq}^{}$ can then be extracted from comparing the measured
and SM values of~$\Delta M_q$.
The second transition of interest is \,$B_q^0\to\mu^+\mu^-$,\, which receives a contribution
from \,$B_q^0\to X^*\to\mu^+\mu^-$.\, To derive the amplitude for the latter, we need not only
${\cal L}_{bqX}$, but also the Lagrangian describing \,$X\to\mu^+\mu^-$.\,
Allowing the $X$ interaction with $\mu$ to have both parity-even and -odd parts,
we can write the latter Lagrangian as
\begin{eqnarray} \label{LlX}
{\cal L}_{\mu X}^{} \,\,=\,\,
\bar\mu\gamma_\alpha^{}\bigl(g_{V\mu}^{}+g_{A\mu}^{}\gamma_5^{}\bigr)\mu\,X^\alpha ~,
\end{eqnarray}
where $g_{V\mu}^{}$ and $g_{A\mu}^{}$ are coupling constants, which are real due to
the hermiticity of~${\cal L}_{\mu X}$.
Using the matrix elements
\,$\bigl\langle0\bigr|\bar q\gamma^\mu b\bigl|\bar B_q^0\bigr\rangle =
\bigl\langle0\bigr|\bar q b\bigl|\bar B_q^0\bigr\rangle = 0$,\,
\,$\bigl\langle0\bigr|\bar q\gamma^\mu\gamma_5^{}b\bigl|\bar B_q^0(p)\bigr\rangle =
-i f_{B_q}p^\mu$,\, and
\,$\bigl\langle0\bigr|\bar q\gamma_5^{}b\bigl|\bar B_q^0\bigr\rangle =
i f_{B_q}m_{B_q}^2/\bigl(m_b^{}+m_q^{}\bigr)$,\,
we then arrive at
\begin{eqnarray}
{\cal M}\bigl(\bar B_q^0\to X\to\mu^+\mu^-\bigr) \,\,=\,\,
-\frac{2i f_{B_q}^{}\, g_{Aq}^{}\, g_{A\mu}^{}\, m_\mu^{}}{m_X^2}\, \bar\mu\gamma_5^{}\mu ~.
\end{eqnarray}
The resulting decay rate is
\begin{eqnarray} \label{rate_B2ll}
\Gamma\bigl(\bar B_q^0\to X\to\mu^+\mu^-\bigr) \,\,=\,\,
\frac{f_{B_q}^2\,\bigl|g_{Aq}^{}\,g_{A\mu}^{}\bigr|^2 m_\mu^2}{2\pi\,m_X^4}\,
\sqrt{m_{B_q}^2-4m_\mu^2} ~.
\end{eqnarray}
This implies that we need, in addition, the value of $g_{A\mu}^{}$, which can be estimated from
the one-loop contribution of ${\cal L}_{\mu X}$ in Eq.~(\ref{LlX}) to the anomalous
magnetic moment of the muon,~$a_\mu^{}$.
We will determine $g_{A\mu}^{}$ in the next section.
Before moving on to other transitions, we note that from ${\cal L}_{\mu X}$ follows the decay rate
\begin{eqnarray} \label{rate_X2ll}
\Gamma\bigl(X\to\mu^+\mu^-\bigr) \,\,=\,\,
\frac{g_{V\mu}^2\, m_X^{}}{12\pi}
\Biggl(1+\frac{2 m_\mu^2}{m_X^2}\Biggr) \sqrt{1-\frac{4 m_\mu^2}{m_X^2}} \,+\,
\frac{g_{A\mu}^2\, m_X^{}}{12\pi}\,\Biggl(1-\frac{4 m_\mu^2}{m_X^2}\Biggr)^{\!3/2} ~.
\end{eqnarray}
The next process that can provide constraints on $g_{Vq}^{}$ and $g_{Aq}^{}$ is
the inclusive decay \,$b\to q\mu^+\mu^-$,\, to which \,$b\to q X$\, can contribute.
From ${\cal L}_{bqX}$ above, it is straightforward to arrive at the inclusive decay rate
\begin{eqnarray} \label{rate_b2qX}
\Gamma(b\to q X) &=& \frac{|\bm{p}_X^{}|}{8\pi\,m_b^2 m_X^2}
\Bigl\{\bigl|g_{Vq}^{}\bigr|^2\Bigl[\bigl(m_b^{}+m_q^{}\bigr)^2+2 m_X^2\Bigr]
\Bigl[\bigl(m_b^{}-m_q^{}\bigr)^2-m_X^2\Bigr] \nonumber \\ && \hspace*{11ex} +\,\,
\bigl|g_{Aq}^{}\bigr|^2\Bigl[\bigl(m_b^{}-m_q^{}\bigr)^2+2 m_X^2\Bigr]
\Bigl[\bigl(m_b^{}+m_q^{}\bigr)^2-m_X^2\Bigr]\Bigr\} ~,
\end{eqnarray}
where $\bm{p}_X^{}$ is the 3-momentum of $X$ in the rest frame of~$b$.
One may probe the \,$b\to q X$\, contribution to \,$b\to q\mu^+\mu^-$\, by examining
the measured partial rate of the latter for the smallest available range of
the dimuon mass, $m_{\mu\mu}^{}$, that contains \,$m_{\mu\mu}^{}=m_X^{}$.
We will also consider the exclusive decays \,$B\to M X$,\, which contribute to
\,$B\to M\mu^+\mu^-$,\, where $M$ is a~pseudoscalar meson~$P$, scalar meson~$S$,
vector meson~$V$, or axial-vector meson~$A$.
To evaluate their decay amplitudes, we need the \,$\bar B\to M$\, matrix elements of the
\,$b\to q$ operators in ${\cal L}_{bqX}$.
The matrix elements relevant to \,$\bar B\to P X$\, and \,$\bar B\to S X$\, are
\begin{eqnarray}
\kappa_P^{}\,\bigl\langle P\bigl(p_P^{}\bigr)\bigr|\bar q\gamma^\mu b
\bigl|\bar B\bigl(p_B^{}\bigr) \bigr\rangle &\,=\,&
\frac{m_B^2-m_P^2}{k^2}\, k^\mu\, F_0^{B P} +
\Biggl[\bigl(p_B^{}+p_P^{}\bigr)^\mu-\frac{m_B^2-m_P^2}{k^2}\,k^\mu\Biggr] F_1^{B P} ~,
\\
i \kappa_S^{}\,\bigl\langle S\bigl(p_S^{}\bigr)\bigr|\bar q\gamma^\mu \gamma_5^{} b
\bigl|\bar B\bigl(p_B^{}\bigr) \bigr\rangle &\,=\,&
\frac{m_B^2-m_S^2}{k^2}\, k^\mu\,F_0^{BS} +
\Biggl[ \bigl(p_B^{}+p_S^{}\bigr)^\mu - \frac{m_B^2-m_S^2}{k^2}\,k^\mu \Biggr] F_1^{BS} ~,
\end{eqnarray}
and \,$\langle P|\bar q\gamma^\mu\gamma_5^{}b|\bar B\rangle =
\langle S|\bar q\gamma^\mu b|\bar B\rangle=0$,\,
where \,$k=p_B^{}-p_{P,S}^{}$,\, the factor $\kappa_P^{}$ has a value of~1 for
\,$P=\pi^-,\bar K,D$\, or \,$-\sqrt2$\, for \,$P=\pi^0$,\,
the values of $\kappa_S^{}$ will be given in the next section,
and the form factors $F_{0,1}^{BP,BS}$ each depend on~$k^2$.
For \,$\bar B\to V X$\, and \,$\bar B\to A X$,\, we need
\begin{eqnarray}
\kappa_V^{}\,\bigl\langle V\bigl(p_V^{}\bigr)\bigr|\bar q\gamma_\mu^{}b
\bigl|\bar B\bigl(p_B^{}\bigr)\bigr\rangle \,&=&\,
\frac{2 V^{B V}}{m_B^{}+m_V^{}}\, \epsilon_{\mu\nu\sigma\tau}^{}\,
\varepsilon_V^{*\nu} p_B^\sigma\, p_V^\tau ~, \\
\kappa_V^{}\,\bigl\langle V\bigl(p_V^{}\bigr)\bigr|\bar q\gamma^\mu\gamma_5^{}b
\bigl|\bar B\bigl(p_B^{}\bigr)\bigr\rangle \,&=&\,
2 i A_0^{B V} m_V^{}\, \frac{\varepsilon_V^{*}\!\cdot\!k}{k^2}\, k^\mu
+ i A_1^{B V}\, \bigl( m_B^{}+m_V^{} \bigr) \biggl(
\varepsilon_V^{*\mu}-\frac{\varepsilon_V^{*}\!\cdot\!k}{k^2}\, k^\mu \biggr)
\nonumber \\ && - \,\,
\frac{i A_2^{B V}\, \varepsilon_V^{*}\!\cdot\!k}{m_B^{}+m_V^{}}
\biggl( p_B^\mu + p_V^\mu - \frac{m_B^2-m_V^2}{k^2}\, k^\mu \biggr) ~,
\end{eqnarray}
\begin{eqnarray}
\kappa_A^{}\,\bigl\langle A\bigl(p_A^{}\bigr)\bigr|\bar q\gamma^\mu b
\bigl|\bar B\bigl(p_B^{}\bigr)\bigr\rangle \,&=&\,
-2i V_0^{B A} m_A^{}\, \frac{\varepsilon_A^{*}\!\cdot\!k}{k^2}\, k^\mu
- iV_1^{B A}\, \bigl( m_B^{} -m_A^{} \bigr) \biggl(
\varepsilon_A^{*\mu}-\frac{\varepsilon_A^{*}\!\cdot\!k}{k^2}\, k^\mu \biggr)
\nonumber \\ && + \,\,
\frac{i V_2^{B A}\, \varepsilon_A^{*}\!\cdot\!k}{m_B^{} -m_A^{}}
\biggl( p_B^\mu + p_A^\mu - \frac{m_B^2 -m_A^2}{k^2}\, k^\mu \biggr) ~, \\
\kappa_A^{}\,\bigl\langle A\bigl(p_A^{}\bigr)\bigr|\bar q\gamma_\mu^{} \gamma_5 b
\bigl|\bar B\bigl(p_B^{}\bigr)\bigr\rangle \,&=&\,
\frac{-2 A^{BA}}{m_B^{} -m_A^{}}\, \epsilon_{\mu\nu\sigma\tau}^{}\,
\varepsilon_A^{*\nu} p_B^\sigma\, p_A^\tau ~,
\end{eqnarray}
where \,$k=p_B^{}-p_{V,A}^{}$,\, the factor $\kappa_V^{}$ has a magnitude of~1 for
\,$V=\rho^-,\bar K^*,\phi,D^*$\, or \,$\sqrt2$\, for \,$V=\rho^0,\omega$,\,
the values of $\kappa_A^{}$ will be given in the next section, and the form factors $V^{BV}$,
$A_{0,1,2}^{BV}$, $V_{0,1,2}^{BA}$, and $A^{BA}$ are all functions of~$k^2$.
Since $X$ has spin~1, its polarization $\varepsilon_X^{}$ and momentum $p_X^{}$ satisfy
the relation \,$\varepsilon_X^*\cdot p_X^{}=0$.\,
The amplitudes for \,$\bar B\to P X$\, and $\bar B \to S X$ are then
\begin{eqnarray} \label{M_B2PX}
{\cal M}(\bar B\to P X) &\,=\,& \frac{2\, g_{Vq}^{}}{\kappa_P^{}}\, F_1^{B P}\,
\varepsilon_X^*\!\cdot\!p_P^{} ~, \\ \label{M_B2SX}
{\cal M}(\bar B\to S X) &\,=\,&
\frac{2i\, g_{Aq}^{}}{\kappa_S^{}}\, F_1^{B S}\, \varepsilon_X^*\!\cdot\!p_S^{} ~,
\end{eqnarray}
leading to the decay rates
\begin{eqnarray} \label{rate_B2PX}
\Gamma (B \to P (S) X) \,\,=\,\, \frac{|\bm{p}_X^{}|^3}{2\pi\, \kappa_{P (S)}^2\, m_X^2}
\Bigl| g_{V (A) q}^{}\,F_1^{BP (S)} \Bigr|^2 ~,
\end{eqnarray}
where $\bm{p}_X^{}$ is the 3-momentum of $X$ in the rest frame of $B$.
For \,$\bar B\to V X$\, and \,$\bar B\to A X$,\, the amplitudes are
\begin{eqnarray} \label{M_B2VX}
{\cal M}(\bar B\to V X) \,&=&\,
-\frac{i g_{Aq}^{}}{\kappa_V^{}} \Biggl[
A_1^{BV}\, \bigl(m_B^{}+m_V^{}\bigr)\,\varepsilon_V^*\!\cdot\!\varepsilon_X^* \,-\,
\frac{2A_2^{BV}\, \varepsilon_V^*\!\cdot\!p_X^{}\,\varepsilon_X^*\!\cdot\!p_V^{}}
{m_B^{}+m_V^{}} \Biggr]
\nonumber \\ && +\,\,
\frac{2 g_{Vq}^{}\, V^{BV}}{\kappa_V^{}\,\bigl(m_B^{}+m_V^{}\bigr)}\,
\epsilon_{\mu\nu\sigma\tau}^{}\, \varepsilon_V^{*\mu}\varepsilon_X^{*\nu}p_V^\sigma\,p_X^\tau ~,
\end{eqnarray}
\begin{eqnarray} \label{M_B2AX}
{\cal M}(\bar B\to A X) \,&=&\,
-\frac{i g_{Vq}^{}}{\kappa_A^{}} \Biggl[
V_1^{BA}\, \bigl(m_B^{}-m_A^{}\bigr)\,\varepsilon_A^*\!\cdot\!\varepsilon_X^* \,-\,
\frac{2V_2^{BA}\, \varepsilon_A^*\!\cdot\!p_X^{}\,\varepsilon_X^*\!\cdot\!p_A^{}}
{m_B^{}-m_A^{}} \Biggr]
\nonumber \\ && +\,\,
\frac{2 g_{Aq}^{}\, A^{BA}}{\kappa_A^{}\,\bigl(m_B^{}-m_A^{}\bigr)}\, \epsilon_{\mu\nu\sigma\tau}^{}\,
\varepsilon_A^{*\mu}\varepsilon_X^{*\nu}p_A^\sigma\, p_X^\tau ~.
\end{eqnarray}
The corresponding decay rates can be conveniently written as~\cite{Kramer:1991xw}
\begin{eqnarray} \label{rate_B2VX}
\Gamma(B\to M'X) \,\,=\,\, \frac{|\bm{p}_X^{}|}{8\pi\, m_B^2}
\Bigl( \bigl|H_0^{M'}\bigr|^2 + \bigl|H_+^{M'}\bigr|^2 + \bigl|H_-^{M'}\bigr|^2 \Bigr) ~,
\end{eqnarray}
where \,$M'=V$ or $A$,\, \,$H_0^{M'}=-a_{M'}^{}\,x_{M'}^{}-b_{M'}^{}\bigl(x_{M'}^2-1\bigr)$,\,
and \,$H_\pm^{M'}=a_{M'}^{}\pm c_{M'}\,\sqrt{x_{M'}^2-1}$,\,
with \,$x_{M'}^{}=\bigl(m_B^2 - m_{M'}^2 - m_X^2\bigr)/\bigl(2 m_{M'}^{}m_X^{}\bigr)$,\,
\begin{eqnarray} \label{abcV}
a_V^{} &=& \frac{g_{Aq}^{}\, A_1^{BV}}{\kappa_V^{}}\bigl(m_B^{}+m_V^{}\bigr) ~, \hspace{5ex}
b_V^{} \,=\, \frac{-2 g_{Aq}^{}\, A_2^{BV} m_V^{} m_X^{}}{\kappa_V^{}\,\bigl(m_B^{}+m_V^{}\bigr)} ~,
\hspace{5ex}
c_V^{} \,=\, \frac{2 g_{Vq}^{}\, m_V^{} m_X^{} V^{BV}}{\kappa_V^{}\,\bigl(m_B^{}+m_V^{}\bigr)} ~,
~~~~~ \\ \label{abcA}
a_A^{} &=& \frac{g_{Vq}^{}\, V_1^{BA}}{\kappa_A^{}}\bigl(m_B^{}-m_A^{}\bigr) ~, \hspace{5ex}
b_A^{} \,=\, \frac{-2 g_{Vq}^{}\, V_2^{BA}m_A^{}m_X^{}}{\kappa_A^{}\,\bigl(m_B^{}-m_A^{}\bigr)} ~,
\hspace{5ex}
c_A^{} \,=\, \frac{2 g_{Aq}^{}\, m_A^{} m_X^{} A^{BA}}{\kappa_A^{}\,\bigl(m_B^{}-m_A^{}\bigr)} ~.
\end{eqnarray}
In the next section, we employ the expressions found above to extract constraints on
the couplings $g_{Vq}^{}$ and $g_{Aq}^{}$ from currently available experimental information.
We will subsequently use the results to predict upper bounds for the branching ratios of
a number of $B$ decays.
Before proceeding, we remark that we have not included in ${\cal L}_{bqX}$ in Eq.~(\ref{LbqX})
the possibility of dipole operators of the form
\,$\bar q\sigma^{\mu\nu}(1\pm\gamma_5^{})b\,\partial_\mu X_\nu$.\,
They would contribute to the processes dealt with above, except for \,$B_q^{}\to\mu^+\mu^-$.\,
However, we generally expect the effects of these operators to be suppressed compared to those
of ${\cal L}_{bqX}$ by a factor of order \,$p_X^{}/\Lambda\sim m_b^{}/\Lambda$,\,
with $\Lambda$ being a heavy mass representing the new-physics scale, if their contributions all
occur simultaneously.
\section{Numerical Analysis}
\subsection{Constraints from \boldmath$B_q$-$\bar B_q$ mixing}
As discussed in the preceding section, the $X$ contribution $M_{12}^{q,X}$ to $B_q$-$\bar B_q$
mixing is related to the observable \,$\Delta M_q=2\,|M_{12}^{q}|$,\, where
\,$M_{12}^q=M_{12}^{q,\rm SM}+M_{12}^{q,X}$.\,
The experimental value $\Delta M_q^{\rm exp}$ can then be expressed in terms of the SM
prediction $\Delta M_q^{\rm SM}$ as
\begin{eqnarray} \label{DM}
\Delta M_q^{\rm exp} \,\,=\,\, \Delta M_q^{\rm SM}\,\bigl|1+\delta_q^{}\bigr| \,\,,
\hspace{5ex} \delta_q^{} \,\,=\,\, \frac{M_{12}^{q,X}}{M_{12}^{q,\rm SM}} \,\,,
\end{eqnarray}
and so numerically they can lead to the allowed range of $\delta_q$,
from which we can extract the bounds on $g_{Vq,Aq}^{}$.
Thus, with \,$\Delta M_d^{\rm exp}=(0.507\pm0.005){\rm\,ps}^{-1}$\,~\cite{pdg} and
\,$\Delta M_d^{\rm SM}=\bigl(0.560^{+0.067}_{-0.076}\bigr){\rm\,ps}^{-1}$\,~\cite{ckmfit},
using the approximation \,$\bigl|1+\delta_d^{}\bigr|\simeq 1+{\rm Re}\,\delta_d$,\,
we can extract the $1 \sigma$ range
\begin{eqnarray} \label{delta_d}
-0.22 \,\,<\,\, {\rm Re}\,\delta_d^{} \,\,<\,\, +0.03 ~.
\end{eqnarray}
Similarly, \,$\Delta M_s^{\rm exp}=(17.77\pm0.12){\rm\,ps}^{-1}$\,~\cite{pdg} and
\,$\Delta M_s^{\rm SM}=(17.6^{+1.7}_{-1.8}){\rm\,ps}^{-1}$\,~\cite{ckmfit} translate into
\begin{eqnarray} \label{delta_s}
-0.09 \,\,<\,\, {\rm Re}\,\delta_s^{} \,\,<\,\, 0.11 ~.
\end{eqnarray}
To proceed, in addition to \,$m_X^{}=214.3$\,MeV,\, we use
\,$m_b^{}=4.4$\,GeV,\, \,$P_1^{\rm VLL}=0.84$,\, \,$P_1^{\rm SLL}=-1.47$,\,
\,$P_1^{\rm LR}=-1.62$,\, \,$P_2^{\rm LR}=2.46$\,~\cite{Buras:2001ra},
CKM parameters from Ref.~\cite{ckmfit},
\,$f_{B_d}=190$\,MeV,\, \,$f_{B_s}=228$\,MeV,\, \,$\eta_B^{}=0.551$,\, \,$B_{B_d}=1.17$,\,
and \,$B_{B_s}=1.23$\,~\cite{ckmfit,Buchalla:1995vs},
as well as meson masses from Ref.~\cite{pdg}.
Also, we will neglect $m_d^{}$ and $m_s^{}$ compared to $m_b^{}$.
It follows that for the ratio in Eq.~(\ref{DM})
\begin{eqnarray} \label{red}
{\rm Re}\,\delta_d^{} &=& \Bigl\{-4.4\,\Bigl[({\rm Re}\,g_{Vd}^{})^2-({\rm Im}\,g_{Vd}^{})^2\Bigr]
- 8.2\, ({\rm Re}\,g_{Vd}^{})({\rm Im}\,g_{Vd}^{}) \nonumber \\ && ~
+\, 17\,\Bigl[({\rm Re}\,g_{Ad}^{})^2-({\rm Im}\,g_{Ad}^{})^2\Bigr]
+ 33\,({\rm Re}\,g_{Ad}^{})({\rm Im}\,g_{Ad}^{}) \Bigr\}\times10^{12} ~,
\nonumber \\ \phantom{|^{\int^|}}
{\rm Re}\,\delta_s^{} &=& \Bigl\{-2.5\,\Bigl[({\rm Re}\,g_{Vs}^{})^2-({\rm Im}\,g_{Vs}^{})^2\Bigr]
+ 0.2\, ({\rm Re}\,g_{Vs}^{})({\rm Im}\,g_{Vs}^{}) \nonumber \\ && ~
+ 9.9\,\Bigl[({\rm Re}\,g_{As}^{})^2-({\rm Im}\,g_{As}^{})^2\Bigr]
- 0.7\, ({\rm Re}\,g_{As}^{})({\rm Im}\,g_{As}^{}) \Bigr\}\times10^{11} ~.
\end{eqnarray}
Hence constraints on the couplings come from combining these formulas
with Eqs.~(\ref{delta_d}) and~(\ref{delta_s}).
If only $g_{Vq}^{}$ or $g_{Aq}^{}$ contributes at a time, the resulting constraints are
\begin{eqnarray} \label{Bmix_bounds}
&& -0.7\times10^{-14} \,\,<\,\, ({\rm Re}\,g_{Vd}^{})^2-({\rm Im}\,g_{Vd}^{})^2
+ 1.9\, ({\rm Re}\,g_{Vd}^{})({\rm Im}\,g_{Vd}^{}) \,\,<\,\, 5.0\times10^{-14} ~,
\nonumber \\ \phantom{|^{\int}}
&& -1.3\times10^{-14} \,\,<\,\, ({\rm Re}\,g_{Ad}^{})^2-({\rm Im}\,g_{Ad}^{})^2
+ 1.9\, ({\rm Re}\,g_{Ad}^{})({\rm Im}\,g_{Ad}^{}) \,\,<\,\, 0.2\times10^{-14} ~,
\\ \phantom{|^{\int}}
&& -4.4\times10^{-13} \,\,<\,\, ({\rm Re}\,g_{Vs}^{})^2-({\rm Im}\,g_{Vs}^{})^2
- 0.1\, ({\rm Re}\,g_{Vs}^{})({\rm Im}\,g_{Vs}^{}) \,\,<\,\, 3.6\times10^{-13} ~,
\nonumber \\ \phantom{|^{\int}}
&& -0.9\times10^{-13} \,\,<\,\, ({\rm Re}\,g_{As}^{})^2-({\rm Im}\,g_{As}^{})^2
- 0.1\, ({\rm Re}\,g_{As}^{})({\rm Im}\,g_{As}^{}) \,\,<\,\, 1.1\times10^{-13} ~.
\end{eqnarray}
If one assumes instead that $g_{Vq,Aq}^{}$ are real, then from Eqs.~(\ref{delta_d})-(\ref{red})
one can determine the allowed ranges of the couplings shown in Fig.~\ref{mix_bounds}.
\begin{figure}[ht] \vspace*{3ex}
\includegraphics[height=2.5in,width=2.6in]{fig_gv_ga_Bd_mixing.eps} \hspace{5ex}
\includegraphics[width=2.5in,trim=0 0 0 0,clip]{fig_gv_ga_Bs_mixing.eps} \vspace*{-1ex}
\caption{Parameter space of $g_{Vq}^{}$ and $g_{Aq}^{}$ subject to constraints from
$B_q$-$\bar B_q$ mixing, \,$q=d,s$,\, if $g_{Vq,Aq}^{}$ are taken to be real.\label{mix_bounds}}
\end{figure}
\subsection{Constraints from leptonic decays \,\boldmath$B_q\to\mu^+\mu^-$}
As the \,$B_q\to\mu^+\mu^-$\, width in Eq.~(\ref{rate_B2ll}) indicates, to determine
$g_{Aq}^{}$ requires knowing the $X\mu\mu$ coupling constant~$g_{A\mu}^{}$.
Since ${\cal L}_{\mu X}$ in Eq.~(\ref{LlX}) generates the contribution of $X$ to
the muon anomalous magnetic moment~$a_\mu^{}$, we may gain information on $g_{A\mu}^{}$
from~$a_\mu^{}$.
The $X$ contribution is calculated to be~\cite{He:2005we,Leveille:1977rc}
\begin{eqnarray} \label{amuX}
a_\mu^X \,\,=\,\,
\frac{m^2_\mu}{4\pi^2m^2_X}\bigl(g_{V\mu}^2\,f_V^{}(r)+g_{A\mu}^2\,f_A^{}(r)\bigr) \,\,=\,\,
1.1\times10^{-3}\,g_{V\mu}^2 \,-\, 9.0\times10^{-3}\,g_{A\mu}^2 ~,
\end{eqnarray}
where \,$r=m^2_\mu/m^2_X$,\,
\begin{eqnarray}
f_V^{}(r) \;=\; \int^1_0 dx\, \frac{x^2-x^3}{1-x +r x^2} ~, \hspace{5ex}
f_A^{}(r) \;=\; \int^1_0 dx\, \frac{-4 x+5 x^2-x^3-2r x^3}{1-x +r x^2} ~.
\end{eqnarray}
Presently there is a discrepancy of $3.2\sigma$ between the SM prediction for $a_\mu^{}$ and
its experimental value,
\,$\Delta a_\mu^{}=a_\mu^{\rm exp}-a_\mu^{\rm SM} =
(29\pm9)\times10^{-10}$\,~\cite{Jegerlehner:2009ry},
with \,$a_\mu^{\rm exp}=(11659208\pm6)\times 10^{-10}$\,~\cite{pdg}.
Consequently, since the $g_{V\mu}^{}$ and $g_{A\mu}^{}$ terms in $a_\mu^X$ are opposite in sign,
we require that \,$0<a_\mu^X<3.8\times10^{-9}$,\, which corresponds to the allowed parameter
space plotted in Fig.~\ref{amu_bounds}.
Avoiding tiny regions where the two terms in Eq.~(\ref{amuX}) have to conspire subtly to satisfy
the $a_\mu^X$ constraint, we then have
\begin{eqnarray}
g_{V\mu}^2 \,\,\lesssim\,\, 1\times10^{-5} ~, \hspace{5ex}
g_{A\mu}^2 \,\,\lesssim\,\, 1\times10^{-6} ~,
\end{eqnarray}
provided that \,$0<1.1\,g_{V\mu}^2-9.0\,g_{A\mu}^2<3.8\times10^{-6}$.\,
We note that combining these requirements for $g_{V\mu}^{}$ and $g_{A\mu}^{}$ with
Eq.~(\ref{rate_X2ll}) results in the width
\,$\Gamma\bigl(X\to\mu^+\mu^-\bigr)\lesssim1.8\times10^{-8}$\,GeV.\footnote{It is
worth mentioning here that in Ref.~\cite{He:2005we} the number for
\,$\Gamma\bigl(X_A\to\mu^+\mu^-\bigr)$\, in their Eq.~(18), corresponding to \,$g_{V\mu}^{}=0$\,
and \,$g_{A\mu}^2=6.7\times10^{-8}$,\, is too large by a factor of~3.}\,
Assuming that \,$B_{d,s}\to X^*\to\mu^+\mu^-$\, saturates the latest measured bounds
\,${\cal B}\bigl(B_d\to\mu^+\mu^-\bigr)<6.0\times 10^{-9}$\, and
\,${\cal B}\bigl(B_s\to\mu^+\mu^-\bigr)<3.6\times 10^{-8}$\,~\cite{hfag}, respectively,
we use Eq.~(\ref{rate_B2ll}) with \,$g_{A\mu}^2=1\times 10^{-6}$\, to extract
\begin{eqnarray} \label{B2ll_bounds}
|g_{Ad}^{}|^2 \,\,<\,\, 2.8\times10^{-14} ~, \hspace{5ex}
|g_{As}^{}|^2 \,\,<\,\, 1.2\times10^{-13} ~,
\end{eqnarray}
which are roughly comparable to the corresponding limits in Eq.~(\ref{Bmix_bounds})
from $B_q$-$\bar B_q$ mixing.
\begin{figure}[ht] \vspace*{2ex}
\includegraphics[width=2.5in]{fig_gv_ga_amu.eps} \vspace*{-1ex}
\caption{Parameter space of $g_{V\mu}^{}$ and $g_{A\mu}^{}$ subject to constraints from
the muon anomalous magnetic moment.\label{amu_bounds}} \vspace*{-3ex}
\end{figure}
\subsection{Constraints from inclusive decay \,\boldmath$b\to q\mu^+\mu^-$}
Since there is still no experimental data on the inclusive \,$b\to d\mu^+\mu^-$,\,
we consider only the \,$q=s$\, case.
Thus, employing Eq.~(\ref{rate_b2qX}) and the $B_d^0$ lifetime~\cite{pdg}, we find
\begin{eqnarray} \label{rate_b2sX}
{\cal B}(b\to s X) \,\,\simeq\,\, \frac{\Gamma(b\to s X)}{\Gamma_{B_d^0}}
\,\,=\,\, 8.55\times10^{13}\,\bigl(|g_{Vs}^{}|^2+|g_{As}^{}|^2\bigr) \,\,.
\end{eqnarray}
To get constraints on $g_{Vs,As}^{}$, it is best to examine the measured partial rate for
the smallest $m_{\mu\mu}^{}$ bin available which contains \,$m_{\mu\mu}^{}=m_X^{}$.\,
The most recent data have been obtained by the BaBar and Belle
collaborations~\cite{Aubert:2004it,Iwasaki:2005sy}, the former giving the more restrictive
\begin{eqnarray}
{\cal B}(b\to s\ell^+\ell^-)_{m_{\ell\ell}^{}\in[0.2{\rm\,GeV},1.0{\rm\,GeV}]} \,\,=\,\,
\bigl(0.08\pm 0.36^{+0.07}_{-0.04}\bigr)\times 10^{-6} ~,
\end{eqnarray}
which is the average over \,$\ell=e$ and~$\mu$.
This data allows us to demand that the $X$ contribution be below its 90\%-C.L.
upper bound.
With \,${\cal B}(X\to\mu^+\mu^-)=1$,\, it follows that
\begin{eqnarray}
{\cal B}(b\to s X) \,\,<\,\, 6.8\times 10^{-7} ~,
\end{eqnarray}
which in combination with Eq.~(\ref{rate_b2sX}) implies
\begin{eqnarray} \label{incl_bound}
|g_{Vs}^{}|^2 \,+\, |g_{As}^{}|^2 \,\,<\,\, 8.0\times 10^{-21} ~.
\end{eqnarray}
\subsection{Constraints from exclusive decays \,\boldmath$B\to P\mu^+\mu^-$}
It can be seen from Eq.~(\ref{M_B2PX}) that only the vector coupling $g_{Vq}^{}$ is relevant to
the \,$B\to P X$ decay, not~$g_{Aq}^{}$.
As mentioned earlier, the possibility of $X$ having vector couplings was not considered in
Ref.~\cite{Chen:2006xja}, and therefore \,$B\to PX$\, decays were not studied therein.
Currently there is experimental information available on
\,$B\to\pi\mu^+\mu^-$\, and \,$B\to K\mu^+\mu^-$\, that can be used to place constraints
on~$g_{Vq}^{}$. For the form factors $F_1^{BP}$, since they are functions of
\,$k^2=(p_B^{}-p_P^{})^2=m_X^2\ll m_B^2$,\,
it is a good approximation to take their values at \,$k^2=0$.\,
Thus, for \,$B\to(\pi,K)$\, we adopt those listed in Table~\ref{table1}.
Using Eq.~(\ref{rate_B2PX}), we then obtain
\begin{eqnarray} \label{rate_B2piX}
&& {\cal B} (B^+\to\pi^+ X) \,\,=\,\, 1.06\times10^{13}\, |g_{Vd}^{}|^2 ~, \hspace{5ex}
{\cal B}\bigl(B_d \to\pi^0X\bigr) \,\,=\,\, 4.96\times10^{12}\, |g_{Vd}^{}|^2 ~, ~~~
\\ && \hspace*{10ex}
{\cal B}\bigl(B^+\to K^+X\bigr) \,\,\simeq\,\, {\cal B}\bigl(B_d \to K^0X\bigr) \,\,=\,\,
1.85\times10^{13}\, |g_{Vs}^{}|^2 ~. \phantom{|^{\int}} \label{rate_B2KX}
\end{eqnarray}
Experimentally, at present there are only upper limits for \,${\cal B}(B\to\pi\mu^+\mu^-)$,\,
namely~\cite{hfag,Wei:2008nv}
\begin{eqnarray}
{\cal B}(B^+\to\pi^+\mu^+\mu^-) \,\,<\,\, 6.9 \times 10^{-8} ~, \hspace{5ex}
{\cal B}\bigl(B_d \to\pi^0\mu^+\mu^-\bigr) \,\,<\,\, 1.84 \times 10^{-7}
\end{eqnarray}
at 90\%~C.L.
Assuming that the contributions of \,$B\to\pi X\to\pi\mu^+\mu^-$\, saturate these bounds
and using Eq.~(\ref{rate_B2piX}) along with \,${\cal B}(X\to\mu^+\mu^-)=1$,\,
we find from the more stringent of them
\begin{eqnarray} \label{B2PX_bound}
|g_{Vd}^{}|^2 \,\,<\,\, 6.5 \times 10^{-21} ~.
\end{eqnarray}
For \,$B\to K\mu^+\mu^-$,\, there is data on the partial branching ratio that is pertinent
to \,$B\to K X$.\, The latest measurement provides
\,${\cal B}(B\to K\mu^+\mu^-)_{m_{\mu\mu}\le2{\rm\,GeV}}=
\bigl(0.81^{+0.18}_{-0.16}\pm0.05\bigr)\times 10^{-7}$\,~\cite{Wei:2009zv}.
The corresponding SM prediction is consistent with this data~\cite{Antonelli:2009ws}
and has an uncertainty of about~30\%~\cite{Bobeth:2007dw}. In view of this, we can
demand that \,${\cal B}(B\to K X\to K\mu^+\mu^-)$\, be less than 40\% of the central
value of the measured result.\footnote{In estimating ${\cal B}(B\to M X\to M\mu^+\mu^-)$,
we neglect the interference in the \,$B\to M\mu^+\mu^-$\, rate between the SM and $X$
contributions because $X$ is very narrow, having a width of \,$\Gamma_X\lesssim10^{-8}$\,GeV,\,
as found earlier.}
Thus, with \,${\cal B}(X\to\mu^+\mu^-)=1$,\, we have
\begin{eqnarray}
{\cal B}(B\to K X) \,\,<\,\, 3.2\times 10^{-8} ~.
\end{eqnarray}
Comparing this limit with Eq.~(\ref{rate_B2KX}) results in
\begin{eqnarray} \label{B2KX_bound}
|g_{Vs}^{}|^2 \,\,<\,\, 1.7\times 10^{-21} ~,
\end{eqnarray}
which is stronger than the $g_{Vs}^{}$ bound inferred from Eq.~(\ref{incl_bound}).
One can expect much better bounds on $g_{Vq}^{}$ from future measurements of
\,$B\to(\pi,K)\mu^+\mu^-$\, with $m_{\mu\mu}^{}$ values restricted within a small region
around \,$m_{\mu\mu}^{}=m_X^{}$.\,
\begin{table}[t]
\caption{Form factors relevant to \,$B\to P X$~\cite{Ball:2007hb}.} \smallskip
\begin{tabular}{|c|ccccccc|}
\hline
$$ & $\vphantom{\sum_|^|}\,$ $B_d\to\pi$ \, & \, $B_d\to\eta$ \, & \, $B_d\to\eta'$ \, &
\, $B_s\to K$ \, & \, $B_d\to K$ \, & \, $B_c\to D_d^+$ \, & \, $B_c\to D_s^+$ \, \\
\hline
\, $F_1^{BP}(0)\vphantom{\int_|^{|^|}}$ \, & 0.26 & 0.23 & 0.19 & 0.30 & 0.36 & 0.22 & 0.16 \\
\hline
\end{tabular} \label{table1} \medskip
\end{table}
\subsection{Constraints from exclusive decays \,\boldmath$B\to V\mu^+\mu^-$}
\begin{table}[b]
\caption{Form factors relevant to \,$B\to V X$~\cite{Ball:2004rg}.} \smallskip
\begin{tabular}{|c|ccccccc|}
\hline
\, $\vphantom{\sum_|^|}$\, & \, $B_d\to\rho$ \, & \, $B_d\to\omega$ \, & \, $B_s\to K^*$ \, &
\, $B_d\to K^*$ \, & \, $B_s\to\phi$ \, & \, $B_c\to D_d^{*+}$ \, & \, $B_c\to D_s^{*+}$ \, \\
\hline
\, $V^{BV}(0)\vphantom{\sum_|^|}$ \, & 0.32 & 0.29 & 0.31 & 0.41 & 0.43 & 0.63 & 0.54 \\
$A_1^{BV}(0)\vphantom{\sum_|^|}$ & 0.24 & 0.22 & 0.23 & 0.29 & 0.31 & 0.34 & 0.30 \\
$A_2^{BV}(0)\vphantom{\sum_|^|}$ & 0.22 & 0.20 & 0.18 & 0.26 & 0.23 & 0.41 & 0.36 \\
\hline
\end{tabular} \label{table2}
\end{table}
For \,$B\to V X$\, decays, the values of the relevant form factors at \,$k^2=0$\, are
listed in Table~\ref{table2}.
Employing those for \,$B=B_d$\, and \,$V=\rho,K^*$\, in Eq.~(\ref{rate_B2VX}), we find
\begin{eqnarray} \label{Br_rhoX1}
{\cal B}\bigl(B_d\to\rho^0 X\bigr) &=&
1.77 \times 10^{10}\,|g_{Vd}^{}|^2 + 6.18\times 10^{12}\,|g_{Ad}^{}|^2 ~, \nonumber \\
{\cal B}\bigl(B_d\to K^{*0}X\bigr) &=& \vphantom{|^{\int}}
5.45 \times 10^{10}\,|g_{Vs}^{}|^2 + 1.79\times 10^{13}\,|g_{As}^{}|^2 ~.
\end{eqnarray}
It is worth noting here that the dominance of the $g_{Aq}^{}$ terms in the preceding
formulas over the $g_{Vq}^{}$ terms also occurs in other \,$B\to V X$\, transitions and
corresponds to the fact that in the decay rate, Eq.~(\ref{rate_B2VX}), the $g_{Aq}^{}$ term
in~$|H_0^V|^2$ is significantly enhanced with respect to the $g_{Vq}^{}$ term
in~$|H_+^V|^2+|H_-^V|^2$.
Currently there is no published measurement of \,${\cal B}(B\to\rho\mu^+\mu^-)$,\,
but there are publicly available experimental data on \,${\cal B}(B\to K^*\mu^+\mu^-)$\,
for the $m_{\mu\mu}^{}$ bin containing \,$m_{\mu\mu}^{}=m_X^{}$,\, the most precise being
\,${\cal B}(B\to K^*\mu^+\mu^-)_{m_{\mu\mu}\le2\rm\,GeV}=
\bigl(1.46_{-0.35}^{+0.40}\pm0.11\bigr)\times10^{-7}$\,~\cite{Wei:2009zv}.
The corresponding SM prediction agrees with this data~\cite{Antonelli:2009ws}
and has an uncertainty of about~30\%~\cite{Beneke:2004dp}.
This suggests requiring \,${\cal B}(B\to K^*X\to K^*\mu^+\mu^-)$\, to be less than 40\% of
the central value of the measured result.
Thus, with \,${\cal B}(X\to\mu^+\mu^-)=1$,\, we have
\begin{eqnarray} \label{B_B2KX}
{\cal B}\bigl(B_d\to K^{*0} X\bigr) \,\,<\,\, 5.8\times10^{-8} ~.
\end{eqnarray}
In addition, very recently the Belle collaboration has provided a preliminary report on their
search for $X$ with spin~1 in \,$B\to\rho X\to\rho\mu^+\mu^-$\, and
\,$B\to K^*X\to K^*\mu^+\mu^-$.\,
They did not observe any event and reported the preliminary bounds~\cite{belle}
\begin{eqnarray}
{\cal B}\bigl(B_d\to\rho^0X,\,\rho^0\to\pi^+\pi^-{\rm\,\,and\,\,}X\to\mu^+\mu^-\bigr) &<&
0.81\times 10^{-8} ~, \nonumber \\
{\cal B}\bigl(B_d\to K^{*0}X,\,K^{*0}\to K^+\pi^-{\rm\,\,and\,\,}X\to\mu^+\mu^-\bigr) &<&
1.53\times 10^{-8} \vphantom{|^{\int}}
\end{eqnarray}
at 90\%~C.L.
Since \,${\cal B}(\rho^0\to\pi^+\pi^-)\simeq1$\, and \,${\cal B}(K^{*0}\to K^+\pi^-)\simeq2/3$,\,
these numbers translate into
\begin{eqnarray} \label{B_B2VX}
{\cal B}\bigl(B_d\to\rho^0X\bigr) \,\,<\,\, 0.81\times 10^{-8} ~, \hspace{5ex}
{\cal B}\bigl(B_d\to K^{*0}X\bigr) \,\,<\,\, 2.3\times 10^{-8} ~,
\end{eqnarray}
the second one being more restrictive than the constraint in Eq.~(\ref{B_B2KX}).
In the absence of more stringent limits, in the following we use these numbers inferred from
the preliminary Belle results.
Accordingly, applying the limits in Eq.~(\ref{B_B2VX}) to Eq.~(\ref{Br_rhoX1}) yields
\begin{eqnarray} \label{B2rX_bound}
0.00286\,|g_{Vd}^{}|^2 \,+\, |g_{Ad}^{}|^2 &<& 1.3\times10^{-21} ~, \\ \label{B2KsX_bound}
0.00304\,|g_{Vs}^{}|^2 \,+\, |g_{As}^{}|^2 &<& 1.3\times10^{-21} ~. \vphantom{|^{\int}}
\end{eqnarray}
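For the record, the two inequalities above follow from elementary arithmetic on the Belle limits and the coefficients of Eq.~(\ref{Br_rhoX1}); the following sketch (illustrative only) reproduces them:
\begin{verbatim}
# Eqs. (B2rX_bound) and (B2KsX_bound): correct each Belle limit for the
# V -> final-state branching fraction, then divide Eq. (Br_rhoX1) through
# by the coefficient of |g_Aq|^2.
modes = {
    # (Belle limit, B(V -> f), c_V, c_A), coefficients from Eq. (Br_rhoX1)
    "rho0": (0.81e-8, 1.0,       1.77e10, 6.18e12),
    "K*0":  (1.53e-8, 2.0 / 3.0, 5.45e10, 1.79e13),
}
for mode, (lim, br_v, c_v, c_a) in modes.items():
    br = lim / br_v
    print(f"{mode}: B < {br:.1e}, c_V/c_A = {c_v / c_a:.5f}, "
          f"|g_A|^2-term bound = {br / c_a:.1e}")
# -> 0.00286 |g_Vd|^2 + |g_Ad|^2 < 1.3e-21
#    0.00304 |g_Vs|^2 + |g_As|^2 < 1.3e-21
\end{verbatim}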
The $g_{As}^{}$ bound implied by the last equation can be seen to be stricter than that
from Eq.~(\ref{incl_bound}).
From Eqs.~(\ref{B2PX_bound}), (\ref{B2KX_bound}), (\ref{B2rX_bound}), and (\ref{B2KsX_bound}),
we can then extract the individual limits
\begin{eqnarray} \label{gd_bounds}
|g_{Vd}^{}|^2 \,\,<\,\, 6.5\times10^{-21} ~, & ~~~~~ &
|g_{Ad}^{}|^2 \,\,<\,\, 1.3\times10^{-21} ~, \\ \label{gs_bounds}
|g_{Vs}^{}|^2 \,\,<\,\, 1.7\times10^{-21} ~, & ~~~~~ &
|g_{As}^{}|^2 \,\,<\,\, 1.3\times10^{-21} ~. \vphantom{|^{\int}}
\end{eqnarray}
These bounds are clearly much stronger than those in Eqs.~(\ref{Bmix_bounds}) and~(\ref{B2ll_bounds})
derived from $B_q^0$-$\bar B_q^0$ mixing and \,$B_q^0\to\mu^+\mu^-$,\, respectively.
Also, combining Eqs.~(\ref{B2PX_bound}) and~(\ref{B2rX_bound}), we have plotted the allowed parameter
space of $g_{Vd}^{}$ and $g_{Ad}^{}$ in Fig.~\ref{g_plots}(a) under the assumption that they
are real. Similarly, Fig.~\ref{g_plots}(b) shows the $g_{Vs}^{}$-$g_{As}^{}$ region allowed
by Eqs.~(\ref{B2KX_bound}) and~(\ref{B2KsX_bound}).
\begin{figure}[ht]
\includegraphics[height=2.5in,width=2.5in]{fig_gvd_gad.eps} \hspace{5ex}
\includegraphics[height=2.5in,width=2.5in]{fig_gvs_gas.eps} \vspace*{-1ex}
\caption{Parameter space of $g_{Vq}^{}$ and $g_{Aq}^{}$, taken to be real, subject to constraints on
(a)~$B\to\pi X$ (lightly shaded, yellow region), \,$B\to\rho X$ (medium shaded, green region),
and both of them (heavily shaded, red region) and
(b)~$B\to K X$ (lightly shaded, yellow region), \,$B\to K^*X$ (medium shaded, green region),
and both of them (heavily shaded, red region).\label{g_plots}}
\end{figure}
\subsection{Predictions for \,\boldmath$B\to MX$\, decays, \,$M=P,V,S,A$}
We can now use the results above to predict the upper limits for branching ratios of
a number of additional $B$-decays involving $X$. Specifically, we explore two-body decays of
$B_{d,s}^0$ and $B_{u,c}^{}$ into $X$ and some of the lightest mesons~$M$.
We deal with \,$M=P$, $V$, $S$, and $A$\, in turn.
The $g_{Vd}^{}$ bound in Eq.~(\ref{gd_bounds}) leads directly to limits on
the branching ratios of \,$B_d^0\to\pi^0X$,\, \,$B_d^0\to\eta^{(\prime)}X$,\,
\,$B_s^0\to K^0X$,\, and \,$B_c^{}\to D_d^+X$.\,
Thus, from Eq.~(\ref{rate_B2piX}) it follows that
\begin{eqnarray}
{\cal B}\bigl(B_d^0\to\pi^0X\bigr) \,\,<\,\, 3.2\times10^{-8} ~.
\end{eqnarray}
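This number is simply the product of the rate coefficient in Eq.~(\ref{rate_B2piX}) and the $g_{Vd}^{}$ bound of Eq.~(\ref{gd_bounds}); schematically:
\begin{verbatim}
# B(Bd -> pi0 X) upper limit from Eq. (rate_B2piX) and Eq. (gd_bounds):
print(f"B(Bd -> pi0 X) < {4.96e12 * 6.5e-21:.1e}")   # -> 3.2e-8
\end{verbatim}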
Furthermore, employing Eq.~(\ref{rate_B2PX}) and Table~\ref{table1}, with
\,$\kappa_\eta^{}=\kappa_{\eta'}^{}=\sqrt2$,\, one gets
\begin{eqnarray}
{\cal B}\bigl(B_d^0\to\eta X\bigr) \,\,<\,\, 2.4\times10^{-8} ~, & ~~~~~ &
{\cal B}\bigl(B_d^0\to\eta'X\bigr) \,\,<\,\, 1.6\times10^{-8} ~, \nonumber \\
{\cal B}\bigl(B_s^0\to K^0X\bigr) \,\,<\,\, 8.2\times10^{-8} ~, & ~~~~~ &
{\cal B}\bigl(B_c^{}\to D_d^+X\bigr) \,\,<\,\, 1.7\times10^{-8} ~. \vphantom{|^{\int}}
\end{eqnarray}
Similarly, the $g_{Vs}^{}$ bound in Eq.~(\ref{gs_bounds}) implies
\begin{eqnarray}
&& {\cal B}\bigl(B_s^0\to\eta X\bigr) \,\,<\,\, 1.2\times10^{-8} ~, \hspace{5ex}
{\cal B}\bigl(B_s^0\to\eta'X\bigr) \,\,<\,\, 1.7\times10^{-8} ~, \nonumber \\ && \hspace*{15ex}
{\cal B}\bigl(B_c^{}\to D_s^+X\bigr) \,\,<\,\, 2.3\times10^{-9} ~, \vphantom{|^{\int}}
\end{eqnarray}
where the first two numbers have been calculated using \,$\kappa_\eta^{}=\kappa_{\eta'}^{}=1$,\,
\,$F_1^{B_s\eta}(0)=-F_1^{B_dK}(0)\,\sin\varphi$,\, and
\,$F_1^{B_s\eta'}(0)=F_1^{B_dK}(0)\,\cos\varphi$\,~\cite{Carlucci:2009gr}, with $F_1^{B_dK}(0)$
from Table~\ref{table1} and \,$\varphi=39.3^\circ$\,~\cite{Feldmann:1998vh}.
The $g_{Vq}^{}$ and $g_{Aq}^{}$ bounds in Eqs.~(\ref{gd_bounds}) and~(\ref{gs_bounds}),
together with Fig.~\ref{g_plots}, lead to upper limits for the branching ratios of
several other \,$B\to V X$\, decays.
Thus, combining Eq.~(\ref{rate_B2VX}) with the relevant form-factors in Table~\ref{table2}
yields for \,$q=d$\,
\begin{eqnarray}
{\cal B}(B^+\to\rho^+X) \,\,<\,\, 1.7\times 10^{-8}~, & ~~~~~ &
{\cal B}\bigl(B_d^0\to\omega X\bigr) \,\,<\,\, 7.0\times 10^{-9} ~, \nonumber \\
{\cal B}\bigl(B_s^0\to K^{*0}X\bigr) \,\,<\,\, 2.2\times 10^{-8} ~, & ~~~~~ &
{\cal B}\bigl(B_c^{}\to D_d^{*+}X\bigr) \,\,<\,\, 5.0\times 10^{-9} \vphantom{|^{\int}}
\end{eqnarray}
and for \,$q=s$\,
\begin{eqnarray}
{\cal B}\bigl(B_s^0\to\phi X\bigr) \,\,<\,\, 3.9\times 10^{-8} ~, \hspace{5ex}
{\cal B}\bigl(B_c^{}\to D_s^{*+}X\bigr) \,\,<\,\, 3.9\times 10^{-9} ~,
\end{eqnarray} \nopagebreak
where \,$|\phi\rangle\simeq|s\bar s\rangle$\, has been assumed.
In contrast to the \,$B\to P X$ case, $g_{Aq}^{}$ is the only coupling relevant to
\,$B\to SX$\, decays, as Eq.~(\ref{M_B2SX}) indicates.
From the $g_{Aq}^{}$ bounds found above, we can then estimate the branching ratios of some of
these decays.
Since the quark contents of many of the scalar mesons below 2\,GeV are not yet well established,
we consider only the decays with \,$S=a_0^{}(1450)$ and $K_0^*(1430)$,\, which are perhaps
the least controversial of the light scalar mesons~\cite{pdg}.
Adopting the form-factor values \,$F_1^{B_da_0(1450)}(0)=0.26$\, and
\,$F_1^{B_dK_0^*(1430)}(0)=0.26$\,~\cite{Cheng:2003sm}, we use Eq.~(\ref{rate_B2PX})
with \,$\kappa_S^{}=1$\, for \,$S=a_0^+(1450),K_0^*(1430)$\, and
\,$\kappa_S^{}=-\sqrt2$\, for \,$S=a_0^0(1450)$,\, as well as the $g_{Aq}^{}$ limits in
Eqs.~(\ref{gd_bounds}) and~(\ref{gs_bounds}), to obtain
\begin{eqnarray}
&& {\cal B}\bigl(B^+\to a_0^+(1450)X\bigr) \,\,<\,\, 1.1\times10^{-8} ~, \hspace{5ex}
{\cal B}\bigl(B_d^0\to a_0^0(1450)X\bigr) \,\,<\,\, 5.1\times10^{-9} ~, \nonumber \\ && \hspace*{5ex}
{\cal B}\bigl(B^+\to K_0^{*+}(1430)X\bigr) \,\,\simeq\,\,
{\cal B}\bigl(B_d^0\to K_0^{*0}(1430)X\bigr) \,\,<\,\, 1.0\times10^{-8} ~. \vphantom{|^{\int}}
\end{eqnarray}
Similarly to the \,$B\to V X$\, case, both $g_{Vq,Aq}^{}$ contribute to \,$B\to A X$,\,
as Eq.~(\ref{M_B2AX}) shows.
We will consider the decays with the lightest axial-vector mesons
\,$A=a_1^{}(1260)$, $b_1^{}(1235)$, $K_1(1270)$, and $K_1(1400)$.
The latter two are mixtures of the $K_{1A}$ and $K_{1B}$ states~\cite{pdg}, namely
\,$K_1(1270)=K_{1A}\,\sin\theta+K_{1B}\,\cos\theta$\, and
\,$K_1(1400)=K_{1A}\,\cos\theta-K_{1B}\,\sin\theta$,\, with \,$\theta=58^\circ$,\,
\,$m_{K_{1A}}=1.37$\,GeV,\, and \,$m_{K_{1B}}=1.31$\,GeV\,~\cite{Chen:2005cx}.
Incorporating the bounds in Eqs.~(\ref{gd_bounds}) and~(\ref{gs_bounds}) into
Eq.~(\ref{rate_B2VX}) with \,$\kappa_A^{}=1$\, for \,$A=a_1^+,b_1^+,K_1^{}$\, and
\,$\kappa_A^{}=-\sqrt2$\, for \,$A=a_1^0,b_1^0$,\, as well as the form factors listed
in Table~\ref{table3}, we arrive at
\begin{eqnarray} \label{B2AX}
{\cal B}\bigl(B^+\to a_1^+(1260)X\bigr) &\,\simeq\,&
2 {\cal B}\bigl(B_d^0\to a_1^0(1260)X\bigr) \,\,<\,\, 1.6\times10^{-8} ~, \nonumber \\
{\cal B}\bigl(B^+\to b_1^+(1235)X\bigr) &\,\simeq\,&
2 {\cal B}\bigl(B_d^0\to b_1^0(1235)X\bigr) \,\,<\,\, 1.2\times10^{-7} ~, \vphantom{|^{\int}}
\nonumber \\
{\cal B}\bigl(B^+\to K_1^{+}(1270)X\bigr) &\,\simeq\,&
{\cal B}\bigl(B_d^0\to K_1^{0}(1270)X\bigr) \,\,<\,\, 2.6\times10^{-8} ~, \vphantom{|^{\int}}
\nonumber \\
{\cal B}\bigl(B^+\to K_1^{+}(1400)X\bigr) &\,\simeq\,&
{\cal B}\bigl(B_d^0\to K_1^{0}(1400)X\bigr) \,\,<\,\, 1.3\times10^{-8} ~. \vphantom{|^{\int}}
\end{eqnarray}
\begin{table}[b]
\caption{Form factors relevant to \,$B\to A X$~\cite{Cheng:2003sm}.} \smallskip
\begin{tabular}{|c|cccc|}
\hline
\, $\vphantom{\sum_|^|}$\, & \, $B_d\to a_1^{}(1260)$ \, & \, $B_d\to b_1^{}(1235)$ \, &
\,\, $B_d\to K_{1A}$ \,\, & \,\, $B_d\to K_{1B}$ \,\, \\
\hline
\, $A^{BA}(0)\vphantom{\sum_|^|}$ \, & 0.25 & 0.10 & 0.26 & 0.11 \\
$V_1^{BA}(0)\vphantom{\sum_|^|}$ & 0.37 & 0.18 & 0.39 & 0.19 \\
$V_2^{BA}(0)\vphantom{\sum_|^|}$ & 0.18 & $-$0.03~~ & 0.17 & $-$0.05~~ \\
\hline
\end{tabular} \label{table3}
\end{table}
Before ending this section, we would like to make a few more remarks regarding our results above.
The branching ratios of \,$B^+\to\rho^+X$,\, \,$B_s^0\to\phi X$,\,
\,$B_d^0\to K_0^*(1430)X$,\, and \,$B\to K_1^{}X$\, were also estimated in
Ref.~\cite{Chen:2006xja} under the assumption that the vector couplings \,$g_{Vd,Vs}^{}=0$.\,
Compared to their numbers, our \,$B^+\to\rho^+X$\, result above is of similar order,
but our numbers for \,$B_s^0\to\phi X$\, and \,$B_d^{}\to K_0^*(1430)X$\, are smaller
by almost two orders of magnitude. This is mostly due to the more recent data that we have
used to extract the $g_{Aq}^{}$ values.
On the other hand, our results for \,$B\to K_1^{}(1270)X,\,K_1^{}(1400)X$\, are larger
than the corresponding numbers in Ref.~\cite{Chen:2006xja} by up to two orders of magnitude.
The main cause of this enhancement is the nonzero contributions of~$g_{Vs}^{}$ to their decay
rates. As one can see in Eq.~(\ref{rate_B2VX}) for the \,$B\to AX$\, rate, the $g_{Vq}^{}$
term in $|H_0^A|^2$ is significantly greater than the $g_{Aq}^{}$ term in
\,$|H_+^A|^2+|H_-^A|^2$.\,
For the same reason, without $g_{Vd}^{}$, the \,$B\to a_1^{}X,\,b_1^{}X$\, branching ratios
in Eq.~(\ref{B2AX}) would be orders of magnitude smaller.
Thus our inclusion of the vector couplings of $X$ has not only given rise to nonvanishing
\,$B\to PX$\, decays, but also helped make most of our predicted \,$B\to M X$\, branching
ratios as large as $10^{-8}$ to $10^{-7}$, which are within the reach of near-future
$B$ measurements.
\section{Conclusions}
Recent searches carried out by the CLEO, BaBar, E391a, KTeV, and Belle collaborations for
the HyperCP particle, $X$, have so far come back negative.
Furthermore, the new preliminary result from KTeV has led to significant experimental
restrictions on the $sdX$ pseudoscalar coupling in the scenario where $X$
is a spinless particle and has negligible four-quark flavor-changing interactions.
In contrast, the possibility that $X$ is a spin-1 particle is not well challenged by
experiment yet.
In this paper, we have investigated some of the consequences of this latter possibility.
Specifically, taking a~model-independent approach, we have allowed $X$ to have both
vector and axial-vector couplings to ordinary fermions.
Assuming that its four-quark flavor-changing contributions are not important compared to
its two-quark $bqX$ interactions, we have systematically studied the contributions of $X$
to various processes involving $b$-flavored mesons, including $B_q$-$\bar B_q$ mixing,
\,$B_q\to\mu^+\mu^-$,\, inclusive \,$b\to q\mu^+\mu^-$,\, and exclusive \,$B\to M\mu^+\mu^-$\,
decays, with \,$q=d,s$\, and $M$ being a spinless or \mbox{spin-1} meson.
Using the latest experimental data, we have extracted bounds on the couplings of $X$ and
subsequently predicted the branching ratios of a number of \,$B\to M X$\, decays, where $M$
is a~pseudoscalar, vector, scalar, or axial-vector meson.
The presence of the vector couplings $g_{Vq}^{}$ of $X$ has caused the decays with
a pseudoscalar $M$ to occur and also greatly enhanced the branching ratios of the decays
with an axial-vector~$M$.
The \,$B\to M X$\, branching ratios that we have estimated can reach the $10^{-7}$ level,
as in the cases of \,$B_s^0\to K^0X$\, and \,$B^+\to b_1^+(1235)X$,\,
which is comparable to the preliminary upper limits for the branching ratios of
\,$B_d\to\rho^0X,\,K^{*0}X$\, recently measured by Belle.
Therefore, we expect that the $B$ decays that we have considered here can be probed by
upcoming $B$ experiments, which may help confirm or rule out the new-particle interpretation
of the HyperCP result.
\acknowledgments
This work was supported in part by NSC and NCTS.
We thank Hwanbae Park and HyoJung Hyun for valuable discussions on experimental results.
We also thank X.G.~He and G.~Valencia for helpful comments.
\section{Introduction}
The effect of disorder on different types of condensed-matter orderings is nowadays a subject of considerable interest \cite{binder,ghosal}. In the case of disordered magnetic systems, random-field spin models have been studied systematically, not only out of theoretical interest, but also because of identifications with experimental realizations \cite{belanger}. An interesting issue is how quenched randomness
destroys certain types of criticality. Concerning the effect produced by random fields in low dimensions, it has been noticed \cite{hui,berker} that first-order transitions are replaced by continuous ones, so tricritical points and critical end points are depressed in temperature, and a finite amount of disorder suppresses them. Moreover, in two dimensions, an infinitesimal amount of field randomness seems to destroy any first-order transition \cite{wehr, boechat}. Interestingly, the simplest model exhibiting a tricritical phase diagram in the absence of randomness is the Blume-Capel model.
The Blume-Capel model \cite{blume,capel} is a spin-1 Ising model used to describe $\bf ^{4}He-^{3}He$ mixtures \cite{emeryg}. Its interesting feature is the existence of a tricritical point in the phase diagram in the plane of temperature versus crystal field, as shown in Figure 1. This phase diagram was first obtained in the mean-field approach, but the same qualitative properties were also observed in low dimensions, as confirmed by several approximation techniques as well as by Monte Carlo simulations \cite{mahan,jain,grollau,kutlu,seferoglu}. In particular, the tricritical behavior persists in two dimensions \cite{clusel,care,paul,caparica}. Nevertheless, in other models this situation is controversial. For example, the random-field Ising model in the mean-field approach \cite{aharony} also exhibits a tricritical point, but Monte Carlo simulations \cite{fytas} on the cubic lattice suggest that this is only an artifact of the mean-field calculation. This interesting feature of the Blume-Capel model motivated several authors to explore its richness, within the mean-field approach, by introducing disorder in the crystal field \cite{hamid,benyoussef,salinas,carneiro} as well as by adding an external random field \cite{miron}. In the former case, a variety of phase diagrams was obtained, including different critical points, with topologies similar to some of those found for the random-field spin-$1/2$ Ising model \cite{kaufman,octavio}. However, in those studies the fourth-order critical points, which limit the existence of tricritical points, were overlooked. Consequently, our aim in this work is to improve upon those previous studies by considering a more general probability distribution function for the crystal field, refining some results given in references \cite{benyoussef,salinas,carneiro}. The next section defines the model and the special critical points it produces.
\vskip \baselineskip
\begin{figure}[htp]
\begin{center}
\includegraphics[height=5.5cm]{fig1.eps}
\end{center}
\caption[]{\footnotesize Phase diagram of the Blume-Capel model in the plane $k_{B}T/J-\Delta/J$ within the mean-field approach, where $k_{B}$ is the Boltzmann constant, $T$ is the temperature, $J>0$ is the coupling constant between each pair of spins, and $\Delta$ is the crystal field (also called the anisotropy field). The black circle represents the tricritical point. The ferromagnetic and paramagnetic phases are represented by $\bf F$ and $\bf P$, respectively. The full line represents the continuous or second-order critical frontier, and the dotted
line is for the first-order frontier.}
\label{n}
\end{figure}
\vskip \baselineskip
\section{The Model}
The infinite-range-interaction Blume-Capel model is given by the following
Hamiltonian
\begin{equation} {\cal H} = -\frac{J}{N}\sum_{(i,j)} S_{i}S_{j} + \sum_{i} \Delta_{i} S_{i}^{2} ~, \label{Ham} \end{equation}
\vskip \baselineskip
where $S_{i}=-1,0,1$, and $N$ is the number of spins. The first sum runs over all distinct pairs of spins.
The coupling constant $J$ is divided by $N$ in order to maintain the extensivity.
The crystal fields are represented by quenched variables $\{ \Delta_{i} \}$, obeying
the probability distribution function (PDF) given by,
\begin{equation}P(\Delta_{i}) = \frac{p}{\sqrt{2\pi} \,\sigma} \exp \left[ - \frac{(\Delta_{i}-\Delta)^{2}}{2{\sigma}^{2}} \right] + \frac{(1-p)}{\sqrt{2\pi} \,\sigma} \exp \left[ - \frac{{\Delta_{i}}^{2}}{2{\sigma}^{2}} \right ] ~, \label{PDF} \end{equation}
\vskip \baselineskip
which consists of a superposition of two independent Gaussian distributions
with the same width $\sigma$, centered at $\Delta_{i}=\Delta$ and $\Delta_{i}=0$, with probabilities $p$ and $(1-p)$, respectively.
For $\sigma =0$, we recover the bimodal distribution studied in references \cite{benyoussef,salinas}, and
for $p=1$, the simple Gaussian one of reference \cite{carneiro}. For $\sigma=0$ and $p=1$, we recover the simple Blume-Capel model without randomness \cite{emeryg}. \\
By standard procedures \cite{octavio}, we get the analytical expression for the free energy per spin ($f$), from which a self-consistent equation for the magnetization $m$ may be obtained. Thus, we have the following relations in equilibrium,
\begin{equation} f = \frac{1}{2}Jm^{2}
-\frac{1}{\beta} E \left \{ \log(2\exp(-\beta\Delta_{i}) \cosh(\beta J m) +1) \right \} ~, \label{f} \end{equation}
\begin{equation} m = \sinh(\beta J m) E \left \{ {\left [ \cosh(\beta J m) +\frac{1}{2} \exp(\beta \Delta_{i}) \right ]}^{-1} \right \} ~, \label{m} \end{equation}
\vskip \baselineskip
where the quenched average, represented by $E \{...\}$, is taken with respect to the PDF given
in Eq. (\ref{PDF}), and $\beta = 1/(k_{B}T)$. To write conditions for locating tricritical and fourth-order critical points, we expand the right-hand side of Eq. (\ref{m}) in powers of $m$ (Landau expansion, see \cite{stanley}). Conveniently, we expand the magnetization up to seventh order in $m$, so
\begin{equation} m= A_{1} m + A_{3}m^{3}+A_{5}m^{5}+A_{7}m^{7}+... ~,\end{equation}
where
\begin{equation} A_{1} = \beta J\, E\{ g_{i} \} ~,\end{equation}
\begin{equation} A_{3} = (\beta J)^{3}\, E\{(\frac{1}{6}g_{i} -\frac{1}{2}g_{i}^{2}) \} ~, \end{equation}
\begin{equation} A_{5} = (\beta J)^{5}\, E\{ (\frac{1}{120}g_{i} - \frac{1}{8}g_{i}^{2} +\frac{1}{4}g_{i}^{3}) \} ~, \end{equation}
\begin{equation} A_{7} = (\beta J)^{7}\, E\{ ( \frac{1}{5040}g_{i} -\frac{1}{80}g_{i}^{2} + \frac{1}{12}g_{i}^{3} -\frac{1}{8}g_{i}^{4} ) \} ~,\end{equation}
and
\begin{equation} g_{i} = (1+\frac{1}{2} \exp(\beta \Delta_{i}))^{-1} ~. \end{equation}
\vskip \baselineskip
In order to obtain the continuous critical frontier one sets $A_{1}=1$, provided
that $A_{3} <0$. If a first-order critical frontier begins after the continuous one, the latter
line ends at a tricritical point if $A_{3}=0$, provided that $A_{5}<0$. A fourth-order critical point occurs for $A_{1}=1$, $A_{3}=0$, $A_{5}=0$, and $A_{7}<0$. Thus, a fourth-order point may be regarded as the last
tricritical point. \\
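To illustrate how these conditions can be handled in practice, the sketch below (in units \,$J=k_{B}=1$;\, the point $(\Delta,\sigma,p)=(0.3,\,0.1,\,0.85)$ is arbitrary and chosen only for illustration) evaluates $A_{1}$ and $A_{3}$ with the quenched average over the PDF of Eq.~(\ref{PDF}) performed by Gauss-Hermite quadrature, and solves $A_{1}=1$ for the continuous critical temperature:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

x, w = np.polynomial.hermite.hermgauss(80)  # nodes for int e^{-x^2} f(x) dx

def E(func, delta, sigma, p):
    """Quenched average E{func(Delta_i)} over the two-Gaussian PDF, Eq. (2)."""
    avg = lambda c: np.sum(w * func(c + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)
    return p * avg(delta) + (1.0 - p) * avg(0.0)

def g(d, b):                                # g_i of Eq. (10)
    return 1.0 / (1.0 + 0.5 * np.exp(np.clip(b * d, -700.0, 700.0)))

def A1(T, delta, sigma, p):
    b = 1.0 / T
    return b * E(lambda d: g(d, b), delta, sigma, p)

def A3(T, delta, sigma, p):
    b = 1.0 / T
    return b**3 * E(lambda d: g(d, b) / 6.0 - 0.5 * g(d, b)**2, delta, sigma, p)

# continuous critical temperature at the illustrative point; the line is
# second order as long as A3 < 0 there, and a tricritical point is
# signalled by a sign change of A3 along the A1 = 1 line
Tc = brentq(lambda T: A1(T, 0.3, 0.1, 0.85) - 1.0, 0.02, 2.0)
print(Tc, A3(Tc, 0.3, 0.1, 0.85))
\end{verbatim}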
By taking $\beta \to \infty$ ($T \to 0$), we get the asymptotic limit of Eqs. (\ref{f}) and (\ref{m}), so we have
\begin{eqnarray}\nonumber
f & = & \frac{1}{2} Jm^{2} - p\left( \frac{1}{2} (Jm-\Delta) \left ( 1+ {\rm erf} \left [ \frac{Jm-\Delta}{\sqrt{2} \, \sigma} \right ] \right ) +\frac{\sigma}{\sqrt{2\pi}} \exp \left [ - \frac{(Jm-\Delta)^{2}}{2 \, {\sigma}^{2}} \right ] \right ) \nonumber \\
& - & (1-p) \left ( \frac{1}{2} Jm \left ( 1+ {\rm erf} \left[\frac{Jm}{\sqrt{2} \, \sigma} \right ] \right) +\frac{\sigma}{\sqrt{2\pi}} \exp \left [ - \frac{J^{2} m^{2}}{2 \, {\sigma}^{2}} \right ] \right )
~, \end{eqnarray}
\begin{equation} m = \frac{p}{2} \left ( 1+ {\rm erf} \left [ \frac{Jm -\Delta}{\sqrt{2}\, \sigma} \right ] \right ) + \frac{(1-p)}{2} \left ( 1+ {\rm erf} \left [ \frac{Jm}{\sqrt{2}\, \sigma} \right ] \right ) ~,
\end{equation}
where
\begin{equation} {\rm erf } \left ( \frac{x}{\sqrt{2}} \right ) = \sqrt{\frac{2}{\pi}} \int_{0}^{x} dz e^{-z^{2}/2} ~. \end{equation}
\vskip \baselineskip
The critical frontiers, for a given pair $(\sigma,p)$, are obtained by solving a non-linear set of equations, which consists of equating the free energies of the corresponding phases (Maxwell's construction), together with the respective magnetization equations, based on the relations given in Eqs. (\ref{f}) and (\ref{m}). We must carefully verify that every numerical solution minimizes the free energy. \\
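A minimal sketch of this procedure (continuing the Python fragment above, i.e., reusing \texttt{E} and working in units $J=1$; the grid resolution and root bracketing are arbitrary choices) is:
\begin{verbatim}
# Maxwell's construction: collect the roots of Eq. (4) on a grid, refine
# them with brentq, and keep the branch of lowest free energy, Eq. (3);
# a first-order point is where two branches become degenerate.
def free_energy(m, T, delta, sigma, p):
    b = 1.0 / T
    term = lambda d: np.log(2.0 * np.exp(np.clip(-b * d, -700.0, 700.0))
                            * np.cosh(b * m) + 1.0)
    return 0.5 * m**2 - T * E(term, delta, sigma, p)

def self_consistency(m, T, delta, sigma, p):
    b = 1.0 / T
    ker = lambda d: 1.0 / (np.cosh(b * m)
                           + 0.5 * np.exp(np.clip(b * d, -700.0, 700.0)))
    return np.sinh(b * m) * E(ker, delta, sigma, p) - m

def branches(T, delta, sigma, p, n=400):
    ms = np.linspace(1e-4, 1.0, n)
    vals = np.array([self_consistency(m, T, delta, sigma, p) for m in ms])
    roots = [brentq(self_consistency, ms[i], ms[i + 1],
                    args=(T, delta, sigma, p))
             for i in range(n - 1) if vals[i] * vals[i + 1] < 0.0]
    return [(m, free_energy(m, T, delta, sigma, p)) for m in roots]
\end{verbatim}
Every candidate solution returned by \texttt{branches} must, of course, still be compared with $m=0$ and with the other roots, as stressed above.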
The symbols used to represent the different critical lines and points \cite{octavio} are as follows:
\begin{itemize}
\item Continuous or second-order critical frontier: continuous line;
\item First-order critical frontier: dotted line;
\item Tricritical point: located by a black circle;
\item Fourth-order critical point: located by an empty square;
\item Ordered critical point: located by an asterisk;
\item Critical end point: located by a black triangle.
\end{itemize}
To clarify: by a continuous critical frontier we mean one separating two distinct phases
across which the order parameter changes continuously, in contrast to a first-order transition,
across which the order parameter changes discontinuously, so that the two corresponding phases coexist at each point of the frontier.
A tricritical point is the point at which a continuous line terminates and gives rise to a first-order critical line.
A fourth-order critical point is sometimes called a vestigial tricritical point, because it may be regarded as the last tricritical point. An ordered critical point is the point, inside an ordered region, where a first-order critical line ends; above it the order parameter passes smoothly from one ordered phase to the other.
Finally, a critical end point corresponds to the intersection of a continuous
line, separating the paramagnetic phase from one of the ferromagnetic phases, with a first-order
line separating the paramagnetic and the other ferromagnetic phase. In the following section we make use of these
definitions.
\section{Results and Discussion}
The distinct phase diagrams for the present model were obtained numerically by scanning the whole $p$-domain for each width $\sigma$. Thus, distinct topologies belonging to different $p$-ranges were found for a given $\sigma$. For instance, Figure 2 shows the whole variety of them for the small width $\sigma/J=0.1$, each for a representative value of $p$.
\begin{figure}[htp]
\centering
\subfigure[][] {\includegraphics[width=5.0cm]{fig2a.eps}}
\vspace{0.7cm}
\subfigure[][] {\includegraphics[width=5.0cm]{fig2b.eps}}
\vspace{0.4cm}
\subfigure[][]{\includegraphics[width=5.0cm]{fig2c.eps}}
\vspace{0.4cm}
\subfigure[][] {\includegraphics[width=5.0cm]{fig2d.eps}}
\subfigure[][]{\includegraphics[width=5.0cm]{fig2e.eps}}
\subfigure[][]{\includegraphics[width=5.0cm]{fig2f.eps}}
\caption{\footnotesize Phase diagrams of the Blume-Capel model whose crystal field obeys the PDF given in Eq.~(\ref{PDF}). For $\sigma/J=0.1$, the diagrams show a variety of topologies according to the probability $p$. For convenience, we classify them into four topologies: (a) shows Topology I; (b) and (c) show Topology II; (d) shows Topology III; and (e) and (f) represent Topology IV. }
\label{imagenes}
\end{figure}
Note that for small values of $p$, only one ferromagnetic phase appears at low temperatures, as shown in Figure 2(a) for $p=0.15$. We designate it as Topology I. Figures 2(b) and (c) ($p=0.5,0.8$) represent the same topology (Topology II), which consists of one first-order critical line separating two different ferromagnetic phases $\bf F_{1}$ and $\bf F_{2}$, and a continuous line remaining for $\Delta/J \to \infty$. Figure 2(c), though qualitatively the same as 2(b), is intended to show how the first-order line and the continuous line approach each other as $p$ increases. Figure 2(d) shows Topology III, for $p=0.85$, where the preceding first-order line now divides the continuous line between two critical end points. Note that the upper continuous line terminates, following a reentrant path, at a critical end point where the phases $\bf F_{1}$, $\bf F_{3}$, and $\bf P$ coexist, while at the lower critical end point, $\bf F_{1}$, $\bf F_{2}$, and $\bf P$ coexist. Above the ordered critical point, the order parameter passes smoothly from $\bf F_{1}$ to $\bf F_{3}$ (see the inset there). If we increase $p$
up to some $p=p^{*}$, the upper continuous line and the first-order line meet at a fourth-order point (represented by a square), as shown in Figure 2(e). Thus, $p^{*}$ is the threshold for Topology IV. For $p > p^{*}$, those lines meet at a tricritical point, as seen in Figure 2(f). In other words, tricritical points appear for $p>p^{*}$, the last one occurring at $p=p^{*}$. The same types of phase diagrams are found in references \cite{benyoussef,salinas,carneiro}. Nevertheless, we improve on their results, not only by refining some of their numerical calculations, but also by locating the regions of validity of these topologies in the plane $\sigma/J-p$. To this end, we start by locating the fourth-order points in that plane, as shown in Figure 3.
\begin{figure}[htp]
\begin{center}
\vspace{0.7cm}
\includegraphics[height=6cm]{fig3.eps}
\end{center}
\caption[]{\footnotesize Some fourth-order critical points located in the plane $p-\sigma/J$. Note that for $\sigma = 0$ we recover the bimodal case studied in references \cite{benyoussef, salinas}, for which we find $p^{*}=0.9258$, in agreement with them. Note also that $p=1$ gives $\sigma/J = 0.202$, which is therefore the limiting width for tricritical behavior. The dashed line is only a guide to the eyes.}
\label{nua}
\end{figure}
Note that $\sigma/J=0.202$ is a cut-off for the tricritical behavior: Topology IV is no longer found for greater widths. On the other hand, we determine the threshold for Topology III by estimating numerically which value of $p$, for each $\sigma/J$, produces a situation like that presented in Figure 4 (case $\sigma/J=0.1$), where we see how Topology III emerges for $p$ slightly greater than $0.836$. For $\sigma/J=0$, we found this threshold at $p=0.8245$, which is smaller than the value obtained in reference \cite{benyoussef}. There, the authors suggested that Topology III disappears for $p<8/9=0.888...$\,. However, Figure 5 illustrates that this type of phase diagram
is still present even for a smaller $p$, as confirmed by the free energy evaluated at three distinct ($k_{B}T/J,\Delta/J$)-points along the first-order critical line, at which there are three types of coexistence, namely, $\bf F_{1}$ with $\bf F_{3}$, $\bf F_{1}$ with $\bf P$, and $\bf F_{1}$ with $\bf F_{2}$. We also noted another discrepancy, with respect to the critical $\sigma/J$ found in reference \cite{carneiro}, above which Topology III disappears for $p=1$. There, the authors stated that for $\sigma/J > 0.229$ the paramagnetic-ferromagnetic transition becomes second order at all temperatures, but we find that this only happens for a greater width, namely, $\sigma/J=0.283$. \\
In order to obtain the frontier which separates Topologies I and II (in the plane $\sigma/J-p$), we have to find the corresponding $p$, for a given $\sigma/J$, that locates the ordered critical point at $T=0$. To this end, the next subsection is focused on zero-temperature calculations.
\vskip \baselineskip
\begin{figure}[htp]
\centering
\vspace{0.7cm}
\subfigure[][] {\includegraphics[width=6.0cm]{fig4a.eps}}
\subfigure[][] {\includegraphics[width=6.0cm]{fig4b.eps}}
\caption{\footnotesize Phase diagrams (for $\sigma/J = 0.1$) for two slightly different values of $p$, between which lies
the critical $p$ for passing from Topology II to Topology III. That critical point must therefore be located at $p=0.8365 \pm 0.0005$. }
\label{imagenes4}
\end{figure}
\vskip \baselineskip
\begin{figure}[htp]
\centering
\subfigure[][] {\includegraphics[width=5.0cm]{fig5a.eps}}
\vspace{0.7cm}
\subfigure[][] {\includegraphics[width=5.0cm]{fig5b.eps}}
\vspace{0.4cm}
\subfigure[][]{\includegraphics[width=5.0cm]{fig5c.eps}}
\vspace{0.4cm}
\subfigure[][] {\includegraphics[width=5.0cm]{fig5d.eps}}
\caption{\footnotesize (a) The most critical region of the phase diagram for $\sigma=0$ and $p=0.83$; it is a typical phase diagram of Topology III (like that of Figure 2(d)). Note that three points belonging to the first-order critical line are highlighted by an ellipse, a rectangle, and a circle. The ellipse surrounds a critical point where the phases $\bf F_{1}$ and $\bf F_{3}$ coexist, as confirmed by the free energy versus the magnetization in (b). In (c) and (d), the free energy shows which phases coexist at the points surrounded by the rectangle and the circle. Thus, in (c) the phases $\bf F_{1}$ and $\bf P$ coexist, because two symmetric minima at finite values of $m$ and one minimum at $m=0$ are at the same level. In (d), as in (b), four symmetric minima appear at the same level; therefore, the phases $\bf F_{1}$ and $\bf F_{2}$ coexist at this critical point. }
\label{imagenes}
\end{figure}
\subsection{Analysis at $T=0$}
In order to perform zero-temperature calculations we make use of Eqs. (11) and (12). For
$\sigma/J = 0$ (see reference \cite{salinas}), there are two ferromagnetic phases $\bf F_{1}$ and $\bf F_{2}$ coexisting at $\Delta/J = 1-p/2$, with magnetizations $m_{1}=1$ and $m_{2}=1-p$, respectively. We observed that these relations still hold up to some finite $\sigma$, beyond which a $\sigma$-dependence emerges. Hence, at a larger width, called $\sigma^{'}$,
the ordered critical point (that of Topology III) is found at $T=0$. Beyond it, the first-order critical line is suppressed and only one ferromagnetic order exists for the given $p$. For instance, if we choose $p=0.5$, we find $\sigma^{'}/J=0.2$, as illustrated in Figure 6. There, the zero-temperature free energy versus the order parameter is plotted for three different values of $\sigma/J$, at the point where $ \bf F_{1}$ and $ \bf F_{2}$ coexist. Thus, in (a), two minima are at the same level for $\sigma/J=0.1$. In (b), this still happens for $\sigma/J=0.15$. Nonetheless, in (c), for $\sigma/J=0.2$, the ordered critical point is already at $T=0$. Therefore, for this particular $p$, there is only one ferromagnetic phase for $\sigma/J>0.2$.
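The merging of the two minima can be seen directly from Eq.~(11); the self-contained sketch below (again $J=1$; the grid resolution is an arbitrary choice) scans the zero-temperature free energy for $p=0.5$ at $\Delta/J = 1-p/2 = 0.75$:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def f0(m, delta, sigma, p):
    # zero-temperature free energy, Eq. (11), with J = 1
    def ramp(a):   # smoothed ramp: ~a for a >> sigma, ~0 for a << -sigma
        return (0.5 * a * (1.0 + erf(a / (np.sqrt(2.0) * sigma)))
                + sigma / np.sqrt(2.0 * np.pi)
                * np.exp(-a**2 / (2.0 * sigma**2)))
    return 0.5 * m**2 - p * ramp(m - delta) - (1.0 - p) * ramp(m)

m = np.linspace(0.0, 1.1, 2201)
for sigma in (0.10, 0.15, 0.20):
    fm = f0(m, 0.75, sigma, 0.5)
    mins = np.where((fm[1:-1] < fm[:-2]) & (fm[1:-1] <= fm[2:]))[0] + 1
    print(sigma, np.round(m[mins], 3), np.round(fm[mins], 5))
# two (nearly) degenerate minima, m ~ 0.50 and m ~ 1.0, for sigma = 0.10
# and 0.15; a single minimum for sigma = 0.20, as in Figure 6
\end{verbatim}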
\begin{figure}[htp]
\centering
\vspace{0.7cm}
\subfigure[][] {\includegraphics[width=5.0cm]{fig6a}}
\subfigure[][] {\includegraphics[width=5.0cm]{fig6b.eps}}
\subfigure[][]{\includegraphics[width=5.0cm]{fig6c.eps}}
\caption{\footnotesize The free energy (see Eq. (11)) versus the order parameter, plotted for $p=0.5$ and three different values of $\sigma/J$. In (a) and (b) two ferromagnetic phases coexist, but in (c), for $\sigma/J = 0.2$, there is already a smooth crossover from one ferromagnetic phase to the other. }
\label{imagenes6}
\end{figure}
For completeness, Figure 7(a) shows, in the phase diagram, what Figure 6(c) illustrates by means of the free energy: the ordered critical point is now located at $T=0$. In Figure 7(b), we show the line composed of the $(p,\sigma^{'}/J)$-points. This line separates phase diagrams containing two ferromagnetic phases from those containing one.
In particular, for $p=1$, $\sigma^{'}/J = (2\pi)^{-1/2}$, as obtained in reference \cite{carneiro} and confirmed numerically by us.\\
We summarize the preceding analysis by showing, in Figure 8, the regions of validity of the four qualitatively distinct phase diagrams. Note that along the horizontal axis ($\sigma/J=0$), regions II and III are separated by $p=0.8245$, and regions III and IV by $p=0.9258$. Along the vertical axis (at $p=1$), regions IV and III are separated by $\sigma/J=0.202$, regions III and II by $\sigma/J=0.283$, and regions II and I by $\sigma/J=(2\pi)^{-1/2} \approx 0.3989$. Furthermore, the line separating topologies I and II is the same as in Figure 7(b). The frontier separating topologies II and III consists of points estimated by the analysis illustrated in Figure 4. Finally, the line between topologies III and IV is made of fourth-order critical points, i.e., it is based upon the points in Figure 3.
\vskip \baselineskip
\begin{figure}[htp]
\centering
\vspace{0.7cm}
\subfigure[][] {\includegraphics[width=5.6cm]{fig7a.eps}}
\subfigure[][] {\includegraphics[width=5.6cm]{fig7b.eps}}
\caption{\footnotesize (a) The phase diagram obtained for $p=0.5$ at the corresponding critical width $\sigma^{'}/J$. Note that the ordered critical point (which appeared in Topology II) is now located on the horizontal axis, so for $\sigma > \sigma^{'}$ there is only one ferromagnetic order at low temperatures for $p=0.5$. (b) The line separating topologies I and II, made of points obtained numerically by finding $\sigma^{'}/J$ for each $p$.}
\label{imagenes9}
\end{figure}
\vskip \baselineskip
\begin{figure}[tp]
\begin{center}
\centering
\vspace{0.7cm}
\includegraphics[height=7cm]{fig8.eps}
\end{center}
\caption[]{\footnotesize Regions, in the plane $\sigma/J$ versus $p$, associated with the topologies of the present model (see also Figure 2). The horizontal and vertical axes represent the probability $p$ and the width $\sigma$, respectively (see Eq. (\ref{PDF})). The tricritical behavior belongs to region IV. The simplest topology belongs to region I, where only one ferromagnetic phase appears, whereas the remaining topologies contain two ferromagnetic orders at low temperatures.}
\label{nxxx}
\end{figure}
\vskip \baselineskip
\section{Conclusions}
We have revisited the infinite-range-interaction spin-1 Blume-Capel model with quenched randomness, by considering a more general
probability distribution function for the crystal field $\Delta_{i}$, consisting of two Gaussian distributions centered at $\Delta_{i} = \Delta$ and $\Delta_{i} = 0$, with probabilities $p$ and $(1-p)$, respectively.
For $\sigma=0$, we recover the bimodal case studied in references \cite{benyoussef,salinas}, and for $p=1$, the
Gaussian case studied in reference \cite{carneiro}. For widths in the range $ 0 < \sigma < 0.202J $, the system exhibits four distinct topologies according to the range in which $p$ lies. We designated them as Topologies I, II, III, and IV, in increasing order of $p$. Topology I contains one continuous critical line separating a ferromagnetic phase from the paramagnetic phase. In Topology II, a first-order critical line separating two ferromagnetic phases is added; this line terminates at an ordered critical point. The most complex criticality belongs to Topology III, where the first-order line now divides the continuous critical line between two critical end points. In Topology IV, the first-order line and the continuous line meet at a tricritical point. Accordingly, Topology I presents one ferromagnetic phase, whereas the others show two distinct ferromagnetic orders at low temperatures. On the other hand, the tricritical behavior manifested in Topology IV emerges for $p>p^{*}$, where $p^{*}$ denotes
the probability, for a given $\sigma/J$, at which a fourth-order critical point is found. This point may be regarded as the last tricritical point; it vanishes for $\sigma/J > 0.202$, since $\sigma/J = 0.202$ leads to $p^{*} =1$, so that the tricritical behavior is no longer found for any $p$. Topology III disappears for $\sigma/J > 0.283$, and Topology II is limited by $\sigma/J = 0.3989$, above which the first-order line separating the two ferromagnetic phases is suppressed for any $p$. Beyond that, for $\sigma/J > 0.3989$, only the simplest topology survives. \\\\
Therefore, we have shown through this model how a complex magnetic criticality is reduced by the strength of the disorder (see also \cite{octavio,crokidakisa,crokidakisb}). Nevertheless, the critical dimensions for these types of phase diagrams remain an open problem.
\vskip \baselineskip
{\large\bf Acknowledgments}
\vskip \baselineskip
\noindent
Financial support from
CNPq (Brazilian agency) is acknowledged.
\vskip 2\baselineskip
\section{Introduction}
The limit of applicability of chiral perturbation theory
($\chi$PT)~\cite{Weinberg:1978kz,Gasser:1983yg,GSS89} is ultimately set
by the scale of spontaneous chiral symmetry breaking,
$\Lambda_{\chi SB} \simeq 4 \pi f_\pi\sim 1$ GeV, but not by it alone. Resonances
with excitation energy lower than 1 GeV, e.g.\ $\rho$(770) or $\Delta$(1232),
set a lower limit, if not included explicitly. The $\Delta$(1232)
is especially important because of its very low excitation energy, as defined by
the $\Delta$-nucleon mass splitting:
\begin{equation}
\varDelta \equiv M_\Delta - M_N \approx (1232-939) \,\mbox{MeV} = 293 \,\mbox{MeV.}
\end{equation}
This means we can expect an early breakdown of $\chi$PT in the baryon sector,
on one hand, but an easy fit of the $\Delta$ into the $\chi$PT power-counting
scheme ($\varDelta \ll \Lambda_{\chi SB}$), on the other.
A first work on the inclusion of the $\Delta$-resonance and, more generally, the
decuplet fields in $\chi$PT was done by Jenkins and Manohar \cite{Jenkins:1991es},
who at the same time developed the ``heavy-baryon" (HB) expansion~\cite{JeM91a}.
They counted the $\Delta$-excitation scale to be of the same order as other
light scales in the theory, i.e., Goldstone-boson momenta and masses.
For two-flavor QCD, this hierarchy of scales,
\begin{equation}
\varDelta \sim p \sim m_\pi \ll \Lambda_{\chi SB}\,,
\end{equation}
results in the ``small scale expansion" (SSE) \cite{Hemmert:1996xg}.
Alternatively, one can count the resonance excitation scale to be different from
the pion mass, i.e.,
\begin{equation}
m_\pi < \varDelta \ll \Lambda_{\chi SB}\, .
\end{equation}
This is an example of an effective field theory (EFT) with two distinct light scales.
The power counting of graphs will then depend on
whether the typical momenta are comparable to $m_\pi$ or to $\varDelta$.
The expansion can be carried out in terms of one
small parameter, e.g.,
\begin{equation}
\delta = \frac{\varDelta}{ \Lambda_{\chi SB}}\ll 1\, .
\end{equation}
Then, $m_\pi / \Lambda_{\chi SB}$ should count as $\delta$ to some power greater than one.
The simplest is to take an integer power:
\begin{equation}
\frac{m_\pi}{ \Lambda_{\chi SB}} \sim \delta^2 .
\end{equation}
This counting scheme goes under the name of ``$\delta$-expansion" \cite{Pascalutsa:2003zk}.
The main advantage of the $\delta$-expansion over the SSE is that it provides a more
adequate counting of the resonant contributions and
a power-counting argument to sum a subset of graphs generating the resonance width.
In Sect.~4 we shall see a brief account of one recent application of the $\delta$-expansion,
a new calculation of the $\Delta$-resonance effect on
the nucleon polarizabilities and Compton scattering off protons \cite{Lensky:2008re}.
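To give a feel for the numbers entering this counting (the standard values $m_\pi\simeq 139.6$ MeV, $\varDelta\simeq 293$ MeV, and $f_\pi\simeq 92.4$ MeV are assumed here for illustration):
\begin{verbatim}
# rough sizes of the scales entering the delta-expansion (GeV units)
m_pi, Delta, Lam = 0.1396, 0.293, 4 * 3.14159 * 0.0924
delta = Delta / Lam
print(delta, delta**2, m_pi / Lam)
# delta ~ 0.25, delta^2 ~ 0.06, m_pi/Lambda ~ 0.12: the pion-mass ratio
# indeed lies closer to delta^2 than to delta, as assumed in the counting
\end{verbatim}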
More applications can be found elsewhere \cite{Pascalutsa:2007wb,Geng:2008bm,Geng:2009hh,Long:2009wq}, including these proceedings~\cite{McGovern:2009sw}.
The other purpose of this paper is to remark on
two, quite unrelated, consistency problems of $\chi$PT in the baryon sector (B$\chi$PT).
One concerns the treatment of higher-spin fields (Sect.~2), and the other is about
the power counting (Sect.~3 and 4).
\section{Higher-spin fields}
Including the baryon field in the chiral Lagrangian, one sooner or later faces
the consistency problems of higher-spin field theory.
The $\Delta$(1232), being a spin-3/2 state, can be represented
by a Rarita-Schwinger (RS) vector-spinor, $\psi_\alpha(x)$, with the following
free Lagrangian:
\begin{equation}
\mathcal{L}_{RS}= \bar \psi_\alpha \, ( i\, \gamma^{\alpha\beta\varrho} \partial_\varrho - M_\Delta \,\gamma^{\alpha\beta}) \, \psi_\beta,
\end{equation}
where $\gamma^{\alpha\beta\varrho}$ and $\gamma^{\alpha\beta}$ are
totally antisymmetrized products of three and two Dirac matrices.
The Lagrangian consists of a kinetic term, which is invariant under
a gauge symmetry:
\begin{equation}
\psi_\alpha \to \psi_\alpha + \partial_\alpha \eps
\end{equation}
(with a spinor $\eps$), and a mass term, which breaks
the gauge symmetry.
This formalism provides a proper field-theoretic description of a spin-3/2
particle. The symmetry ensures that the massless particle has 2 spin degrees of freedom,
while the mass term breaks it such as to raise the number of spin
degrees of freedom to 4.
This pattern has to be preserved by interactions
of this field, but such a consistency criterion proved to be tough
to fulfill.
For instance, the usual minimal substitution
of the electromagnetic field, $\partial_\rho \to \partial_\rho + i e A_\rho$, leads
to a U(1)-invariant theory, but at the expense of losing the spin-3/2 gauge symmetry
of the massless theory. As a result, all hell breaks loose:
negative-norm states \cite{Johnson:1961vt},
superluminal modes \cite{Velo:1969bt}, and so on~\cite{Deser:2000dz}.
Naive attempts to restore the spin-3/2 gauge symmetry break the U(1)
gauge symmetry, and so on.
In fact, there are `no-go theorems' forbidding a consistent
coupling of a spin-3/2 field to electromagnetism
without gravity, see e.g., \cite{Weinberg:1980kq}.
This situation is frustrating, especially since we would like to couple
the $\Delta$'s to pions too, and so, chiral symmetry is one more symmetry to worry about.
Fortunately, `locality' is one of the principles that underlie
the `no-go theorem', and, given that the EFT framework
is essentially non-local, we have a way to work around it.
One method has been outlined in Ref.~\cite{Pascalutsa:2006up}
(Sect.~4.2 therein), and a similar method has been developed in
parallel~\cite{Krebs:2008zb}. However, a complete closed-form solution to this problem
is still lacking.
\section{Heavy fields and dispersion in the pion mass}
Another important issue of concern is
the treatment of heavy fields in $\chi$PT. This
problem arises already with the inclusion of the nucleon field.
A key question is: "how to count derivatives of the nucleon field?"
The nucleon is heavy ($M_N \sim \La_{\chi SB}$), and hence
the time (0th) component of the nucleon derivative, or momentum,
is much greater than the spatial components:
\begin{equation}
\partial_i N(x) \ll \partial_0 N(x),
\end{equation}
or, in the momentum space, $p \ll \sqrt{M_N^2 +p^2}$, for an on-shell nucleon.
It would be correct to count the 0th component as $\mathcal{O}(1)$, while
the spatial components as $\mathcal{O}(p)$, but this counting obviously
does not respect the Lorentz invariance.
In a Lorentz-invariant formulation, $\partial_\mu N$ counts
as $\mathcal{O}(1)$, except when in a particular combination, $(i \partial\!\!\!\!\!/ - M_N)N$,
which counts as $\mathcal{O}(p)$. This counting has a consistency problem,
as can be seen from the following example. Consider the expression
$ P_\mu - M_N \gamma_\mu$,
where $P_\mu$ is the nucleon four-momentum which, as $\gamma_\mu$ and $M_N$,
counts as 1. The counting of this expression, as a whole, will unfortunately depend on
how it is contracted. E.g., contracting with $P$ or with $\gamma$, we have:
\begin{eqnarray}
P^\mu (P_\mu - M_N \gamma_\mu) = P \!\!\!\!\!/ \, ( P \!\!\!\!\!/ \, - M_N) & \sim & \mathcal{O}(p), \nonumber\\
\gamma^\mu (P_\mu - M_N \gamma_\mu) = -3 M_N + ( P \!\!\!\!\!/ \, - M_N) &\sim & \mathcal{O}(1). \nonumber
\end{eqnarray}
This inconsistency eventually leads to the appearance of contributions of nominally
lower or higher order than expected from power counting
\cite{GSS89}.
The heavy-baryon (HB) expansion of Jenkins and Manohar \cite{JeM91a} overcomes this problem,
but again, at the expense of manifest Lorentz-invariance. In HB$\chi$PT one writes
\begin{equation}
P_\mu = M_N v_\mu + \ell_\mu
\end{equation}
with $v=(1, 0, 0, 0)$, which allows to assign a consistent power to $\ell$.
More recently it has become increasingly clear that the power-counting
problem of the Lorentz-invariant formulation is not very severe \cite{Becher:1999he}, or perhaps not a problem at all \cite{Gegelia:1999gf}.
The lower-order `power-counting violating' contributions
come out to be analytic in the quark masses,
and therefore match the contributions that multiply the low-energy constants
(LECs); as a result, they
do not play any role other than renormalizing the LECs.
The higher-order contributions, on the other hand, can be both analytic and non-analytic
in quark masses.
Their analytic parts may contain ultraviolet divergences, so one needs to define
the renormalization scheme for the higher-order LECs, before they
actually appear in the calculation. The non-analytic parts are most interesting,
as they may come with unnaturally large coefficients, and therefore cannot be dismissed
as `higher order' at all.
\begin{figure}[bt]
\begin{minipage}[c]{.32\linewidth}
\centerline{ \epsfxsize=4cm
\epsffile{selfenergy_diag.eps}
}
\end{minipage}
\hspace{.06\linewidth}%
\begin{minipage}[c]{.57\linewidth}
\caption{The nucleon self-energy contribution at order $p^3$.}
\figlab{Nselfen}
\end{minipage}
\end{figure}
This discussion is nicely illustrated by the classic example of chiral corrections to
the nucleon mass. Up to $\mathcal{O}(p^3)$ this expansion is given by
\begin{equation}
\eqlab{expansion}
M_N = {M}_{N0} - 4 c_1 m_\pi^2 + \Sigma^{(3)}_N,
\end{equation}
where ${M}_{N0}$ and $c_1$ are LECs which, supposedly, represent the values
of the nucleon mass and the $\pi N$ $\sigma$-term
in the chiral limit. The last term is the (leading) 3rd-order self-energy correction,
\Figref{Nselfen}:
\begin{subequations}
\begin{eqnarray}
\Sigma^{(3)}_N & = &\left. i \,\frac{3 g_A^2}{4f_\pi^2} \int \!\frac{d^4 k }{(2\pi)^4}
\frac{k \!\!\!\!/ \gamma_5 ( p\!\!\!\!/ - k\!\!\!\!/+M_{N}) k \!\!\!\!/ \gamma_5}{(k^2-m_\pi^2) [(p-k)^2-M_{N}^2]} \right|_{p\!\!\!\!/ = M_N}\\
& \stackrel{\mathrm{dim reg}}{=} &
\frac{3 g_A^2}{4f_\pi^2} \frac{M_{N}^3}{(4\pi)^2} \int\nolimits_0^1\! dx\, \Big\{
[x^2+\mu^2 (1-x)] \left(L_\eps+\ln [x^2+\mu^2 (1-x)]\right) \nonumber\\
& & \hskip2.5cm + \, [2x^2-\mu^2 (2+x)] \left(L_\eps+1+\ln [x^2+\mu^2(1-x) ]\right)-3
L_\eps\Big\},
\eqlab{selfen}
\end{eqnarray}
\end{subequations}
where $\mu = m_\pi/M_{N}$, while $L_\eps = -1/\eps -1+ \gamma_E - \ln(4\pi \La/M_{N})$
exhibits the ultraviolet divergence as $\eps=(4-d)/2 \to 0$, with $d$ being the number of dimensions, $\La$ the scale of dimensional regularization, and $\gamma_{E}$ Euler's constant. Note that we took the physical nucleon mass for the on-mass-shell condition, as well
as for the propagator pole, and not the chiral-limit mass $M_{N0}$,
which comes from the Lagrangian.
There are several reasons for that (for one, $M_N$ is the ``known known" here), but
in any case the difference between doing it one way or the other is
of higher order.
After the integration over $x$ we obtain:
\begin{subequations}
\begin{eqnarray}
&& \Sigma^{(3)}_N = \frac{3 g_A^2 M_{N}^3}{2(4\pi f_\pi)^2}\left\{- L_\eps
+\left(1-L_\eps\right)\mu^2 \right\}\,+\,\overline \Sigma^{(3)}_N,
\eqlab{low} \\
\mbox{with} && \overline \Sigma^{(3)}_N = - \frac{3 g_A^2 M_{N}^3}{(4\pi f_\pi)^2}
\Big( \mu^3 \sqrt{1-\mbox{\small{$\frac{1}{4}$}} \mu^2} \, \,\arccos \mbox{\small{$\frac{1}{2}$}} \mu + \mbox{\small{$\frac{1}{4}$}} \mu^4\, \ln \mu^2 \Big)\nonumber\\
&& \,\,\,\, = \, - \frac{3 g_A^2}{(4\pi f_\pi)^2} \frac{1}{2}\Big[ \pi \, m_\pi^3 - (m_\pi^4/M_N)
(1- \ln m_\pi/M_N ) + \mathcal{O}(m_\pi^5) \Big] .
\eqlab{renorm}
\end{eqnarray}
\end{subequations}
Now we can see the problem explicitly. While the power counting
of the graph (\Figref{Nselfen}) gives order 3, the result contains
both lower and higher powers of the light scale, $m_\pi$.
The higher-order terms should not be a problem. Formally, we can either keep them or not
without affecting the accuracy with which we work. There are cases where
it is not as simple as that. One such case is considered in the next section.
The lower-order terms,
written out in \Eqref{low}, have been of bigger concern \cite{GSS89}.
Fortunately, they are of the same form as the first two terms
in the expansion of nucleon mass, \Eqref{expansion}. Chiral symmetry
ensures this ``miracle" happens every time. The troublesome lower-order
terms can thus be absorbed into a renormalization of the available LECs ---
a view introduced by Gegelia and Japaridze \cite{Gegelia:1999gf}.
In fact,
these terms {\it must} be absorbed, if $M_{N0}$ and $c_1$ are really to represent the values of the nucleon mass and the $\sigma$-term in the chiral limit. As a result,
\begin{equation}
\eqlab{expansion2}
M_N = {M}_{N0} - 4 c_1 m_\pi^2 + \overline\Sigma^{(3)}_N,
\end{equation}
and all is well, from the power-counting point of view.
The only question left (in some experts' minds)
is whether
these LECs will be renormalized in exactly the same amounts in calculations of other
quantities at
this order. In my view, again, the symmetries ensure this is so.
I am not aware of an example to the contrary.
Alternatively, the HB formalism \cite{JeM91a} yields right away the following
expression for the graph of \Figref{Nselfen}:
\begin{equation}
\Sigma^{(3)HB}_N = - \frac{3 g_A^2}{(4\pi f_\pi)^2} \frac{1}{2}\pi \, m_\pi^3,
\end{equation}
i.e., only the first term in the expansion of the renormalized self-energy, \Eqref{renorm}.
So, no lower-order terms are present (in dimensional regularization!),
and no higher-order terms either:
perfect consistency with power counting. However, as practice shows, in too many cases
the thus-neglected higher-order (in $p/M_N$) terms are not that small.
Unlike in the above-considered example of the nucleon mass, the higher powers of $m_\pi/M_N$
can come with `unnaturally large' coefficients. In these cases,
the HB expansion demonstrates poor convergence.
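The size of the effect can be made explicit numerically. The following sketch (the physical input values $g_A=1.27$, $f_\pi=92.4$ MeV, $m_\pi=139.6$ MeV, $M_N=939$ MeV are assumptions of this illustration) evaluates the full renormalized self-energy of \Eqref{renorm} against its leading $m_\pi^3$ (HB) term:
\begin{verbatim}
import numpy as np

gA, fpi, mpi, MN = 1.27, 92.4, 139.6, 939.0       # MeV units
C = 3 * gA**2 / (4 * np.pi * fpi)**2
mu = mpi / MN
full = -C * MN**3 * (mu**3 * np.sqrt(1 - mu**2 / 4) * np.arccos(mu / 2)
                     + 0.25 * mu**4 * np.log(mu**2))   # Eq. (renorm)
hb = -C * np.pi * mpi**3 / 2                           # HB term only
print(full, hb)   # roughly -13 MeV vs. -15 MeV, a ~15% difference
\end{verbatim}
Here the $m_\pi/M_N$ corrections happen to be moderate; in other observables, as we shall see, they are not.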
One such case --- the nucleon polarizabilities --- will be considered below, but first I would
like to introduce a principle of {\it analyticity} of the chiral expansion.
For this purpose I would like to have a dispersion relation in the
variable $t=m_\pi^2$. It is clear that for negative $t$, the chiral-loop graphs of
the type in \Figref{Nselfen} acquire an imaginary part, reflecting the possibility
of decay of the nucleon into itself and a tachyonic pion, and hence there is a cut
extending from $t=0$ to $t=-\infty$. In the rest of the complex $t$ plane,
we can expect an analytic dependence. A dispersion relation for a quantity such as
the nucleon self-energy must then read:
\begin{equation}
\mathrm{Re}\, \Sigma_N(t) = -\frac{1}{\pi} \int\limits_{-\infty}^0 dt' \,\frac{\mathrm{Im}\, \Sigma_N (t') }{t'-t}
\end{equation}
In the above example of the 3rd-order self-energy, we can easily find the imaginary part from
\Eqref{selfen}, if we restore the $i \eps$ prescription and use $\ln(-1+i\eps) = i\pi$,
\begin{equation}
\mathrm{Im}\, \Sigma_N^{(3)} (t) =
\frac{3 g_A^2}{(4\pi f_\pi)^2} \frac{\pi}{2} \left[ - (-t)^{3/2} \left( 1-\frac{t}{4M_N^2} \right)^{1/2}
\! + \,\frac{t^2}{2 M_N}\right] \, \theta(-t)\,.
\end{equation}
According to the expansion \Eqref{expansion}, we should be making at least two subtractions
at $t=0$, and hence
\begin{equation}
\mathrm{Re}\, \overline \Sigma_N(t) = \mathrm{Re}\, \Sigma_N(t) -
\mathrm{Re}\, \Sigma_N(0) - \mathrm{Re}\, \Sigma_N' (0)\, \, t \, =\,
-\frac{1}{\pi} \int\limits_{-\infty}^0 dt' \,\frac{\mathrm{Im}\, \Sigma_N (t') }{t'-t} \left(\frac{t}{t'}\right)^2.
\end{equation}
Substituting the expression for the imaginary part and taking $t=m_\pi^2$, we
indeed recover the result of \Eqref{renorm}, thereby validating the analyticity assumptions
on one hand, and revealing the intricate nature of the `higher-order terms' on the other.
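This statement is easy to verify numerically; a sketch (same assumed input values as in the previous fragment) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gA, fpi, mpi, MN = 1.27, 92.4, 139.6, 939.0
C = 3 * gA**2 / (4 * np.pi * fpi)**2
mu = mpi / MN
closed = -C * MN**3 * (mu**3 * np.sqrt(1 - mu**2 / 4) * np.arccos(mu / 2)
                       + 0.25 * mu**4 * np.log(mu**2))

def im_sigma(tp):   # imaginary part along the cut t' < 0
    return C * (np.pi / 2) * (-(-tp)**1.5 * np.sqrt(1 - tp / (4 * MN**2))
                              + tp**2 / (2 * MN))

t = mpi**2
dispersive = -quad(lambda tp: im_sigma(tp) / (tp - t) * (t / tp)**2,
                   -np.inf, 0.0, limit=800)[0] / np.pi
print(dispersive, closed)   # both should come out near -13 MeV
\end{verbatim}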
\section{Compton scattering and proton polarizabilities}
\begin{figure}[tb]
\begin{minipage}[c]{.4\linewidth}
\centerline{\epsfclipon \epsfxsize=6cm%
\epsffile{olfig62.eps}
}
\end{minipage}
\hspace{.1\linewidth}%
\begin{minipage}[c]{.45\linewidth}
\caption{The scalar polarizabilities of the proton.
The results of HB$\chi$PT~\cite{Beane:2004ra} and B$\chi$PT~\cite{Lensky:2008re} are
shown respectively by the grey and red blob. Experimental
results are from Federspiel et~al.~\cite{Federspiel:1991yd},
Zieger et al.~\cite{Zieger:1992jq}, MacGibbon et al.~\cite{MacG95},
and TAPS~\cite{MAMI01}.
`Sum Rule' indicates the Baldin sum rule constraint on $\alpha+\beta$~\cite{Bab98}.
`Global average' represents the PDG summary~\cite{PDG2006}.}
\figlab{potato}
\end{minipage}
\end{figure}
The main aim of low-energy Compton scattering experiments on protons
and light nuclei in recent years has been to detect the nucleon {\it polarizabilities} \cite{Holstein:1992xr}.
For the
scalar electric $\alpha$ and magnetic $\beta$ polarizabilities of the proton, the phenomenology
seems to be in very good shape, see `global average' in \Figref{potato}. That's
why it is intriguing to see that these values are not entirely in agreement with two
recent $\chi$PT calculations (cf.\ the grey \cite{Beane:2004ra}
and the red \cite{Lensky:2008re} blob in \Figref{potato}). Note that the
$\chi$PT analyses are not in disagreement with the experimental data
for cross-sections, as \Figref{fixE} shows, for example, in the case of Ref.~\cite{Lensky:2008re}.
The principal differences with phenomenology arise apparently at the stage
of interpreting the effects of polarizabilities in Compton observables. It is
important to sort out this disagreement in the near future, perhaps with the help of a round
of new experiments at MAMI and HIGS.
\begin{figure}[t]
\begin{minipage}[c]{.57\linewidth}
\centerline{ \epsfclipon
\epsfxsize=8.5cm%
\epsffile{cs.eps}
}
\end{minipage}
\hspace{.06\linewidth}%
\begin{minipage}[c]{.34\linewidth}
\caption{Angular dependence of
the $\gamma p\to \gamma p$ differential cross-section in
the center-of-mass system for fixed photon-beam energies
as specified for each panel. Data points are from SAL~\cite{Hal93} ---
filled squares, and MAMI~\cite{MAMI01} --- filled circles. The curves are:
Klein-Nishina --- dotted, Born graphs and WZW-anomaly --- green dashed,
adding the $p^3$ $\pi N$ loop contributions of B$\chi$PT
--- blue dash-dotted. The result of adding the $\Delta$
contributions, i.e., the complete NNLO result of Ref.~\cite{Lensky:2008re}, is shown by the red solid line with a band.}
\figlab{fixE}
\end{minipage}
\end{figure}
For now, however, I focus on the differences between the two $\chi$PT
calculations. The earlier one \cite{Beane:2004ra} is done in HB$\chi$PT at order $p^4$.
The latest is a manifestly covariant calculation at order $p^3$ and $p^4/\varDelta$,
hence includes the $\De$-isobar effects within the $\delta$-counting scheme. Despite the similar results for polarizabilities,
the composition of these results order by order is quite different. In HB$\chi$PT
one obtains for the central values (in units of $10^{-4}$fm$^3$):
\begin{eqnarray}
\alpha &=& \underbrace{12.2}_{\mathcal{O}(p^3)} + \underbrace{(-0.1)}_{\mathcal{O}(p^4)} = 12.1\,,\\
\beta&=&\underbrace{1.2}_{\mathcal{O}(p^3)} +\underbrace{2.2}_{\mathcal{O}(p^4)} =3.4 \,.
\end{eqnarray}
while in B$\chi$PT with $\Delta$'s:
\begin{eqnarray}
\alpha &=& \underbrace{6.8}_{\mathcal{O}(p^3)} + \underbrace{(-0.1) + 4.1}_{\mathcal{O}(p^4/\varDelta)} = 10.8\,,\\
\beta&=&\underbrace{-1.8}_{\mathcal{O}(p^3)} +\underbrace{ 7.1-1.3}_{\mathcal{O}(p^4/\varDelta)} =4.0 \,.
\end{eqnarray}
The difference at leading order arises precisely from the `higher-order' terms.
For instance, for the magnetic polarizability at $\mathcal{O}(p^3)$, in one case we have:
\begin{equation}
\beta^{(3)HB}= \frac{e^2g_A^2}{768\pi^2 f_\pi^2 m_\pi }\,,
\eqlab{HBresult}
\end{equation}
while in the other:
\begin{eqnarray}
\beta^{(3)}&=&\frac{e^2 g_A^2}{192\pi^3 f_\pi^2 M_N }
\int\limits^1_0\! dx \, \bigg\{ 1 - \frac{(1-x)(1-3x)^2+x}{\mu^2 (1-x)+x^2}
- \frac{x \mu^2 +x^2 [1-(1-x)(4-20x+21x^2)]}{\big[\mu^2 (1-x)+x^2\big]^2}
\bigg\}\nonumber \\
&=&\frac{e^2g_A^2}{192\pi^3 f_\pi^2 M_N }\Big[\, \frac{\pi}{4\mu}+18\ln\mu+\frac{63}{2}
-\frac{981\pi}{32}\mu-\big(100\, \ln\mu+\frac{121}{6}\big)\mu^2+\ldots \Big].
\eqlab{expanded}
\end{eqnarray}
The first term in the expanded expression \eref{expanded}
is exactly the same as the HB result \eref{HBresult}, but consider the
higher-order terms: their coefficients are at least a factor of 10 bigger than the
coefficient of the leading term. Given that the expansion parameter is $\mu \sim 1/7$,
there is simply no argument for neglecting these terms.
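To make this quantitative (a rough numerical illustration, taking $\mu \approx 0.15$), the successive terms in the square bracket of \eref{expanded} evaluate to
\begin{equation}
\frac{\pi}{4\mu}+18\ln\mu+\frac{63}{2}
-\frac{981\pi}{32}\mu-\big(100\, \ln\mu+\frac{121}{6}\big)\mu^2
\,\approx\, 5.2 - 34.2 + 31.5 - 14.4 + 3.8 \,\approx\, -8.1\,,
\end{equation}
so the `corrections' not only overwhelm the leading term but flip its sign. Multiplied by the common prefactor, this is exactly how the covariant value $\beta^{(3)} = -1.8$ arises from the same expression whose leading term gives the HB value $+1.2$.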
As a consequence, the neat agreement of the HB $p^3$ result
with the empirical numbers for $\alpha$ and $\beta$ should perhaps be viewed
as a remarkable coincidence, rather than, as it's often viewed,
a ``remarkable prediction'' of HB$\chi$PT.
In fact, it is the predictive power that is compromised first of all
by the unnaturally large higher-order (in the HB expansion) terms.
These terms will of course be recovered in the higher-order HB calculations, but with
each order higher there will be an increasingly higher number of unknown LECs.
In contrast, the covariant results provide an example of how one gets to the important
effects already in the lower-order calculations, before new LECs start to appear.
Now let's return to the dispersion relations in the pion mass squared.
Denoting $t=\mu^2$,
the dispersion relation for the magnetic polarizability should read
\begin{equation}
\mathrm{Re}\, \beta(t) = -\frac{1}{\pi} \int\limits_{-\infty}^0 dt' \,\frac{\mathrm{Im}\, \beta (t') }{t'-t}\,.
\eqlab{DRbeta}
\end{equation}
The imaginary
part at the 3rd order can be calculated from the first line of \Eqref{expanded},
\begin{eqnarray}
\mathrm{Im}\, \beta^{(3)} (t) &=& -\,
C\,\,
\mathrm{Im}\, \int\limits^1_0\! dx\, \bigg\{
\frac{(1-x)(1-3x)^2 +x+\mbox{$\frac{x}{1-x}$} }{(1-x)t+x^2-i\eps} \nonumber\\
&&\hskip4cm - \frac{d}{dt } \frac{x \,t +x^2 [1-(1-x)(4-20x+21x^2)]}{(1-x) [(1-x)t+x^2-i\eps]}
\bigg\} \\
&=& -
\frac{\pi C}{8\lambda^3} \big[ 2-72 \lambda +t(418\lambda-246) - t^2(316\lambda-471) +t^3(54\lambda-212) +27t^4\big],
\nonumber
\end{eqnarray}
where $C=\frac{e^2 g_A^2}{192\pi^3 f_\pi^2 M_N }$ and
$\lambda = \sqrt{-t\,(1-\mbox{\small{$\frac{1}{4}$}} t)}$. At this order there are no counter-terms, hence no subtractions,
and indeed I have verified that the unsubtracted relation \eref{DRbeta} gives exactly the
same result as \Eqref{expanded}.
The relation for the electric polarizability $\alpha$ has been verified in a similar fashion. These
tests validate the analyticity assumption and elucidate the nature of the `higher-order' terms.
Finally, let me note that such dispersion relations,
as well as the usual ones (in energy) \cite{Holstein:2005db}, do not hold in the framework of
Infrared Regularization (IR) \cite{Becher:1999he}.
The IR loop-integrals will always, in addition to the unitarity cuts, have an unphysical
cut. Although the unphysical cut lies far from the region of $\chi$PT applicability and
therefore does not pose a threat to unitarity, it does make an impact, and as result,
a set of the higher-order terms is altered. To me, this is a showstopper. The only
practical advantage of the manifest Lorentz-invariant formulation over the HB one
is the account of `higher-order' terms which may, or may not, be unnaturally large. Giving up on analyticity, one has no principle to assess these terms reliably.
\section{Summary}
Here are some points which have been illustrated in this paper:
\begin{itemize}
\item The region of applicability of B$\chi$PT without the $\De$(1232)-baryon is: $p\ll 300$ MeV.
An explicit $\De(1232)$ is needed to extend this limit to substantially higher energies. Two
schemes are presently used to power-count the $\De$ contributions: SSE and $\delta} \def\De{\Delta$-expansion.
\item Inclusion of heavy fields poses a difficulty with power counting in a Lorentz-invariant
formulation --- contributions of lower and higher orders arise in the calculation of a given-order graph.
However, this is not a problem --- the lower-order contributions renormalize the available LECs,
while the higher-order ones are, in fact, required by analyticity and should be kept.
\item Dispersion relations in the pion-mass squared have been derived and are shown
to hold in the examples of lowest order chiral corrections to the nucleon mass and
polarizabilities.
\item The present state-of-the-art $\chi$PT calculations of low-energy Compton scattering
are in good agreement with experimental cross-sections, but show an appreciable
discrepancy with the PDG values for the proton polarizabilities.
\end{itemize}
\section{Introduction}
Measurements of diboson production cross sections at the Tevatron test
the electroweak sector of the Standard Model and have been used to
place limits on models of physics beyond the Standard
Model~\cite{WW_D0}. Diboson measurements are also useful in the
context of searches for the Standard Model Higgs boson at the
Tevatron. In this presentation we focus on recent diboson results
that are relevant to the Higgs searches.
The search for the Higgs boson at the Tevatron involves searching for
a very small signal in overwhelming backgrounds. Sophisticated
analysis techniques are often used to exploit small differences
between signal and background events. The searches also gain power
from increasing signal acceptance and dividing events into several
regions depending on their signal-to-background ratios. Some of the
diboson searches and measurements presented below take advantage of
similar techniques while others use somewhat simpler strategies.
Comparison of the results derived with different techniques is a
useful test of the analysis strategies employed in the Higgs searches.
The measurements presented here are performed in $p\bar{p}$ collision
data with $\sqrt{s} = 1.96$~TeV collected by either the CDF II or
D0~ detector. The detectors are described in detail
elsewhere~\cite{CDFdet,D0det}.
\section{Fully Leptonic Decay Channels}
$WW$, $WZ$, and $ZZ$ production have all been observed at the Tevatron
at the 5$\sigma$ level in events where both bosons decay leptonically.
Table~\ref{tab:lep_sum} shows recent measurements of the cross
sections for each of the two experiments. The measured cross sections
agree well between the two experiments and with the Standard Model
predictions. The observation of $ZZ$ production and the measurement
of the $WW$ production cross section are discussed in more detail below.
\begin{table}
\begin{center}
\caption{\label{tab:lep_sum}Summary of diboson cross sections measured in leptonic channels at the CDF and D0~ detectors.}
\begin{tabular}{|l|c|c|c|}
\hline
\hline
& \multicolumn{3}{c|}{Cross section [pb]} \\
Process & CDF & D0~ & NLO prediction \\
\hline
$WW$ & $12.1^{+1.8}_{-1.7}$ & $11.4 \pm 2.2$ & $11.7 \pm 0.7$ \\
$WZ$ & $4.3^{+1.3}_{-1.1}$ & $2.7^{+1.7}_{-1.3}$ & $3.7 \pm 0.3$ \\
$ZZ$ & $1.4^{+0.7}_{-0.6}$ & $1.6 \pm 0.65$ & $1.4\pm0.1$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{$ZZ$ Observation}
$ZZ$ production is predicted to have a low cross section in the
Standard Model, $\sigma(p\bar{p} \rightarrow ZZ) = 1.4 \pm 0.1$~pb,
making it one of the rarest processes observed at the Tevatron so far.
Both the D0~ and CDF measurements combine a search in a four-lepton
final state ($ZZ \rightarrow llll$) with a search in a final state
with two leptons and $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$ ($ZZ \rightarrow ll\nu \nu$). The
four-lepton channel is more sensitive because of the very small
background levels, but the number of expected signal events is also
small. The $ll\nu\nu$ final state, on the other hand, will have
larger numbers of expected signal events, but the larger backgrounds
make the channel less sensitive.
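The relative yields follow from simple branching-ratio arithmetic (using the approximate PDG values $\mathcal{B}(Z\rightarrow l^{+}l^{-}) \approx 6.7\%$ summed over $l=e,\mu$, and $\mathcal{B}(Z\rightarrow \nu\bar{\nu}) \approx 20\%$):
\begin{equation}
\frac{\mathcal{B}(ZZ\rightarrow ll\nu\nu)}{\mathcal{B}(ZZ\rightarrow llll)} \approx \frac{2 \times 0.067 \times 0.20}{(0.067)^{2}} \approx 6,
\end{equation}
so the $ll\nu\nu$ channel is expected to collect roughly six times more signal events than the four-lepton channel, at the cost of much larger backgrounds.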
The first observation of $ZZ$ production was reported by D0~ in 1.7
fb$^{-1}$ of data~\cite{ZZ_D0}. The search in the $ZZ \rightarrow llll$
channel was conducted in events with four electrons, four muons, or
two electrons and two muons. Events with electrons were divided
further based on the number of electrons in the central region,
creating several categories with different expected
signal-to-background ratios. The number of expected background events
was taken from Monte Carlo simulation. Three four-lepton signal
events (two with four electrons and one with four muons) were
observed. The expected background was $0.14^{+0.03}_{-0.02}$ events.
The invariant mass of the four leptons is shown in
Fig.~\ref{fig:ZZ_D0} superimposed on the expected shape of the $ZZ$
signal and the predicted background contribution. This result was
combined with a less powerful search in $ZZ\rightarrow ll\nu\nu$,
resulting in an observation of the signal with a significance of
5.7$\sigma$.
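For orientation, the four-lepton channel alone already constitutes a significant excess: a rough counting-only estimate (Poisson statistics with the quoted background) gives
\begin{equation}
P(n \geq 3 \,|\, b = 0.14) = 1 - e^{-b}\Big(1 + b + \frac{b^{2}}{2}\Big) \approx 4 \times 10^{-4};
\end{equation}
the full analysis, which also exploits the four-lepton mass distribution, the expected signal yield, and the $ll\nu\nu$ channel, reaches the quoted $5.7\sigma$.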
\begin{figure}
\includegraphics[width=0.48\textwidth]{E08FF1.eps}
\caption{\label{fig:ZZ_D0}Invariant mass of four-lepton events observed in $ZZ$ search.}
\end{figure}
CDF also presented strong evidence for $ZZ$ production in
1.9 fb$^{-1}$~\cite{ZZ_CDF}. In the $ZZ \rightarrow llll$ channel, three
events were observed with a predicted background of
$0.096^{+0.092}_{-0.063}$. The search in the $ZZ \rightarrow
ll\nu\nu$ channel calculated event-by-event probability densities for
the $WW$ and $ZZ$ processes to discriminate between them. Combination
of the $llll$ and $ll\nu \nu$ channels yielded a signal significance
of 4.4$\sigma$ and a cross section measurement of
$\sigma(p\bar{p}\rightarrow ZZ) = 1.4^{+0.7}_{-0.6}$ pb.
\subsection{Precise $WW$ Cross Section Measurement}
For ``high'' Higgs masses ($m_{H}>135$~GeV), the most sensitive
channel at the Tevatron is direct Higgs production with the Higgs
decaying to two $W$ bosons and the bosons subsequently decaying
leptonically ($H
\rightarrow WW \rightarrow l\nu l\nu$). Measurement of the Standard Model
production of $WW \rightarrow l\nu l\nu$ events is a useful test of
our understanding of this final state. It also provides a measurement
of the primary background to the Higgs search.
Both experiments have recently published precise measurements of the
$WW$ cross section in the $l\nu l\nu$ mode. In 3.6 fb$^{-1}$ at CDF, events with
two opposite-sign leptons (electrons or muons) were selected. The
primary backgrounds were $W$+jet, $W\gamma$, and Drell-Yan events; in
total roughly equal amounts of signal and background were expected. A
matrix element technique was used, meaning the differential cross
sections of signal and several background processes were evaluated to
derive an event-by-event probability density. A likelihood ratio
between probabilities was formed to discriminate between signal and
background. A fit to this likelihood ratio is shown in
Figure~\ref{fig:WW_CDF}. The extracted $WW$ cross section was
$\sigma(p\bar{p} \rightarrow WW) = 12.1 \pm 0.9$ (stat)
$^{+1.6}_{-1.4}$(syst) pb, the most precise measurement of this
process at the Tevatron to date.
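Schematically (a sketch of the standard construction; the precise inputs and normalizations are those of the CDF analysis), the per-event discriminant has the form
\begin{equation}
LR_{WW}(\vec{x}) = \frac{P_{WW}(\vec{x})}{P_{WW}(\vec{x}) + \sum_{i} k_{i}\, P_{i}(\vec{x})},
\end{equation}
where $\vec{x}$ denotes the measured lepton momenta and $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$, the $P_{j}$ are probability densities built from the differential cross sections of the signal and background hypotheses, and the $k_{i}$ are the expected relative background fractions.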
\begin{figure}
\includegraphics[width=0.48\textwidth]{FitHistAll.eps}
\caption{\label{fig:WW_CDF}Fit to matrix element likelihood ratio used to extract the $WW$ cross section in the $l\nu l\nu$ final state.}
\end{figure}
In 1.0 fb$^{-1}$ at D0, the event selection was optimized to give the
lowest statistical and systematic uncertainties in each of three
lepton channels ($ee$, $e\mu$, and $\mu\mu$). The predicted signal
and background event yields were then used to extract the cross
section. The three channels were combined to give a cross section
measurement, $\sigma(p\bar{p} \rightarrow WW) = 11.5 \pm 2.1$
(stat+syst) $\pm 0.7$ (lumi) pb.
\section{Semi-leptonic Decay Channels}
For ``light'' Higgs masses ($m_{H}<135$~GeV), the most sensitive search
channels at the Tevatron are those where a Higgs is produced in
association with a $W$ or $Z$ boson with the $W$ or $Z$ decaying
leptonically and the Higgs decaying to $b\bar{b}$. The $W$ or $Z$
leptonic decays with one or two identified leptons or large missing
transverse energy (from invisible decays or unidentified leptons) are
all used in the Higgs searches. Studying the analogous final states
from semileptonic decays of $WW$, $WZ$, and $ZZ$ events can improve
our understanding of these channels. The diboson results presented
here do not require $b$-tagging, which is an important difference with
respect to the low-mass Higgs searches.
Diboson events where one boson decays to two quarks ($WW/WZ
\rightarrow l\nu qq$, $ZW/ZZ \rightarrow \nu\nu qq$, and $ZW/ZZ
\rightarrow llqq$) suffer large
backgrounds from $W/Z+$jets events. As a result measurements carried
out in these channels will be less precise than those from the fully
leptonic channels. Recent Tevatron results have proven that it is
possible to observe these processes, both with multivariate techniques
similar to those used in Higgs searches and with simpler techniques
relying on the invariant mass of the two jets (dijet mass or
$M_{jj}$).
One measurement in the channel with large missing transverse energy and
two jets and three measurements with one identified lepton and two
jets are presented below. A feature of all of them is that $W
\rightarrow q\bar{q}'$ and $Z\rightarrow q\bar{q}$ are very challenging
to distinguish due to detector resolution effects, so the signals
measured are a sum of diboson production processes.
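The scale of the difficulty is set by simple kinematics: the boson mass splitting is
\begin{equation}
m_{Z} - m_{W} \approx 91.2 - 80.4 \approx 10.8~\mathrm{GeV},
\end{equation}
comparable to a typical hadronic dijet mass resolution of roughly 10--15\% of the dijet mass, i.e., of order 10 GeV near the boson peaks, so the $W$ and $Z$ resonances merge into a single broad structure.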
Table~\ref{tab:semi} summarizes the measurements performed in the
semi-leptonic decay modes. They are described in more detail below.
\begin{table*}
\begin{center}
\caption{\label{tab:semi}Summary of measurements in semi-leptonic decay modes.}
\begin{tabular}{|c|c|c|c|c|}
\hline
\hline
& & \multicolumn{3}{c|}{Cross section [pb]} \\
Process and channel & Analysis technique & CDF & D0~ & NLO prediction \\
\hline
$WW+WZ+ZZ \rightarrow \mbox{$\raisebox{.3ex}{$\not\!$}E_T$} jj$ & $M_{jj}$ fit & 18.0 $\pm$ 3.8 & & 16.8 $\pm$ 0.8 \\
\hline
$WW+WZ \rightarrow l \nu jj$ & $M_{jj}$ fit & 14.4 $\pm$ 3.8 & 18.5 $\pm$ 5.7 & 15.4 $\pm$ 0.8 \\
$WW+WZ \rightarrow l \nu jj$ & Random Forest classifier & & 20.2 $\pm$ 4.5 & 15.4 $\pm$ 0.8 \\
$WW+WZ \rightarrow l \nu jj$ & Matrix elements & 17.7 $\pm$ 3.9 & & 15.4 $\pm$ 0.8 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{$WW+WZ+ZZ \rightarrow jj$ at CDF}
CDF reported the first observation of diboson production where one
boson decays to leptons and the other to hadrons~\cite{METjets_CDF} in
3.5 fb$^{-1}$. Events with very large missing transverse energy ($\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}>$60
GeV) and exactly two jets were used for this observation. No veto on
events with a lepton in the final state was imposed. The analysis was
therefore sensitive to a sum of $WW, WZ$, and $ZZ$ processes.
One challenge in this channel was understanding the QCD multi-jet
background (MJB). An event with many jets can have large fake $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$
because of mismeasurement of the jet energies, often stemming from
instrumental effects, making this background difficult to model. The
size of the MJB background was significantly reduced by imposing cuts
on the $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$ significance and the angle between the $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$ and the
jets. The remaining MJB was modeled with a data sample enriched in
multi-jet events, selected by finding events with a large difference
in direction between their track-based missing transverse momentum and
their calorimeter-based missing transverse energy.
The second large background stemmed from electroweak processes
($W$+jets, $Z$+jets, and $t$-quark production). These backgrounds
were modeled using Monte Carlo. The uncertainty on the model was
evaluated with data, using $\gamma+$jet events.
The signal cross section was extracted by a fit to the dijet mass
spectrum with signal and background templates. The fit is shown in
Figure~\ref{fig:METjets_CDF}. The fitted cross section was found to
be $\sigma(p\bar{p} \rightarrow WW+WZ+ZZ) = 18.0 \pm
2.8$ (stat) $\pm2.4$ (syst) $\pm1.1$ (lumi) pb, with the dominant systematic
uncertainty due to the jet energy scale. The signal was observed with
a significance of 5.3$\sigma$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{prldibosons.ps}
\caption{\label{fig:METjets_CDF}Fit to the dijet mass distribution to extract the $WW+WZ+ZZ$ cross section in events with large missing transverse energy and two jets.}
\end{figure}
\subsection{$WW+WZ \rightarrow l \nu j j$ at D0~}
The first evidence of $WW+WZ \rightarrow l\nu jj$ was reported by the
D0~ collaboration~\cite{WWWZ_D0} in 1.07 fb$^{-1}$. Events with a
well-identified electron or muon, at least two jets, and $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}>$20 GeV
were selected. The background due to QCD multi-jet events was reduced
by requiring the transverse mass of the lepton-$\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}$ system to be
larger than 35 GeV. The remaining backgrounds were dominated by
$W$+jets production, with some QCD multi-jet, $Z$+jets, and top quark
production contributing as well.
The QCD multijet background was modeled using data with somewhat
loosened lepton requirements. The $W$+jets background was modeled
with simulated events from Alpgen interfaced with the Pythia parton
shower. The modeling of the $W$+jets background was critical to the
analysis, so careful comparison between data and Monte Carlo was
carried out. Discrepancies in the jet $\eta$ distributions and the
$\Delta R_{jj}$ distributions were observed; the models were
reweighted to agree with data.
Once confident in the modeling, a Random Forest Classifier (RF) was
used to discriminate between signal and background events. Several
kinematic variables, such as the dijet mass, were used as inputs and
the RF was trained on part of the background to build a classification
for each event.
The distribution of the RF output observed in data was fitted to a sum
of predicted RF templates to extract the signal significance and cross
section. The fit is shown in Figure~\ref{fig:WWWZ_D0}. The measured
cross section is $\sigma(p\bar{p} \rightarrow WW+WZ) = 20.2 \pm 4.5$
pb, with dominant systematic uncertainties from the modeling of the
$W$+jets background distribution and the jet energy scale. The
significance of the observed signal is 4.4$\sigma$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{E08HF1a.eps}
\caption{\label{fig:WWWZ_D0}Fit to Random Forest Classifier output used to extract the $WW+WZ \rightarrow l\nu jj$ cross section.}
\end{figure}
The same analysis was carried out using only the dijet mass
distribution rather than the random forest classifier output. Since
less information about the event is used, the measurement is expected
to be less precise. The dijet mass is shown in
Figure~\ref{fig:WWWZ_D0_Mjj}. The result of the fit was
$\sigma(p\bar{p} \rightarrow WW+WZ) = 18.5 \pm 5.7$ pb, compatible with
the result from the RF classifier.
\begin{figure}
\includegraphics[width=0.48\textwidth]{E08HF2a.eps}
\caption{\label{fig:WWWZ_D0_Mjj}Dijet mass distribution for data, backgrounds, and expected $WW+WZ$ signal in $l\nu jj$ final state.}
\end{figure}
\subsection{$WW+WZ \rightarrow l\nu jj$ at CDF}
The first observation of $WW+WZ \rightarrow l\nu jj$ was presented by
CDF in 2.7 fb$^{-1}$. Events with an isolated electron or muon, exactly two
jets, and $\mbox{$\raisebox{.3ex}{$\not\!$}E_T$}>$20 GeV were chosen. Strong cuts were imposed in
events with an electron to reduce the QCD multi-jet background due to
jets faking electrons. As a result the measurement was dominated by
events with muons.
Validation of the background modeling was also critical for this
analysis. Three regions were chosen according to the dijet mass: the
signal-rich region with $55<M_{jj}<120$ GeV and two control regions
with $M_{jj}<55$ GeV and $M_{jj}>120$ GeV where very little signal was
expected. Good modeling was observed in each region. Some
mismodeling in the dijet mass was observed when the control regions
were combined, and a corresponding systematic uncertainty on the shape
of the $W$+jets background was applied.
Matrix element calculations were used to discriminate between signal
and background. The differential cross sections of signal and
background processes were evaluated for each event. A discriminant
called the Event Probability Discriminant was formed to separate signal
from background. The predicted shapes of signal and background
discriminants were fit to the data to extract the diboson cross
section. The data superimposed on the background templates is shown
in Figure~\ref{fig:WWWZ_CDF}. The measured cross section is $17.7 \pm
3.9$ pb where the dominant systematic uncertainty was the jet energy
scale. The significance of the signal observation was 5.4$\sigma$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{EPDPlot_prelim.eps}
\caption{\label{fig:WWWZ_CDF}Distribution of the discriminant derived from matrix elements used to extract the $WW+WZ$ cross section in the $l \nu jj$ final state.}
\end{figure}
A second complementary search was carried out at CDF using a larger
data sample of 3.9 fb$^{-1}$ by fitting the $M_{jj}$ spectrum. The event
selection criteria were adjusted to achieve a smoothly falling shape
in the $M_{jj}$ distribution of the backgrounds. In particular, the
$p_{T}$ threshold on each individual jet was lowered, but the $p_{T}$
of the dijet system was required to be larger than 40 GeV. The
diboson signal resulted in a bump on top of the background, as shown
in Figure~\ref{fig:WWWZ_Mjj}. A fit with signal and background
templates was carried out, and the extracted cross section was $14.4
\pm 3.1$(stat) $\pm 2.2$(sys) pb, corresponding to an observed
significance of 4.6$\sigma$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{plot_prl_v1.eps}
\caption{\label{fig:WWWZ_Mjj}Distribution of the dijet mass for data and signal and background models for $WW+WZ \rightarrow l\nu jj$.}
\end{figure}
\section{Conclusions}
Several measurements of diboson production cross sections have been
carried out recently at the Tevatron. These measurements can be a
testing ground for techniques used in searches for the Standard Model
Higgs boson. Measurements are performed in many different final
states, ranging from those with several identified leptons to those
with no identified leptons and two jets. Different analysis
techniques are also used, from counting signal events to performing
fits to kinematic quantities to techniques involving classifiers or
matrix element calculations. There is good agreement between results
found at CDF and D0, as well as good agreement with NLO
predictions.
\bigskip
\begin{acknowledgements}
We thank the Fermilab staff and the technical staffs of the
participating institutions for their vital contributions. This work
was supported by the U.S. Department of Energy and National Science
Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the
Ministry of Education, Culture, Sports, Science and Technology of
Japan; the Natural Sciences and Engineering Research Council of
Canada; the National Science Council of the Republic of China; the
Swiss National Science Foundation; the A.P. Sloan Foundation; the
Bundesministerium f\"ur Bildung und Forschung, Germany; the World
Class University Program, the National Research Foundation of Korea;
the Science and Technology Facilities Council and the Royal Society,
UK; the Institut National de Physique Nucleaire et Physique des
Particules/CNRS; the Russian Foundation for Basic Research; the
Ministerio de Ciencia e Innovaci\'{o}n, and Programa
Consolider-Ingenio 2010, Spain; the Slovak R\&D Agency; and the
Academy of Finland.
\end{acknowledgements}
\bigskip
\section{Introduction}To date, all spin wave experiments in non-condensed systems have been on spin-$\frac{1}{2}$ or pseudo-spin-$\frac{1}{2}$ systems. Here we ask the following questions: what is the nature of spin waves in a thermal spin-$1$ gas? How do these differ from the well understood spin-$\frac{1}{2}$ case? These questions are of experimental interest, with several groups possessing the technology to study them using ultra-cold gases \cite{mukund, chapman, sengstock}.
In the context of cold gases, spin waves were first discussed by Bashkin \cite{bashkin}, and independently by Lhuillier and Lalo\"e \cite{laloe}. The key finding was that spin exchange collisions can give rise to weakly damped spin waves even in a non-degenerate gas. Ultra cold gases have provided an exciting setting for observing these spin phenomena. In particular, experiments on pseudo spin-$\frac{1}{2}$ Bose and Fermi systems by the JILA \cite{cornell} and Duke \cite{du} groups have observed coherent collective oscillations in an otherwise classical gas.
One expects the physics of a spin-$1$ gas to be far richer than a pseudo-spin-$\frac{1}{2}$ system. This is amply demonstrated by experiments on condensed spin-$1$ gases \cite{mukund2, sengstock2, chapman3, chapman4}. One of the most dramatic observations was that of a dynamical instability in Bose condensed $^{87}$Rb, studied by Sadler \textit{et al.} \cite{sadler}. Beginning with a gas pumped into the $m_{F} = 0$ state, they observed the spontaneous formation of transverse ferromagnetic domains. Will a similar instability be observed in the normal state? We find that there is an exponentially growing mode in the unmagnetized gas, even in the normal state, with a wavelength comparable to typical cloud sizes.
From a theoretical perspective, the source of novel physics in a dilute spin-$1$ Bose gas (such as $^{87}$Rb, $^{23}$Na) is the structure of the interactions, described by \textit{two} coupling constants: $c_{0}$ and $c_{2}$, representing spin-independent and spin-dependent contact interactions. The interaction Hamiltonian density takes the form $H_{\rm int} = c_0 n^2/2 + c_2 \langle \vec{\textbf{S}} \rangle^2/2$, where $n$ and $\vec{\bf S}$ are the local density and spin density respectively \cite{jason}. The coefficient $c_{2}$ has no corresponding analog in the spin-$\frac{1}{2}$ case. This interaction gives rise to spin mixing collisions, where two atoms in the $m_{F} = 0$ hyperfine sub-level can combine to form atoms in the $m_{F} = \pm 1$ states \cite{ketterle}. Another important consideration is the quadratic Zeeman effect, which arises from the hyperfine interaction and the difference in the coupling between the electronic and nuclear spins. The linear Zeeman effect can be neglected in the Hamiltonian as the total spin of the atoms is a conserved quantity.
We begin our analysis by setting up the problem, and reviewing spin waves in a spin-$\frac{1}{2}$ gas. This allows us to highlight the differences with the spin-$1$ case. Next, we turn to the spin-$1$ gas. Starting from a microscopic Hamiltonian, we obtain a linearized Boltzmann equation about the ferromagnetic ($m_{F} = 1$) and polar ($m_{F} = 0$) states, and calculate the spin wave dispersion in each case. We find that a polar gas with ferromagnetic interactions is dynamically unstable towards spin-mixing collisions for small enough Zeeman fields, analogous to the condensed case. By explicit calculation of the dispersion relation, we show that for strong enough anti-ferromagnetic interactions, an instability occurs in the polar state \cite{mueller2}.
Following these analytic calculations, we perform numerical simulations of a trapped gas using an effective $1$D Boltzmann equation. We explore the evolution of transversely polarized spins, and investigate dynamical instabilities.
Our work complements prior work on the kinetics of a normal spin-$1$ Bose gas by Endo and Nikuni \cite{nikuni}. While their focus is on the effect of collisions in the damping of collective modes of a trapped gas, with particular emphasis on dipole modes, we focus here on the collisionless, Knudsen regime.
\section{Basic Setup}
\subsection{Kinetic Equations} As described by Ho \cite{jason} and Ohmi and Machida \cite{Ohmi}, the second-quantized Hamiltonian for a spin-$1$ gas, expressed in a frame where each spin component is rotating at its Larmor frequency is
\begin{widetext}
\begin{eqnarray}\label{eq:1}
{\cal H} = \int d\vec{\textbf{r}}~ \psi_{a}^{\dagger}\left(-\frac{\nabla^{2}}{2m} + U( \vec{r}, t) + q \textbf{S}_{z}^{2}\right)\psi_{a} + \frac{c_{0}}{2}\psi^{\dagger}_{a}\psi_{a^{'}}^{\dagger}\psi_{a^{'}}\psi_{a}
+ \frac{c_{2}}{2}\psi^{\dagger}_{a}\psi^{\dagger}_{a^{'}}\vec{\textbf{S}}_{ab}\cdotp \vec{\textbf{S}}_{a^{'}b^{'}}\psi_{b^{'}}\psi_{b}
\end{eqnarray}
\end{widetext}
where $a = (-1, 0, 1)$ is the quantum number for the $z$-component of the spin, $\psi_{\sigma}(\textbf{r})$ is the field annihilation operator obeying bosonic commutation relations, and $U (\vec{r}, t)$ is the trapping potential. Here $\vec{\textbf{S}}$ denotes the dimensionless vector spin operator. Throughout, we set $\hbar = 1$. Boldface is used to denote matrices, and arrows denote vectors.
The interaction strengths, expressed in terms of the scattering lengths in the spin-$2$ and $0$ channel ($a_{2}, a_{0}$) are $c_{0} = 4\pi(a_{0} + 2a_{2})/3m$ and $c_{2} = 4\pi(a_{2} - a_{0})/3m$. A negative $c_{2}$ favors a ferromagnetic state with $\langle \vec{\textbf{S}} \rangle = 1$, while for positive $c_{2}$, the equilibrium state is one with $\langle \vec{\textbf{S}} \rangle = 0$, where all the atoms are in the $m_{F} = 0$ state, or simply an incoherent mixture.
Additionally, one considers the quadratic Zeeman effect $q \propto B^{2} $, which favors a state with $m_{F} = 0$ ($\langle \vec{\textbf{S}}\cdotp \vec{\textbf{S}} \rangle = 0$). Thus for a gas with negative $c_{2}$ such as $^{87}$Rb, the spin dependent contact interaction competes with the quadratic Zeeman field, giving rise to interesting dynamics \cite{sadler}.
In the condensed gas, dipolar interactions may also be important \cite{mukund, mukund2}. At the lower densities found in a normal gas these interactions, which fall as $1/r^{3}$, may be neglected.
Following standard arguments \cite{kadanoff} we obtain the equations of motion for the Wigner density matrix $\textbf{F}_{ab}(\vec{p}, \vec{R}, t)$, whose elements are $f_{ab}(\vec{p}, \vec{R}, t)= \int d \vec{r} e^{-i\vec{p}\cdotp \vec{r}} \langle \psi^{\dagger}_{a}(\vec{R} - \frac{\vec{r}}{2}, t) \psi_{b}(\vec{R} + \frac{\vec{r}}{2}, t) \rangle$. The diagonal components of the spin density matrix, when integrated in momentum, give the densities of each of the spin species. The off diagonal terms, often referred to as \textit{coherences}, are responsible for spin dynamics. Here $\vec{p}$ represents momentum, $\vec{R}$ and $\vec{r}$ denote the center of mass and relative coordinates. The Wigner function is the quantum analog of the classical distribution function. By taking moments of the Wigner function, we obtain physical observables such as the density $\textbf{n}(\vec{R}, t) = \int \frac{d\vec{p}}{(2 \pi)^{3}}~\textbf{F}(\vec{p}, \vec{R}, t)$ and spin current $\vec{\textbf{j}}(\vec{R}, t) = \int \frac{d\vec{p}}{(2 \pi)^{3}}~\vec{p}~\textbf{F}(\vec{p}, \vec{R}, t)$.
The equation of motion takes the standard form \cite{bashkin}:
\begin{equation}\label{eq:2}
\frac{\partial}{\partial t}\textbf{F} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}\textbf{F}= i\Big[\textbf{V}, \textbf{F}\Big] + \frac{1}{2}\Big\{\vec{\nabla}_{R}\textbf{V},\vec{\nabla}_{p}\textbf{F}\Big\} + \textbf{I}_{c}
\end{equation}
where $\textbf I_{c}$ is the collision integral and $\textbf V$ is the interaction potential. This form of the Boltzmann equation (\ref{eq:2}) is completely general, and holds for any non-condensed spinor gas. The role of spin enters in determining the dimension of the density matrix, and the exact form of the interaction potential $\textbf{V}$.
The first two terms on the right hand side of the Boltzmann equation (\ref{eq:2}) arise from forward and backward scattering collisions. While the former type merely alter the mean field seen by the atoms, backward scattering collisions allow the colliding atoms to exchange momentum.
The last term in the Boltzmann equation is the collision integral, responsible for energy relaxation. While a detailed derivation of the collision integral is non trivial \cite{nikuni}, the qualitative properties are well described within a simple relaxation time approximation $I_{c} = -(f - f_{0})/\tau$. The relaxation time ($\tau$) is proportional to the elastic scattering rate $\tau^{-1}_{el} \sim 8\pi a^{2}_{0}v_{T}n$, where $v_{T} = \sqrt{\frac{2 k_{B}T}{m}}$ is the thermal velocity and $n$ is the density.
The precise expression for $\bf{V}$ determined by including the interactions within the Hartree-Fock description is\begin{equation}\label{eq:3}
\textbf{V} = \left(U + c_{0}\text{Tr}(\textbf n)\right)\textbf 1 + c_{0}\textbf n + c_{2}\vec{\textbf{S}}\textbf n \cdotp \vec{\textbf{S}} + c_{2} \vec{M}\cdotp\vec{\textbf{S}}
\end{equation}
where $\textbf{1}$ is the identity matrix, Tr denotes the trace operation, and $\vec{\textbf{M}} = \text{Tr}(\vec{\textbf{S}}~\textbf{n})$ is the magnetization. Our form for the interaction energy is equivalent to that of Endo and Nikuni \cite{nikuni}. One may explicitly check that this interaction potential is rotationally invariant in spin space.
In experiments the external trapping potential $U$ is often spin-dependent, an effect which is readily incorporated. The spin independent interaction gives rise to self and cross interaction terms. The latter give rise to coherences and are encoded in the second term in (\ref{eq:3}). In addition, the mean-field potential alters the trapping potential seen by all the atoms by an amount $c_{0}n_{tot}$, where the total density $n_{tot} = \text{Tr}(\textbf{n})$.
The contribution of the spin-dependent interaction is more subtle and can be understood as follows: The first term accounts for spin dynamics such as spin-relaxation collisions. For ferromagnetic interactions ($c_{2} < 0$), the last term increases the density of regions where the atomic spins are aligned with respect to one another ($ |M| = 1$). For a fully polarized gas in the $m_{F} = +1$ sublevel, this is $c_{2}\textbf{S}_{z}$, while for an unmagnetized ($m_{F} = 0$) gas it is zero.
Finally, we note that while the self- and cross- interactions between the three sublevels produce diagonal and off-diagonal contributions to the interaction potential, the spin-relaxation or population exchange interactions only gives rise to coherences, and are absent for a single component gas.
\subsection{Qualitative Features}
Here we elaborate on our argument for why one expects richer spin physics in a spin-$1$ system versus a spin-$\frac{1}{2}$ one. We assume a uniform, collisionless gas with
$\nabla_{R}c_{0}n/m v_{T} \ll c_{0}n$, such that all the physics is governed by the commutator in (\ref{eq:2}).
The pseudo-spin $\frac{1}{2}$ case may be understood starting from the fact that a $2 \times 2$ matrix ($A$) can be decomposed into $\textbf{A} = A_{0}\textbf{I} + A_{\mu}\sigma_{\mu}$, where $A_{0} = \frac{1}{2}\text{Tr}(\textbf{A})$, and $\vec{\sigma}$ are the Pauli matrices. Expressing the density matrix and interaction potential in this way, the equations of motion are $\frac{\partial}{\partial t}F_{0} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}F_{0} = 0$ and $\frac{\partial}{\partial t}{F_{\mu}} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}F_{\mu} = - F_{\rho}V_{\nu}\epsilon_{\rho\nu\mu}$, where $\epsilon_{\rho\nu\mu}$ is the completely antisymmetric unit tensor. The second equation, which is responsible for much of the spin wave physics in a spin-$\frac{1}{2}$ system, simply says that the interactions act as an effective magnetic field about which the spins precess.
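In vector notation the spin equation is simply a precession equation,
\begin{equation}
\frac{\partial \vec{F}}{\partial t} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}\vec{F} = \vec{V} \times \vec{F},
\end{equation}
so at this level the interactions act only as an effective magnetic field $\vec{V}$ about which the spin distribution at each momentum precesses.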
To analyze the spin-$1$ case, we follow Ohmi and Machida \cite{Ohmi} and use a Cartesian basis ($\psi =\{ \psi_{x}, \psi_{y}, \psi_{z}\}$). This representation is related to the spherical ($\{1, 0, -1\}$) basis as follows \cite{mueller}: $\psi_{x} = \frac{1}{\sqrt{2}}(\psi_{1} - \psi_{-1})$, $\psi_{y} = \frac{i}{\sqrt{2}}(\psi_{1} + \psi_{-1})$, and $\psi_{z} = \psi_{0}$.
In the Cartesian basis the irreducible decomposition of a spin-$1$ system is $\textbf{A} = A_{0}\textbf{I} + i \epsilon_{abc}A^{(1)}_{c} + A^{(s)}$, where the scalar $A_{0} = \frac{\text{Tr}(A)}{3}$, $\textbf{A}^{(a)} = \epsilon_{abc}A^{(1)}_{c}$ is a completely antisymmetric matrix proportional to the vector spin $\langle\textbf{S}\rangle$ order, and $\textbf{A}^{(s)}$ is a symmetric traceless tensor which is related to the spin fluctuations $\langle \textbf{S}_{a}\textbf{S}_{b} + \textbf{S}_{b}\textbf{S}_{a} \rangle$, and is a nematic degree of freedom.
Writing the density matrix ($\textbf{F}$) and interaction ($\textbf{V}$) in terms of their respective irreducible decompositions we obtain three equations of motion: $\frac{\partial}{\partial t}F_{0} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}F_{0} = 0$, $\frac{\partial}{\partial t}{F^{(a)}} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}F^{(a)} = i[\textbf{V}^{(a)}, \textbf{F}^{(a)}] + i[\textbf{V}^{(s)}, \textbf{F}^{(s)}]$, and $\frac{\partial}{\partial t}{F^{(s)}} + \frac{\vec{p}}{m}\cdotp\vec{\nabla}_{R}F^{(s)} = i[\textbf{V}^{(a)}, \textbf{F}^{(s)}] + i[\textbf{V}^{(s)}, \textbf{F}^{(a)}]$. Here the second equation describes the evolution of $\langle \textbf{S} \rangle$ and the spin currents, while the last equation describes dynamics of the nematicity ($\textbf{A}^{(s)}$).
The new feature is that the spin and nematic degrees of freedom are coupled by the interaction matrix. Using the Levi-Civita identities, it is easy to see that $i[\textbf{V}^{(a)}, \textbf{F}^{(a)}] $ is the spin-$1$ analog of the corresponding spin-$\frac{1}{2}$ term which acts as an effective magnetic field for the spins. But the last term $i[\textbf{V}^{(s)}, \textbf{F}^{(s)}]$ shows that fluctuations of the nematicity can change the spin dynamics.
\subsection{Spin waves in a two-component gas} As a starting point for understanding the spin-$1$ gas, we review the spin-$\frac{1}{2}$ case. As explained in \cite{oktel}, spin dynamics in the cold collision regime are governed by momentum exchange collisions with a characteristic timescale $\tau^{-1}_{ex} \sim \frac{4\pi a_{12}n}{m}$, where $a_{12}$ is the two-body scattering length. When $k_{B}T \ll \frac{1}{m a^{2}_{12}}$, or alternatively, the thermal deBroglie wavelength is large compared to the scattering length $\Lambda_{T} = \sqrt{\frac{2\pi}{m k_{B}T}} \ge a_{12}$, $\tau_{ex} \ll \tau_{el}$, and for times shorter than $\tau_{el}$, particles exchange momentum several times without significantly altering the energy distribution. Therefore excitations with wavelength longer than $v_{T}\tau_{ex}$ are a
collective effect.
For typical densities ($\sim 10^{14}$cm$^{-3}$) and scattering lengths, $n^{1/3}a \sim 0.01$ the condensation temperature ($T_{c}$) for the interacting system may be approximated to that of an ideal Bose gas \cite{baym}. Thus in order to see collective spin phenomena, we require $T_{c} \sim \langle \omega \rangle N^{\frac{1}{3}} < k_{B}T < \frac{1}{m a^{2}_{12}}$, where $\langle \omega \rangle = (\omega_{x}\omega_{y}\omega_{z})^{1/3}$ is the average trap frequency \cite{pethick}. Although this is typically a wide temperature range $10^{-7} < T < 10^{-3}$K, diffusive relaxation damps out spin waves at higher temperatures.
A two-component gas has a longitudinal and a transverse spin mode. The longitudinal mode has a linear dispersion, but is strongly Landau damped \cite{bashkin}. By contrast, the transverse mode is weakly damped and propagates with dispersion $\omega = \frac{k^{2}v^{2}_{T}\tau_{ex}}{2}\left(1 - i \frac{\tau_{ex}}{\tau_{D}}\right)$, where $\frac{\tau_{ex}}{\tau_{D}} \sim \frac{a_{12}}{\Lambda_{T}} \le 1$ \cite{levy}. These weakly damped spin waves have been observed in the context of spin-polarized hydrogen by Johnson \textit{et al.} \cite{cornell, du, lee}.
\section{Spin waves in the spin-$1$ gas}
In this section we linearize Eq.(\ref{eq:2}) about stationary states. In Sec.IV we consider more general dynamics.
\subsection{Excitations about the $m_{F}=1$ state}We begin by linearizing about the $m_{F} = 1$ state. We consider a homogeneous gas of particles with a Maxwellian velocity distribution and initial density $n_{0}$. The self-interaction between two particles is proportional to $a_{2}$, and so we define $\Omega_{int} = (c_{0} + c_{2})n_{0}$. We first consider the collisionless limit, $1 \ll \Omega_{int}\tau_{D}$.
\begin{figure}[hbtp]
\begin{picture}(150, 160)(10, 10)
\put(-38, 160){(a)}
\put(92, 160){(b)}
\put(-38, 87){(c)}
\put(92, 83){(d)}
\put(-25, 95){\includegraphics[scale=0.375]{fig1.eps}}
\put(95, 95){\includegraphics[scale=0.375]{fig2.eps}}
\put(-25, 15){\includegraphics[scale=0.375]{fig3.eps}}
\put(95, 15){\includegraphics[scale=0.375]{fig4.eps}}
\put(-28, 110){\begin{sideways}\textbf{$(\omega/c_{0}n_{0})^{2}$}\end{sideways}}
\put(95, 115){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(-30, 35){\begin{sideways}\textbf{$(\omega/c_{0}n_{0})^{2}$}\end{sideways}}
\put(92, 30){\begin{sideways}\textbf{$(\omega/c_{0}n_{0})^{2}$}\end{sideways}}
\put(155, 87){\textbf{$k \Lambda_{T}$}}
\put(35, 87){\textbf{$k \Lambda_{T}$}}
\put(155, 7){\textbf{$k \Lambda_{T}$}}
\put(35, 7){\textbf{$k \Lambda_{T}$}}
\end{picture}
\caption{\label{fig:-1}(Color Online) (a) Dispersion relation $\omega^2(k)$ for various values of $q$ using $^{87}$Rb parameters ($c_{2} < 0$) at $T= 1\mu$K and $n_{0} = 10^{14}$cm$^{-3}$: q=0 (solid, black), $q = 2|c_{2}|n_{0}$ (blue, dashed), $q = 4|c_{2}|n_{0}$ (green, dotted), $q = 4.5|c_{2}|n_{0}$ (red, thin). (b) Gapless, linear dispersion for small $k$ at $q = 4|c_{2}|n_{0}$. The solid black curve is the small $k$ expansion (\ref{eq:7}). (c) $q = 2|c_{2}|n_{0}$ dispersion in detail. The horizontal black line is $\omega^{2} = q(q - 4|c_{2}|n_{0})$, which is the gap predicted by the small $k$ expansion (\ref{eq:7}). (d) $q=0$ dispersion in detail. Horizontal dashed line at $\omega^{2} = -4c^{2}_{2}n^{2}_{0}$ indicates the most unstable mode. Vertical dotted line is at $k\Lambda_{T} = \frac{\sqrt{2 |c_{2}|c_{0}}n_{0}}{k_{B} T}$.}
\end{figure}
Linearizing about this state, and dropping the collision term, the Boltzmann equation can be written in Fourier space:
\begin{eqnarray}\label{eq:4}
(-\omega + \frac{\vec{k}\cdotp\vec{p}}{m})\tilde{\delta\textbf{F}} = \Big[\textbf{V}_{0}, \tilde{\delta\textbf{F}}\Big] + \Big[\tilde{\delta\textbf{V}}, \textbf{F}_{0}\Big] + \\\nonumber
\frac{1}{2}\Big\{\vec{k}~\tilde{\delta\textbf{V}},\vec{\nabla}_{p}\textbf{F}_{0}\Big\}
\end{eqnarray}
where $\textbf{F}_{0}$ is the initial distribution and $\textbf{V}_{0}$ is the initial interaction potential. The quantities $\tilde{\delta\textbf{F}}$ and $\tilde{\delta\textbf{V}}$ are Fourier transforms of the change in the distribution and interaction, defined in the usual way $\delta f(\vec{k}, p, \omega) = \int d \vec{r}dt e^{i(\vec{k}\cdotp\vec{r} - \omega t)} \delta f(p,r, t)$. Inserting the form for the interaction potential (\ref{eq:3}), we obtain the equations of motion for the density and spin fluctuations.
We find that four long wavelength ($|k|v_{T} \ll \Omega_{int}$), low frequency ($\omega \ll \Omega_{int}$), spin modes propagate in this system. These modes have a quadratic dispersion, characteristic of ferromagnetic systems, but have a $k=0$ offset. Two of the modes are transverse spin waves, and the other two are quadrupolar modes where the longitudinal magnetization fluctuates along with the transverse spin fluctuations ($\langle S_\mu S_\nu\rangle -\langle S_\nu S_\mu\rangle$).
The spin modes are given by the equation $1 = -(c_{0} + c_{2})\int\frac{d\vec{p}}{(2\pi)^{3}}\frac{f_{0} + \frac{1}{2}\vec{k}\cdotp\vec{\nabla}_{p}f_{0}}{\omega +\Omega_{int} + q - \frac{\vec{k}\cdotp\vec{p}}{m}}$ \cite{landau}. The transverse spin modes have dispersion $\omega(k) = \frac{k^{2}v^{2}_{T}}{2 \Omega_{int}} - q$. The fact that this has a negative energy at $k=0$ signifies that in the presence of a quadratic Zeeman shift, the initial state is not the thermodynamic ground state. One can decrease the energy by rotating the spins to be transverse to the magnetic field. However the frequency stays real, meaning that the initial state is metastable, and transverse spin will not be spontaneously generated.
Meanwhile, the quadrupolar modes have dispersion $\omega(k) = -4c_{2}n_{0} + \frac{k^{2}v^{2}_{T}}{2 \Omega_{int}}$. In a gas with positive $c_{2}$, the frequency vanishes at a finite wave-vector, indicating a thermodynamic instability in the anti-ferromagnetic gas. Conversely for negative $c_{2}$ the spin-dependent contact interaction leads to a gap in the quadrupolar mode spectrum.
As pointed out by Leggett \cite{leggett}, one can also derive these equations by writing the macroscopic equations for the density and spin current:
\begin{eqnarray}\label{eq:5}
\partial_{t}\textbf{n} + \frac{\vec{\nabla}_{\textbf{R}}\cdotp\vec{\textbf{j}}}{m} = i\Big[\textbf{V}, \textbf{n} \Big] \\\nonumber
\partial_{t}\vec{\textbf{j}} + \frac{\vec{\nabla}_{\textbf{R}}\textbf{Q}}{m} = i\Big[\textbf{V}, \vec{\textbf{j}} \Big] - \frac{1}{2}\Big\{\vec{\nabla}_{\textbf{R}}\textbf {V},\textbf{n} \Big\} - \frac{\vec{\textbf{j}}}{\tau_{D}}
\end{eqnarray}
where the energy density $\textbf{Q} = \int d{\vec{p}}~(\vec{p}\cdotp\vec{p})~\textbf{F}$. In order to obtain a closed set of equations, we approximate $\textbf{Q} \approx \int d{\vec{p}}~\frac{1}{3}\langle\vec{p}\cdotp\vec{p}\rangle~\textbf{F} = \frac{p_{T}^{2}}{2}\textbf{n}$, where $p_{T} = mv_{T}$. In a collisionless gas, sufficiently close to equilibrium, these approaches are equivalent.
An advantage of the Leggett approach is that it provides access to the dispersion at large $k$ more readily than the former approach. Solving (\ref{eq:5}), we get the standard relations $\omega(k) = q - \frac{k^{2}v^{2}_{T}\tau_{D}/2}{(1 + (\Omega_{int}\tau_{D})^{2})} (\Omega_{int}\tau_{D} - i)$, and $\omega(k) = -4c_{2}n_{0} + \frac{k^{2}v^{2}_{T}\tau_{D}/2}{(1 + (\Omega_{int}\tau_{D})^{2})} (\Omega_{int}\tau_{D} - i)$. Setting $\tau_{D}$ to infinity, we once again recover the relationships derived above.
\subsection{Excitations about the $m_{F} = 0$ state} Next we consider the case of a gas in the $m_{F} = 0$ state. As before, low energy dynamics are driven by a combination of the quadratic Zeeman energy, and the spin-dependent contact interaction. As we now show, in the case where $c_{2}$ is negative, these energies compete, driving a {\textit{dynamic}} instability.
Linearizing equation (\ref{eq:4}) yields the relation:
\begin{equation}\label{eq:6}
1 = -\frac{4c^{2}_{2}\chi_{1}\chi_{2}}{(1+ (c_{0}+c_{2})\chi_{2})(1-(c_{0}+c_{2})\chi_{1})}
\end{equation}
where the response functions are $\chi_{1} = \int \frac{d\vec{p}}{(2\pi)^{3}}\frac{(f_{0} + \frac{1}{2}\vec{k}\cdotp\vec{\nabla}_{p}f_{0})}{-\omega +\Omega_{d}+ \frac{\vec{k}\cdotp\vec{p}}{m}}$ and $\chi_{2} = \int \frac{d\vec{p}}{(2\pi)^{3}}\frac{(f_{0} - \frac{1}{2}\vec{k}\cdotp\vec{\nabla}_{p}f_{0})}{-\omega -\Omega_{d}+ \frac{\vec{k}\cdotp\vec{p}}{m}}$, and $\Omega_{d} = (c_{0}-c_{2})n_{0}-q$. For $|k|v_{T} \ll c_{0}n_{0}$ and $\omega \ll c_{0}n_{0}$, the resulting dispersion is:
\begin{equation}\label{eq:7}
\omega^{2}(k) = q(q + 4c_{2}n_{0}) + (q + 2c_{2}n_{0})\frac{k^{2}v^{2}_{T}}{c_{0}n_{0}}
\end{equation}
\begin{figure}[hbtp]
\begin{picture}(50, 100)
\put(-60, 3){\includegraphics[scale=0.55]{pmicro.eps}}
\put(-70, 35){\begin{sideways}\textbf{$(\omega/c_{0}n_{0})^{2}$}\end{sideways}}
\put(90, -6){\textbf{$k \Lambda_{T}$}}
\end{picture}
\caption{\label{fig:-2} Microwave-field induced instability in a polar gas for $q<0$: The modes of a polar gas with ferromagnetic interactions ($c_{2} < 0 $) are plotted for $q = 4c_{2}n$ (top) and $q = 2c_{2}n$ (bottom) (hence $q < 0$). The gaps are given by $q(q + 2c_{2}n_{0})$ in each case. Note that for negative $q$, a finite wave-vector instability sets in. Increasing $|q|$ pushes the instability to larger wave-vectors. The dashed line is the most unstable frequency for $q=0$ (see Fig.~\ref{fig:-1}(d)) given by $\omega^{2} = - (2c_{2}n_{0})^{2}$. The timescale for the onset of the instability is independent of $q$. We set $T = 1\mu$K and $n_{0} = 10^{14}$cm$^{-3}$.}
\end{figure}
The spin and quadrupole modes in the $m_{F} = 0$ state are degenerate. For $c_{2} > 0$, or for large enough $q$, one obtains a gapped quadratic mode. For $c_{2} <0$ however, the dispersion goes from quadratic to linear at $q=4|c_{2}|n_{0}$. Upon lowering $q$ further, the $m_{F} = 0$ state undergoes a {\textit{dynamic}} instability where the excitation frequency is complex. At $q=2|c_{2}|n_{0}$, the leading order term in the small $k$ expansion vanishes.
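Explicitly, writing $c_{2} = -|c_{2}|$ in (\ref{eq:7}) gives
\begin{equation}
\omega^{2}(k) = q(q - 4|c_{2}|n_{0}) + (q - 2|c_{2}|n_{0})\frac{k^{2}v^{2}_{T}}{c_{0}n_{0}}\,.
\end{equation}
The gap closes at $q = 4|c_{2}|n_{0}$, where the dispersion becomes the linear mode $\omega(k) = k v_{T}\sqrt{2|c_{2}|/c_{0}}$; for $q < 4|c_{2}|n_{0}$ the $k=0$ term turns negative, which is the dynamic instability, and at $q = 2|c_{2}|n_{0}$ the coefficient of $k^{2}$ vanishes as well.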
The structure of the modes should be contrasted with those in the $m_{F} = 1$ state. Even though either state may fail to be a thermodynamically stable starting point (for anti-/ferromagnetic interactions, respectively), only for the polar state with ferromagnetic interactions do small fluctuations grow exponentially.
Taking the $\tau_{D} \rightarrow \infty$ limit and solving (\ref{eq:5}), the dispersion relation is determined by the equation:
\begin{widetext}
\begin{eqnarray}\label{eq:8}
\Big((\omega - \Omega_{s})(\omega + \Omega_{d}) - \frac{v^{2}_{T}k^{2}}{2}\Big)\Big((\omega + \Omega_{s})(\omega - \Omega_{d}) - \frac{v^{2}_{T}k^{2}}{2}\Big) = -4(c_{2}n_{0})^{2}(\omega - \Omega_{d})(\omega + \Omega_{d})\end{eqnarray}\end{widetext}
where $\Omega_{s} = q + 2c_{2}n_{0}$. Expanding (\ref{eq:8}) to lowest order in $k$ returns Eq.(\ref{eq:7}).
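For instance, at $k=0$ Eq.~(\ref{eq:8}) factorizes as
\begin{equation}
(\omega^{2} - \Omega_{s}^{2})(\omega^{2} - \Omega_{d}^{2}) = -4(c_{2}n_{0})^{2}(\omega^{2} - \Omega_{d}^{2}),
\end{equation}
whose nontrivial root, $\omega^{2} = \Omega_{s}^{2} - 4(c_{2}n_{0})^{2} = q(q + 4c_{2}n_{0})$, reproduces the $k=0$ limit of Eq.(\ref{eq:7}).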
The roots of (\ref{eq:8}) for negative $c_{2}$ are plotted in Fig.~\ref{fig:-1}(a) for various values of the quadratic Zeeman energy $q$. The temperature is set to $1\mu$K, the density to $10^{14}$ cm$^{-3}$, and we use the scattering lengths for $^{87}$Rb, $a_{0}(a_{2}) = 5.39(5.31)$nm. For the homogeneous gas under consideration here, this is well above the temperature for Bose-Einstein condensation. As shown in Fig.~\ref{fig:-1}(d), at $q=0$, $\omega^{2}(k)$ is negative for small $k$, indicative of an instability.
The timescale for the onset of this instability ($t_{ins}$) is determined by the most unstable mode, which from (\ref{eq:8}) occurs at $\omega^{2} = - (2 |c_{2}|n_{0})^{2}$, giving $t_{ins} \sim \frac{1}{2|c_{2}|n_{0}}$. The corresponding wave-vector is $k_{ins}\Lambda_{T} = \frac{\sqrt{2 |c_{2}|c_{0}}n_{0}}{k_{B} T}$. Solving for where the mode frequency vanishes, we find that all wave-vectors $k$ smaller than a critical wave-vector $k_{c} = \sqrt{2}k_{ins}$ lead to unstable modes. For achievable experimental densities ($\sim 10^{14}$cm$^{-3}$) and temperatures ($\sim 1\mu$K), the wavelength of the instability is $\lambda_{c} \sim 100\Lambda_{T} = 10\mu$m, which is comparable to typical cloud sizes. The time for the onset of this instability is $\sim 0.25$~s. Although this timescale is easily achievable in experiment, the short collisional timescales ($\sim 50$ms) at these temperatures may render the instability undetectable.
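These scales follow from a simple estimate (heuristic, with factors of order unity dropped): equating the small-$k$ growth rate implied by (\ref{eq:7}) at $q=0$, $|\omega|^{2} = 2|c_{2}|n_{0}\,k^{2}v^{2}_{T}/(c_{0}n_{0})$, to its saturation value $(2|c_{2}|n_{0})^{2}$ gives
\begin{equation}
k_{ins}v_{T} = \sqrt{2|c_{2}|c_{0}}\,n_{0}, \qquad \mathrm{i.e.} \qquad k_{ins}\Lambda_{T} \sim \frac{\sqrt{2|c_{2}|c_{0}}\,n_{0}}{k_{B}T},
\end{equation}
while the onset time is set by the saturated rate, $t_{ins} \sim 1/(2|c_{2}|n_{0})$.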
\textit{Microwave induced instability in a ferromagnetic polar gas}:$-$
\footnote{This work was motivated by discussions with Mukund Vengalattore.}
Gerbier \textit{et al.} \cite{gerbier} have shown that a weak, microwave driving field, off resonant with the $F=1 \rightarrow F = 2$ transition, may be used to tune the energy levels of the $F=1$ hyperfine sublevels. In this sense, it plays the same role as the quadratic Zeeman shift. By changing the detuning, one can change the effective $q$ from positive ($E_{m_{F} = 0} < E_{m_{F} = \pm1}$) to negative ($E_{m_{F} = 0} > E_{m_{F} = \pm1}$), where $E$ denotes the energy of the hyperfine sublevels.
It is therefore of experimental and theoretical interest to consider what happens to the $m_{F} = 1, 0$ states at negative $q$. For the $m_{F} =1$ state, the spin mode becomes gapped by the quadratic Zeeman energy, and the thermodynamic instability disappears. This is to be expected as a negative $q$ favors a ground state with $\langle \textbf{S}\cdotp\textbf{S} \rangle = 1$.
The polar state, however, is more interesting. From (\ref{eq:7}), it is clear that for ferromagnetic interactions ($c_{2} < 0$), the first term on the right hand side is always positive, and the spectrum becomes gapped. The second term however is negative, signaling a finite wave-vector instability. Analyzing (\ref{eq:8}) (see Fig.\ref{fig:-2}), we find that increasing $|q|$ pushes the instability to larger wave-vectors, but the frequency of the most unstable mode remains largely unchanged. The experimental consequence is that for $q<0$ one should observe a reduction in the size of ferromagnetic domains, while the timescale for the onset of the instability is unaffected.
\begin{figure*}[hbtp]
\begin{picture}(150, 90)(10, 10)
\put(-165, 100){(a)}
\put(-35, 100){(b)}
\put(95, 100){(c)}
\put(220, 100){(d)}
\put(-160, 15){\includegraphics[scale=0.375]{fig17.eps}}
\put(-35, 15){\includegraphics[scale=0.39]{fig16.eps}}
\put(95, 15){\includegraphics[scale=0.39]{fig22.eps}}
\put(220, 15){\includegraphics[scale=0.39]{fig23.eps}}
\put(-45, 42){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(84, 42){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(213, 42){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(-170, 42){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(200, 7){\textbf{$k \Lambda_{T}$}}
\put(70, 7){\textbf{$k \Lambda_{T}$}}
\put(-60, 7){\textbf{$k \Lambda_{T}$}}
\put(325, 7){\textbf{$k \Lambda_{T}$}}
\end{picture}
\caption{\label{fig:-3}(Color Online) Dispersion relations ($\omega(k)$) for $c_{2} > 0$ and $c_{2} \sim c_{0}$. (a) Gapped (top) and ungapped (bottom) quadratic dispersion relations for $c_{2} = 0$. (b) The two solid curves in (a) are now shown for $c_{2} = 0.5 c_{0}$. At some wave-vector the two modes merge into a single mode. The real (black) and imaginary (orange) components are indicated separately. The gap is equal to $\Omega_{d} = c_{0} - c_{2}$. (c) Linear dispersion $\omega(k) = \frac{1}{\sqrt{2}}v_{T}k$, when $c_{2}/c_{0} = 1$. (d) Two modes for $c_{2} = 1.5c_{0}$: a gapped quadratic real mode and an unstable mode, with real (black) and imaginary (orange) parts shown. Notice that the instability only occurs if $0 < k < k_{max}(c_{2}/c_{0})$. The parameters for these plots are $T = 1\mu$K, $n \sim 10^{14}$cm$^{-3}$. For definiteness, we picked the atoms to have the mass of $^{87}$Rb and the corresponding $c_{0}$ value. We caution however that this choice is artificial, as $|c_{2}| \ll c_{0}$ in $^{87}$Rb. All these plots were made in zero field ($q=0$).}
\end{figure*}
\textit{Large anti-ferromagnetic interactions}: The modes of the polar gas when $c_{2} > 0$ can be obtained from (\ref{eq:8}). For small $c_{2}/c_{0}$, the polar state is stable toward spin and nematic fluctuations. However, when $c_{2}$ and $c_{0}$ are commensurate, an instability can set in. This latter limit, which has been largely unexplored, is primarily of academic interest, as all current spinor gas experiments have $|c_{2}| \ll c_{0}$. For simplicity we restrict our analysis to the case of no magnetic field ($q=0$).
As shown in Fig.~\ref{fig:-3}(a), for $c_{2} = 0$ we find a gapped and an ungapped quadratic mode with dispersions $\omega(k) = \sqrt{\frac{1}{2}({\Omega^{2}_{d} \pm \sqrt{ \Omega^{4}_{d} - (v_{T}k)^{4}}})}$, where $\Omega_{d}$ reduces to $c_{0}n_{0}$ in this limit. As $c_{2}$ is increased, for some $k_{c}$, the gapped and ungapped modes meet, and for $k > k_{c}$, imaginary frequencies appear (Fig.~\ref{fig:-3}(b)). While the exact expression for $k_{c}$ is complicated, and depends on $T$ and density, we estimate that imaginary frequencies appear when $\Omega_{s} \approx \Omega_{d}$, i.e.\ $c_{2} \approx c_{0}/3$. The case of $c_{2} = c_{0}$ is special, and there is a single linear mode with frequency $\omega(k) = \frac{1}{\sqrt{2}}v_{T}k$ (Fig.~\ref{fig:-3}(c)). However, as $c_{2} > c_{0}$ (Fig.~\ref{fig:-3}(d)), imaginary frequencies appear once again for a bounded set of $k$ values: $0 < k < k_{max}(c_{2}/c_{0})$.
\subsection{Summary of Results}
In Fig.~\ref{fig:-4} we summarize the results of our stability analysis in terms of the interaction parameters $c_{0}$ and $c_{2}$ for zero $q$. The following general features are clear (a minimal classifier sketch follows the list):
\begin{figure}[hbtp]
\begin{picture}(50, 100)(10, 10)
\put(-80, 13){\includegraphics[scale=0.75]{phaseplot}}
\put(30,7){\Large{$c_{0}n$}}
\put(55, 8){\small{(nK)}}
\put(-80, 47){\Large{\begin{sideways}$c_{2}n$\end{sideways}}}
\put(-81, 70){\small{\begin{sideways}(nK)\end{sideways}}}
\end{picture}
\caption{\label{fig:-4} Stability phase portrait in terms of the interaction strengths $c_{0}n$ and $c_{2}n$ at zero Zeeman field. The interaction strengths are measured in nK to facilitate comparison with typical experimental values. The four distinct regions are demarcated by solid black lines. I: For $c_{2} < 0$, but $c_{2} > -c_{0}$, the ferromagnetic state is the thermodynamic ground state. II: For $c_{2} > 0$, $c_{2} \le c_{0}/3$, the polar state is the ground state. III: Here $a_{2} (= c_{0} +c_{2}) < 0$, and the gas is dynamically unstable towards collapse. IV: When $c_{2} \ge c_{0}/3$, the polar state becomes metastable. In the presence of finite populations of $\pm1$, phase separation is expected to occur. In this region, the ferromagnetic state is thermodynamically unstable. Finally, the open and filled circles are typical interaction scales in $^{23}$Na and $^{87}$Rb for $n \sim 10^{14}$cm$^{-3}$.}
\end{figure}
\begin{enumerate}
\item For $c_{0} > 0$ and $c_{2}< 0$, the ground state is a ferromagnet.
\item For $c_{0} > 0$ and $c_{2} > 0$, but $c_{2} \le c_{0}/3$, the ground state is polar.
\item In region III the scattering length in the spin-$2$ channel ($a_{2}$) is negative, and the gas becomes mechanically unstable.
\item In region IV, the polar state is metastable; in the presence of finite populations of the $\pm1$ components, the system will phase separate \cite{mueller2}. The ferromagnet is thermodynamically unstable here.
\item The regime of current day experiments on spin-$1$ gases is that of $|c_{2}| \ll c_{0}$, indicated by the open and filled circles in the figure. For $^{87}$Rb, at $n \sim 10^{14}$cm$^{-3}$, $c_{0}n \sim 50$nK and $|c_{2}|n \sim 0.25$nK (filled). For $^{23}$Na, at the same density $c_{0}n \sim 85$nK and $c_{2}n \sim 4.25$nK (open).
\end{enumerate}
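The four criteria above can be encoded in a few lines. The following classifier sketch is ours (illustrative only) and reproduces the region boundaries of the phase portrait.
\begin{verbatim}
def spin1_region(c0n: float, c2n: float) -> str:
    """Classify (c0 n, c2 n) at q = 0 per the criteria enumerated above.

    Inputs are the interaction energies in any common unit (e.g. nK).
    Illustrative sketch only, not the authors' plotting code.
    """
    if c0n + c2n < 0:            # a2 < 0: mechanical collapse
        return "III: collapse"
    if c2n < 0:                  # ferromagnetic interactions
        return "I: ferromagnetic ground state"
    if c2n <= c0n / 3:
        return "II: polar ground state"
    return "IV: metastable polar state / phase separation"

# Typical experimental points quoted in the text:
print(spin1_region(50.0, -0.25))   # 87Rb -> region I
print(spin1_region(85.0, 4.25))    # 23Na -> region II
\end{verbatim}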
\section{Numerical Investigation}
In an experiment, the atoms are typically confined to a harmonic trap. Calculating the modes in a trap is much more difficult than in free space, and example calculations \cite{nikuni} do not always yield simple physical pictures. Moreover, experiments are typically performed in a limit where linear response is not applicable \cite{du}.
For these reasons, we perform a numerical study of the spin dynamics for a trapped gas. Furthermore, while the preceding calculations pertained to the collisionless regime, here we also model collisions in the gas using a simple relaxation time approximation.
We address two questions here:
\begin{enumerate}
\item What happens if the initial state is not a stationary state with respect to the external magnetic field?
\item Can the instability in the polar state be observed experimentally?
\end{enumerate}
To address question $1$, we consider a ferromagnet in the $x$ direction, in the presence of a field along $z$. We find coherent population dynamics in the collisionless gas. By controlling the rate of decay of the spin current, one may be able to experimentally observe coherent oscillations even in a classical spinor gas.
With regard to $2$: taking into account realistic experimental parameters, the polar state instability should not be visible in $^{87}$Rb. The collision time is set by $c_{0}$, while the time for spin dynamics is set by $c_{2}$. Owing to the small ratio $|c_{2}|/c_{0} \sim 0.005$, collisions lead to relaxation before any coherences can develop.
\subsection{Numerical setup}
We consider parameters such that all the motion takes place in one dimension, producing an effective 1D Boltzmann equation. Assuming that the distribution function in the transverse directions is frozen into a Boltzmann form, we can integrate them out \cite{natu}, yielding renormalized scattering lengths ($a_{0}/a_{2}$). In what follows, we normalize all lengths by the oscillator length $\sqrt{1/(m\omega_{z})}$ and momenta by $\sqrt{m k_{B} T}$. Using a phase-space conserving split-step approach \cite{teuk}, we numerically integrate (\ref{eq:2}). We have verified that the spatio-temporal grid is fine enough that our results are independent of step-size.
We use a radial trapping frequency $\omega_{r} = 2\pi \times 250$Hz, axial trapping frequency of $\omega_{z} = 2\pi \times 50$Hz, roughly $5 \times 10^{5}$ atoms, and set the temperature to $1\mu$K. The initial distribution function in position and momentum is given by the equilibrium distribution $e^{-\beta(\frac{p^{2}}{2m} + \frac{1}{2}m\omega^{2}_{z}z^{2})}$.
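The following is a minimal sketch (ours) of the phase-space-conserving split-step scheme for the collisionless part of the dynamics; the spin structure and the mean-field terms of (\ref{eq:2}) are omitted for brevity, and the grid sizes are illustrative.
\begin{verbatim}
import numpy as np

# Sketch (ours): split-step evolution of a 1D distribution f(z, p) in a
# harmonic trap (trap units: omega_z = m = kB T = 1).  Spin and mean-field
# terms of Eq. (2) omitted; grids illustrative.
nz, npts = 128, 128
z = np.linspace(-8, 8, nz)
p = np.linspace(-5, 5, npts)
Z, P = np.meshgrid(z, p, indexing="ij")
f = np.exp(-(P**2 + Z**2) / 2)      # equilibrium Maxwell-Boltzmann start

dt = 0.01
def stream(f, dt):
    # free streaming: f(z, p, t+dt) = f(z - p dt, p, t), column-wise interp
    out = np.empty_like(f)
    for j in range(npts):
        out[:, j] = np.interp(z - p[j] * dt, z, f[:, j])
    return out

def kick(f, dt):
    # harmonic force dp/dt = -z: f(z, p, t+dt) = f(z, p + z dt, t)
    out = np.empty_like(f)
    for i in range(nz):
        out[i, :] = np.interp(p + z[i] * dt, p, f[i, :])
    return out

for _ in range(200):    # Strang splitting: half kick, stream, half kick
    f = kick(f, dt / 2)
    f = stream(f, dt)
    f = kick(f, dt / 2)

print("norm drift:", abs(f.sum() - np.exp(-(P**2 + Z**2)/2).sum()) / f.sum())
\end{verbatim}
Each full step is a Strang splitting, which keeps the scheme second-order accurate in the time step and conserves phase-space volume to the accuracy of the interpolation.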
\subsection{Ferromagnetic gas in the x-direction}
\begin{figure}[hbtp]
\begin{picture}(100, 80)(10, 10)
\put(-48, 87){(a)}
\put(60, 87){(b)}
\put(60, 15){\includegraphics[scale=0.37]{fig8.eps}}
\put(-57, 15){\includegraphics[scale=0.35]{fig9.eps}}
\put(50, 40){\begin{sideways}\textbf{$n_{00}/n_{0}$}\end{sideways}}
\put(-68, 40){\begin{sideways}\textbf{$n_{00}/n_{0}$}\end{sideways}}
\put(160, 08){\textbf{$t(s)$}}
\put(40, 08){\textbf{$t(s)$}}
\end{picture}
\caption{\label{fig:-5}(a) Evolution of the central density of the $m_{F} = 0$ component (normalized to the initial density $n_{0} = n_{00} + n_{11} + n_{-1-1} (t = 0)$) as a function of time for negative $c_{2}$ ($^{87}$Rb parameters, thick line) and positive $c_{2}$ ($^{23}$Na parameters, dashed curve) for $q \sim 15$Hz ($B \sim 0.1$G). The line at $n_{00}/n_{0} = 0.5$ is a guide to the eye. (b) Thick curve: evolution of the $m_{F} = 0$ population in $^{23}$Na for larger $q \sim 60$Hz ($B \sim 0.3$G). Dashed curve: suppression of oscillations due to a finite relaxation rate ($\tau_{el} \sim 80$ms) in $^{23}$Na within a relaxation time approximation. Parameters used in the simulations: $\omega_{r} = 2\pi \times 250$Hz, $\omega_{z} = 2\pi \times 50$Hz and $N =5 \times 10^{5}$.}
\end{figure}
The spin density matrix for a ferromagnet pointing in the $x$-direction is $\textbf{F} = \psi^{*}\psi$, where $\psi = \{\frac{1}{2}, \frac{1}{\sqrt{2}}, \frac{1}{2}\}$. Numerically integrating (\ref{eq:5}), we find that for a magnetic field of $0.1$G ($q \sim 15$Hz), in the absence of collisional relaxation, the populations of the three hyperfine sublevels coherently oscillate about their equilibrium values, as shown in Fig.~\ref{fig:-5}(a). The amplitude of the oscillations depends on the magnitude of the spin-dependent contact interaction. The evolution is not simply a precession; rather, it involves an oscillation between ferromagnetism and nematicity.
For $c_{2} < 0 (> 0)$ the population of the $m_{F} = 0$ sublevel oscillates between $0.5$ and a larger (smaller) value. The frequency and amplitude of the oscillations also depend strongly on the magnitude of the quadratic Zeeman energy. Increasing $q$ increases the frequency of oscillations, but decreases the amplitude (Fig.~\ref{fig:-5}(b)). Similar results have been obtained by Chang \textit{et al.} \cite{chapman2} in condensed $^{87}$Rb, although they considered a different initial spin configuration.
The coherences oscillate at a frequency set by the external perturbation, in this case the quadratic Zeeman energy. The resulting phase factors can be interpreted as rotations in spin space. The spin-dependent contact interaction couples spin and density, and oscillations in the densities of the three components are seen. Increasing $q$ causes the spin vector to rotate faster, leading to a rapid averaging to its initial value over the timescale of the spin-dependent contact interaction, reducing the size of the effect.
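The mechanism just described can be illustrated with a single-mode sketch (our simplification: all atoms share one spatial mode, so the streaming terms drop out and only the competition between $q$ and $c_{2}n$ remains); the spin-1 matrices are standard, and the parameter values are illustrative rather than taken from the simulations above.
\begin{verbatim}
import numpy as np

# Single-mode sketch (ours): spin-1 mean-field evolution of rho under
# H(rho) = q Fz^2 + c2n (<Fx>Fx + <Fy>Fy + <Fz>Fz), illustrating the
# coherent population oscillations discussed above (hbar = 1 units).
s = 1 / np.sqrt(2)
Fx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
Fy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)
Fz = np.diag([1.0, 0.0, -1.0]).astype(complex)

psi = np.array([0.5, s, 0.5], complex)       # ferromagnet along x
rho = np.outer(psi, psi.conj())

q, c2n = 15.0, -1.0                          # illustrative values
dt, steps = 1e-4, 20000
for _ in range(steps):
    mf = [np.trace(rho @ F).real for F in (Fx, Fy, Fz)]
    H = q * (Fz @ Fz) + c2n * sum(m * F for m, F in zip(mf, (Fx, Fy, Fz)))
    rho = rho - 1j * dt * (H @ rho - rho @ H)   # Euler step (sketch only)
    rho = 0.5 * (rho + rho.conj().T)            # re-hermitize

print("n_00 population after t = %.2f: %.3f" % (steps * dt, rho[1, 1].real))
\end{verbatim}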
\textit{Collisions}:$-$ Experiments will also involve collisions between the atoms. A detailed derivation of collisional relaxation rates can be found in \cite{nikuni}. We are concerned here with the experimental feasibility; hence we use a simple relaxation time approximation. The relaxation rate is proportional to $(\frac{a_{0}+ 2a_{2}}{3})^{2} \sim a^{2}_{0}$. The amplitude of the oscillations in the populations of the three sub-levels is directly proportional to the size of $c_{2}$.
From the trap dimensions, the particle number, and the scattering lengths for $^{87}$Rb ($|c_{2}|/c_{0} \sim 0.005$), we estimate $\tau_{el} \sim 20$ms. On this timescale, virtually no oscillations are seen. The story is different for $^{23}$Na, where the smaller scattering length implies relaxation times almost $4$ times longer than in Rb, and the ratio $|c_{2}|/c_{0}$ is $10$ times larger. As shown in Fig.~\ref{fig:-5}(a,b), an experiment with sodium atoms should be able to detect population dynamics.
\subsection{Polar state instability}
\begin{figure}[hbtp]
\begin{picture}(100, 150)(10, 10)
\put(-55, 170){(a)}
\put(68, 170){(b)}
\put(-55, 78){(c)}
\put(68, 78){(d)}
\put(68, 5){\includegraphics[scale=0.36]{fig13.eps}}
\put(-50, 5){\includegraphics[scale=0.35]{fig12.eps}}
\put(68, 95){\includegraphics[scale=0.37]{Fig21.eps}}
\put(-55, 95){\includegraphics[scale=0.37]{fig.20.eps}}
\put(62, 135){\begin{sideways}\textbf{$\omega/c_{0}n_{0}$}\end{sideways}}
\put(-65, 125){\begin{sideways}\textbf{$n_{jj}/n_{0}$}\end{sideways}}
\put(165, 87){\textbf{$k\Lambda_{T}$}}
\put(45, 87){\textbf{$t(s)$}}
\put(63, 34){\begin{sideways}\textbf{$n_{ij}/n_{0}$}\end{sideways}}
\put(-62, 34){\begin{sideways}\textbf{$n_{jj}/n_{0}$}\end{sideways}}
\put(165, 0){\textbf{$t(s)$}}
\put(45, 0){\textbf{$t(s)$}}
\end{picture}
\caption{\label{fig:-6}(a) Collisionless evolution of the $m_{F} = 0~(+1)$ densities (solid/dotted curves) for $c_{2} = -c_{0}$, for an initial state $\{\frac{\epsilon}{\sqrt{2}}, 1, \frac{\epsilon}{\sqrt{2}}\}$ (spin fluctuation), where $\epsilon = 0.01$. The dashed curve shows the $m_{F} = 0$ populations when $\psi_{initial} = \{\frac{\epsilon}{\sqrt{2}}, 1, -\frac{\epsilon}{\sqrt{2}}\}$ (nematic fluctuation). The densities are normalized to the total initial density $n_{0} = n_{00} + n_{11} + n_{-1-1}(t = 0)$.
(b) A gapped and an ungapped mode (solid blue and green curves) exist for $c_{2} = 0$. As $c_{2} \rightarrow -c_{0}$ (dashed blue and green curves), the energy of the gapped mode increases (green dashed) while the other mode (blue dashed) becomes unstable. The dashed orange curve is the imaginary part of this mode.
(c) Collisionless evolution of the central density of the $m_{F} = 0, +1$ components (dotted/dashed respectively) for negative $c_{2}$ ($c_{2}/c_{0} \sim 0.05$ and $c_{2}n \sim 2$Hz) for $q \ll |c_{2}|n$ ($B \sim 1$mG). Conservation of $S_{z}$ implies that the $m_{F} = -1$ component evolves in the same way as $m_{F} = 1$. (d) Collisionless evolution of the coherences: the top curve is the $n_{0-1}$ component and the bottom is the $n_{1-1}$ component, normalized to the initial total density. The $n_{1-1}$ component tends to $-0.5$, coherence is maintained, and the final state is a polar state along $x$.}
\end{figure}
In Sec.~II(B) we showed that the polar state of a gas with ferromagnetic interactions has a dynamically unstable mode at long wavelengths. Here we explicitly demonstrate this instability for a trapped gas with ferromagnetic interactions. However, we also find that this instability is not observable in current experiments: given the scattering lengths of $^{87}$Rb, our simulations indicate that the collision times are so short that the coherences necessary to drive this instability never have time to develop.
Additionally, we explore details of the instability. We verify that the polar state is always unstable for small $q$, but the final state depends on the magnitude of $|c_{2}|/c_{0}$. For small values of this ratio ($\sim 10^{-3}$), finite populations of $m_{F} = \pm 1$ develop, but not all the atoms are transferred out of the $m_{F} = 0$ state. Increasing $c_{2}$ decreases the timescale for the onset of this instability and increases the fraction of the population transferred. The $m_{F} = 0$ state is stabilized at large Zeeman fields. Note that $S_{z}$ is always a conserved quantity.
We start with a polar gas with small coherences, which seed the instability. We consider seeds with two different symmetries: $\textbf{F} = \psi^{*}\psi$ where $\psi = \{\psi_{1}, \psi_{0}, \psi_{-1}\} = \{\frac{\epsilon}{\sqrt{2}}, 1, -\frac{\epsilon}{\sqrt{2}}\}$ (nematic) or $\{\frac{\epsilon}{\sqrt{2}}, 1, \frac{\epsilon}{\sqrt{2}}\}$ (spin), where $\epsilon \ll 1$, and we only keep terms ${\cal{O}}(\epsilon)$ in the density matrix. The Wigner functions are assumed to have a Maxwellian form in phase space.
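To make the symmetry of the two seeds concrete, the following lines (ours) evaluate the spin expectation values for both choices: the spin seed carries a transverse magnetization of order $\epsilon$, while the nematic seed has $\langle \textbf{F} \rangle = 0$ and only perturbs the nematic director.
\begin{verbatim}
import numpy as np

# Sketch (ours): the two seeds used above and their spin expectations.
s = 1 / np.sqrt(2)
Fx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
Fy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)
Fz = np.diag([1.0, 0.0, -1.0]).astype(complex)

eps = 0.01
for name, sign in (("spin", +1), ("nematic", -1)):
    psi = np.array([eps * s, 1.0, sign * eps * s], complex)
    F = [np.vdot(psi, M @ psi).real for M in (Fx, Fy, Fz)]
    print(name, "seed:  <Fx,Fy,Fz> =", np.round(F, 4))
# spin seed: <Fx> = 2*eps to leading order; nematic seed: <F> = 0.
\end{verbatim}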
First consider $q = 0$ dynamics. As the polar state is unstable to spin fluctuations, an initial spin fluctuation grows exponentially, while a nematic fluctuation merely changes the size of the nematic director, but does not make it unstable (Fig.~\ref{fig:-6}(a)). To enhance the size of the effect, we pick $c_{2} = -c_{0}$, but we observe an exponentially growing spin mode for typical values of $c_{2}$ as well.
Although the linear stability analysis predicts an instability, it does not determine the final state, which depends in a complicated way on the long time dynamics. In Fig.~\ref{fig:-6}(a) we show the collisionless evolution of the $m_{F} = 0 (+1)$ populations (solid/dashed) starting in the state $\{\frac{\epsilon}{\sqrt{2}}, 1, \frac{\epsilon}{\sqrt{2}}\}$ with $\epsilon = 0.01$. The dashed curve shows that for an initial state with nematic order $\{\frac{\epsilon}{\sqrt{2}}, 1, -\frac{\epsilon}{\sqrt{2}}\}$, no dynamics is seen.
In Fig.~\ref{fig:-6}(b) we plot the modes of the polar gas after solving (\ref{eq:8}). As before there are two modes at $c_{2} = 0$. One mode has a gap of $(c_{0} - c_{2})n_{0}$ (green curves), which increases as $c_{2} \rightarrow -c_{0}$. The other ungapped mode (blue curves) develops an imaginary part (orange) which increases as $c_{2}$ is made more negative.
The quadratic Zeeman effect stabilizes the polar state. Given an initial spin fluctuation about the unmagnetized state, this spin precesses in the Zeeman field at a rate proportional to $q$. If the initial state only has a nematic perturbation, then a finite $q$ is needed to produce the spin fluctuations required to drive an instability. Upon increasing $q$, there are two effects. First, there appear wave-vectors where the frequency is real (\ref{eq:8}), and oscillations are observed along with an exponential growth of the spin populations. Second, the spin vector precesses more rapidly, and averages to zero on a timescale set by $|c_{2}|n$, ultimately stabilizing the polar state.
In Fig.~\ref{fig:-6}(c) we plot the evolution of the central density of the $m_{F} = 0, +1$ sub-levels in time for $q \sim 10$mHz, in a collisionless gas. We take $c_{2}n \sim 2$Hz, such that $|c_{2}|/c_{0} \sim 0.05$, instead of the $0.005$ in $^{87}$Rb. In Fig.~\ref{fig:-6}(d) the coherences are shown. The timescale for the onset of the instability, as well as the final state, depends on the size of the off-diagonal seed, the nature of the perturbation (i.e.\ whether it is a spin fluctuation, a nematic fluctuation, or both), and the strength of the interactions. Here we have shown one final state, $\psi_{f} = \{\frac{1}{\sqrt{2}}, 0,-\frac{1}{\sqrt{2}} \}$. Note that the final state is not simply an incoherent mixture, unlike in a collisional gas. The distribution functions for the $m_{F} = \pm1$ components are identical, and no local magnetization is observed. Local structures may be observed in an experiment where the trapping potentials are made spin dependent. Performing the simulation with a relaxation rate approximation, taking into account collision times ($\tau \sim 50$ms) and the scattering lengths for $^{87}$Rb, we do not observe any dynamics in the polar state.
\begin{figure}[hbtp]
\begin{picture}(50, 100)(10, 10)
\put(-50, 15){\includegraphics[scale=0.5]{fig19.eps}}
\put(85, 5){\textbf{$t(s)$}}
\put(-60, 60){\begin{sideways}\textbf{$n_{00}/n_{0}$}\end{sideways}}
\end{picture}
\caption{\label{fig:-7} Evolution of the central density of the $m_{F} = 0$ component (normalized to the initial density $n_{0} = n_{00} + n_{11} + n_{-1-1} (t = 0)$) as a function of time for $c_{2} = 0.5 c_{0}$ without (solid) and with the relaxation rate approximation (dashed). The initial state is $\textbf{F} = \psi^{*}\psi$, where $\psi = \{\frac{\epsilon}{\sqrt{2}}, 1, \frac{\epsilon}{\sqrt{2}}\}$ (spin fluctuation), where $\epsilon = 0.01$. Trap parameters: $5 \times 10^{5}$ atoms, $\omega_{r} = 2\pi \times 250$Hz, $\omega_{z} = 2\pi \times 50$Hz. The relaxation time is $\tau \sim 50$ms. For definiteness we assumed $^{87}$Rb atoms, even though the scattering lengths in Rb do not obey this relationship. Owing to the strong interactions, one should be able to observe this instability even for reasonable collision rates.}
\end{figure}
\textit{Large anti-ferromagnetic interactions}: Next we consider the case where the spin-dependent and spin-independent contact interactions are comparable in magnitude, and both positive. In Fig.~\ref{fig:-7} we plot the central densities of the $m_{F} = 0$ atoms as a function of time for $c_{2} = 0.5c_{0}$ without (solid curve) and with (dashed) the relaxation time approximation, demonstrating the instability. Due to the large interaction energies, the dynamics is complicated.
As Endo and Nikuni \cite{nikuni} have shown, one must be careful when applying the relaxation time approximation in the $c_{0} \sim c_{2}$ limit, where more complicated spin-dependent collisions may become important. In order to fully model the physics, more than one ``relaxation rate'' may be required.
\section{Summary and Outlook}
We have addressed the role of the spin-dependent interaction in the collisionless dynamics of a thermal spinor gas. By calculating spin and quadrupolar modes about the magnetized and unmagnetized states, we have shown that the normal state of a spinor gas has a rich array of spin excitations compared to its well-studied pseudo-spin $\frac{1}{2}$ counterpart. We numerically calculated the dynamics of a non-stationary initial state, finding that the spin-dependent contact interaction drives population dynamics. Finally, we provide an explicit demonstration of the instability of the polar state to spin fluctuations.
We conclude that many interesting experiments may be done in the \textit{normal} state of a spinor gas. In order to observe some of the physics described here, it will be necessary to attain a limit where the spin dependent and independent interactions are commensurate in magnitude. We hope that this work motivates experiments in that direction.
\textit{Acknowledgements}:$-$ S.N. would like to thank Stefan K. Baur and Kaden R.A. Hazzard for discussions. We thank Mukund Vengalattore for pointing out \cite{gerbier} to us, and for several helpful discussions and comments on this manuscript. This work was supported by NSF Grant No.~PHY-0758104.
\section{Introduction}
Let $E$ be a Galois extension of $\mathbb{Q}$ of degree $\ell$. Let
$\pi$ be an automorphic cuspidal representation of $GL_m({\Bbb
A}_E)$ with unitary central character. Then the finite part
$L$-function attached to $\pi$ is given by the product of local
factors for $\mathrm{Re}\, s=\sigma>1$: $L(s, \pi )=\prod_v L_v(s, \pi)$ (see \cite{GJ}),
where $L_v(s,\pi)=\prod_{j=1}^m \Big(1-
\frac{\alpha_{\pi}(v,j)}{q_{v}^{s}}\Big)^{-1}$ and
$\alpha_{\pi}(v,j), 1\leq j\leq m$, are complex numbers given by the
Langlands correspondence, and $q_v$ denotes the cardinality of the residue field at the place $v$. If $\pi_v$ is
ramified, we can also write the local factors at the ramified places $v$
in the same form, with the convention that some of the
$\alpha_{\pi}(v,j)$ may be zero.
For two automorphic cuspidal representations $\pi$ and
$\pi'$ of $GL_m({\Bbb A}_E)$ and $GL_{m'}({\Bbb A}_E)$,
respectively, denote the usual Rankin-Selberg $L$-function by \begin{equation}
L(s, \pi \times \widetilde{ \pi}') =\prod_v L_v (s, \pi\times
\widetilde{ \pi}')= \prod_v\prod_{j=1}^m\prod_{i=1}^{m'}
\left(1-\frac{\alpha_\pi (v,j) \overline{\alpha_{\pi'}
(v,i)}}{q_v^{s}}\right)^{-1}.
\end{equation}
For $\sigma>1$, we have
\begin{eqnarray*}
\frac{L'}{L}\left(s,\pi\times\widetilde{\pi}'\right) =
-\sum_{n=1}^{\infty}\frac{\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)}{n^s},
\end{eqnarray*}
see \S2 for the detailed definition of
$a_{\pi\times\widetilde{\pi}'}(n)$.
By a prime number theorem for
Rankin-Selberg $L$-functions $L(s, \pi \times\widetilde{\pi}')$, we
mean the asymptotic behavior of the sum
\begin{eqnarray}
\sum_{n\leq x}\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n).
\end{eqnarray}
A prime number theorem for Rankin-Selberg $L$-functions with $\pi$
and $\pi'$ being classical holomorphic cusp forms has been studied
by several authors. Recently, Liu and Ye \cite{LiuYe4} computed a
revised version of Perron's formula. Using the new Perron's formula,
the authors proved a prime number theorem for
Rankin-Selberg $L$-functions over $\mathbb{Q}$ without assuming the
Generalized Ramanujan Conjecture. Following the method in
\cite{LiuYe4}, we obtain a prime number theorem for the
Rankin-Selberg $L$-functions defined over a number field $E$.
\begin{Theorem}
Let $E$ be a Galois extension of $\mathbb{Q}$ of degree $\ell$. Let
$\pi$ and $\pi'$ be irreducible unitary cuspidal representations of
$GL_{m}({\Bbb A}_{E})$ and $GL_{m'}({\Bbb A}_{E})$,
respectively. Assume that at least one of $\pi$ or $\pi'$ is
self-contragredient. Then \begin{eqnarray*} &&\sum_{n\leq
x}\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n) \nonumber
\\
&& = \left\{\begin{array}{l} \frac{\displaystyle
x^{1+i\tau_0}}{\displaystyle 1+i\tau_0} +O\{x\exp(-c\sqrt{\log x})\}
\\
\hspace{20mm} \text{if}\ \pi'\cong\pi\otimes|\det|^{i\tau_0}\
\text{for}\ \text{some}\ \tau_0\in{\Bbb R};
\\
O\{x\exp(-c\sqrt{\log x})\}
\\
\hspace{20mm} \text{if}\ \pi'\not\cong\pi\otimes|\det|^{it}\
\text{for}\ \text{any}\ t\in{\Bbb R}.
\end{array}
\right.
\end{eqnarray*}
\end{Theorem}
\smallskip
Let $E$ be a cyclic Galois extension of $\mathbb{Q}$ of degree
$\ell$. Let $\pi$ be an automorphic cuspidal representation of
$GL_m({\Bbb A}_E)$ with unitary central character. Suppose that
$\pi$ is stable under the action of $\textrm{Gal}(E/\mathbb{Q})$.
Thanks to Arthur and Clozel \cite{AC}, $\pi$ is the base change lift of
exactly $\ell$ nonequivalent cuspidal representations
\begin{equation} \pi_{\mathbb{Q}},
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}, ...,
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{\ell-1} \end{equation} of $GL_m(\Bbb
A_{\Bbb Q})$, where $\eta_{E/\mathbb{Q}}$ is a nontrivial
character of $\Bbb A_{\Bbb Q}^{\times}/{\Bbb Q}^{\times}$ attached
to the field extension $E$ according to class field theory.
Consequently, we have $L(s, \pi)=L(s, \pi_{\mathbb{Q}}) L(s,
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}})\cdots
L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{\ell-1})$
where
the $L$-functions on the right side are distinct.
Similarly, let $F$ be a cyclic Galois extension of $\mathbb{Q}$ of
degree $q$. Let $\pi'$ be an automorphic cuspidal representation of
$GL_{m'}({\Bbb A}_F)$ with unitary central character, and
suppose that $\pi'$ is stable under the action of
$\textrm{Gal}(F/\mathbb{Q})$. Then we can write
\begin{equation}L(s,\pi')=\prod_{j=0}^{q-1}L(s,\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^j) \end{equation}
where $\pi'_{\mathbb{Q}}$ is an irreducible cuspidal
representation of $GL_{m'}({\Bbb A}_\mathbb{Q})$ and
$\psi_{F/\mathbb{Q}}$ is a nontrivial character of $\Bbb A_{\Bbb
Q}^{\times}/{\Bbb Q}^{\times}$ attached to the field extension $F$.
Then we define the Rankin-Selberg $L$-function over the different
number fields $E$ and $F$ by \begin{eqnarray}
L(s,\pi\times_{BC}\widetilde{\pi}') = \prod_{{0\leq i\leq
\ell-1}\atop{0\leq j\leq
q-1}}L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}\times
\widetilde{\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}}), \end{eqnarray}
where
$L(s,\pi\otimes\eta^{i}\times\widetilde{\pi'\otimes\psi^{j}})$,
$0\leq i\leq \ell-1,\;0\leq j\leq q-1$ are the usual Rankin-Selberg
$L$-functions over $\mathbb{Q}$ with unitary central characters.
Then for $\sigma>1$, we have \begin{eqnarray*}
-\frac{d}{ds}\log L(s,\pi\times_{BC}\widetilde{\pi}')=
\sum_{n=1}^{\infty}\frac{\Lambda(n)a_{\pi\times_{BC}\widetilde{\pi}'}(n)}{n^s},
\end{eqnarray*} where
$$a_{\pi\times_{BC}\widetilde{\pi}'}(n) =\sum_{0\leq i\leq
\ell-1}\sum_{0\leq j\leq
q-1}a_{\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}}
(n)a_{\widetilde{\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}}}(n).$$
By a prime number theorem for Rankin-Selberg $L$-functions $L(s, \pi
\times_{BC}\pi')$ over number fields $E$ and $F$, we mean the
asymptotic behavior of the sum \begin{eqnarray} \sum_{n\leq
x}\Lambda(n)a_{\pi\times_{BC}\widetilde{\pi}'}(n) = \sum_{n\leq x}\sum_{0\leq
i\leq \ell-1}\sum_{0\leq j\leq q-1}\Lambda(n)a_{\pi_{\mathbb{Q}}
\otimes\eta_{E/\mathbb{Q}}^{i}}(n)
a_{\widetilde{\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}}}(n).
\end{eqnarray}
Using the main theorem in Liu and Ye \cite{LiuYe4}, we obtain a
prime number theorem over different number fields $E$ and $F$.
\begin{Theorem}
Let $E$ and $F$ be two cyclic Galois extensions of $\mathbb{Q}$ of
prime degrees $\ell$ and $q$, respectively, with $(\ell,q)=1$. Let $\pi$
and $\pi'$ be unitary automorphic cuspidal representations of
$GL_{m}({\Bbb A}_{E})$ and $GL_{m'}({\Bbb A}_{F})$,
respectively. Assume that we have base change lifts as in (1.4) and (1.5),
and suppose that $\pi_{\mathbb{Q}}$ is self-contragredient. Then
\begin{eqnarray*}
&&\sum_{n\leq x}\Lambda(n)a_{\pi\times_{BC}\widetilde{\pi}'}(n) \nonumber
\\
&& = \left\{\begin{array}{l}\frac{\displaystyle
x^{1+i\tau_0}}{\displaystyle 1+i\tau_0} +O\{x\exp(-c\sqrt{\log x})\}
\\
\hspace{4mm} \text{if}\
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_0}
\cong\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_0}\otimes|\det|^{i\tau_0}
\ \text{for}\ \text{some}\ \tau_0\in{\Bbb R}\text{ and some }
i_0, j_0;
\\
O\{x\exp(-c\sqrt{\log x})\}
\\
\hspace{4mm} \text{if}\
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}
\ncong\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}\otimes|\det|^{i\tau}\
\text{for}\ \text{any}\ i,j \text{ and } \tau\in{\Bbb R}.
\end{array}
\right. \end{eqnarray*}
\end{Theorem}
We end by rewriting Theorems 1.1 and 1.2 as sums over primes, using Conjecture 2.1 and Hypothesis H to show that the main term comes from those primes which split completely in the field extension.
\begin{Theorem}(1) Let the notations be as in Theorem 1.1. Assume Hypothesis H and Conjecture 2.1. Then
\begin{equation}\sum_{{p\leq x}\atop {\text{$p$ splits completely in $E$}}}(\log p)a_{\pi\times\widetilde{\pi}'}(p)=\frac{x^{1+i\tau_0}}{1+i\tau_0}+O\{x\exp(-c\sqrt{\log x})\}\text{ for }\pi\cong\pi'\otimes|\det|^{i\tau_0}. \nonumber \end{equation}
(2) Let the notations be as in Theorem 1.2, and suppose that for some $i_0,j_0$ and $\tau_0\in \mathbb{R}$ we have $\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_0}\cong\pi_{\mathbb{Q}}'\otimes\psi_{F/\mathbb{Q}}^{j_0}\otimes|\det|^{i\tau_0}$. Assume Hypothesis H and Conjecture 2.1, and suppose also that for any prime $p$ for which $\pi_\nu$ and $\pi'_\omega$ are unramified for all $\nu|p$, $\omega|p$, the following holds: there exist primes $\mathfrak{p}$ and $\mathfrak{q}$ in the rings of integers of $E$ and $F$, respectively, lying above $p$, which also lie below primes $\mathfrak{P}$ and $\mathfrak{Q}$ in the ring of integers of $EF$, with the restrictions $f_{\mathfrak{Q}/\mathfrak{q}}\leq f_p'$ and $f_{\mathfrak{P}/\mathfrak{p}}\leq f_p$. Then
\begin{equation}\sum_{{p\leq x}\atop{\text{$p$ splits completely in $EF$}}}(\log p)a_{\pi\times_{BC}\widetilde{\pi}'}(p)=\frac{x^{1+i\tau_0}}{1+i\tau_0}+O\{x \exp(-c\sqrt{\log x})\}. \nonumber \end{equation} \end{Theorem}
\medskip
{\it Remark}: Note that Theorem 1.3 says that to obtain the main term we need only consider those summands for which $f_p=f_p'=1$. Such conditions are useful in controlling sums over primes in the computation of the $n$-level correlation function attached to a cuspidal representation of $GL_n(\mathbb{A}_E)$ over a number field $E$ (see \cite{LiuYe1}).
\medskip
\smallskip
\section{Rankin-Selberg L-functions}
\setcounter{equation}{0}
In this section we recall some fundamental analytic properties of
Rankin-Selberg $L$-functions: absolute convergence of the Euler product, location of poles, and zero-free region.
Let $E$ be a Galois extension of $\mathbb{Q}$ of degree $\ell$. For
any prime $p$, we have
$E\otimes_{\mathbb{Q}}\mathbb{Q}_p=\oplus_{v|p}E_v$, where $v$
denotes a place of $E$ lying above $p$. Since $E$ is Galois over
$\mathbb{Q}$, all $E_v$ with $v|p$ are isomorphic. Denote by
$\ell_p$ the degree $[E_{v}:\mathbb{Q}_p]$, by $e_p$ the order of
ramification, and by $f_p$ the modular degree of $E_v$ over
$\mathbb{Q}_p$ for $v|p$. Then we have $\ell_p=e_pf_p$, and
$q_v=p^{f_p}$ is the cardinality of the residue class field.
Let $\pi$ be an irreducible cuspidal representation of $GL_m({\Bbb
A}_E)$ with unitary central character. Let $\pi'$ be an automorphic cuspidal representation of
$GL_{m'}(\mathbb{A}_E)$ with unitary central character. The finite-part
Rankin-Selberg $L$-function $L(s, \pi\times\widetilde{\pi}')$ is
given by the product of local factors.
\begin{equation}L(s,\pi\times\widetilde{\pi}')=\prod_{v<\infty}L_{v}(s,\pi\times\widetilde{\pi}') \nonumber \end{equation} and we denote
\begin{eqnarray*} L_p(s,\pi\times\widetilde{\pi}')
&=&\prod_{v|p}L_v(s,\pi_v \times \widetilde{ \pi}'_v)
=\prod_{v|p}\prod_{j=1}^m\prod_{i=1}^{m'} \Big(1-\frac{\alpha_\pi
(v,j) \overline{\alpha_{\pi'} (v,i)}}{p^{f_{p}s}}\Big)^{-1}. \end{eqnarray*} Then for
$\sigma>1$, we have \begin{eqnarray}
\frac{L'}{L}(s,\pi\times\widetilde{\pi}') &=&
-\sum_{v}\sum_{j=1}^{m}\sum_{i=1}^{m'}\sum_{k\geq 1}\frac{f_p\log
p}{p^{kf_{p}s}}
\alpha^k_\pi(v,j) \overline{\alpha^k_{\pi'} (v,i)}\nonumber\\
&=&
-\sum_{n=1}^{\infty}\frac{\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)}{n^s},
\end{eqnarray}
where
$$a_{\pi\times\widetilde{\pi}'}(p^{kf_p})=
\sum_{\nu|p}f_p\Big(\sum_{j=1}^{m}\alpha_\pi(\nu,j)^k\Big)
\Big(\sum_{i=1}^{m'}\overline{\alpha_{\pi'}(\nu,i)}^k\Big),$$ and
$a_{\pi\times\widetilde{\pi}'}(p^{k})=0$, if $f_p\nmid k$.
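In computational terms, the recipe above for the Dirichlet coefficients amounts to the following sketch (ours); the Satake parameters in the example are illustrative placeholders on the unit circle, not data attached to any actual representation.
\begin{verbatim}
import numpy as np

# Sketch (ours): a_{pi x pi~'}(p^(k f_p)) from local Satake parameters,
# following the formula above.  alphas[v], betas[v] hold the parameters
# alpha_pi(v, j), alpha_pi'(v, i) at the places v | p; the arrays below
# are illustrative placeholders only.
def coeff(alphas, betas, f_p, k):
    return sum(
        f_p * np.sum(a**k) * np.conj(np.sum(b**k))
        for a, b in zip(alphas, betas)
    )

# Example: p split into two places with f_p = 1, GL_2 x GL_2, unitary
# (Ramanujan-type) parameters on the unit circle:
alphas = [np.exp(1j * np.array([0.3, -0.3])), np.exp(1j * np.array([1.1, -1.1]))]
betas  = [np.exp(1j * np.array([0.5, -0.5])), np.exp(1j * np.array([0.2, -0.2]))]
print(coeff(alphas, betas, f_p=1, k=1))
\end{verbatim}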
We will use the Rankin-Selberg $L$-functions $L(s, \pi \times
\widetilde\pi')$ as developed by Jacquet, Piatetski-Shapiro, and
Shalika \cite{JacPiaSha}, Shahidi \cite{Sha1}, and Moeglin and
Waldspurger \cite{MoeWal}. We will need the following properties of $L(s,\pi\times\widetilde{\pi}')$:
\medskip
{\bf RS1}. The Euler product for $L(s,\pi\times\widetilde{\pi}')$ in
(2.1) converges absolutely for $\sigma>1$ (Jacquet and Shalika
\cite{JacSha1}).
\medskip
{\bf RS2}. Denote $\alpha(g)=|\det(g)|$. When $\pi'\not\cong
\pi\otimes\alpha^{it}$ for any $t\in{\Bbb R}$,
$L(s,\pi\times\widetilde{\pi}')$ is holomorphic. When $m=m'$ and
$\pi' \cong \pi\otimes \alpha^{i\tau_0} $ for some $\tau_0\in\Bbb
R$, the only poles of $L(s, \pi \times \widetilde\pi ')$ are simple
poles at $s=i\tau_0$ and $1+i\tau_0$ (Jacquet and Shalika \cite{JacSha1},
Moeglin and Waldspurger \cite{MoeWal}).
\medskip
Finally, we note that the reason for the self-contragredient assumption is that one must apply the following zero-free region, due to Moreno, to obtain the error term in the theorems.
\medskip
{\bf RS3}.
$L(s,\pi\times\widetilde{\pi}')$ is non-zero in $\sigma\ge 1$ (Shahidi
\cite{Sha1}). Furthermore, if at least one of $\pi$ or $\pi'$ is
self-contragredient, it is zero-free in the region
\begin{equation}
\sigma > 1-\frac{c}{\log(Q_{\pi}Q_{\pi'}(|t|+2))}, \quad |t|\geq 1
\end{equation}
where $c$ is an explicit constant depending only on $m$ and $m'$ (see
Sarnak \cite{Sa}, Moreno \cite{Mor} or Gelbart, Lapid and Sarnak
\cite{GLS}).
\medskip
Let $L(s,\pi\times_{BC}\widetilde{\pi}')$ be the Rankin-Selberg
$L$-function over the number fields $E$ and $F$,
$$
L(s,\pi\times_{BC}\widetilde{\pi}')=\prod_{{0\leq i\leq
\ell-1}\atop{0\leq j\leq
q-1}}L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}
\times\widetilde{\pi'_{\mathbb{Q}}\otimes\psi^j_{F/\mathbb{Q}}})
$$
where $L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}
\times\widetilde{\pi'_{\mathbb{Q}}\otimes\psi^j_{F/\mathbb{Q}}})$
is the usual Rankin-Selberg $L$-function on $GL_{m}\times GL_{n}$
over $\Bbb Q$. Hence
$L(s,\pi\times_{BC}\widetilde{\pi}')$ will have similar analytic
properties to the usual Rankin-Selberg $L$-functions. We will need the following bound for the local parameters, proved in \cite{RudSa}:
\begin{equation} |\alpha_{\pi}(j,\nu)|\leq p^{f_p(1/2-1/(m^2\ell+1))}\text{ for }\nu|p \end{equation}
This holds whether $\pi_\nu$ is ramified or unramified. When $\pi_\nu$ is unramified, the generalized Ramanujan conjecture asserts that $|\alpha_{\pi}(j,\nu)|=1$.
The best known bound toward this conjecture over an arbitrary number field is $|\alpha_\pi(j,\nu)|\leq p^{{f_p}/9}$ for $m=2$ \cite{KimSha}. We will not assume the generalized Ramanujan conjecture, but will assume a bound $\theta_p$ toward it for any $p$ which is unramified and does not split completely in $E$.
\begin{Conjecture}For any $p$ which is unramified and does not split completely in $E$, we have for any $\nu|p$ that
\begin{equation} |\alpha_\pi(j,\nu)|\leq p^{f_p\theta_p} \nonumber \end{equation}
where $\theta_p=1/2-1/(2f_p)-\epsilon$ for a small $\epsilon>0$.
\end{Conjecture}
Note that for $\pi_\nu$ unramified we have $e_p=1$ and hence $f_p=\ell_p$, where $\ell_p=[E_\nu:\mathbb{Q}_p]$. Since $p$ does not split completely in $E$, we know that $f_p\geq 2$. Thus Conjecture 2.1 is known for $m=2$ according to (2.3). It is trivial for $m=1$. Since $f_p|\ell$, Conjecture 2.1 is known when all prime factors of $\ell$ are $>(m^2+1)/2$. For $m=3$ this means that any $p|\ell$ is $\geq 7$, while for $m=4$, Conjecture 2.1 is true when any $p|\ell$ is $\geq 11$.
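This divisibility criterion is easy to tabulate; the following sketch (ours) prints the smallest admissible prime factor of $\ell$ for small $m$, recovering the thresholds $7$ and $11$ quoted above.
\begin{verbatim}
from sympy import nextprime

# Sketch (ours): the criterion that Conjecture 2.1 holds unconditionally
# when every prime factor of ell exceeds (m^2 + 1)/2; we print the
# smallest admissible prime factor for small m.
for m in range(1, 6):
    threshold = (m * m + 1) / 2
    print(f"m = {m}: prime factors of ell must be >= {nextprime(int(threshold))}")
\end{verbatim}
We end this section by recalling Hypothesis H from \cite{LiuYe1}.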
\smallskip
$\mathbf{Hypothesis}$ $\mathbf{H}$ {\it Let $\pi$ be an automorphic cuspidal representation of $GL_m(\mathbb{A}_{E})$ with unitary central character. Then for any fixed $k\geq 2$,}
\begin{equation} \sum_{p}\frac{\log^2 p}{p^{kf_p}}\sum_{\nu|p}\Big|\sum_{1\leq j\leq m}\alpha_{\pi}(j,\nu)^k\Big|^2<\infty. \nonumber \end{equation}
\smallskip
\section{Proof of Theorem 1.1 }
\setcounter{equation}{0}
Let $\pi$ and $\pi^{\prime}$ be as in Theorem 1.1. We will first
need a modified version of Lemma 4.1 of \cite{LiuYe4}, which is a
weighted prime number theorem in the diagonal case. With the same modifications as in Lemma 6.1 of \cite{LiuYe1}, carried out over a number field, the proof follows as in \cite{LiuYe4}.
\begin{Lemma}
Let $\pi$ be a self-contragredient automorphic irreducible cuspidal
representation of $GL_m$ over $E$. Then \begin{eqnarray*} \sum_{n\leq
x}\left(1-\frac{n}{x}\right)\Lambda(n)a_{\pi\times\widetilde{\pi}}(n)
=\frac{x}{2}+O\lbrace x\exp(-c\sqrt{\log x})\rbrace. \end{eqnarray*} \end{Lemma}
The next lemma again closely follows \cite{LiuYe4}, and allows the
removal of the weight $(1-\frac{n}{x})$ from the previous lemma.
The proof involves a standard argument due to de la Vallee Poussin.
\begin{Lemma}
Let $\pi$ be a self-contragredient automorphic irreducible cuspidal
representation of $GL_m$ over $E$. Then
\begin{equation}
\sum_{n\leq x}\Lambda(n)a_{\pi\times\widetilde{\pi}}(n)=x+O\lbrace
x\exp(-c\sqrt{\log x})\rbrace.\end{equation}
\end{Lemma}
$\emph{Proof.}$ Since the coefficients on the left-hand side of
(3.1) are non-negative, the proof follows as in Lemma 5.1 of
\cite{LiuYe4} with no modification. $\quad\square$
The next lemma also follows exactly as in Lemma 5.2 of
\cite{LiuYe4}, and does not require $\pi$ to be self-contragredient.
The proof is an application of a Tauberian theorem of Ikehara.
\begin{Lemma}For any automorphic irreducible cuspidal unitary representation
$\pi$ of $GL_m$ over the number field $E$, we have
\begin{equation} \sum_{n\leq x} \Lambda(n)a_{\pi\times\widetilde{\pi}}(n)\thicksim x.
\end{equation}
\end{Lemma}
\emph{Proof of Theorem 1.1.} We suppose throughout that $\pi$ is
self-contragredient. When $\pi'\cong\pi$, the theorem reduces to
Lemma 3.1. We will first consider the case when $\pi$ and $\pi'$
are twisted equivalent, so suppose that
$\pi'\cong\pi\otimes|\textrm{det}|^{i\tau_0}$ for some
$\tau_{0}\in\mathbb{R}$. By Lemma 3.2, we obtain a bound for the
short sum
\begin{equation}
\sum_{x<n\leq x+y}\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)\ll y
\nonumber
\end{equation}
for $y\gg x\exp(-c\sqrt{\log x})$. The representation $\pi'$ is not necessarily
self-contragredient; nevertheless, by Lemma 3.3, we get for $0<y\leq
x$ that
\begin{equation}
\sum_{x<n\leq x+y } \Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)\ll
\sum_{x<n\leq 2x}\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)\ll x.
\nonumber
\end{equation}
By definition of the coefficients $a_{\pi\times\widetilde{\pi}'}(n)$, we
have that for $n=p^{kf_p}$
\begin{eqnarray*}
|\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)|
&\leq&
\log p\sum_{\nu|p}f_p\Big|\sum_{j=1}^{m}\alpha_\pi(\nu,j)^k\Big|
\Big|\sum_{i=1}^{m'}\bar{\alpha}_{\pi'}(\nu,i)^k\Big| \nonumber\\
&\leq& \log
p\Big(\sum_{\nu|p}f_p\Big|\sum_{j=1}^{m}\alpha_\pi(\nu,j)^k\Big|^2\Big)^{1/2}
\Big(\sum_{\nu|p}f_p\Big|\sum_{i=1}^{m'}\bar{\alpha}_{\pi'}(\nu,i)^k\Big|^{2}\Big)^{1/2}
\nonumber\\
&=&
(\log p)a_{\pi\times\widetilde{\pi}}(n)^{1/2}a_{\pi'\times\widetilde{\pi}'}(n)^{1/2}.
\nonumber\end{eqnarray*}
Thus we have
\begin{eqnarray} \sum_{x<n\leq x+y}|\Lambda(n)a_{\pi\times\widetilde{\pi}'}(n)|
&\leq&
\sum_{x<n\leq x+y}\Lambda(n)a_{\pi\times\widetilde{\pi}}(n)^{1/2}a_{\pi'\times\widetilde{\pi}'}(n)^{1/2}
\nonumber\\
&\leq&
\Big(\sum_{x<n\leq x+y}\Lambda(n)a_{\pi\times\widetilde{\pi}}(n)\Big)^{1/2}
\Big(\sum_{x<n\leq x+y}\Lambda(n)a_{\pi'\times\widetilde{\pi}'}(n)\Big)^{1/2}\nonumber\\
&\ll& \sqrt{yx}.
\end{eqnarray}
Now the rest of the proof follows exactly as in \cite{LiuYe4}. $\quad\square$
\smallskip
\section{Proof of Theorem 1.2}
\setcounter{equation}{0}
Let $E$ and $F$ be two cyclic Galois extensions of $\Bbb Q$ of
degree $\ell$ and $q$, respectively. Let $\pi$ and $\pi'$ be
irreducible unitary cuspidal representations of $GL_m({\Bbb A}_E)$
and $GL_{m'}({\Bbb A}_F)$ with unitary central characters. In
section 2, we denote by
$$
L(s,\pi\times_{BC}\widetilde{\pi}')
=\prod_{{0\leq i\leq\ell-1}\atop{0\leq j\leq q-1}}
L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}\times
\widetilde{\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}}),
$$
where
$L(s,\pi\otimes\eta_{E/\mathbb{Q}}^{i}\times\widetilde{\pi'\otimes\psi_{F/\mathbb{Q}}^{j}})$,
$0\leq i\leq\ell-1,\;0\leq j\leq q-1$ are the usual Rankin-Selberg
$L$-functions over $\mathbb{Q}$ with unitary central characters.
\begin{Lemma}
Suppose that
$\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_0}\cong\pi'_{\mathbb{Q}}
\otimes\psi_{F/\mathbb{Q}}^{j_0}\otimes|\det|^{i\tau_0},$ for some
$0\leq i_0\leq\ell-1$, $0\leq j_0\leq q-1$ and $\tau_0\in
\mathbb{R}$. Then
\begin{equation} \pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_0}
\cong\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}\otimes |\det|^{i\tau} \nonumber
\end{equation}
implies that $\tau=\tau_{0}$ and $j=j_{0}$. Moreover, if
\begin{equation}
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}\cong\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}\otimes
|\det|^{i\tau} \nonumber \end{equation} for some $i$ and $j$, and
$\tau\in\mathbb{R}$, then $\tau=\tau_{0}$.
\end{Lemma}
\textit{Proof}. By class field theory, $\eta_{E/\mathbb{Q}},
\psi_{F/\mathbb{Q}}$ are finite order idele class characters, so
they are actually primitive Dirichlet characters. Assume that
$$
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_0}
\cong\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}\otimes|\det|^{i\tau},
$$
for some $0\leq i\leq \ell-1$, $0\leq j\leq q-1$ and
$\tau\in\mathbb{R}$. Then we have
$$
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_0}\otimes|\det|^{i\tau_0}
\cong
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j}\otimes|\det|^{i\tau}.$$
For any unramified $p$, we get
$$\{\alpha_{\pi'_{\mathbb{Q}}}(p,j)
\psi_{F/\mathbb{Q}}^{j_0}(p){|p|}_p^{i\tau_0}\}_{j=1}^{m}
=\{\alpha_{\pi'_{\mathbb{Q}}}(p,j)\psi_{F/\mathbb{Q}}^{j}(p){|p|}_p^{i\tau}\}_{j=1}^{m}.$$
Hence,
$$(\psi_{F/\mathbb{Q}}^{j_0}(p)p^{-i\tau_0})^m
=(\psi_{F/\mathbb{Q}}^{j}(p)p^{-i\tau})^m.$$ Since
$\psi_{F/\mathbb{Q}}$ is of finite order, we get, by multiplicity one for characters, that $\tau=\tau_0$, so
that $j=j_{0}$. The last conclusion of the lemma follows from the
same argument just given. $\quad\square$
\begin{Lemma} Suppose that $\pi_{\mathbb{Q}}\otimes
\eta_{E/\mathbb{Q}}^{i_0}\cong
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_0}\otimes|\det|^{i\tau_0}$
for some $0\leq i_0 \leq \ell-1$, $0\leq j_0 \leq q-1$ and
$\tau_0\in \mathbb{R}$. Then the number of twisted equivalent pairs
$( \pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^i,
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^j)$ with $0\leq i\leq
\ell-1$, $0\leq j\leq q-1$ divides the greatest common divisor of
$\ell$ and $q$.
\end{Lemma}
{\it Proof.} By relabeling the collection
$\{\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^i\}_{0\leq i \leq
\ell-1}$ if necessary we may assume that
$\pi_{\mathbb{Q}}\cong\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_0}\otimes|\det|^{i\tau_0}$.
Now let $G=(\{\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^i,0\leq
i\leq\ell-1\},*)$ where we define
\begin{equation}
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_1}*\pi_{\mathbb{Q}}
\otimes\eta_{E/\mathbb{Q}}^{i_2}=\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_1+i_2}.
\nonumber
\end{equation}
Since the character $\eta_{E/\mathbb{Q}}$
has order $\ell$ we have $G\cong \mathbb{Z}/\ell\mathbb{Z}$. Now
let
\begin{equation}
H=\{\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^i: \exists\, 0\leq j
\leq q-1, \tau\in\mathbb{R}, \textrm{ such that }
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^i\cong\pi'_{\mathbb{Q}}\otimes
\psi_{F/\mathbb{Q}}^j\otimes|\det|^{i\tau} \}. \nonumber
\end{equation} By hypothesis, we have $\pi_{\mathbb{Q}}\in H$. Assume that
$$
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_1}
\cong\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_1}\otimes|\det|^{i\tau_1}
$$ and
$$\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_2}\cong\pi'_{\mathbb{Q}}
\otimes\psi_{F/\mathbb{Q}}^{j_2}\otimes|\det|^{i\tau_2},
$$
then
\begin{eqnarray}
\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i_1-i_2}
&\cong&
\pi'_{\mathbb{Q}}\otimes \psi_{F/\mathbb{Q}}^{j_1}\otimes\eta_{E/\mathbb{Q}}^{-i_2}\otimes|\det|^{i\tau_1}
\nonumber \\
&\cong&
\pi_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_1-j_2}\otimes|\det|^{i(\tau_1-\tau_2)} \nonumber \\
&\cong&
\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^{j_0+j_1-j_2}\otimes|\det|^{i(\tau_1-\tau_2+\tau_0)}.
\nonumber
\end{eqnarray} Hence $H$ is a subgroup of $G$.
By Lemma 4.1, each $\pi_{\mathbb{Q}}\otimes \eta_{E/\mathbb{Q}}^i$
is twisted equivalent to at most one
$\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^j$ so by Lagrange's
theorem the number of twisted equivalent pairs divides $\ell$, and
by symmetry of the above argument we also have that it divides $q$,
so the lemma follows. $\quad\square$
The above lemmas are simple but give the following: the second
(Lemma 4.2) says that we can have at most one twisted equivalent
pair when $\ell$ and $q$ are relatively prime, and the first (Lemma 4.1)
says that if the $L$-function $L(s,\pi\times_{BC}\pi')$ has poles at
$1+i\tau_{0}$ and $i\tau_{0}$, then these are the only poles, with
orders possibly bigger than one. If one considers the diagonal case
\begin{eqnarray} L(s,\pi\times_{BC}\widetilde{\pi}) &=&
\prod_{i=0}^{\ell-1}\prod_{j=0}^{\ell-1}
L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i}\times
\widetilde{\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{j}})\nonumber\end{eqnarray} then by $\mathbf{RS2}$ each of the $\ell$ diagonal
factors ($i=j$) contributes a simple pole at $s=1$, while the
remaining factors on the right-hand side are nonzero at $s=1$ by
$\mathbf{RS3}$; hence the left-hand side has a pole of order $\ell$ at $s=1$. This differs from the classical case in that
we get higher-order poles. Now assuming $\pi_{\mathbb{Q}}$ to be
self-contragredient, and applying Theorem 1.1 to the $L$-function
$L(s,\pi\times_{BC}\widetilde{\pi}')$, we can use the zero-free
region in $\mathbf{RS3}$ to obtain the same error term as before since \begin{eqnarray*}
L(s,\pi_{\mathbb{Q}}\otimes\eta_{E/\mathbb{Q}}^{i-1}\times\widetilde{\pi'_{\mathbb{Q}}\otimes
\psi_{F/\mathbb{Q}}^{j-1}})
=L(s,\pi_{\mathbb{Q}}\times\widetilde{\pi'_{\mathbb{Q}}\otimes
\psi_{F/\mathbb{Q}}^{j-1}\otimes\eta_{E/\mathbb{Q}}^{-(i-1)}}). \end{eqnarray*}
We can apply the zero-free region to all the factors in the
definition of
$L(s,\pi\times_{BC}\widetilde{\pi}')$, to get the same error term as in Theorem 1.1.
Thus Theorem 1.2 follows directly from Theorem 1.1.
\section { Sums over primes}
Note that Theorem 1.1 says
\begin{eqnarray}
&&\sum_{\underset{p^{kf_p}\leq x}{p,k}}(\log p)\sum_{\nu|p}f_p\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^k\overline{\alpha_{\pi'}(j,\nu)}^k \nonumber \\
&&\qquad =\frac{x^{1+i\tau_0}}{1+i\tau_0}+O\{x\exp(-c\sqrt{\log x})\} \quad\text{ for }\pi\cong\pi'\otimes|\det|^{i\tau_0}.
\end{eqnarray}
We can apply the bound in (2.3) to the sum
\begin{eqnarray*}
&&\sum_{\underset{k> (m^2\ell+1)/2}{p^{f_pk}\leq x}}(\log p)\sum_{\nu|p}f_p\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^k\overline{\alpha_{\pi'}(j,\nu)}^k \\
&&\qquad\ll\sum_{\underset{k>(m^2\ell+1)/2}{p^{kf_p}\leq x}}(\log p)\, p^{2kf_p(1/2-1/(m^2\ell+1))}\ll x\Big( \sum_{n\leq x}\frac{\Lambda(n)}{n^{1+\epsilon}}\Big)
\end{eqnarray*}
for small $\epsilon>0$. By partial summation we get
\begin{eqnarray*}
\sum_{n\leq x}\frac{\Lambda(n)}{n^{1+\epsilon}}&=&\Big(x+O\{x\exp(-c\sqrt{\log x})\} \Big)\frac{1}{x^{1+\epsilon}}-\int_{1}^{x}\Big(t+O\{t\exp(-c\sqrt{\log t})\}\Big)\frac{(-1-\epsilon)}{t^{2+\epsilon}}\,dt \\
&=&O\{x\exp(-c\sqrt{\log x})\}.
\end{eqnarray*}
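As a numerical sanity check (ours, not part of the proof), the weighted sum $\sum_{n\leq x}\Lambda(n)n^{-1-\epsilon}$ indeed saturates quickly; the sieve below is a direct computation of the von Mangoldt function.
\begin{verbatim}
import numpy as np

# Sketch (ours): evaluate sum_{n<=x} Lambda(n)/n^(1+eps) directly.
def von_mangoldt(N):
    lam = np.zeros(N + 1)
    sieve = np.ones(N + 1, bool); sieve[:2] = False
    for p in range(2, N + 1):
        if sieve[p]:
            sieve[2 * p :: p] = False
            pk = p
            while pk <= N:        # Lambda(p^k) = log p
                lam[pk] = np.log(p)
                pk *= p
    return lam

eps = 0.1
lam = von_mangoldt(10**6)
n = np.arange(len(lam), dtype=float); n[0] = 1.0
partial = np.cumsum(lam / n ** (1 + eps))
for x in (10**3, 10**4, 10**5, 10**6):
    print(f"x = {x:>7}: {partial[x]:.4f}")   # plateaus as x grows
\end{verbatim}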
Note that Hypothesis H gives, for fixed $k\geq 2$,
\begin{equation} \sum_{p}\frac{|a_{\pi\times\tilde{\pi}'}(p^{kf_p})|(\log(p^{kf_p}))^2}{p^{kf_p}}<\infty \nonumber \end{equation}
Using this and partial summation, for fixed $k\geq 2$ we can write, for $y=\exp(\exp(c\sqrt{\log x}))$ and $x$ sufficiently large (note that $m=m'$),
\begin{eqnarray*}
\sum_{p^{kf_p}\leq x}(\log p) \sum_{\nu|p}f_p\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^k\overline{\alpha_{\pi'}(j,\nu)}^k
&\ll& x \sum_{p^{kf_p}\leq y}\frac{\log^2 p}{p^{kf_p}}\sum_{\nu|p}\Big|\sum_{i=1}^{m}\alpha_{\pi}(i,\nu)^k\Big|^2\frac{1}{\log p} \\
&=&x\Big(O(1)\left. \frac{1}{\log t}\right|_{2}^{y}-\int_{2}^{y}O(1)\frac{1}{t(\log t)^2}\,dt\Big)\ll x\exp\{-c\sqrt{\log x}\}.
\end{eqnarray*}
Finally, using Conjecture 2.1 we get
\begin{eqnarray*}
\sum_{{p^{f_p}\leq x}\atop{\text{$p$ not split}}}(\log p)\,a_{\pi\times\widetilde{\pi}'}(p^{f_p})&\ll&\sum_{{p^{f_p} \leq x}\atop{\text{$p$ not split}}}(\log p)f_p\sum_{\nu|p}\Big|\sum_{i=1}^{m}\alpha_{\pi}(i,\nu)\Big|^2 \\
&\ll& \sum_{{p^{f_p} \leq x}\atop{\text{$p$ not split}}}(\log p)p^{2f_p(1/2-1/(2f_p)-\epsilon)} \ll x\Big(\sum_{n\leq x}\frac{\Lambda(n)}{n^{1+\epsilon}}\Big)\ll x\exp\{-c\sqrt{\log x}\}.
\end{eqnarray*}
We can do a similar calculation for Theorem 1.2 by first noting that
\begin{equation} \sum_{\nu|p}f_p\sum_{i=1}^{m}\alpha_{\pi}(i,\nu)^k=\sum_{a=0}^{\ell-1}\sum_{i=1}^{m}\alpha_{\pi_\mathbb{Q}\otimes\eta_{E/\mathbb{Q}}^{a}}(i,p)^{f_pk} \end{equation}
for $n=p^{kf_p}$, and similarly
\begin{equation}\sum_{\omega|p}f_p'\sum_{j=1}^{m'}\alpha_{\pi'}(j,\omega)^k=\sum_{b=0}^{q-1}\sum_{j=1}^{m'}\alpha_{\pi'_{\mathbb{Q}}\otimes\psi_{F/\mathbb{Q}}^b}(j,p)^{f_p'k} \end{equation}
Thus we get
\begin{equation} \sum_{n\leq x}\Lambda(n)a_{\pi\times_{BC}\widetilde{\pi}'}(n)=\sum_{p^{k_1f_p}=p^{k_2f_p'}\leq x}(\log p)\sum_{\nu|p}\sum_{\omega|p}f_pf_p'\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^{k_1}\overline{\alpha_{\pi'}(j,\omega)}^{k_2} \nonumber \end{equation}
again by (2.3),
\begin{eqnarray*}
&&\sum_{\underset{k_1>\min\{ (m^2\ell+1)/2,(m'^2q+1)/2\}}{p^{k_1f_p}=p^{k_2f_p'}\leq x}}(\log p)\sum_{\nu|p}\sum_{\omega|p}f_pf_p'\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^{k_1}\overline{\alpha_{\pi'}(j,\omega)}^{k_2} \\
&&\qquad\ll\sum_{\underset{k_1>\min\{ (m^2\ell+1)/2,(m'^2q+1)/2\}}{p^{k_1f_p}=p^{k_2f_p'}\leq x}}(\log p)p^{k_1f_p(1/2-1/(m^2\ell+1))+k_2f_p'(1/2-1/(m'^2q+1))}
\ll x\Big( \sum_{n\leq x}\frac{\Lambda(n)}{n^{1+\epsilon}}\Big)\ll x\exp(-c\sqrt{\log x}).
\end{eqnarray*}
Now consider the sum
\begin{equation} \sum_{\underset{\text{$p$ not split in $EF$}}{p^{k_1f_p}=p^{k_2f_p'}\leq x}} (\log p)\sum_{\nu|p}\sum_{\omega|p}f_pf_p'\sum_{i=1}^{m}\sum_{j=1}^{m'}\alpha_{\pi}(i,\nu)^{k_1}\overline{\alpha_{\pi'}(j,\omega)}^{k_2} \nonumber \end{equation}
since $p$ does not split in $EF$, we have
$f_{\mathfrak{P}/p},f_{\mathfrak{Q}/p}\geq 2$, and since $f_{\mathfrak{P}/p}=f_{\mathfrak{P}/\mathfrak{p}}f_p\leq f_p^2$ and $f_{\mathfrak{Q}/p}=f_p' f_{\mathfrak{Q}/ \mathfrak{q}}\leq f_{p}'^{2}$, we know that $f_p,f_p'\geq 2$, so that $p$ does not split in $E$ or $F$. Hence under Conjecture 2.1 the above sum is bounded by
\begin{equation} \sum_{p^{k_1f_p}=p^{k_2f_p'}\leq x}(\log p)p^{k_1f_p\theta_p+k_2f_p'\theta_p'}\ll x\Big(\sum_{n\leq x}\frac{\Lambda(n)}{n^{1+\epsilon}}\Big)\ll x\exp\{-c\sqrt{\log x}\} \nonumber \end{equation}
So if the collection of twisted equivalent pairs is nonempty, we have the estimate
\begin{equation} \sum_{\underset{k_1\leq \min\{(m^2\ell+1)/2,(m'^2q+1)/2\},\ \text{$p$ splits in $EF$}}{p^{k_1f_p}=p^{k_2f_p'}\leq x}}(\log p)a_{\pi\times_{BC}\widetilde{\pi}'}(p^{k_1f_p})=\frac{x^{1+i\tau_0}}{1+i\tau_0}+O\{x\exp(-c\sqrt{\log x})\} \nonumber \end{equation}
Using Hypothesis H as before and (5.2)--(5.3), we can restrict the sum to $k_1=k_2=1$ to get
\begin{equation} \sum_{{p\leq x}\atop{\text{$p$ splits completely in $EF$}}}(\log p)a_{\pi\times_{BC}\widetilde{\pi}'}(p)=\frac{x^{1+i\tau_0}}{1+i\tau_0}+O\{ x\exp(-c\sqrt{\log x})\} \nonumber \end{equation}
as desired.
\bigskip
\centerline {\sc Acknowledgments}
The authors would like to thank Professor Jianya Liu and Professor
Yangbo Ye for their constant encouragement and support. The authors would also like to thank Professor Muthu Krishnamurthy for his helpful advice and suggestions.
\section{Introduction and the main result}\label{S:Intro}
The problem of testing analyticity by analytic extendibility into complex lines was investigated in many articles.
It was observed in \cite{AV} that
boundary values of holomorphic functions in the unit ball $\mathbb C^n$ can be characterized by analytic extendibility
along complex lines. In \cite{St} this result was generalized to general domains $D \subset \mathbb C^n.$
It turned out that the family of all complex lines is abundant for the characterization of the
boundary values of holomorphic functions, and
can be reduced to narrower those. The extended references can be found in \cite{GS}.
There are two natural families of complex lines to be studied:
lines tangent to a given surface and lines intersecting a given set.
There is a certain progress in the one-dimensional holomorphic extension problem for the families of the first type (defined by the tangency condition), but we are not mentioning the results in this direction, as in this article we are interested in the families of the second type.
Let us start with some notations and terminology.
First of all, everywhere in the article $n \geq 2.$
The notation $A(\partial \Omega),$ \ $\Omega$ is a bounded domain in $\mathbb C^n$, will be used for the algebra of boundary values of holomorphic functions, i.e.,
all functions $f \in C(\partial \Omega)$ such that there exists a function $F \in C(\overline \Omega),$
holomorphic in $\Omega$ and satisfying the condition $F(z)=f(z), z \in \partial \Omega.$
Given a set $V \subset \mathbb C^n$,
denote by $\mathcal L_V$ the set of complex lines $L$ such that $L \cap V \neq \emptyset.$
We will say that the family $\mathcal {L}_V$ is {\it sufficient} (for testing function from $A(\partial \Omega)$)
if whenever $f \in C(\partial \Omega)$ continuously
extends, for every $L \in \mathcal {L}_V$, from $L \cap \partial \Omega$ to $L \cap \overline \Omega$ as a function holomorphic in $L \cap \Omega$, one has $f \in A(\partial \Omega).$
In the introduced terminology, the results of \cite{AV}, \cite{St} just claim that the family $\mathcal {L}_V$ is sufficient when the set $V$ is the whole domain, $V=\Omega.$
In \cite{AS}, a stronger result was obtained: the family $\mathcal L_V$ is sufficient for any open set $V$
(see also \cite{KM} for generalizations in this direction).
However, counting the parameters shows that the expected minimal dimension of sufficient families should be $2n-2.$
Indeed, the functions $f$ to be characterized are defined on the $(2n-1)$-dimensional real sphere and hence depend
on $2n-1$ real variables. The condition of holomorphic extendibility in every fixed complex line is 1-parametric
(see Section \ref{S:concl}). Therefore, in order to have the total number of parameters equal to $2n-1$, we need
$2n-2$ more parameters, which is the minimal possible number of parameters for sufficient families.
The minimal $(2n-2)$-parameter family of complex lines given by the intersection condition is the bunch $\mathcal L_a$ of complex lines intersecting a given one-point set $\{a\}.$
However, it is easy to show that this family is insufficient. Indeed, take $\Omega=B^n$ and $a=0.$ Then
the function $f(z)=|z_1|^2$ is constant on any circle $L\cap \partial B^n$ (for a line $L$ through the origin, $L\cap\partial B^n=\{\zeta w:\ |\zeta|=1\}$ for a fixed unit vector $w$, so $|z_1|^2=|w_1|^2$ there), and therefore extends holomorphically
(as a constant) in $L \cap B^n,$ but $f \notin A(\partial B^n)$ because it is real-valued and non-constant.
Notice that in this example the function $f$ is real-analytic and, moreover, a polynomial.
Thus, the above example shows that the minimal possible $(2n-2)$-parametric family of the intersection type, the
bunch of lines through one fixed point, is insufficient.
However, it turns out, and this is the main result of this article, that adding one more bunch of complex lines
leads already to a sufficient family for real-analytic functions.
\begin{theorem} \label{T:main}
Let $a, \ b \in \overline {B^n}, \ a \neq b.$
Let $f$ be a real-analytic function on the unit sphere $\partial B^n$. Suppose that for every complex line $L,$ containing at least one of the points $a, b$
there exists a function $F_L \in C(L \cap \overline {B^n}),$ holomorphic in $L \cap B^n$ and such that $f(z)=F_L(z)$ for $z \in L \cap \partial B^n.$ Then $f \in A(\partial B^n).$
\end{theorem}
The proof of Theorem \ref{T:main} rests on the recent result of the author \cite{Agr} on meromorphic extensions from chains of circles in the plane. The scheme of the proof is as follows. We start with the case $n=2.$
By averaging in rotations in $z_2$-plane we reduce the original problem to a problem for the Fourier coefficients regarded as functions in the unit disc $|z_1|<1.$ The condition of the holomorphic extendibility into complex lines transforms to the condition of holomorphic extendibility of the Fourier coefficients in a family of circles in the plane $z_1.$ Then we use the result of \cite{Agr} and arrive at the needed conclusion in 2-dimensional case. The $n$-dimensional case follows by applying the 2-dimensional result to complex 2-dimensional cross-sections.
\section{Some lemmas }
We start with $n=2.$
First of all, observe that the condition of Theorem \ref{T:main} is invariant with respect to the
group $M(B^2)$ of complex Moebius automorphisms of the unit ball $B^2$. Therefore, by applying a suitable automorphism, we can assume, without loss of generality, that $a$ and $b$ belong to the complex line $z_2=0.$
Thus, from now on $a=(a_1,0), b=(b_1,0).$
For $|a_1|<1$ denote
$$H(a_1,r)=\{|\frac{z_1-a_1}{1-\overline a_1 z_1}|=r\} $$
the hyperbolic circle in the unit disc with the hyperbolic center $a_1$ and
$$\mathcal H_{a_1}=\{H(a_1,r): 0<r<1\}$$
the family of such circles. For $|a_1|=1$, by $H(a_1,r)$ we understand the horocycle through $a_1$, i.e.,
a circle tangent at $a_1$ from inside to the unit circle.
Denote by $$\pi_1: \mathbb C^2 \to \mathbb C$$ the orthogonal projection to the first complex coordinate $z_1.$
The following observation belongs to J. Globevnik:
\begin{lemma}\label{L:projections} We have
$\pi_1 (\{ L \cap \partial B^2, \ L \in \mathcal{L}_{\{a\}} \})=\mathcal H_{a_1},$ i.e., for $a \in B^2$,
the circles
$L \cap \partial B^2, \ L \in \mathcal{L}_{\{a\}},$ project orthogonally onto
the hyperbolic circles with the hyperbolic center $a_1.$ If $a \in \partial B^2,$
then instead of hyperbolic circles one obtains horocycles through $a_1.$
\end{lemma}
{\bf Proof } Recall that we assume that $a_2=0.$ The conformal automorphism
$$u_{a_1}(z_1)=\frac{z_1-a_1}{1-\overline {a}_1 z_1}$$
extends to a biholomorphic Moebius transformation $U_a$ of the ball $B^2:$
$$U_a(z_1,z_2)=( \frac{z_1-a_1}{1-\overline {a}_1 z_1}, \frac{\sqrt{1-|a|^2}z_2}{1-\overline {a}_1 z_1}).$$
This automorphism preserves complex lines, moves the point $a=(a_1,0)$ to 0 and, correspondingly,
transforms the family $\mathcal{L}_{\{a\}}$ of complex lines containing $a$ into the family $\mathcal {L}_0$ of complex lines containing 0.
The case $a=0$ is obvious: the circles $L \cap \partial B^2$, where $L$ runs over the complex lines containing 0,
project onto the family $\mathcal {H}_0$ of circles in the disc $|z_1| \leq 1$ centered at the origin.
Then we conclude that the projection of our family is $u_{a_1}^{-1}(\mathcal H_0)=\mathcal H_{a_1}.$
The case $a \in \partial B^2$ is even simpler. The lemma is proved.
\begin{definition}
We say that a function $F$ in the unit disc $|w|<1$ satisfies
$$F(w)=O((1-|w|^2)^k), \ |w| \to 1,$$
where $k \in \mathbb Z,$
if $F(w)=h(w)(1-|w|^2)^k$ where $h(w)$ is continuous on $|w| \leq 1$ and has only isolated zeros of finite order on the boundary circle $|w|=1.$
\end{definition}
Expand $f(z_1,z_2), \ (z_1,z_2) \in \partial B^2,$ in the Fourier series in the polar coordinates $r, \psi$,
where $z_2=re^{i\psi}, r=|z_2| :$
$$f(z_1,z_2)=\sum\limits_{\nu=-\infty}^{\infty}\alpha_{\nu}(z_1, |z_2|)e^{i\nu\psi}.$$
Since $|z_2|=\sqrt{1-|z_1|^2}$ on the sphere, $\alpha_{\nu}$ depends, in fact, only on $z_1$:
$$\alpha_{\nu}(z_1,|z_2|)=A_{\nu}(z_1).$$
Define
\begin{equation}\label{E:F}
F_{\nu}(z_1):=\frac{ A_{\nu}(z_1) }{ (1-|z_1|^2)^{\frac{\nu}{2}} }.
\end{equation}
Substituting $\sqrt{1-|z_1|^2}=|z_2|$ on $\partial B^2$ we have
\begin{equation}\label{E:f(z_1,z_2)}
f(z_1,z_2)=\sum\limits _{\nu=-\infty}^{\infty} F_{\nu}(z_1)z_2^{\nu}.
\end{equation}
The singularity at $z_2=0$ for negative $\nu$ cancels due to vanishing of $F_{\nu}(z_1)$ on $|z_1|=1:$
\begin{lemma}\label{L:Fnu} Let $F_{\nu}$ be defined by (\ref{E:F}). Then
\begin{enumerate}
\item
$F_{\nu}$ is real-analytic in the open disc $|z_1| < 1,$
\item
$F_{\nu}(z_1)=O((1-|z_1|^2)^{k}), \ |z_1| \to 1,$ for some $k \geq 0,$
\item
if $\nu <0$ then $k>0.$
\end{enumerate}
\end{lemma}
{\bf Proof }
We have
$$A_{\nu}(z_1)e^{i\nu\psi}=\frac{1}{2\pi}\int\limits_0^{2\pi}f(z_1,e^{i\varphi}z_2)e^{-i\nu\varphi}d\varphi.$$
Then $f \in C^{\omega}(\partial B^2)$ implies that $A_{\nu}(z_1)e^{i\nu\psi}$ is real-analytic on
$\partial B^2.$
By the definition (\ref{E:F}) of $F_{\nu},$
\begin{equation}\label{E:alpha}
A_{\nu}(z_1)e^{i\nu\psi}=F_{\nu}(z_1) z_2^{\nu}.
\end{equation}
The left hand side is in $C^{\omega}(\partial B^2)$ because it is obtained by averaging the real-analytic function $f$. Therefore, the right hand side in (\ref{E:alpha}) is real-analytic.
If $|z_1|<1$ then $z_2 \neq 0,$ due to $|z_1|^2+|z_2|^2=1.$ Dividing
both sides in (\ref{E:alpha}) by $z_2^{\nu}$
one obtains $F_{\nu} \in C^{\omega}(|z_1|<1).$ This proves statement 1.
Now consider the case $|z_1^0|=1,$ $z_2^0=0$.
Without loss of generality we can assume $Im z_1^0 \neq 0$ and so $|Re z_1^0| <1.$ Then choose local coordinates $(Re z_1, z_2, \overline {z}_2)$ in a neighborhood of the point
$(z_1^0, z_2^0) \in \partial B^2.$ The Taylor series for $f$ near $z^0=(z_1^0,z_2^0)$ can then be written as
$$f(z)=\sum\limits_{\alpha, \beta \geq 0} c_{\alpha,\beta}(Re z_1) z_2^{\alpha}\overline{z_2}^{\beta},$$
where $c_{\alpha,\beta}(Re z_1)$ are real-analytic in a full neighborhood of $Re z_1^0.$
Substituting $z_2=re^{i\psi}$ yields the expression for the $\nu$-th term in the Fourier series:
\begin{equation}\label{E:beta}
A_{\nu}(z_1)e^{i\nu\psi}=\sum\limits_{\alpha-\beta=\nu}c_{\alpha,\beta}r^{\alpha+\beta}e^{i\nu\psi}
=(\sum\limits_{\beta \geq 0}r^{2\beta}c_{\nu+\beta,\beta}(Re z_1))z_2^{\nu}.
\end{equation}
In the last expression we have changed the summation index according to $\alpha=\beta+\nu.$
Since $r=|z_2|=\sqrt{1-|z_1|^2}$, by the definition (\ref{E:F}) of $F_{\nu}$ we obtain:
\begin{equation}\label{E:sum}
F_{\nu}(z_1)=\sum\limits_{\beta \geq 0}(1-|z_1|^2)^{\beta}c_{\nu+\beta,\beta}(Re z_1).
\end{equation}
This gives us the needed representation
$$F_{\nu}=(1-|z_1|^2)^{k}h(z_1),$$
where $\beta=k$ is the index of the first nonzero term in (\ref{E:sum}).
On $|z_1|=1$ we have
$$h(z_1)=c_{\nu+k,k}(Re z_1)$$
and there are only isolated zeros of finite order because the function $c_{\nu+k,k}(u)$ is real-analytic
for $u$ near $Re z_1^0.$ Since all the argument is true for arbitrary $z_1^0$ of modulus 1, the conclusion about zeros on $|z_1|=1$ follows. This proves statement 2.
If $\nu <0$ then $\beta=\alpha - \nu > \alpha \geq 0$ in (\ref{E:beta}) and therefore the order $k$ (the minimal $\beta$ in the sum (\ref{E:sum})) is positive. This is statement 3.
Lemma \ref{L:Fnu} is proved.
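As a simple illustration of Lemma \ref{L:Fnu}, take $f(z_1,z_2)=|z_2|^2z_2$ on $\partial B^2.$ Writing $z_2=re^{i\psi}$ gives $f=r^3e^{i\psi},$ so the only nonzero Fourier coefficient is $A_1(z_1)=r^3=(1-|z_1|^2)^{3/2}$ and, by (\ref{E:F}),
$$F_1(z_1)=\frac{(1-|z_1|^2)^{3/2}}{(1-|z_1|^2)^{1/2}}=1-|z_1|^2=O((1-|z_1|^2)^{1}),$$
in agreement with statement 2.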
\begin{lemma}\label{L:fourier}
The function $f \in C(\partial B^2)$
admits analytic extension in the complex lines passing through a point $a=(a_1,0) \in B^2$ if and only if
the functions $F_{\nu}(z_1), \ \nu \in \mathbb Z,$ defined in Lemma \ref{L:Fnu}, formula (\ref{E:F}), have the property:
\begin{enumerate}
\item
if $\nu \leq 0$ then $F_{\nu}$ extends inside any circle $H(a_1,r)$ as an analytic function in the corresponding disc,
\item
if $\nu >0$ then $F_{\nu}$ extends inside any circle $H(a_1,r), \ 0<r<1,$ as a
meromorphic function whose only singular point is a pole, of order at most $\nu$, at the hyperbolic center $a_1$.
\end{enumerate}
\end{lemma}
{\bf Proof }
Since $a$ and $b$ belong to the complex line $z_2=0$, the rotated function $f(z_1,e^{i\varphi}z_2)$ possesses
the same property of holomorphic extendibility into complex lines passing through $a$ or through $b$.
Then for each $\nu \in \mathbb Z$
$$\frac{A_{\nu}(z_1)}{|z_2|^{\nu}} z_2^{\nu} = A_{\nu}(z_1)e^{i\nu\psi}=\frac{1}{2\pi}\int\limits_0^{2\pi}f(z_1,e^{i\varphi}z_2)e^{-i\nu\varphi}d\varphi$$
also has the same extension property. Here $z_2=|z_2|e^{i\psi}.$
It is clear that, vice versa, if all the terms in the Fourier series
have this extension property, then the function $f$ does.
It remains to show that the analytic extendibility in the complex lines
of the summands in Fourier series is equivalent to the
above formulated properties of $F_{\nu}.$
But since on the unit sphere we have $|z_2|=\sqrt{1-|z_1|^2},$ the relation (\ref{E:F}) gives
$$\frac{A_{\nu}(z_1)}{|z_2|^{\nu}}z_2^{\nu}=F_{\nu}(z_1)z_2^{\nu}$$
and we conclude that
$F_{\nu}(z_1)z_2^{\nu}$ extends in the complex lines containing $a.$
Let $L$ be such a line, different from $z_1=a_1$.
Then $L=\{z_2=k(z_1-a_1)\}$ for some complex number $k \neq 0$ and therefore on $\partial B^2$ we have:
$$F_{\nu}(z_1)z_2^{\nu}=F_{\nu}(z_1)k^{\nu}(z_1-a_1)^{\nu}.$$
The function in the right hand side depends only on $z_1$ and hence extends analytically inside
the projection of $L \cap \partial B^2$ which is a hyperbolic circle $H(a_1,r),$ by Lemma \ref{L:projections}.
Thus, $f$ extends analytically in $L \cap B^2$ if and only if for all $\nu$ the function
$F_{\nu}(z_1)(z_1-a_1)^{\nu}$ extends analytically from each circle $H(a_1,r), \ 0<r<1,$
inside the corresponding disc. For $F_{\nu}$ this means exactly the extension properties claimed
in 1 and 2. This completes the proof of the lemma.
\section {Proof of Theorem \ref{T:main} in dimension two}\label{S:n=2}
After all the preparations we have done, Theorem \ref{T:main} for the case $n=2$
follows from the result of \cite{Agr}.
In order to formulate this result, as it is stated in \cite{Agr}, we need to introduce the terminology used
in \cite{Agr}. Let $F$ be a function in the unit disc $\Delta.$ We call $F$ {\it regular} if
a) the zero set of $F$ is an analytic set on which $F$ vanishes to a finite order, b)
$F(w)=O((1-|w|^2)^{\nu}), |w| \to 1,$ for some integer $\nu.$
In our case, the coefficients $F_{\nu}(z_1)$ are regular by Lemma \ref{L:Fnu}.
Recall that $H(a,r)=\{w: \ |\frac{w-a}{1-\overline{a} w}|=r\}, \ |a| <1,$ denotes the hyperbolic circle of radius $r$
with the hyperbolic center $a.$ We will include the case $|a|=1$ and will use the notation $H(a,r)$ for the horocycle through $a$, i.e., for the circle of radius $r,$ tangent at $a$ from inside to the unit circle $|w|=1.$
\begin{theorem}(\cite{Agr}, Corollary 4)\label{T:Agr}
Let $F \in C^{\nu}(\overline\Delta)$ be a regular function in the unit disc.
Let $a, b \in \overline \Delta, \ a \neq b,$ and suppose that $F$ extends from any circle
$H(a,r)$ and $H(b,r), \ 0<r<1,$ as a meromorphic function whose only singular point is a pole, of order at most
$\nu,$ at the point $a$ or $b$, correspondingly. Then $F$ has the form:
\begin{equation}\label{E:Ag}
F(w)=h_0(w)+ \frac{h_1(w)}{1-|w|^2}+\cdots + \frac{h_{\nu}(w)}{(1-|w|^2)^{\nu}},
\end{equation}
where $h_j, \ j=0, \cdots, \nu$ are analytic functions in $\Delta.$
\end{theorem}
Lemma \ref{L:Fnu} and Lemma \ref{L:fourier} just say that the functions $F_\nu(z_1)$ satisfy all the conditions of Theorem \ref{T:Agr}. Therefore the function $F:=F_{\nu}$ has a representation of the form
(\ref{E:Ag}).
Moreover, by Lemma \ref{L:Fnu}, $F_{\nu}(z_1)=O((1-|z_1|^2)^{k}), \ |z_1| \to 1,$ with nonnegative $k.$
Therefore, the number $\nu=-k$ in (\ref{E:Ag}) is nonpositive and hence either $k=0$ and $F=F_{\nu}=h_0$ is holomorphic,
or $k>0$ and $F_{\nu}=0.$
The same can be explained as follows: $F_{\nu}$ is continuous for $|z_1| \leq 1$ and hence in the decomposition (\ref{E:Ag}) (with $w=z_1$) nonzero terms with negative powers of $1-|z_1|^2$ are impossible and only the first term survives:
$$F_{\nu}=h_{0}.$$
Therefore, $F_{\nu}$ is analytic in $|z_1|<1.$
Moreover, we know from Lemma \ref{L:Fnu}, statement 3, that if $\nu <0$ then $k>0$ and hence $F_{\nu}(z_1)=0$ for $|z_1|=1.$ By the uniqueness theorem this implies $F_{\nu}=0, \nu <0.$
Then from (\ref{E:f(z_1,z_2)}) we have
\begin{equation}\label{E:expan}
f(z_1,z_2)=\sum\limits_{\nu \geq 0}F_{\nu}(z_1)z_2^{\nu}, \ |z_1|^2+|z_2|^2=1,
\end{equation}
and the right hand side is holomorphic in $z_1,z_2.$ This is just the desired global holomorphic
extension of $f$ inside the ball $B^2.$ Theorem \ref{T:main} for the case $n=2$ is proved.
\begin{remark}\label{R:kernel}
Each function of the form (\ref{E:expan}), where $F_{\nu}$ are of the form (\ref{E:Ag}),
possesses holomorphic extension from the sphere $\partial B^2$ in all cross-sections of the ball $B^2$ by
complex lines intersecting the complex line $z_2=0.$
The first example of a function with this property was given by J. Globevnik:
$f(z)=\frac{z_2^k}{\overline z_2}$. Since $\overline z_2=|z_2|^2/z_2$ and $|z_2|^2=1-|z_1|^2$ on the sphere, this function can be reduced there to the form (\ref{E:Ag}), (\ref{E:expan}):
$f(z)=\frac{z_2^{k+1}}{1-|z_1|^2}.$ Notice that $f$ is a function of finite order of smoothness.
\end{remark}
\section{Proof of Theorem \ref{T:main} in arbitrary dimension}\label{S:arbitrary_n}
Let $Q$ be an arbitrary complex two-dimensional plane in $\mathbb C^n$ containing the points $a$ and $b$.
Then $f\vert_{Q \cap \partial B^n}$ extends holomorphically in any complex line $L \subset Q$ passing through $a$ or $b$ and Theorem \ref{T:main} for the case $n=2$ implies that $f$ extends as a holomorphic function $f_Q$
in the 2-dimensional cross-section $Q \cap B^n.$
Let $L_{a,b}$ be the complex line containing $a$ and $b.$ Then for any two 2-planes $Q_1$ and $Q_2,$
containing $a$ and $b,$ we have $Q_1 \cap Q_2 =L_{a,b}$ and $f_{Q_1}(z)=f_{Q_2}(z)=f(z)$ for $z$ from the closed curve $L_{a,b} \cap \partial B^n.$ Hence by the uniqueness theorem $f_{Q_1}(z)=f_{Q_2}(z)$ in $L_{a,b} \cap B^n.$
Thus, the holomorphic extension in the 2-planes $Q$ agree on the intersections and therefore define a function $F$ in $\overline {B^n}.$ This function has the following properties: it is holomorphic on 2-planes $Q, \ a,b \in Q,$ and $F\vert_{\partial B^n}=f.$ It follows from the construction that $F$ is $C^{\infty}$ (and even real-analytic).
From the very beginning, applying a suitable Moebius automorphism of $B^n$,
we can assume that both points, $a$ and $b$, belong to a coordinate complex line, so that $0 \in L_{a,b}.$
Then $F$ is holomorphic on any complex line
passing through $0.$ By Forelli's theorem \cite[4.4.5]{R}, $F$ is holomorphic in $B^n$ (for real-analytic $F$ the holomorphicity of $F$ can be proved directly). The required global holomorphic extension is built.
\section{Concluding remarks}\label{S:concl}
\begin{itemize}
\item
The examples in Remark \ref{R:kernel} show that without strong smoothness assumptions (in our case, real-analyticity)
even the complex lines meeting a set of $n$ affinely independent points are not enough for the one-dimensional holomorphic extension property. It is natural to conjecture that $n+1$ affinely independent points are enough, and not only for the case of the complex ball:
\begin{conjecture} Let $\Omega$ be a domain in $\mathbb C^n$ with a smooth boundary and \\
$a_1,\cdots, a_{n+1} \in \overline \Omega$ are $n+1$ points belonging to no complex hyperplane.
Suppose that $f \in C(\partial \Omega)$ extends holomorphically in each cross-section $L \cap \Omega,$ for every complex line $L$ containing at least one of the points $a_1,\cdots,a_{n+1}.$
Then $f \in A(\partial \Omega).$
\end{conjecture}
To confirm this conjecture for the complex ball $B^n$, it would be enough to prove the following
\begin{conjecture}
Let $a,b \in \overline B^2, \ a \neq b.$ Any function
$f \in C(\partial B^2)$ having one-dimensional holomorphic extension into complex lines passing through $a$ or $b$ has the form (\ref{E:expan}),(\ref{E:Ag}):
$$f(z)=\sum\limits_{\nu=0}^{\infty}F_{\nu}(z_1)z_2^{\nu},
\ F_{\nu}=\sum\limits_{j=0}^{\nu}\frac{h_j(z_1)}{(1-|z_1|^2)^{j}},$$
where $h_j(z_1)$ are holomorphic.
\end{conjecture}
\item
Theorem \ref{T:main} can be regarded as a boundary Morera theorem (cf. \cite{GS}), because the condition of the holomorphic extendibility into complex lines $L$ can be written as the complex moment condition
$$\int_{L \cap \partial B^n}f \omega=0,$$
for any holomorphic differential (1,0)-form $\omega.$
For each fixed complex line $L$, the above condition depends on one real parameter, because it suffices to
take $\omega$ such that $\omega\vert_{L}=\frac{d\zeta}{\zeta-t}$,
where $\zeta$ is the complex parameter on $L$ and $t$ runs over any fixed real curve in $L \setminus \overline {B^n}.$
Results of the type of Theorem \ref{T:main} can be also viewed as boundary analogs of Hartogs' theorem about separate analyticity
(see \cite{AS}).
\item
In Theorem \ref{T:main}, the points $a, b$ are taken from the closed ball $\overline{B^n}.$
It is shown in \cite{KM1} that finite sets $V \subset \mathbb C^n \setminus \overline{B^n}$ can produce insufficient families $\mathcal{L}_V$ of complex lines.
For example, it is easy to see that the function $f(z)=|z_1|^2$ extends holomorphically (as a constant) from the unit complex sphere in each complex line parallel to a coordinate line. This family of lines can be viewed as $\mathcal{L}_V$ where $V$ is a set of $n$ points in $\mathbb {CP}^n \setminus \mathbb C^n$. To obtain finite points, one can apply a Moebius transformation, since such transformations preserve complex lines.
\end{itemize}
\bigskip
After this article was written, Josip Globevnik informed the author that he
has proved Theorem \ref{T:main} in the case $n=2$ for infinitely smooth functions. The author thanks J. Globevnik
and A. Tumanov for useful discussions of the results of this article and remarks.
\section*{Acknowledgments}
This work was partially supported by grant 688/08 from the ISF (Israel Science Foundation).
Some of this research was done as a part of the European Science Foundation Networking Program HCAA.
\section{Introduction}
\label{intro}
Current observations of the Cosmic Microwave Background (CMB) temperature and polarization
fluctuations, in addition to other astronomical data sets~\citep[e.g.][see \citealt{barreiro09} for a recent review]{komatsu09,gupta09},
provide an overall picture for the origin,
evolution, and matter and energy content of the universe, which is usually referred to as the
\textit{standard cosmological model}. In this context, we believe the universe to be
highly homogeneous and isotropic, in expansion, well
described by a Friedmann-Robertson-Walker metric and with a trivial topology.
The space geometry is very close to flat, and it is filled with cold dark matter (CDM) and dark energy
(in the form of a cosmological constant, $\Lambda$), in addition to baryonic matter and electromagnetic
radiation. Large scale structure (LSS) is assumed to be formed by the gravitational collapse of
an initially smooth distribution of adiabatic matter fluctuations, which were seeded by initial Gaussian
quantum fluctuations generated in a very early inflationary stage of the universe evolution.
It is interesting to mention that, besides the success of current high quality CMB
data~\citep[in particular the data provided by the WMAP satellite][]{hinshaw09} in
constraining the cosmological parameters with good accuracy and in
showing the high degree of homogeneity and isotropy of the
Universe~\citep[as predicted by the standard inflation scenario, see for instance][]{liddle00},
it has been, precisely, through the analysis of this very same data, that the
CMB community has been allowed to probe fundamental principles and assumptions of the
\emph{standard cosmological model}. Most notably, the application of
sophisticated statistical analysis to CMB data might help us to
understand whether the temperature fluctuations of the primordial
radiation are compatible with the fundamental isotropic and Gaussian
standard predictions from the \emph{inflationary phase}.
Indeed, the interest of the cosmology community in this field has experienced a
significant growth, since several analyses of the WMAP data have reported
some hints for departure from isotropy and Gaussianity of the CMB
temperature distribution. The literature on the subject is very large, and
is still growing, which makes it really difficult to provide a complete and updated
list of publications. We refer to our previous work~\citep{vielva09} for an (almost) complete
review.
Among the previously mentioned analyses, those related to the study of
non-standard inflationary models have attracted a
larger attention. For instance, the
non-linear coupling parameter f$_{\mathrm{NL}}$ that describes the
non-linear evolution of the inflationary potential \citep[see
e.g.][and references therein]{bartolo04} has been constrained by
several groups, from the analysis of the WMAP data: using the angular bispectrum
\citep{komatsu03,creminelli07,spergel07,komatsu09,yadav08,smith09}; applying the
Minkowski functionals
\citep{komatsu03,spergel07,gott07,hikage08,komatsu09}; using
different statistics based on wavelets
\citep{mukherjee04,cabella05,curto09a,curto09b,pietrobon09,rudjord09}, and
by exploring the N-pdf
\citep{vielva09}. Besides marginal detections of f$_{\mathrm{NL}} > 0$~\citep[with a probability of around 95\%,][]{yadav08,rudjord09},
there is a
general consensus on the WMAP compatibility with the predictions
made by the standard inflationary scenario at least at 95\%
confidence level. The current best limits obtained from the CMB data are:
$-4 < {\rm f}_{\mathrm{NL}} < 80$ at 95\% CL by~\cite{smith09}. In addition, very recently, promising constraints
coming out from the analysis of LSS have been reported: $-29 < {\rm f}_{\mathrm{NL}} < 70$ at 95\% CL~\citep{slozar08}.
The aim of this paper is to extend our previous work~\citep{vielva09},
where the full N-pdf of a non-Gaussian model that describes
the CMB anisotropies as a local (pixel-by-pixel) non-linear expansion of the temperature fluctuations (up to second order) was derived.
For this model ---that, at large scales, can be considered
as an approximation to the weak non-linear coupling
scenario--- we are able to build the exact likelihood on
pixel space. Working in pixel space allows one to include easily
non-ideal observational conditions, like incomplete sky coverage and
anisotropic noise. The extension made in the present work is to account for higher-order terms
in the expansion; in particular, we are able to obtain direct constraints on g$_\mathrm{NL}$, that is,
the coupling parameter governing the cubic term of the weak non-linear expansion.
As far as we are aware, direct constraints on g$_\mathrm{NL}$ have been made available only very recently \citep{desjacques09}
and are obtained from LSS analyses: $-3.5\times 10^{5} < {\rm g}_{\mathrm{NL}} < 8.2\times 10^{5}$ at 95\% CL.
This constraint was obtained for the specific case in which the coupling parameter governing the quadratic
term (f$_\mathrm{NL}$) is negligible~\citep[i.e. f$_\mathrm{NL} \equiv 0$, which is the situation required for some
curvaton inflationary models, e.g.][]{sasaki06,enqvist08,huang09}.
We present in this work the first direct measurement of g$_\mathrm{NL}$ obtained from CMB data. In addition to
studying the particular case of f$_\mathrm{NL} \equiv 0$, we also consider a more general case in which a joint estimation
of f$_\mathrm{NL}$ and g$_\mathrm{NL}$ is performed.
Finally, and justified by recent findings~\citep[e.g.,][]{hoftuft09}, we compute the N-pdf in two different hemispheres, and
derive from it constraints on f$_\mathrm{NL}$ and g$_\mathrm{NL}$ for this hemispherical division of the celestial sphere.
The paper is organized as follows. In Section~\ref{sec:model} we
describe the physical model based on the local expansion of the CMB
fluctuations and derive the full posterior probability, recalling how it defaults to
the case already addressed in \cite{vielva09}. In Section~\ref{sec:simulations}
we check the methodology against WMAP-like simulations. Results on
WMAP 5-year data are presented in Section~\ref{sec:wmap}. Conclusions are given
in Section~\ref{sec:final}. Finally, in Appendix~\ref{sec:app}, we provide a detailed computation of the full N-pdf.
\section{The non-Gaussian model}
\label{sec:model}
Although current CMB measurements are well described by random Gaussian anisotropies
(as predicted by the standard inflationary scenario), observations also
allow for small departures from Gaussianity, which could indicate the presence
of an underlying physical process generated in non-standard models.
As we did in \cite{vielva09}, we adopt a parametric non-Gaussian model for the CMB anisotropies,
that accounts for a small and local (i.e. point-to-point)
perturbation of the CMB temperature fluctuations, around its
intrinsic Gaussian distribution:
\begin{eqnarray}
\label{eq:physical_model}
{\Delta T}_i & = & \left({\Delta T}_i\right)_G + a\left[\left({\Delta T}_i\right)_G^2 - \left\langle\left({\Delta T}_i\right)_G^2\right\rangle \right] + \nonumber \\
& & b\left({\Delta T}_i\right)_G^3 + {\cal O}\left(\left({\Delta T}_i\right)_G^4\right).
\end{eqnarray}
The \emph{linear term} ($\left({\Delta T}_i\right)_G$)
is given by a Gaussian N-point probability density function (N-pdf) that is
easily described in terms of the standard inflationary model. The
second and third terms on the right-hand side are the \emph{quadratic} and the
\emph{cubic} perturbation terms, respectively, and they are governed by the
$a$ and $b$ parameters.
The sub-index
$i$ refers to a given direction in the sky that, in practice, is described
in terms of a certain pixelization of the sphere. The operator $\langle \cdot \rangle$
indicates averaging over all the pixels defining the sky coverage.
We have not considered explicitly an \emph{instrumental noise}-like term, since, for the particular
case that we intend to explore (i.e., large-scale CMB data), its contribution to the
measured signal (for experiments like WMAP or Planck) is negligible. Precisely at the large-scale regime, the term
${\Delta T}_i$ is mostly dominated by the Sachs-Wolfe contribution to the CMB fluctuations,
and can be related to the primordial gravitational potential $\Phi$~\citep[e.g.][]{komatsu01} by:
\begin{equation}
\label{eq:sw_limit} {\Delta T}_i \approx -\frac{T_0}{3} \Phi_i,
\end{equation}
where $T_0$ is the CMB temperature. Small departures from Gaussianity of the $\Phi$ potential are usually described via the weak
non-linear coupling model:
\begin{equation}
\label{eq:weak_model} \Phi_i = {\Phi_L}_{,i} +
{\rm f}_\mathrm{NL}\left( {\Phi_L}_{,i}^2 - \langle {\Phi_L}_{,i}^2 \rangle \right) +
{\rm g_\mathrm{NL}} {\Phi_L}_{,i}^3 + {\cal O}\left( {\Phi_L}_{,i}^4 \right).
\end{equation}
Taking into account equations~\ref{eq:physical_model},~\ref{eq:sw_limit} and~\ref{eq:weak_model}, and always
considering the specific case for scales larger than the
horizon scale at the recombination time (i.e. above the degree
scale), it is trivial to establish the following
relations:
\begin{equation}
\label{eq:relation} {\rm f}_\mathrm{NL} \cong -\frac{T_0}{3}a ~{\rm ,}~~~~~~~ {\rm g}_\mathrm{NL} \cong \left(\frac{T_0}{3}\right)^2b.
\end{equation}
At this point, it is worth mentioning that the model in equation~\ref{eq:physical_model} is not
intended to incorporate all the
gravitational and non-gravitational effects due to the evolution of the
initial quadratic potential model, but rather provides a
useful parametrization for describing a small departure from Gaussianity. The relationships
in equation~\ref{eq:relation} have to be understood just as an asymptotic equivalence for large scales.
Let us simplify the notation by
transforming the Gaussian
term $\left({\Delta T}_i\right)_G$ into
a zero mean and unit variance random variable $\phi_i$. It is easy to show that equation~\ref{eq:physical_model}
transforms into:
\begin{equation}
\label{eq:model} x_i = \phi_i + a\sigma\left(\phi_i^2 - 1\right) +
b\sigma^2\phi_i^3+ {\cal O}\left(\sigma^3\right)
\end{equation}
where:
\begin{equation}
\label{eq:equivalences} x \equiv \frac{1}{\sigma}{\Delta T} ~{\rm ,}~~~~~~~ \phi \equiv \frac{1}{\sigma}\left({\Delta T}\right)_G
\end{equation}
and $\sigma^2 \equiv \left\langle\left({\Delta
T}_i\right)_G^2\right\rangle$ is the variance of the CMB fluctuations.
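For illustration, the transformation in equation~\ref{eq:model}, together with the parameter mapping of equation~\ref{eq:relation}, can be coded in a few lines. The following Python sketch is purely illustrative; in particular, the numerical value and units assumed for $T_0$ must match those of the map:
\begin{verbatim}
import numpy as np

T0 = 2.725e6   # CMB temperature; micro-K assumed here

def local_nongaussian(phi, sigma, fnl, gnl):
    # local transform of eq. (5): x = phi + a*sigma*(phi^2 - 1)
    #                                 + b*sigma^2*phi^3
    a = -3.0 * fnl / T0         # from f_NL ~ -(T0/3) a, eq. (4)
    b = gnl * (3.0 / T0)**2     # from g_NL ~ (T0/3)^2 b, eq. (4)
    return phi + a*sigma*(phi**2 - 1.0) + b*sigma**2 * phi**3
\end{verbatim}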
Trivially, the normalized Gaussian variable $\phi$ satisfies:
\begin{eqnarray}
\label{eq:phiproperties}
\langle \phi_i^{2n+1} \rangle & = & 0 \nonumber \\
\langle \phi_i^{2m} \rangle & = & \left(2m-1\right)!! \nonumber \\
\langle \phi_i \phi_j \rangle & = & \xi_{ij},
\end{eqnarray}
where $n \geq 0$ and $m > 0$ are integer numbers, and $\xi_{ij}$ represents the normalized correlation between
pixels $i$ and $j$. Obviously, the N-pdf of the $\bmath{\phi}=\{\phi_1, \phi_2, ...,
\phi_N\}$ random field (where $N$ refers to the number of pixels on
the sphere that are actually observed) is given by a multivariate Gaussian distribution:
\begin{equation}
\label{eq:pdfphi} p(\bmath{\phi}) = \frac {1}{(2\pi
)^{N/2}(\det{\bmath{\xi}} )^{1/2}}e^{-\frac{1}{2}
\bmath{\phi}\bmath{\xi}^{-1}\bmath{\phi}^{t}},
\end{equation}
where $\bmath{\xi}$ denotes the correlation matrix and operator
$\cdot^{t}$ denotes standard matrix/vector transpose.
As it was the case in~\cite{vielva09}, the objective is to obtain the N-pdf
associated to the non-Gaussian $\bmath{x}=\{x_1, x_2, ..., x_N\}$ field, as a function of the
non-linear coupling parameters (or, equivalently, the $a$ and $b$ coefficients):
\begin{equation}
\label{eq:pdfx} p(\bmath{x}\vert a,b ) = p(\bmath{\phi} =
\bmath{\phi} (\bmath{x})) Z.
\end{equation}
In this expression, $Z$ is the determinant of the Jacobian for the $\bmath{\phi} \longrightarrow \bmath{x}$
transformation. Because the proposed model is local (i.e. point-to-point), the Jacobian matrix
is diagonal and, therefore, $Z$ is given by:
\begin{equation}
\label{eq:jacobian} Z = \det{\left[ \frac{\partial \phi_i}{\partial
x_j} \right]} = \prod_i \left( \frac{\partial \phi_i}{\partial
x_i}\right).
\end{equation}
Both, equations~\ref{eq:pdfx} and~\ref{eq:jacobian}, require the inversion of equation~\ref{eq:model}:
i.e., to express $\phi_i$ as a function of $x_i$. After some algebra, it can be proved that:
\begin{equation}
\label{eq:inversmodel}
\phi_i = x_i + \eta_i \sigma + \nu_i \sigma^2 + \lambda_i \sigma^3 + \mu_i \sigma^4 + {\cal O}\left(\sigma^5\right),
\end{equation}
where:
\begin{eqnarray}
\label{eq:numu}
\eta_i & = & -a\left(x_i^2 - 1\right) \nonumber \\
\nu_i & = & \left( 2a^2 - b \right) x_i^3 - 2a^2x_i \nonumber \\
\lambda_i & = & \left( 5ab - 5a^3 \right) x_i^4 + \left( 6a^3 - 3ab \right) x_i^2 - a^3 \nonumber \\
\mu_i & = & \left( 14a^4 - 21a^2b + 3b^2 \right) x_i^5 + \left(-20a^4 +20a^2b\right) x_i^3 \nonumber \\
& & + \left( 6a^4 - 3a^2b \right) x_i.
\end{eqnarray}
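As a consistency check (not needed for the derivation), the coefficients in equation~\ref{eq:numu} can be reproduced with a computer algebra system by iterating the fixed-point form of equation~\ref{eq:model}; a minimal sympy sketch:
\begin{verbatim}
import sympy as sp

x, a, b, s = sp.symbols('x a b sigma')

# iterate phi = x - a*s*(phi**2 - 1) - b*s**2*phi**3;
# each pass gains one order in sigma, truncated at O(sigma^5)
phi = x
for _ in range(5):
    phi = sp.series(x - a*s*(phi**2 - 1) - b*s**2*phi**3,
                    s, 0, 5).removeO()

phi = sp.expand(phi)
for k in range(5):
    # coefficients of sigma^k: x, eta_i, nu_i, lambda_i, mu_i
    print(k, sp.factor(phi.coeff(s, k)))
\end{verbatim}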
Instead of dealing with $p(\bmath{x}\vert a,b )$, it is equivalent, but more convenient, to work with the
log-likelihood ${\mathcal L} \left(\bmath{x}|a,b\right)$:
\begin{equation}
\label{eq:loglike}
{\mathcal L} \left(\bmath{x}|a,b\right) = \log \frac{p\left(\bmath{x}\vert a,b \right)}{p\left(\bmath{x}\vert 0 \right)}.
\end{equation}
A detailed computation of ${\mathcal L} \left(\bmath{x}|a,b\right)$ is given in Appendix A. Let us just recall here
its final expression:
\begin{eqnarray}
\label{eq:loglikegeneral}
\frac{1}{N} {\mathcal L} \left(\bmath{x}|a,b\right) & = & F\sigma + \left(2a^2 - 3b + G\right)\sigma^2 + H\sigma^3 \nonumber \\
& & + \left(12a^4 - 36a^2b + \frac{27}{2}b^2 + I \right)\sigma^4,
\end{eqnarray}
where $N$ is the number of data points, and $F$, $G$, $H$ and $I$ are
functions of $a$ and $b$ (see~\ref{eq:coeffs}). The desired N-pdf, $p(\bmath{x}\vert a,b)$ is
obtained by the inversion of equation~\ref{eq:loglike}, and taking into account that
$p\left(\bmath{x}\vert 0 \right) \equiv p\left(\phi = x\right)$, i.e., the known Gaussian N-pdf in equation~\ref{eq:pdfphi}.
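To make the construction concrete, a direct pixel-space evaluation of $\log p(\bmath{x}\vert a,b)$ could proceed as in the following illustrative Python sketch; it is not the code used in this work, it takes the data already normalized as in equation~\ref{eq:equivalences}, and it assumes a precomputed inverse correlation matrix and its log-determinant:
\begin{verbatim}
import numpy as np

def log_pdf_x(x, a, b, sigma, xi_inv, log_det_xi):
    # log p(x|a,b) of eq. (9), with the truncated inversion (11)-(12)
    s = sigma
    eta = -a*(x**2 - 1)
    nu  = (2*a**2 - b)*x**3 - 2*a**2*x
    lam = (5*a*b - 5*a**3)*x**4 + (6*a**3 - 3*a*b)*x**2 - a**3
    mu  = ((14*a**4 - 21*a**2*b + 3*b**2)*x**5
           + (-20*a**4 + 20*a**2*b)*x**3 + (6*a**4 - 3*a**2*b)*x)
    phi = x + eta*s + nu*s**2 + lam*s**3 + mu*s**4
    # diagonal Jacobian of eq. (10): term-by-term derivative in x
    dphi = (1 - 2*a*x*s
            + ((6*a**2 - 3*b)*x**2 - 2*a**2)*s**2
            + (4*(5*a*b - 5*a**3)*x**3 + 2*(6*a**3 - 3*a*b)*x)*s**3
            + (5*(14*a**4 - 21*a**2*b + 3*b**2)*x**4
               + 3*(-20*a**4 + 20*a**2*b)*x**2
               + (6*a**4 - 3*a**2*b))*s**4)
    n = x.size
    log_gauss = -0.5*(phi @ xi_inv @ phi
                      + log_det_xi + n*np.log(2.0*np.pi))
    return log_gauss + np.sum(np.log(np.abs(dphi)))
\end{verbatim}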
\section{Application to WMAP simulations}
\label{sec:simulations}
In this Section we aim to investigate the performance of the parameters
estimation from the N-pdf derived in the previous Section. We explore
different non-Gaussian scenarios; in particular, we
study three particular cases of special interest:
\begin{itemize}
\item \emph{Case i}) $a\neq0$, $b=0$. This scenario would correspond, for example, to the case of standard slow-roll inflation.
\item \emph{Case ii}) $a=0$, $b\neq0$. This scenario would correspond to the particular situation for some curvaton models.
\item \emph{Case iii}) $a\neq0$, $b\neq0$. It is a generic case, not representing any specific inflationary
model, but rather a general scenario.
\end{itemize}
In particular, we will study how the determination of the parameters governing the
non-Gaussian terms is performed, and what is the
impact when one explores different configurations. In the next subsections, we will focus, first, on
the case in which a standard slow-roll-like scenario is assumed (i.e., we only try to adjust for the \emph{quadratic} term,
assuming the \emph{cubic} one is negligible), whether the data actually correspond to a pure \emph{quadratic} or \emph{cubic} model, or to a
general non-Gaussian scenario. Second, we will follow a similar analysis, but assuming the estimation of a pure \emph{cubic} term.
Finally, we
will address the case of a joint estimation of both (\emph{quadratic} and \emph{cubic}) terms. In the following, we will express
all our results in terms of the non-linear coupling parameters (f$_\mathrm{NL}$ and g$_\mathrm{NL}$), rather than the $a$ and $b$ coefficients.
In order to carry out this analysis, we have generated Gaussian CMB simulations coherent with the model induced from the WMAP
5-year data at \textsc{\small{NSIDE}}=32 HEALPix~\citep{gorski05} resolution ($\approx 2^\circ$).
The procedure to generate a CMB Gaussian simulation ---$\left(\Delta
T\right)_G$ in equation~\ref{eq:physical_model}--- is as follows.
First, we simulate WMAP observations for the Q1, Q2, V1,
V2, W1, W2, W3, W4 difference assemblies at \textsc{\small{NSIDE}}=512 HEALPix
resolution. The $C_\ell$ obtained with the cosmological parameters
provided by the best-fit to WMAP data alone~\citep[Table 6
in][]{hinshaw09}, are assumed.
Second, a single co-added CMB map is computed afterwards through a
noise-weighted linear combination of the eight maps (from Q1 to W4).
The weights used in this linear combination are proportional to the inverse mean noise variance
provided by the WMAP team. They are
independent of the position (i.e., they are uniform
across the sky for a given difference assembly) and they are
forced to be normalized to unity.
Notice that we have not added Gaussian white noise to the different difference assembly maps,
since we have already checked that instrumental noise plays a
negligible role at the angular resolution in which we are interested~\citep[$\approx 2^\circ$, see][for details]{vielva09}.
Third, the co-added map at
\textsc{\small{NSIDE}}=512 is degraded down to the final resolution of \textsc{\small{NSIDE}}=32, and
a mask representing a sky coverage like the one allowed by
the WMAP KQ75 mask~\citep{gold09} is adopted. At \textsc{\small{NSIDE}}=32 the
mask keeps around 69\% of the sky.
The mask is given in figure~\ref{fig:mask}. Let us remark that observational constraints
from an incomplete sky coverage can be easily taken into account by the
local non-Gaussian model proposed in this work, since it is
naturally defined in pixel space. This is not the case for other common estimators like
the bispectrum, where the presence of an incomplete sky coverage is usually translated into a
loss of efficiency.
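For reference, the three steps just described map onto a few lines of healpy/numpy code. In this sketch the arrays maps (the eight DA maps at \textsc{\small{NSIDE}}=512), noise_var (their mean noise variances) and kq75 (the KQ75 mask) are assumed to be already loaded, and the threshold applied to the degraded mask is our own choice:
\begin{verbatim}
import numpy as np
import healpy as hp

w = 1.0 / np.asarray(noise_var)           # inverse-variance weights
w /= w.sum()                              # normalized to unity
coadded = np.sum(w[:, None] * np.asarray(maps), axis=0)

m32 = hp.ud_grade(coadded, nside_out=32)  # degrade to ~2 deg pixels
mask32 = hp.ud_grade(kq75, nside_out=32) > 0.9   # assumed threshold
x_pix = m32[mask32]                       # the N observed pixels
\end{verbatim}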
\begin{figure}
\includegraphics[width=8cm,keepaspectratio]{./mask_nest.pdf}
\caption{\label{fig:mask}Mask at \textsc{\small{NSIDE}}=32 HEALPix resolution used
in this work. It corresponds to the WMAP KQ75 mask, although the
point source masking has not been considered, since the point
like-emission due to extragalactic sources is negligible at the
considered resolution. At this pixel resolution, the mask keeps around
69\% of the sky.}
\end{figure}
We have generated 500000 simulations of $\left(\Delta T\right)_G$, computed as
described above, to estimate the correlation matrix $\bmath{\xi}$
accounting for the Gaussian CMB cross-correlations. We have
checked that this large number of simulations is enough to obtain an
accurate description of the CMB Gaussian fluctuations.
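Schematically, the matrix $\bmath{\xi}$ and the normalization $\sigma$ are accumulated directly from those simulations; in the following sketch, simulate_coadded_map() stands for the pipeline of the previous paragraph and is, of course, a hypothetical helper:
\begin{verbatim}
npix = int(mask32.sum())
C = np.zeros((npix, npix))
for _ in range(500000):
    d = simulate_coadded_map()[mask32]   # one Gaussian co-added map
    C += np.outer(d, d)
C /= 500000.0
sigma = np.sqrt(np.mean(np.diag(C)))     # sigma^2 = <(Delta T)_G^2>
s_pix = np.sqrt(np.diag(C))
xi = C / np.outer(s_pix, s_pix)          # normalized correlations
xi_inv = np.linalg.inv(xi)
_, log_det_xi = np.linalg.slogdet(xi)
\end{verbatim}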
In addition, we have generated another set of 1000 simulations. These are
required to carry out the statistical analysis to check the performance
of the parameter estimation.
Each one of these 1000 $\left(\Delta T\right)_G$ simulations are
transformed into $\bmath{x}$ (following
equations~\ref{eq:physical_model} and~\ref{eq:equivalences}) to
study the response of the statistical tools as a function of the
non-linear parameters defining the local non-Gaussian
model proposed in equation~\ref{eq:model}.
Finally, let us remark that hereinafter the likelihood maximization is simply performed
by exploring a grid of values in the parameter space of the non-linear coupling
parameters. The step used in the grid is small enough to guarantee a good estimation
both of the likelihood peak and tails.
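In code, the grid exploration is a plain double loop over candidate values; the grid limits and steps below are arbitrary illustrative choices, and log_pdf_x is the sketch given in Section~\ref{sec:model}:
\begin{verbatim}
fnl_grid = np.linspace(-300.0, 300.0, 301)
gnl_grid = np.linspace(-1.0e6, 1.0e6, 201)
logL = np.empty((fnl_grid.size, gnl_grid.size))
for i, fnl in enumerate(fnl_grid):
    for j, gnl in enumerate(gnl_grid):
        a = -3.0*fnl/T0
        b = gnl*(3.0/T0)**2
        logL[i, j] = log_pdf_x(x_pix/sigma, a, b, sigma,
                               xi_inv, log_det_xi)
i0, j0 = np.unravel_index(np.argmax(logL), logL.shape)
fnl_hat, gnl_hat = fnl_grid[i0], gnl_grid[j0]
\end{verbatim}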
\subsection{The recovery of f$_\mathrm{NL}$ in the presence of a \emph{cubic} term}
\begin{figure*}
\begin{center}
\includegraphics[angle=90,width=15cm]{./loglike_a_sims.pdf}
\caption{\label{fig:fnl_response}These panels represent the accuracy and efficiency on the estimation
of the f$_\mathrm{NL}$ parameter. From left to right, the columns correspond to simulated
$\bar{\rm g}_\mathrm{NL}$ values of: 0, $3\times 10^{5}$, $5\times 10^{5}$ and $10^{6}$.
Similarly, from top to bottom, rows correspond to simulated $\bar{\rm f}_\mathrm{NL}$ values of:
0, 200, 400 and 600. The histograms show the distribution of the obtained values of
$\hat{\rm f}_\mathrm{NL}$ for each case. The vertical dashed lines indicate the simulated $\bar{\rm f}_\mathrm{NL}$ value, and help
to identify the presence of a possible bias.}
\end{center}
\end{figure*}
The results obtained from the 1000 simulations are given in figure~\ref{fig:fnl_response}.
We have explored 16 different non-Gaussian models, accounting for all the possible combinations
obtained with simulated $\bar{\rm g}_\mathrm{NL}$ values of 0, $3\times 10^{5}$, $5\times 10^{5}$ and $10^{6}$, and $\bar{\rm f}_\mathrm{NL}$
values of 0, 200, 400 and 600. For each panel, we present the histogram of the maximum-likelihood estimation of
the non-linear coupling \emph{quadratic} parameter: $\hat{\rm f}_\mathrm{NL}$. Notice that we refer to a simulated
value of a given non-linear coupling parameter (x$_\mathrm{NL}$) as $\bar{\rm x}_\mathrm{NL}$, whereas its estimation via
the maximum-likelihood is denoted as $\hat{\rm x}_\mathrm{NL}$. Vertical dashed lines in each panel indicate the simulated
value of the parameter.
As it can be noticed from the figure, when the simulated data satisfies the condition of the particular
explored model (i.e., $\bar{\rm g}_\mathrm{NL} \equiv 0$, first column), the f$_\mathrm{NL}$ is accurately and efficiently
estimated, at least for values of $\bar{\rm f}_\mathrm{NL} < 600$. Actually, this is a result that we already obtained in~\cite{vielva09},
which indicates that for $\bar{\rm f}_\mathrm{NL} > 600$ the perturbative model starts to be no longer valid.
However, when the simulated non-Gaussian maps also contain a significant contribution from a \emph{cubic} term, the bias in the
determination of the f$_\mathrm{NL}$ parameter starts to be evident already for lower values of the simulated $\bar{\rm f}_\mathrm{NL}$. It is
interesting to notice that, even if the simulated $\bar{\rm g}_\mathrm{NL}$ is large (for instance $\bar{\rm g}_\mathrm{NL} = 10^6$), we do
not see any significant bias in $\hat{\rm f}_\mathrm{NL}$, for simulated $\bar{\rm f}_\mathrm{NL}$ values lower than 200.
Summarizing, we can infer that for non-Gaussian scenarios with $|\bar{\rm f}_\mathrm{NL}| \lesssim 400$ and
$|\bar{\rm g}_\mathrm{NL}| \lesssim 5\times 10^{5}$, no significant bias on the estimation of a pure \emph{quadratic} term
is found. It is worth mentioning that this range of values is in agreement with predictions from most of the physically motivated
non-Gaussian inflationary models. Notice that, in general, even for the cases in which a bias is observed, the efficiency in
the determination of f$_\mathrm{NL}$ (somehow related to the width of the histograms) is almost unaltered.
\subsection{The recovery of g$_\mathrm{NL}$ in the presence of a \emph{quadratic} term}
\begin{figure*}
\begin{center}
\includegraphics[angle=90,width=15cm]{./loglike_b_sims.pdf}
\caption{\label{fig:gnl_response}These panels represent the accuracy and efficiency on the estimation
of the g$_\mathrm{NL}$ parameter. From left to right, the columns correspond to simulated
$\bar{\rm g}_\mathrm{NL}$ values of: $0$, $3\times 10^{5}$, $5\times 10^{5}$ and $10^{6}$.
Similarly, from top to bottom, rows correspond to simulated $\bar{\rm f}_\mathrm{NL}$ values of:
0, 200, 400 and 600. The histograms show the distribution of the obtained values of
$\hat{\rm g}_\mathrm{NL}$ for each case. The vertical dashed lines indicate the simulated $\bar{\rm g}_\mathrm{NL}$ value, and help
to identify the presence of a possible bias.}
\end{center}
\end{figure*}
As for the previous case, a graphical representation of the results obtained from the 1000 simulations is given in figure~\ref{fig:gnl_response}.
We have explored the same 16 different non-Gaussian models already described above.
As it can be noticed from the figure, when the simulated data
corresponds to the explored model (i.e., $\bar{\rm f}_\mathrm{NL} \equiv 0$, first row), the g$_\mathrm{NL}$ parameter is reasonably estimated,
at least for simulated $\bar{\rm g}_\mathrm{NL} < 10^6$.
However, when the simulated non-Gaussian maps also contain a significant contribution from the \emph{quadratic} term, a bias in the
determination of the g$_\mathrm{NL}$ parameter starts to be noticeable for lower values of the simulated $\bar{\rm g}_\mathrm{NL}$ coefficient. In particular, the
plots of the first column (i.e., $\bar{\rm g}_\mathrm{NL} \equiv 0$) show a clear bias on $\hat{\rm g}_\mathrm{NL}$. This
indicates that, when the analyzed case corresponds to a pure \emph{quadratic} scenario, and a pure \emph{cubic} model is assumed, the g$_\mathrm{NL}$
estimator is sensitive to the \emph{quadratic} non-Gaussianity and, somehow, it absorbs the non-Gaussianity in the
form of a fake \emph{cubic} term. In particular, an input value of $\bar{\rm f}_\mathrm{NL} \equiv 400$ is determined as a pure $\hat{\rm g}_\mathrm{NL} \approx 2.5\times 10^5$. Notice that this was not the
situation for the previous case, where the f$_\mathrm{NL}$ estimation was not sensitive to the presence of a pure \emph{cubic} model
(at least for reasonable values of $\bar{\rm g}_\mathrm{NL}$). This is an expected result, since any skewed distribution implies the presence of
a certain degree of kurtosis, whereas the opposite is not necessarily true.
\subsection{The general case: the joint recovery of f$_\mathrm{NL}$ and g$_\mathrm{NL}$}
\begin{figure*}
\begin{center}
\includegraphics[angle=90,width=15cm]{./loglike_a_b_sims.pdf}
\caption{\label{fig:fnl_gnl_response}These panels represent the accuracy and efficiency on the joint estimation
of the f$_\mathrm{NL}$ and g$_\mathrm{NL}$ parameters. From left to right, the columns correspond to simulated
$\bar{\rm g}_\mathrm{NL}$ values of: $0$, $3\times 10^{5}$, $5\times 10^{5}$ and $10^{6}$.
Similarly, from top to bottom, rows correspond to simulated $\bar{\rm f}_\mathrm{NL}$ values of:
0, 200, 400 and 600. The contours show the distribution of the obtained values of the
pair $\left( \hat{\rm f}_\mathrm{NL}, \hat{\rm g}_\mathrm{NL} \right)$ for each case. The vertical and horizontal dashed lines indicate the
simulated $\bar{\rm f}_\mathrm{NL}$ and $\bar{\rm g}_\mathrm{NL}$ values, respectively, and help
to identify the presence of a possible bias.}
\end{center}
\end{figure*}
Finally, we have also explored the case of a joint estimation of the \emph{quadratic} and \emph{cubic} terms. The results obtained from the 1000 simulations
are given in figure~\ref{fig:fnl_gnl_response}.
As for the previous cases, we have explored the same 16 different non-Gaussian models already described above.
The plots represent the contours of the 2-D histograms obtained for the pair $\left( \hat{\rm f}_\mathrm{NL}, \hat{\rm g}_\mathrm{NL} \right)$ of
the maximum-likelihood estimation. Vertical and horizontal dashed lines indicate the simulated $\bar{\rm f}_\mathrm{NL}$ and
$\bar{\rm g}_\mathrm{NL}$ values, respectively.
As it can be noticed from the figure, only for the regime $|\bar{\rm f}_\mathrm{NL}| \lesssim 400$ and
$|\bar{\rm g}_\mathrm{NL}| \lesssim 5\times 10^{5}$, we obtain an accurate and efficient estimation of the non-linear coupling parameters.
As it was reported above, this regime corresponds to the boundaries obtained from the pure f$_\mathrm{NL}$ case.
It is interesting to notice the presence of very large biases for cases outside of the previous range. In particular, estimations tend to
move towards a region of the parameter space with larger values of both f$_\mathrm{NL}$ and g$_\mathrm{NL}$. Only a secondary peak in the 2-D histogram
corresponds to the simulated pair of values.
This result, combined with the previous ones, indicates that the non-Gaussian model proposed in equation~\ref{eq:physical_model}
is only valid up to values of the \emph{quadratic} and \emph{cubic} terms of around $1\%$ and $0.05\%$, respectively.
\section{Application to WMAP 5-year data}
\label{sec:wmap}
We have studied the compatibility of the WMAP 5-year data with a non-Gaussian model as the
one described in equation~\ref{eq:physical_model}.
In particular, we
have analyzed the co-added CMB map generated from the global noise-weighted
linear combination of the reduced foreground maps for the Q1, Q2,
V1, V2, W1, W2, W3 and W4 difference assemblies~\citep[see][for
details]{gold09}. The weights are proportional to the inverse average noise variance across
the sky, and are normalized to unity.
This linear combination is made at \textsc{\small{NSIDE}}=512 HEALPix resolution,
being degraded afterwards down to \textsc{\small{NSIDE}}=32.
Under these circumstances, we are in the same condition as for the analysis performed on
the simulations described in Section~\ref{sec:simulations}. Therefore, the theoretical multinormal
covariance of the CMB temperature fluctuations ($\bmath{\xi}$) is the one already computed
with the 500000 simulations (see previous Section).
Two different analyses were performed. The first accounts for an all-sky study (except for the
sky regions covered by the Galactic mask described in the previous Section), where constraints
on the non-linear coupling parameters from different scenarios are presented. We will present
as well results derived from a model selection approach, where we investigate which are the
models that are more favoured by the data. The second analysis explores the
WMAP data compatibility with the local non-Gaussian model in two different hemispheres. In particular,
we have studied independently the two hemispheres related to the dipolar pattern described
in~\cite{hoftuft09}.
\subsection{All-sky analysis}
\begin{figure*}
\begin{center}
\includegraphics[width=5.8cm,keepaspectratio]{./wmap_fnl.pdf}
\includegraphics[width=5.8cm,keepaspectratio]{./wmap_gnl.pdf}
\includegraphics[width=5.8cm,keepaspectratio]{./wmap_fnl_gnl.pdf}
\caption{\label{fig:data_like}Likelihood distribution
of the non-linear parameters obtained by analyzing the WMAP 5-year data, according to the local non-Gaussian model given in equation~\ref{eq:physical_model}.
Left panel correspond to a pure f$_\mathrm{NL}$ analysis: $p\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$. Middle plot shows
the result for a pure g$_\mathrm{NL}$ model: $p\left(\bmath{x} | {\rm g}_\mathrm{NL} \right)$. Right panel provides the 68\%, 95\% and 99\%
contour levels of the likelihood obtained from a joint f$_\mathrm{NL}$, g$_\mathrm{NL}$ analysis of the WMAP 5-year data:
$p\left(\bmath{x} | {\rm f}_\mathrm{NL}, {\rm g}_\mathrm{NL} \right)$.}
\end{center}
\end{figure*}
We have computed the full N-pdf in
equation~\ref{eq:pdfx}, for three different scenarios: a non-Gaussian model with a pure
\emph{quadratic} term (i.e., g$_\mathrm{NL} \equiv 0$), another case with a pure \emph{cubic} term
(i.e., f$_\mathrm{NL} \equiv 0$), and a general non-Gaussian model (i.e., f$_\mathrm{NL} \neq 0$ and g$_\mathrm{NL} \neq 0$).
Results are given in figure~\ref{fig:data_like}. Left panel shows the likelihood obtained for the first
case: $p\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$. Actually, this result is the one
that we already obtained in our previous work~\citep{vielva09}. The maximum-likelihood estimation for the
\emph{quadratic} factor is $\hat{{\rm f}}_\mathrm{NL} = -32$\footnote{Notice that in~\cite{vielva09} we used a different
definition between the primordial gravitational potential $\Phi$ and the CMB temperature. This difference implies that,
in our previous work, the f$_\mathrm{NL}$ parameter had an opposite sign with respect to the definition used in
this paper.}. The constraint on the non-Gaussian parameter
is: $-154 < {\rm f}_\mathrm{NL} < 94$ at 95\%.
The middle panel in figure~\ref{fig:data_like} presents the likelihood obtained from a model with a pure \emph{cubic} term:
$p\left(\bmath{x} | {\rm g}_\mathrm{NL} \right)$. The maximum-likelihood estimation for the
\emph{cubic} factor is $\hat{{\rm g}}_\mathrm{NL} = 42785$. The constraint on the parameter is:
$-5.6\times 10^5 < {\rm g}_\mathrm{NL} < 6.4\times 10^5$ at 95\%. This result is compatible with a previous finding
obtained from the analysis of LSS data~\citep{desjacques09}. The result reported in this work is, as far as we know, the first direct
constraint of g$_\mathrm{NL}$ from CMB data alone.
The right panel in figure~\ref{fig:data_like} shows the contour levels at the 68\%, 95\% and 99\% CL, for the
likelihood obtained from an analysis of a general \emph{quadratic} and \emph{cubic} model: $p\left(\bmath{x} | {\rm f}_\mathrm{NL}, {\rm g}_\mathrm{NL} \right)$.
Notice that the maximum-likelihood estimations
for the f$_\mathrm{NL}$ and g$_\mathrm{NL}$ parameters are similar to those obtained from the previous cases (where the pure models
were investigated). Even more, the marginalized distributions for the two parameters are extremely similar to the
likelihood distributions discussed previously, and, therefore, the constraints on the non-linear coupling
parameters are virtually the same.
Finally, we want to say a few words about two issues: the incorporation of possible \emph{a priori} information related to the parameters
defining the non-Gaussian model, and the application of model selection criteria (or hypothesis tests) to discriminate
among the
Gaussian model and different non-Gaussian models.
As we largely discussed in our
previous work~\citep{vielva09}, one of the major advantages of computing the full N-pdf on the non-Gaussian model is that,
in addition to provide a maximum-likelihood estimation for the non-linear coupling parameters, we have a full description
of the statistical properties of the problem.
More particularly, if we had any physically (or empirically) motivated prior for the f$_\mathrm{NL}$ and g$_\mathrm{NL}$ parameters, it
could be used together with the likelihood function to perform a full Bayesian parameter estimation.
This aspect has not
been considered in this work, precisely because such a well motivated prior is lacking.
Actually, a possible and trivial piece of \emph{a priori} information that
could be used in this specific case would be to limit the range of values that can be taken by
f$_\mathrm{NL}$ and g$_\mathrm{NL}$, such that the non-Gaussian model is, indeed, a
local perturbation of a Gaussian field (i.e., the typical values that we discussed in Section~\ref{sec:simulations}).
However, these priors do not seem to be very useful since, first, we do not have any evidence
for choosing any form for the prior other than a uniform distribution over the parameter ranges; and, second, the limits of these ranges are
somewhat arbitrary. This kind of prior does not provide any further knowledge on the Bayesian parameter
determination: as is well known, such an estimation would be totally driven by the likelihood itself, since the likelihood is fully contained within
any reasonable \emph{a priori} range.
The possibility of performing a model selection approach is an extra advantage of dealing with the full N-pdf.
Of course, in the presence of a hypothetical well-motivated prior on the non-linear coupling
parameters, model selection could be done in terms of the Bayesian evidence or the ratio of posterior
probabilities~\citep[see][for a specific discussion on this application]{vielva09}.
However, the lack of such a prior (as we discussed above), makes the application of a full Bayesian model selection
framework significantly less powerful than in other situations:
as it is very well known, the use of uniform priors for all the
parameters would provide very little information,
since the results would be very much dependent on the size of the parameter ranges.
Despite this, we can still make worthwhile use of the likelihood to perform model selection.
In particular, some asymptotic model selection criteria, like the \emph{Akaike Information Criteria}~\citep[AIC,][]{akaike73} and the
\emph{Bayesian Information Criteria}~\citep[BIC,][]{schwarz78}, can be applied. Both methods provide a ranking index for competing hypotheses, where the
most likely one is indicated by the lowest value of the index. The AIC and BIC indices depend on the maximum value of
the log-likelihood ($\max \left[ {\mathcal L} \left(\bmath{x} | \Theta \right) \right] \equiv \hat{{\mathcal L}}$):
\begin{eqnarray}
{\rm AIC} \left( H_i \right) & = & 2\left(p - \hat{{\mathcal L}}\right), \nonumber \\
{\rm BIC} \left( H_i \right) & = & 2\left(\frac{p}{2}\log N - \hat{{\mathcal L}}\right), \nonumber
\end{eqnarray}
where $p$ is the number of parameters that determine the hypothesis or model $H_i$. We have applied these two asymptotic
model selection criteria to the WMAP 5-year data. Defining the Gaussian model as $H_0$, the pure \emph{quadratic} model as
$H_1$, the pure \emph{cubic} model as $H_2$, and the general non-Gaussian model as $H_3$, and considering the maximum
value for the log-likelihoods obtained for all these cases, we obtain:
${\rm AIC} \left( H_0 \right) < {\rm AIC} \left( H_1 \right) < {\rm AIC} \left( H_2 \right) < {\rm AIC} \left( H_3 \right)$,
and
${\rm BIC} \left( H_0 \right) < {\rm BIC} \left( H_1 \right) < {\rm BIC} \left( H_2 \right) < {\rm BIC} \left( H_3 \right)$.
That is, the most likely model is the Gaussian one (which is in agreement with the results obtained from the
parameter determination, since f$_\mathrm{NL} \equiv 0$ and g$_\mathrm{NL} \equiv 0$
cannot be rejected at any meaningful confidence level). Among the non-Gaussian models, a pure f$_\mathrm{NL}$ model is the most likely
scenario, with a joint f$_\mathrm{NL}$, g$_\mathrm{NL}$ model being the most disfavoured by the WMAP 5-year data.
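In terms of code, both indices are one-liners once the maximized log-likelihoods $\hat{{\mathcal L}}$ are available; a minimal sketch:
\begin{verbatim}
import numpy as np

def aic(p, L_hat):
    return 2.0*(p - L_hat)

def bic(p, L_hat, N):
    return p*np.log(N) - 2.0*L_hat

# H0: Gaussian (p=0); H1: pure quadratic (p=1);
# H2: pure cubic (p=1); H3: joint model (p=2)
\end{verbatim}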
\subsection{Hemispherical analysis}
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,keepaspectratio]{./joint_north_nest.pdf}
\includegraphics[width=8cm,keepaspectratio]{./joint_south_nest.pdf}
\caption{\label{fig:dipole_sky}These plots show the areas of the sky that are
independently analyzed. The panel on the left accounts for the sky that, being
allowed by the Galactic mask (see Section~\ref{sec:simulations}), corresponds to the
\emph{Northern hemisphere} of the sky division considered in this work.
Equivalently, the right panel presents the region of the sky that is
analyzed when the \emph{Southern hemisphere} is addressed.}
\end{center}
\end{figure*}
Among the many WMAP \emph{anomalies} that have been reported in the
literature, an anisotropy manifested in the form of a hemispherical asymmetry is one
of the most extensively studied topics~\citep[e.g.,][]{eriksen04a,hansen04b}. Most of the works related to
this issue have reported that such asymmetry is most marked for a north--south
hemispherical division relatively close to the Northern and Southern Ecliptic hemispheres.
In a recent work,~\cite{hoftuft09} reported that large-scale WMAP data were compatible with
this kind of anisotropy, in the form of a dipolar modulation defined by
a preferred direction pointing toward the Galactic coordinates $\left(l, b \right) =
\left(224^\circ, -22^\circ \right)$.
Motivated by these results, we have repeated the analysis described in the previous
subsection, but addressing independently the two hemispheres associated with
the dipolar pattern found by~\cite{hoftuft09}. Hereinafter, we will refer to
the \emph{Northern hemisphere} of this dipolar pattern as the half of the celestial sphere
whose pole is closer to the Northern Ecliptic Pole, and, equivalently, we will
indicate as the \emph{Southern hemisphere} the complementary half of the sky.
The corresponding areas of the sky that are analyzed are shown in figure~\ref{fig:dipole_sky}.
The left and right panels show the allowed sky regions, when the \emph{Northern} and
\emph{Southern hemispheres} of the dipolar pattern are independently addressed, respectively.
Notice that the regions not allowed by the Galactic mask are also excluded from the analysis.
The portion of the sky that is analyzed is around 34\% for the \emph{Northern hemisphere}, and
around 35\% for the \emph{Southern} half.
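A minimal sketch (in Python, using the healpy package, which implements the HEALPix pixelisation employed throughout this work; both are assumptions of the sketch) of how such a hemispherical split can be defined from the preferred direction. The sign convention deciding which half is labelled \emph{Northern} is our choice, and the Galactic mask must be applied on top of this split.
\begin{verbatim}
import numpy as np
import healpy as hp  # assumed available

nside = 32  # resolution used in the analysis
l, b = np.radians(224.0), np.radians(-22.0)  # preferred direction
axis = hp.ang2vec(np.pi / 2.0 - b, l)        # unit vector of the axis

# Unit vectors of all pixel centres at this resolution
pix = np.arange(hp.nside2npix(nside))
vecs = np.array(hp.pix2vec(nside, pix))

# Split the sphere into the two opposite hemispheres
hemi_a = axis @ vecs > 0.0   # half containing (l, b)
hemi_b = ~hemi_a             # complementary half
print(hemi_a.sum(), hemi_b.sum())  # roughly equal halves
\end{verbatim}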
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,keepaspectratio]{./wmap_in_dipole_fnl.pdf}
\includegraphics[width=8cm,keepaspectratio]{./wmap_in_dipole_gnl.pdf}
\caption{\label{fig:dipole_posterior}The left panel presents the likelihood of f$_\mathrm{NL}$
obtained from a pure \emph{quadratic} analysis ---$p\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$---, whereas
the right panel provides the likelihood of g$_\mathrm{NL}$ from a pure \emph{cubic} study ---$p\left(\bmath{x} | {\rm g}_\mathrm{NL} \right)$.
Dashed lines correspond to the \emph{Northern hemisphere}, whereas dot-dashed lines are
for the \emph{Southern} half. Vertical lines indicate the maximum likelihood
estimates of the non-linear coupling parameters: $\hat{\rm f}_\mathrm{NL}$ and $\hat{\rm g}_\mathrm{NL}$.}
\end{center}
\end{figure*}
As discussed in the previous subsection, the constraints on the f$_\mathrm{NL}$ and g$_\mathrm{NL}$
parameters obtained from the analysis of pure \emph{quadratic} and \emph{cubic} non-Gaussian
models do not differ significantly from those obtained from a general analysis of
a joint scenario. This is expected in a regime of relatively low values of the non-linear coupling parameters.
For that reason, in the present study we only consider the following two
cases: a pure \emph{quadratic} (i.e., g$_\mathrm{NL} \equiv 0$) model and a pure \emph{cubic} (i.e., f$_\mathrm{NL} \equiv 0$) model.
Results are given in figure~\ref{fig:dipole_posterior}. We present the likelihood
for the first case ---$p\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$--- in the left
panel, and the one corresponding to the second case ---$p\left(\bmath{x} | {\rm g}_\mathrm{NL} \right)$---
in the right panel. Each plot shows the results for the \emph{Northern} (dashed lines) and the \emph{Southern}
(dot-dashed lines) hemispheres. The maximum likelihood estimates of the non-linear coupling parameters are
given as vertical lines.
The right panel shows that both hemispheres have a similar likelihood ($p\left(\bmath{x} | {\rm g}_\mathrm{NL} \right)$)
for the case of a pure \emph{cubic} model. Interestingly, however, this is not the case when addressing a
g$_\mathrm{NL} \equiv 0$ scenario. Here we notice two important results. First, whereas the f$_\mathrm{NL}$ estimation from
the analysis of the \emph{Northern hemisphere} provides a constraint compatible
with the Gaussian scenario, this is not the case for the \emph{Southern hemisphere}. In fact, we find that
f$_\mathrm{NL} < 0$ at 96\% CL; in particular, $\hat{\rm f}_\mathrm{NL} = -164 \pm 62$. Second, the distance between both distributions
is anomalously large. Let us make use of the Kullback--Leibler divergence~\citep[KLD,][]{kullback51} as a measure
of the distance between the two likelihoods $p_n\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$ and $p_s\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$:
\begin{equation}
D_{n,s} = \int {\rm d}{\rm f}_\mathrm{NL}\, p_n\left(\bmath{x} | {\rm f}_\mathrm{NL} \right) \log{\frac{p_n\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)}{p_s\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)}},
\end{equation}
where $p_n\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$ and $p_s\left(\bmath{x} | {\rm f}_\mathrm{NL} \right)$ are the likelihoods for the
\emph{Northern} and the \emph{Southern hemispheres}, respectively.
Actually, we use the symmetrized statistic $D$, defined as:
\begin{equation}
D = \frac{1}{2}\left( D_{n,s} + D_{s,n} \right).
\end{equation}
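For concreteness, a minimal numerical sketch (in Python) of this symmetrized divergence computed on a grid of f$_\mathrm{NL}$ values is given below; the Gaussian-shaped likelihood curves are placeholders (only the Southern centre and width echo the numbers quoted above), and the p-value helper merely illustrates how $D$ would be calibrated against simulations.
\begin{verbatim}
import numpy as np

def kld(p, q, x):
    # Discrete approximation of D = int dx p(x) log(p(x)/q(x))
    dx = x[1] - x[0]              # uniform grid assumed
    m = (p > 0) & (q > 0)
    return np.sum(p[m] * np.log(p[m] / q[m])) * dx

def sym_kld(p_n, p_s, x):
    # Normalise to unit area, then D = (D_ns + D_sn) / 2
    dx = x[1] - x[0]
    p_n = p_n / (np.sum(p_n) * dx)
    p_s = p_s / (np.sum(p_s) * dx)
    return 0.5 * (kld(p_n, p_s, x) + kld(p_s, p_n, x))

def p_value(d_obs, d_sims):
    # Fraction of simulated skies with a distance at least as large
    return np.mean(np.asarray(d_sims) >= d_obs)

fnl = np.linspace(-600.0, 600.0, 2001)
p_north = np.exp(-0.5 * ((fnl - 25.0) / 70.0) ** 2)   # placeholder
p_south = np.exp(-0.5 * ((fnl + 164.0) / 62.0) ** 2)  # -164 +/- 62
print(sym_kld(p_north, p_south, fnl))
\end{verbatim}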
We have found that
the distance $D$ for the likelihood distributions of the f$_\mathrm{NL}$ parameter estimated
in the \emph{Northern} and the \emph{Southern hemispheres} defined by the dipolar pattern described by~\cite{hoftuft09}
is much larger than would be expected from Gaussian and isotropic random CMB simulations. In particular,
such a distance has a p-value $\approx 0.04$. This result provides further evidence for the widely discussed WMAP
North--South asymmetry, and indicates as well that such asymmetry is manifested in terms of the non-Gaussianity
of the CMB temperature fluctuations, in agreement with previous
results~\citep[e.g.,][]{park04,hansen04b,eriksen04b,vielva04,cruz05,eriksen05,land05,monteserin08,rath09,rossmanith09}.
At this point, it is worth recalling that the analysis described above has been performed assuming isotropy, i.e., we have used the same type of correlations to describe the second-order statistics in both the \emph{Northern} and the \emph{Southern hemispheres}.
However, the result obtained by~\cite{hoftuft09} indicates that these two hemispheres might be described by two different correlations (i.e., the sky would no longer be isotropic).
The dipolar modulation proposed by~\cite{hoftuft09} was small (its amplitude was lower than 0.7\%), but significant (a $3.3\sigma$ detection was claimed).
Taking this into account, we have repeated our previous analysis, but using different statistical properties for the correlation matrices in the two halves.
The new correlation matrices were estimated as follows: we generated 500,000 simulations (in the same way as already described at the beginning of Section~\ref{sec:wmap}), and, once the co-added maps were degraded to \textsc{\small{NSIDE}}=32, each one of the simulations was modified by applying the dipolar modulation estimated by~\cite{hoftuft09} from the WMAP data.
It is from these modulated simulations that we estimated the new correlation matrices needed to compute the likelihood probabilities. The result of this test is presented in figure~\ref{fig:modulated_sims}. The conclusions related to the f$_\mathrm{NL}$ estimation are essentially the same: on the one hand, the analysis of the \emph{Northern hemisphere} provides a constraint compatible with the Gaussian scenario, whereas the one from the \emph{Southern hemisphere} does not; on the other hand, the distance between both distributions is again too large (its p-value --as compared, in this case, to anisotropic simulations and estimated in terms of the KLD-- increases up to $\approx 0.09$).
However, despite this slight change in the f$_\mathrm{NL}$ hemispherical estimation, dramatic differences can be observed for the pure \emph{cubic} scenario.
Interestingly, accounting for the dipolar modulation correction reveals an additional departure from isotropy related to the g$_\mathrm{NL}$ constraints. The dipolar modulation makes the maximum likelihood estimates of g$_\mathrm{NL}$ highly incompatible between both hemispheres. In particular, the distance between both distributions is extremely rare as compared with the expected behaviour from
Gaussian and anisotropic CMB simulations (generated, as explained above, by applying the dipolar modulation reported by~\cite{hoftuft09}): it has a p-value of $\approx 0.002$, in the sense of the KLD.
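A sketch (in Python with healpy, both assumptions of the sketch) of the dipolar modulation applied to the degraded simulations, $T'(\hat{n}) = T(\hat{n})\left[1 + A\, \hat{d}\cdot\hat{n}\right]$; the amplitude and direction passed in the usage example are placeholders, not the actual best-fit values of~\cite{hoftuft09}.
\begin{verbatim}
import numpy as np
import healpy as hp  # assumed available

def modulate(cmb_map, amplitude, l_deg, b_deg):
    # T'(n) = T(n) * (1 + A * d.n), with d the preferred direction
    nside = hp.get_nside(cmb_map)
    d = hp.ang2vec(np.radians(90.0 - b_deg), np.radians(l_deg))
    n = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))
    return cmb_map * (1.0 + amplitude * (d @ n))

# e.g. sim_mod = modulate(sim_map, 0.07, 224.0, -22.0)  # placeholder A
\end{verbatim}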
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,keepaspectratio]{./wmap_in_dipole_modulated_fnl.pdf}
\includegraphics[width=8cm,keepaspectratio]{./wmap_in_dipole_modulated_gnl.pdf}
\caption{\label{fig:modulated_sims} As in figure~\ref{fig:dipole_posterior}, but for the case
in which the WMAP data has been analyzed using correlation matrices that account for the dipolar modulation.}
\end{center}
\end{figure*}
We have also studied whether WMAP data corrected for the dipolar modulation found by~\cite{hoftuft09} could
present a behaviour compatible with the Gaussian and isotropic hypotheses. Results for the corrected WMAP data are given in figure~\ref{fig:dipole_posterior_corrected}.
Notice that we do not see any
significant differences from the previous situation (i.e., the case in which the uncorrected WMAP data were analyzed assuming anisotropy): correcting for the
dipolar modulation does not affect
the f$_\mathrm{NL}$ and g$_\mathrm{NL}$ constraints.
Summarizing, the results obtained in this subsection seem to confirm that there is some kind of anomaly related to a hemispherical asymmetry such as the one defined by the dipolar pattern reported by~\cite{hoftuft09}, in the sense of the f$_\mathrm{NL}$ parameter. Moreover, when WMAP data are analyzed using correlations compatible with the dipolar modulation suggested by~\cite{hoftuft09}, not only are asymmetries related to the f$_\mathrm{NL}$ parameter clear, but also ones associated with the \emph{cubic term} (i.e., the g$_\mathrm{NL}$ parameter). Intriguingly, the correction of the WMAP data in terms of this dipolar modulation is not enough to obtain a CMB signal compatible with a Gaussian and isotropic random field.
At this point, it is worth mentioning that the dipolar modulation of~\cite{hoftuft09}
was obtained by considering second-order moments of the CMB data and, therefore, this correction only
addresses the problems related to an asymmetry at this order.
Hence, it is not totally surprising that this dipolar modulation correction is not sufficient to
solve the anomaly reported in this work, since such anomaly is related to higher-order moments.
It is also worth pointing out that we have verified that the dipolar modulation correction of the WMAP data does not affect the
results obtained from an all-sky analysis of the CMB data.
Finally, let us recall that, in a recent work,~\cite{rudjord09} searched for specific asymmetries related to the local estimation of the f$_\mathrm{NL}$ parameter, by using needlets. Contrary to our findings, no significant asymmetry was found in that work when analyzing WMAP data. There are some differences between the analyses that could explain the discrepancy, although they have to be taken as mere suggestions.
First, the kind of non-Gaussianity probed by each work is different: whereas~\cite{rudjord09} explore an f$_\mathrm{NL}$ model that is local in the gravitational potential (from which the non-Gaussian temperature fluctuations are obtained taking into account all the gravitational effects), here we adopt a local model in the Sachs-Wolfe regime. Second, they work at the best WMAP resolution (around 10--20 arcmin), whereas we focus on scales of around $2^\circ$. Third, we explore a specific division of the sky (the one reported by~\cite{hoftuft09}), whereas they consider several divisions that do not necessarily match the one used by us (they explore hemispherical divisions within an interval of around $30^\circ$).
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,keepaspectratio]{./wmap_dipole_corrected_fnl_opt2.pdf}
\includegraphics[width=8cm,keepaspectratio]{./wmap_dipole_corrected_gnl_opt2.pdf}
\caption{\label{fig:dipole_posterior_corrected} As in figure~\ref{fig:dipole_posterior}, but for the case
in which the WMAP data has been corrected by the dipolar modulation.}
\end{center}
\end{figure*}
\section{Conclusions}
\label{sec:final}
We have presented an extension of our previous work~\citep{vielva09}, by
defining a parametric non-Gaussian model for the CMB temperature
fluctuations.
The non-Gaussian model is a local perturbation of the standard CMB Gaussian field,
which (under certain circumstances) is related to
an approximate form of the weak non-linear coupling inflationary model
at scales larger than the
horizon scale at the recombination time
\citep[i.e., above the degree
scale, see for instance][]{komatsu01,liguori03}.
For this model, we are able to build the likelihood of the data given the non-linear
parameters f$_\mathrm{NL}$ and g$_\mathrm{NL}$. From these pdfs, maximum likelihood estimators of these
parameters can be obtained.
We have verified with WMAP-like simulations that the maximum likelihood estimation of the
\emph{quadratic} non-linear coupling parameters ($\hat{\rm f}_\mathrm{NL}$) is unbiased, at least
for a reasonable range of values, even when non-Gaussian simulations also account for a
\emph{cubic} term. In particular, we found that for simulated non-Gaussian coefficients
such that $|\bar{\rm f}_\mathrm{NL}| \lesssim 400$ and $|\bar{\rm g}_\mathrm{NL}| \lesssim 5\times 10^{5}$,
the estimation of f$_\mathrm{NL}$ is accurate and efficient. However, when trying to study
the case in which only a pure \emph{cubic} model is addressed, the situation is different.
In particular, the simulated \emph{quadratic} term has an important impact on the
estimation of g$_\mathrm{NL}$. For instance, if a pure \emph{quadratic} non-Gaussian model
is simulated with a value of $\bar{\rm f}_\mathrm{NL} \equiv 400$, a value of $\hat{\rm g}_\mathrm{NL} \approx 2.5\times 10^5$
is wrongly estimated. This result indicates, obviously, that the \emph{quadratic} term is
more important than the \emph{cubic} one in the expansion of the local non-Gaussian model and, therefore, that
not accounting properly for the former might have a dramatic impact on the latter. Conversely, the
opposite situation is much less likely.
Finally, we have investigated the joint estimation of the f$_\mathrm{NL}$ and g$_\mathrm{NL}$ parameters.
In this case we find that for a similar regime as the one mentioned above
(i.e., $|\bar{\rm f}_\mathrm{NL}| \lesssim 400$ and $|\bar{\rm g}_\mathrm{NL}| \lesssim 5\times 10^{5}$),
an accurate and efficient estimation of the non-linear coupling parameters is obtained.
However, for larger values of these coefficients, we find that the parameter estimation is
highly biased, favouring a region of the parameter space of larger values for both
coefficients.
We have addressed, afterwards, the analysis of the WMAP 5-year data. We have considered two
different analyses. First, we have investigated the case of an all-sky analysis (except
for the Galactic area not allowed by the WMAP KQ75 mask). Second, and motivated by
previous findings, we have performed a separate analysis of two hemispheres. In particular, the
hemispherical division associated with the dipolar pattern found by~\cite{hoftuft09}
was considered.
Regarding the all-sky analysis, we find, for the case in which a pure \emph{quadratic}
model is investigated, the same result that we already found in our previous
work~\citep{vielva09}. In particular, we determine that $-154 < {\rm f}_\mathrm{NL} < 94$ at 95\% CL.
Equivalently, for the case of a pure \emph{cubic} non-Gaussian model we establish
$-5.6\times 10^5 < {\rm g}_\mathrm{NL} < 6.4\times 10^5$ at 95\% CL. This is in agreement with a recent
work by~\cite{desjacques09}, where an analysis of LSS data was performed. The result that
we provide in this paper is, as far as we know, the first direct
constraint on g$_\mathrm{NL}$ from CMB data alone.
Finally, we have also investigated the case of a joint estimation of the \emph{quadratic}
and \emph{cubic} non-linear coupling parameters. In this case, the constraints obtained
on f$_\mathrm{NL}$ and g$_\mathrm{NL}$ are virtually the same as the ones already reported for the independent
analyses.
We have performed a model selection to evaluate which of the four hypotheses (i.e.
Gaussianity, a pure \emph{quadratic} model, a pure \emph{cubic} model and a
general non-Gaussian scenario) is more likely. Since a well motivated prior for the
non-linear coupling parameters is lacking, we have used asymptotic model selection
criteria (like AIC and BIC), instead of more powerful Bayesian approaches, like
the Bayesian Evidence or the posterior ratio test. Both methodologies (AIC and BIC)
indicate that the Gaussian hypothesis is more likely than any of the non-Gaussian
models. We also found that, among the non-Gaussian scenarios, the one with a pure \emph{quadratic}
model is the most favoured, whereas the general one (i.e. f$_\mathrm{NL} \ne 0$ and g$_\mathrm{NL} \ne 0$)
is the most unlikely.
The analysis of the WMAP data in two hemispheres revealed that, whereas
both halves of the sky present similar constraints on the g$_\mathrm{NL}$ parameter (in
both cases not indicating any significant incompatibility with the zero value), the
analysis of a pure \emph{quadratic} scenario showed a clear asymmetry. First, the f$_\mathrm{NL}$ value
in the hemisphere whose pole is closer to the Southern Ecliptic Pole is
$\hat{\rm f}_\mathrm{NL} = -164 \pm 62$, which implies that f$_\mathrm{NL} < 0$ at 96\% CL.
Moreover, the distance between both likelihoods (given in terms of the KLD) presents a p-value
of $\approx 0.04$.
We have also analyzed the WMAP data by considering different correlation properties in each
hemisphere (according to the dipolar modulation described by~\cite{hoftuft09}).
We have verified that the
behaviour found for f$_\mathrm{NL}$ is practically the same as before and that, in addition,
an extra anomaly appears associated with the g$_\mathrm{NL}$ parameter. In particular,
the distance between both likelihoods is anomalously large as well (it corresponds to
a p-value of $\approx 0.002$).
A further test was performed after correcting the WMAP data for the dipolar modulation. In this case the
asymmetries in the maximum-likelihood estimates of both non-linear coupling parameters remain unaltered.
Hence, these results indicate that, as has been previously reported
in other works, there is evidence of anisotropy in the WMAP data, reflected as
an asymmetry between two opposite hemispheres. Such anomaly is related to
a different distribution of the non-linear coupling parameter related to the
\emph{quadratic} term. However, a correction in
terms of a dipolar modulation such as the one proposed by~\cite{hoftuft09} seems
not to be sufficient to account for this anomaly related to the likelihood
distribution of the f$_\mathrm{NL}$ parameter.
\section*{Acknowledgements}
We thank Bel\'en Barreiro and Enrique Mart{\'\i}nez-Gonz\'alez for useful comments. We acknowledge partial financial support
from the Spanish Ministerio de Ciencia e Innovaci{\'o}n project
AYA2007-68058-C03-02. PV also thanks financial support from
the \emph{Ram\'on y Cajal} programme. The authors acknowledge the computer resources, technical
expertise and assistance provided by the Spanish Supercomputing
Network (RES) node at Universidad de Cantabria. We acknowledge the
use of Legacy Archive for Microwave Background Data Analysis
(LAMBDA). Support for it is provided by the NASA Office of Space
Science. The HEALPix package was used throughout the data analysis~\citep{gorski05}.
\section{Abstract}
Spectral energy distributions (SEDs) of the central few tens of parsec region of some of the nearest, best-studied active galactic nuclei (AGN) are presented.
These genuine AGN-core SEDs, mostly from Seyfert galaxies, are
characterised by two main features: an IR bump with the maximum in the 2 -- 10 $\mu$m range, and an X-ray spectrum increasing with frequency in the 1 to $\sim$ 200 keV region. These dominant features are common to Seyfert type 1 and 2 objects alike. In detail, type 1 AGN are clearly distinguished from type 2s by their high spatial resolution SEDs: type 2 AGN exhibit a sharp drop shortward of 2 $\mu$m, with the optical to UV region being fully absorbed; type 1s show instead a gentle 2 $\mu$m drop followed by a secondary, partially-absorbed optical to UV emission bump. On the assumption that the bulk of optical to UV photons generated in these AGN are reprocessed by dust and re-emitted in the IR in an isotropic manner, the IR bump luminosity represents $\gtrsim 70\%$ of the total energy output in these objects; the second energetically important contribution comes from the high energies above 20 keV. \\
Galaxies selected by their warm infrared colours, i.e. presenting a
relatively flat flux distribution in the 12 to 60 $\mu$m range, have often been classified as active galactic nuclei (AGN). The results from these high spatial resolution SEDs
question this criterion as a general rule. It is found that the intrinsic shape of the infrared spectral energy distribution of an AGN and the inferred bolometric luminosity largely depart from those derived from large aperture data. AGN luminosities can be overestimated by up to two orders of magnitude if relying on IR satellite data.
We find these differences to be critical for AGN luminosities
below or about $10^{44}$ erg~s$^{-1}$. Above this limit, AGN tend to dominate the light of their host galaxy regardless of the integration aperture size used. Although the number of objects presented in this work is small, we tentatively mark this luminosity as a threshold to distinguish galaxy-light-dominated from AGN-dominated objects. \\
\section{Introduction \label{section1}}
The study of the spectral energy distributions (SED) over the widest possible spectral range is an optimal way to characterise the properties of galaxies in general. Covering the widest spectral range is the key to differentiating the physical phenomena which dominate at specific spectral ranges: e.g. dust emission in the IR, stellar emission in the optical to UV, non-thermal processes in the X-rays and radio, and to interrelating them, as most of these phenomena involve radiation reprocessing from one spectral range into another. The availability of the SED of a galaxy allows us to determine basic parameters such as its bolometric luminosity (e.g. Elvis et al. 1994; Sanders \& Mirabel 1996; Vasudevan \& Fabian 2007), and via modelling of the SED, its star formation level, mass and age (e.g. Bruzual \& Charlot 2003; Rowan-Robinson et al. 2005; Dale et al. 2007).
The construction of bona-fide SEDs is not easy, as it involves data acquisition from different ranges of the electromagnetic spectrum using very different telescope infrastructure. That already introduces a further complication, as the achieved spatial
resolution, and with it the aperture size used, vary with the spectral range. SEDs based on the integration of the overall galaxy light may be very different from those extracted from only a specific region, for example the nuclear region. In this specific case, the aperture size matters a lot, as different light sources, such as circumnuclear star formation, the active nucleus, and the underlying galaxy light, coexist on small spatial scales and may contribute to the total nuclear output with comparable energies (e.g. Genzel et al. 1998; Reunanen et al. 2009).
In the specific case of SEDs of AGN, it is often assumed that the AGN light dominates the integrated light of the galaxy at almost any spectral range and for almost any aperture. This assumption becomes mandatory at certain spectral ranges, such as the high energies, the extreme UV or the mid-to-far-IR, because of the spatial resolution limitations imposed by the available instrumentation, which currently lies in the range of several arcseconds to arcminutes at these wavelengths. In the mid- to far-IR in particular, the available data, mostly from IR satellites, are limited to spatial resolutions of a few arcseconds at best. Thus, the associated SEDs include the contribution of the host galaxy, star forming regions, dust emission and the AGN, with the first two components being measured over different spatial scales in the galaxy depending on the object distance and the spatial resolution achieved at a given IR wavelength.
In spectral ranges where high spatial resolution is readily available, the importance, if not dominance, of circumnuclear star formation relative to that of the AGN has become clear in the UV to optical range (e.g. Munoz-Marin et al. 1997) and in the near-IR (Genzel et al. 1998). In the radio regime, the comparison of low- and high-spatial-resolution maps shows the importance of the diffuse circumnuclear emission and the emission from the jet components with respect to that of
the core itself (e.g. Roy et al. 1994; Elmouttie et al. 1998; Gallimore et al. 2004; Val, Shastri \& Gabuzda 2004).
Even with low resolution data, a major concern shared by most works is the relevance of the host galaxy contribution to the nuclear integrated emission from the UV to the optical to the IR. To overcome these mixing effects introduced by poor spatial resolution, different strategies or assumptions have been followed by the community. In quasars, by their own nature, the dominance of the AGN light over the integrated galaxy light at almost any wavelength is assumed; conversely, in lower luminosity AGN, the contribution of different components is assessed via modelling of the integrated light (Edelson \& Malkan 1986; Ward et al. 1987; Sanders et al. 1988; Elvis et al. 1994; Buchanan et al. 2006, among others).
In this paper, we attempt to provide a best estimate of the AGN light contribution in very nearby AGN by using very high spatial resolution data over a wide range of the electromagnetic spectrum. Accordingly, SEDs of the central few hundred parsec region of some of the nearest
and brightest AGN are compiled. The work is motivated by the current possibility of obtaining subarcsec resolution data in the near-to-mid-IR of bright AGN, and thus at resolutions comparable to those available with radio
interferometry and the HST in the optical to UV wavelength range. This is possible thanks to the use of adaptive optics in the near-IR, the diffraction-limited resolutions provided by 8 -- 10 m telescopes in the mid-IR, as well as interferometry in the mid-IR.
The selection of targets is driven by the requirements imposed by the use of adaptive optics in the near-IR, which limits the observations to bright point-like targets with magnitudes V $<$ 15 mag in the field, and by the current flux detection limits of mid-IR ground-based observations. AGN in the nearby universe are sufficiently bright to satisfy those criteria.
The near- to mid- IR high resolution data used in this work come mostly from the ESO Very Large Telescope (VLT), hence this study relies on Southern targets, all well known objects, mostly Seyfert galaxies: Centaurus A, NGC 1068, Circinus, NGC
1097, NGC 5506, NGC 7582, NGC 3783, NGC 1566 and NGC 7469. For comparison purposes, the SED of the quasar 3C 273 is also included.
The compiled SEDs make use of the highest spatial
resolution data available with current instrumentation across the electromagnetic spectrum. The main sources of data include: VLA-A array and ATCA data in radio,
VLT diffraction-limited images and VLTI interferometry in the mid-infrared (mid-IR), VLT adaptive-optics images in the near-infrared, and \textit{HST} imaging and spectra in the optical-ultraviolet.
Although X-rays and $\gamma$-rays do not provide such a fine resolution, information at these energies, when available for these galaxies, is also included in the SEDs on the assumption that above 10 keV or so we are sampling the AGN core region. Most of the data used come from the Chandra and INTEGRAL telescopes.
The novelty in the analysis is the spatial resolution achieved in the infrared (IR), with typical full-width at half-maximum (FWHM) $\lesssim$ 0.2 arcsec in the 1--5 $\mu$m range and $<$ 0.5 arcsec in the 11--20 $\mu$m range.
The availability of IR images at these spatial resolutions
allows us to pinpoint the true spatial location of the AGN -- which happens to have no optical counterpart in most of the type 2 galaxies studied -- and to extract its luminosity within aperture diameters of a few tens of parsec.
The new compiled SEDs are presented in sect. 3. Some major differences but also similarities between the SEDs of type 1 and type 2 AGN arise at these resolutions. These are presented and discussed in sections 4 and 5.
The SEDs and the inferred nuclear luminosities are further compared with those extracted in the mid-to-far IR from large aperture data from IR satellites, and the differences discussed in sect. 6.
Throughout this paper, $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ is used. The central wavelength of the near-IR broad band filters used are: $I$-band (0.8 $\mu$m), $J$-band (1.26 $\mu$m), $H$-band (1.66 $\mu$m), $K$-band (2.18 $\mu$m), $L$-band (3.80 $\mu$m) and $M$-band (4.78 $\mu$m). \\
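Where helpful below, the quoted linear scales follow from the small-angle relation; a trivial sketch (in Python) reproduces them under the adopted $H_0$ (the redshift-based distance ignores peculiar velocities):
\begin{verbatim}
import numpy as np

C_KM_S = 299792.458
H0 = 70.0  # km/s/Mpc, as adopted here

def scale_pc_per_arcsec(distance_mpc):
    # 1 arcsec subtends D * 4.848e-6 (radians per arcsec)
    return distance_mpc * 1.0e6 * np.radians(1.0 / 3600.0)

def hubble_distance_mpc(z):
    # Low-z Hubble-flow distance, D = c z / H0
    return C_KM_S * z / H0

print(scale_pc_per_arcsec(3.4))   # ~16 pc/arcsec (Cen A)
print(scale_pc_per_arcsec(14.4))  # ~70 pc/arcsec (NGC 1068)
\end{verbatim}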
\section{The high spatial resolution SED: data source}
This section describes the data used in building the
high spatial resolution SEDs. Each AGN is analysed in turn, and the compiled SEDs are shown in Fig. 1. The data used in the SEDs are listed per object in tables 3 to 11.
For each AGN, an upper limit to the core size determined in the near-IR
with adaptive optics and/or in the mid-IR with interferometry is provided first.
The data sources used in constructing the respective SEDs are described next. When found in the literature, a brief summary of the nuclear variability levels, especially in the IR, is provided. This is mostly to assess the robustness of the SED shape and integrated luminosities across the spectrum. Finally, as a by-product of the analysis, an estimate of the extinction in the surroundings of the nucleus, based on near-IR colours derived from the high spatial resolution (mostly $J$--$K$) images, is provided. The colour images used are shown in Fig. 2. These are relative extinction values, resulting from comparing the average colour in the immediate surroundings of the nucleus with that at further galactocentric regions, usually within the central few hundred parsecs. These reference regions are selected from areas presenting lower extinction, as judged from a visual inspection of the colour images. The derived extinction does not refer to that in the line of sight of the nucleus, which
could be much larger -- we do not compare with the nucleus colours but with those in its surroundings.
The simplest approach of considering a foreground dust screen is used. The extinction law presented in Witt et al. (1992), for the UV to the near-IR, and that in Rieke \& Lebofsky (1985), for the mid-IR, are used.
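As a sketch of this foreground-screen estimate (in Python): given a near-IR extinction law specifying $A_J/A_V$ and $A_K/A_V$ (the Rieke \& Lebofsky ratios are assumed below), a $J$--$K$ colour excess maps directly onto a relative $A_V$.
\begin{verbatim}
# Foreground dust-screen sketch: relative A_V from a J-K colour excess.
# The ratios below are the commonly quoted Rieke & Lebofsky (1985)
# values and should be treated as assumptions of this sketch.
A_J_OVER_A_V = 0.282
A_K_OVER_A_V = 0.112

def av_from_jk_excess(jk_nucleus, jk_reference):
    # A_V = E(J-K) / (A_J/A_V - A_K/A_V) for a foreground screen
    e_jk = jk_nucleus - jk_reference
    return e_jk / (A_J_OVER_A_V - A_K_OVER_A_V)

# e.g. the Cen A colours quoted below: J-K = 1.5 vs 0.68
print(av_from_jk_excess(1.5, 0.68))  # ~4.8 mag, cf. A_V ~ 5-7 mag
\end{verbatim}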
For some objects, the near-IR adaptive optics data used to compile the SED are presented here for the first time. For completeness, table 12 lists all the adaptive optics VLT / NACO observations used in this work, with the filters and observation dates. For some objects these data were already presented in the reference quoted in this table. The nucleus of these AGN was in all cases bright enough to be used as a reference for the adaptive optics system to correct for the atmospheric turbulence. The optical nuclear source was used in all cases with the exception of Centaurus A and NGC 7582, for which the IR nucleus was used instead. The observation procedure and data reduction for the objects presented here for the first time are the same as those discussed in detail in the references quoted in table 12; we refer the interested reader to those for further details.
There are five type 2, three type 1 and an intermediate type 1 / LINER AGN in the sample, in addition to the quasar 3C 273, included for comparative purposes. The AGN is unambiguously recognised in the near-IR images of all objects as the most outstanding source in the field of $26 \times 26$ arcsec$^2$ covered by the VLT/NACO images. This is especially the case for the type 2 sources, where the AGN is revealed in full from 2 $\mu$m longward. The comparable resolution of the NACO IR and the HST optical images allows us to search for the optical counterpart of the IR nucleus, in some cases using accurate astrometry based on other point-like sources in the field (e.g. Prieto et al. 2004; Fernandez-Ontiveros et al. 2009). In all the type 2 cases analysed, the counterpart optical emission is found to vanish shortward of 1 $\mu$m. Accordingly, all the type 2 SEDs show a data gap in the optical - UV region. In addition, all the SEDs show a common data gap spanning the extreme UV - soft X-rays region, due to the observational inaccessibility of this spectral range, and in the mid-IR to millimetre range, due to the lack of data at subarcsec resolution scales. Surprisingly, for some of these well studied objects no radio data at subarcsec scales exist, e.g. for the Circinus galaxy. In these cases, to get nevertheless an estimate of the nuclear power in this range, the available radio data are included in the SED.
The SEDs are further complemented with available data in the X-rays, which extend up to 100 -- 200 keV for all sources. Only for Cen A and 3C 273 are the SEDs further extended to the gamma-rays, these being the only two sources so far detected at these high energies. \\
{\it Centaurus A} \\
Centaurus A is the nearest radio galaxy in the Southern
hemisphere. The adopted scale is 1 arcsec $\sim$ 16 pc (D = 3.4 Mpc; Ferrarese et al. 2007). The nucleus of this galaxy begins to show up in the optical longward of 0.7 $\mu$m (Marconi et al. 2000) and so far remains unresolved down to a size $\ll$ 1 pc FWHM in the 1 -- 10 $\mu$m range (from VLT adaptive optics near-IR imaging by Haering-Neumayer et al. 2006 and VLTI interferometry in the mid-IR by Meisenheimer et al. 2007).
The currently compiled SED of the Cen A core is shown in Fig. 1. The radio data are from VLBA quasi-simultaneous observations collected in 1997
at 22.2 and 8.4 GHz (the March epoch was selected) and in 1999 at 8.4, 5 and 2.2 GHz, in all cases with resolutions of a few milliarcsec (Tingay et al. 2001 and Tingay
and Murphy 2001). These spatial resolutions allow for a better separation of the core emission from the jet. The peak values reported by the authors are included in the SED.
In the
millimetre range, peak values from SCUBA in the 350--850 $\mu$m range from Leeuw et al. (2002) are included in the SED. SCUBA has poor spatial resolution, FWHM $\sim$ 10--15
arcsec; however, we take these measurements as genuine core fluxes as they follow fairly well the trend defined by the higher resolution data at both radio and mid-IR wavelengths. This common trend is a strong indication that the AGN is the dominant light source in the millimetre range.
In the mid-IR range, VLT / VISIR diffraction-limited data, taken in March 2006, at 11.9 and 18.7 $\mu$m, from Reunanen et al. (2009), are included in the SED.
Further mid-IR data are from VLTI / MIDI interferometric observations in the 8 -- 12 $\mu$m range taken in February and May 2005 with resolutions of 30 mas (Meisenheimer et al. 2007). The visibility analysis indicates that at least 60\% of the emission detected by MIDI comes from an unresolved nucleus with a size at 10 $\mu$m of FWHM $<1$ pc. This is also confirmed by more recent MIDI observations by Burtscher et al. (2009). The SED includes two sets of MIDI fluxes measured directly in the average spectrum from both periods: 1) the total integrated flux in the MIDI aperture
(0.52 $\times$ 0.62 arcsec); 2) the core fluxes measured on the
correlated spectrum. At 11.9 $\mu$m, the MIDI total flux
and the VISIR nuclear flux, both from comparable aperture sizes, differ by 15\%. The difference is still compatible with the photometry errors, which may be large in particular for the MIDI spectrophotometry.
The near-IR is covered with VLT/ NACO adaptive optics data taken in $J$-, $H$-, $K$- and $L$-bands, and in the narrow-band line-free filter centred at 2.02 $\mu$m.
These are complemented with \textit{HST}/WFPC2
data in the $I$-band (Marconi et al. 2000). Shortward of this
wavelength, Cen A's nucleus is undetected. An upper limit derived from the HST / WFPC2 image at 0.5 $\mu$m is included in the SED.
At high energies, Cen A's nucleus becomes visible
again, as well as its jet. The SED includes the 1 keV nuclear flux extracted by Evans et al. (2004) from
\textit{Chandra} observations in 2001 and an average of the 100 keV fluxes derived by Rothschild et al. (2006) from \textit{INTEGRAL} observations collected in 2003--2004.
In the gamma-rays, the SED includes \textit{COMPTEL} measurements in the 1 -- 30 MeV
range taken during the 1991--1995 campaign by Steinle et al. (1998).
For comparative purposes, Fig. 1 includes large aperture data --identified with crosses-- in
the mid- to far-IR region selected from \textit{ISO} (Stickel et al. 2004) and \textit{IRAS} (Sanders et al. 2003). For consistency, given the poor spatial resolution of the SCUBA data, these are also labelled with crosses in the SED.
Cen A is a strongly variable source at high energies, where flux variations can be up to an order of magnitude (Bond et al. 1996). However, during the \textit{COMPTEL} observations used in the SED, the observed variability on
scales of a few days is reported to be at the 2 sigma level at most (Steinle et al. 1998).
Combining OSSE, COMPTEL and EGRET data, the gamma-ray luminosity (50 keV -- 1 GeV) varies by 40\%.
No report on variability monitoring of this source in the IR was found. The comparison between the VLT / VISIR data used in this work and equivalent mid-IR data obtained four years earlier, in 2002, by
Whysong \& Antonucci (2004) using Keck
and by Siebenmorgen et al. (2004) using ESO / TIMMI2
indicates a difference with VISIR in the 40 to 50\% range.
Cen A's nucleus is the only source in this study whose high spatial resolution SED, from radio to millimetre to IR, can be fitted with a single synchrotron model (Prieto et al. 2007; see also the end of sect. 7). Thus a flux difference of a factor of 2 in the IR is consistent with genuine nuclear variability and the dominance of a non-thermal component in the IR spectral region of this nucleus.
Relative extinction values in the nuclear region of Cen A were measured from
VLT / NACO $J$--$K$ (Fig. 2) and $H$--$K$ colour maps. The average reddest colours around the nucleus are $J$--$K$ = 1.5 and $J$--$H$ = 1.5, and the bluest colours,
within $\sim$ 100 pc
distance from the nucleus, are $J$--$K$ = 0.68 and
$J$--$H$ = 1.04. Taking these as reference colours for Cen A's stellar population, the inferred extinction around the nucleus is $A_V\sim$ 5 -- 7 mag.
By comparison, the extinction inferred from the depth of the silicate 9.6 $\mu$m feature in the line of sight of the nucleus -- from the
VLT / MIDI correlated spectra -- is a factor 2 to 3 larger, $A_V\sim$ 13 -- 19 mag (see Table 2). \\
\begin{figure}[p]
\centering
\plottwo{sed-cena.ps}{sed-cir.ps}
\plottwo{sed-n1068.ps}{sed-n5506.ps}
\plottwo{sed-n7582.ps}{sed-3c273.ps}
\caption{SEDs of the central parsec-scale region of the AGN in this study: filled points represent the highest
spatial resolution data available for these nuclei; the thin V-shaped
line in the mid-IR region shown in some SEDs corresponds to the spectrum of an unresolved source as measured by VLTI / MIDI (correlated flux);
crosses refer to large aperture data in the mid-IR (mostly from \textit{IRAS} and \textit{ISO}) and in the millimetre, when available. The frequency range is the same in all plots except for Cen A and 3C 273, which extend up to the gamma-rays. The continuous line in the 3C 273 panel is the SED of this object after applying an extinction of $A_V$ = 15 mag.}
\label{Fig. 1}
\end{figure}
\begin{figure}[p]
\figurenum{1}
\centering
\plottwo{sed-n1097.ps}{sed-n3783.ps}
\plottwo{sed-n1566.ps}{sed-n7469.ps}
\caption{Continued}
\label{Fig. 1 cont}
\end{figure}
\begin{figure}[p]
\centering
\plotone{figure2a.ps}
\caption{Near-IR colour maps of the galaxies in this study. Most are $J$ -- $K$, or $J$ -- narrow band $K$ maps when available. Only the very central 2 to 3 arcsec radius FoV is shown. The small bar at the bottom of each panel indicates the spatial resolution of the maps, and in turn the upper limit size to the AGN core in the $K$-band. }
\label{Fig. 2}
\end{figure}
\begin{figure}[p]
\figurenum{2}
\centering
\plotone{figure2b.ps}
\caption{Continued}
\label{Fig. 2 cont}
\end{figure}
{\it Circinus}\\
Circinus is the second closest AGN in the Southern
Hemisphere. The adopted scale is 1 arcsec $\sim$ 19 pc (D = 4.2 Mpc, Freeman et al. 1977).
Circinus' nucleus is the only one in our sample that is spatially resolved in the VLT/ NACO adaptive optics images from 2 $\mu$m onward. The nucleus resolves into an elongated, $\sim$ 2 pc size, structure oriented perpendicular to the ionisation-gas cone axis (Prieto et al. 2004). Its spectral energy distribution is compatible with dust emission at a characteristic temperature of 300 K; this structure has thus most of the characteristics of the putative nuclear torus (Prieto et al. 2004). Further VLTI / MIDI interferometry in the 8 -- 12 $\mu$m range also resolves the central emission into a structure of the same characteristics (Tristram et al. 2007).
Circinus' SED is shown in Fig. 1.
The highest spatial resolution radio data available for this source -- in the range of a few arcsec only -- are presented by Elmouttie et al. (1998). Because of this poor resolution, only the central peak values given by that reference are used in the SED. The beam sizes at the available frequencies are 6 $\times$ 5.4 arcsec at 20 cm, 3.4 $\times$ 3.1 arcsec at 13 cm, 1.4 $\times$ 1.3 arcsec at 6 cm, and 0.9 $\times$ 0.8 arcsec at 3 cm.
Mid-IR nuclear fluxes are taken from VLT/VISIR diffraction-limited images at 11.9 and 18.7 $\mu$m (Reunanen et al. 2009).
Further mid-IR data are taken from VLTI/MIDI interferometry spectra covering the 8 -- 13 $\mu$m range (Tristram et al. 2007). Two sets of MIDI data are included in the SED: the correlated fluxes,
which correspond to an unresolved source detected with a visibility of 10\%, and the total fluxes, measured within the MIDI
0.52 $\times$ 0.62 arcsec slit. In the latter case, only fluxes at 8 and 9.6 $\mu$m are included. For the sake of simplicity, flux measurements at the longer MIDI wavelengths are not included, as they agree with the VISIR flux at 11.9 $\mu$m to within 5\%.
Near-IR nuclear fluxes, in the 1 -- 5 $\mu$m range, are taken from VLT / NACO adaptive optics images by Prieto et al. (2004).
Shortward of 1 $\mu$m, Circinus' nucleus is undetected; an upper limit at 1 $\mu$m derived from the NACO $J$-band image is included in the SED.
At the high energies, the SED includes the \textit{ROSAT} 1 keV flux after correction for the hydrogen column density, derived by Contini et al. (1998), and \textit{INTEGRAL}
fluxes at different energy bands in the 2 -- 100 keV range (Beckmann et
al. 2006).
For comparative purposes, the SED includes large aperture data in
the mid to far IR region selected from \textit{IRAS} (Moshir et al. 1990).
There are no reports on significant variability in Circinus at near-IR wavelengths. In comparing the data analysed in this work, we find that the 12 $\mu$m nuclear flux measured in 2002 by Siebenmorgen et al. (2004) and that of VLT / VISIR, collected four years later, differ by less
than 5\%. At the high energies, in
the 20 -- 40 keV range, Soldi et al. (2005) report a 10\% maximum
variability in a time interval of 2 days.
An estimate of the extinction toward the nucleus is derived from the VLT/NACO $J$--2.42 $\mu$m colour map (Fig. 2). In the surroundings of the nucleus, $J$--2.42 $\mu$m $>$ 2.2 is found. The colours become progressively bluer with increasing distance, and in relatively clean patches within the central 150 pc radius the average value is $J$--2.42 $\mu$m $\sim$ 1. The colour comparison yields a relative extinction of $A_V \sim$ 6 mag, assuming a foreground dust screen. Because of the fine scales, a few parsecs, at which Circinus' nucleus is studied, Prieto et al. (2004) further considered a simple configuration of the dust being mixed with the stars, in which case an extinction of $A_V \sim$ 21 mag is found. This value is more in line with the range of extinctions derived from the optical depth of the 9.6 $\mu$m silicate feature measured in the VLTI / MIDI correlated spectrum, $22 < A_V < 33$ mag (Tristram et al. 2007). \\
{\it NGC 1068}\\
NGC 1068, Circinus and Cen A are the brightest Seyfert nuclei in the sample but NGC 1068 is a factor four further away, at a distance of 14.4 Mpc (Bland-Hawthorn et al. 1997). The adopted scale is 1 arcsec $\sim$ 70 pc.
NGC 1068's radio core -- identified with the radio source S1 -- is resolved at 5 and
8.4 GHz into an elongated, $\sim$ 0.8 pc disk-like structure, oriented perpendicular to the radio jet axis (Gallimore, Baum \& O'Dea 2004). VLTI / MIDI
interferometry in the 8 -- 12 $\mu$m range resolves the nucleus into two components: an inner component of about 1 pc size and a cooler, 300 K disk-like component with a size of $3 \times 4$ pc (Raban et al. 2009).
At 2 $\mu$m, speckle observations by Weigelt et al. (2004) resolve the central emission into a compact core with a size of
$1.3 \times 2.8$ pc, and north-western and south-eastern extensions.
Shortward of 1 $\mu$m, and on the basis of the SED discussed below, NGC 1068's nucleus is fully obscured.
The highest spatial resolution SED of NGC 1068's nucleus from radio to optical is presented in Hoenig, Prieto \& Beckert (2008). The authors provide a fair account of the continuum spectrum from radio to IR to optical, by including in their model the major physical processes that must contribute to the integrated core emission: synchrotron emission, free-free emission from ionised gas and dust-torus emission.
The SED included here is that presented in Hoenig et al. but further complemented with high energy data. Furthermore, the IR nuclear photometry is extracted in a different way: the nucleus of NGC 1068 in the mid- to near- IR presents additional emission structure extending, particularly, North and South of it (Bock et al. 2000; Rouan et al. 2004). To minimise the contamination by this extended emission, the near-IR nuclear fluxes are here extracted within an aperture diameter comparable with the nuclear FWHM measured at the corresponding wavelength.
The SED is shown in Fig. 1. The radio data are taken from
VLA observations at 43 GHz with a 50 mas beam (Cotton et al. 2008), at 22 GHz with a 75 mas beam from Gallimore et al. (1996), VLBA observations at 8.4 and 5 GHz from Gallimore et al. (2004), and VLBA data at 1.7 GHz from Roy et al. (1998).
1 and 3 mm core fluxes are taken from Krips et al. (2006), and correspond to beam sizes larger than 1 arcsec; thus they may include the contribution from the jet. We still include these data in the SED, as the jet components have a steep spectrum (Gallimore et al. 2004), and thus their contribution to the core emission is expected to decrease at these frequencies.
In the mid-IR, the nuclear fluxes in the 8 to 18 $\mu$m range are from Subaru (Tomono et al. 2001), complemented with an additional nuclear flux at 25 $\mu$m from Keck (Bock et al. 2000). All these measurements are extracted from deconvolved images in aperture diameters of $\sim 200$ mas. Despite the uncertainties inherent to the deconvolution process, we note
that the fluxes derived by both groups of authors at common wavelengths, each using different instrumentation and telescopes, differ by $< 30\%$.
The SED also includes the 8 -- 12 $\mu$m VLTI/MIDI correlated spectrum of the unresolved component sampled at four wavelengths. This correlated spectrum was derived from a 78 m baseline, which provides a spatial resolution of about 30 mas (Jaffe et al. 2004). The total fluxes measured in the MIDI 500 mas slit are larger by a factor of 2 than those measured by Bock et al. and Tomono et al. in their 200 mas aperture. This difference is due to the inclusion in the MIDI slit of the light contribution from the nuclear extended emission of NGC 1068 mentioned above. Thus, the MIDI total fluxes are not included in the SED.
Near-IR data, in the 1 -- 5 $\mu$m range, are from VLT / NACO adaptive optics images in the $J$-, $H$-, $K$- and $M$- bands and the 2.42 $\mu$m narrow band line-free filter. The nucleus is unresolved at the achieved resolutions; the nuclear fluxes were extracted in aperture diameters comparable to the nucleus FWHM: 0.1 arcsec in the $J$-, $K$- and 2.42 $\mu$m bands, 0.22 arcsec in $H$-band, 0.16 arcsec in $M$-band. The inspection of the VLT NACO $J$-band image indicates a relatively faint source at the position of the $K$-band nucleus, and thus we take the extracted $J$-band flux as an upper limit.
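A minimal sketch of this matched-aperture flux extraction (in Python, using the photutils package as one possible tool; the image, centroid and pixel scale in the usage line are placeholders):
\begin{verbatim}
from photutils.aperture import CircularAperture, aperture_photometry

def nuclear_flux(image, x0, y0, fwhm_pix):
    # Integrate counts in a circular aperture whose diameter equals
    # the measured nuclear FWHM, as described for the near-IR bands
    aperture = CircularAperture((x0, y0), r=0.5 * fwhm_pix)
    table = aperture_photometry(image, aperture)
    return float(table["aperture_sum"][0])

# e.g. flux = nuclear_flux(img, xc, yc, fwhm_pix=0.10 / pix_scale)
\end{verbatim}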
In the X-ray, the following measurements are included in the SED: \textit{Chandra} 1 keV flux from Young, Wilson \& Shopbell (2001), \textit{EXOSAT} flux in the 2 -- 10 keV range by Turner \& Pounds (1989), and \textit{INTEGRAL} data in the 20 -- 40 keV and 40 -- 100 keV bands from Beckmann et al. (2006).
The SED includes large aperture data in the mid- to far- IR from \textit{IRAS} (Sanders et al. 2003) and \textit{ISO} (Stickel et al. 2004) and in the sub-millimetre from Hildebrand et al. (1977).
Monitoring of the source in the near-IR by Glass (2004) indicates long-term variability, with an increase by a factor of two in flux in the period 1970 -- 1995.\\
Fig. 2 shows the VLT / NACO $J$ -- 2.42-$\mu$m colour map. Contrary to other AGN in the sample, the nuclear region of NGC 1068 presents
considerable emission structure, as seen in the colour map. The average colour around the centre is $J$ -- 2.42-$\mu$m $\sim 3$, and gets progressively bluer with increasing distance; at $\sim$ 150 pc radius, the average $J$ -- 2.42-$\mu$m colour is $\sim$ 0.7. Taking this value as intrinsic to the central region of the galaxy, the inferred extinction in the surroundings of the nucleus is $A_V \sim$ 10--12 mag (Table 2). For comparison, the extinction derived from the depth of the silicate feature in the VLTI / MIDI correlated spectrum is $A_V \sim 7$ mag for the more external, $3 \times 4$ pc cooler component, in rather good agreement with the extinction in the nuclear surroundings. Conversely, the extinction derived for the hotter parsec-scale inner component is $A_V\sim$ 30 mag (Raban et al. 2009). \\
{\it NGC 1097}\\
NGC 1097 is one of the nearest LINER / Seyfert type 1 galaxies in the Southern Hemisphere, at a distance comparable to that of NGC 1068. The adopted scale is 1 arcsec $\sim$ 70 pc (distance = 14.5 Mpc, Tully 1988). The nucleus is visible at all wavelengths from UV to radio. In the IR, it is unresolved down to the highest resolution achieved in this object with VLT / NACO: FWHM $\sim$ 0.15 arcsec in $L$-band ($<$ 11 pc).
The SED is shown in Fig. 1. At radio wavelengths, the following measurements are included in the SED: sub-arcsec resolution data from the VLA A-array at 8.4 GHz (Thean et al. 2000) and at 8.4 GHz, the latter with a beam resolution of $0.66 \times 0.25$ arcsec$^2$ (Orienti \& Prieto 2009), VLA B-array data at 1.4 GHz with a beam resolution of $2.5 \times 1.5$ arcsec$^2$ (Hummel et al. 1987), and VLA B-array archived data at 15 GHz re-analysed in Orienti \& Prieto (2009), for which a final beam resolution of $1.15 \times 0.45$ arcsec$^2$ is obtained. Despite the relatively larger beam of the VLA B-array data, we believe the associated nuclear fluxes to be fully compatible with those derived with the finer scale VLA A-array data. This is based on the analysis of equivalent VLA A- and B-array data at 8.4 GHz, which yielded the same nuclear fluxes, indicating that the nucleus is unresolved.
The mid-IR is covered with diffraction-limited resolution data from the VLT / VISIR images at 11.88 and 18.72 $\mu$m (Reunanen et al. 2009). The near-IR, 1 to 4 $\mu$m, is covered with VLT/NACO adaptive
optics images in the $J$-, $H$-, $K$- and $L$- bands, which have spatial resolutions of FWHM $\lesssim$ 0.2 arcsec (Prieto et al. 2004, and this work).
Optical and UV data were extracted from archival \textit{HST} / WFPC2 F220W and F555W images and an
\textit{HST}/ACS F814W image. Further UV data available from \textit{IUE} and \textit{GALEX} were not considered, as their spatial resolution beams, 20 and 6 arcsec FWHM respectively, include emission from the prominent NGC 1097 star-forming ring located at 5 arcsec radius from the centre.
In the X-rays, the absorption-corrected fluxes derived by Terashima et al. (2002) in the \textit{ASCA} 0.1--4 keV and 2--10 keV windows are used. Further observations at higher energies were not found in the literature.
For comparison purposes, the SED includes large aperture data, from the mid-IR to millimetre wavelengths, from \textit{IRAS},
\textit{Spitzer} and SCUBA, all taken from Dale et al. (2007).
NGC 1097 has shown variability in the optical, exhibiting broad emission lines. To our knowledge, there is no report on variability at any other spectral range.
Using the VLT / NACO $J$--$H$ and $H$--$K$ colour maps, Prieto, Maciejewski \& Reunanen (2005) estimate a moderate extinction of $A_V\sim$ 1 mag towards the centre. To estimate this extinction, the colours in the surroundings of the nucleus were compared with those at further locations, inward of the circumnuclear star-forming ring and not contaminated by the nuclear dust filaments. These filaments are readily seen in the NACO $J$--$K$ colour map shown in Fig. 2. \\
{\it NGC 5506} \\
This is a Seyfert type 1.9 nucleus in an edge-on disk galaxy, largely covered by dust lanes.
As found in our VLT / NACO adaptive optics images, the nucleus dominates
the galaxy light at IR wavelengths from 1 $\mu$m onward, but there is no equivalent counterpart in HST optical images (Malkan et al. 1998). In the 1 to 20 $\mu$m range, the nucleus is unresolved down to the best spatial resolution achieved with the NACO observations, that is, in the $K$-band, which sets an upper
limit for the size of the source of FWHM $\sim$ 0.10 arcsec ($<$ 13 pc).
The adopted scale is 1 arcsec $\sim$ 126 pc (redshift taken from NED).
The SED is shown in Fig. 1. VLBA maps show the nuclear region resolved into three blobs. In the SED, the emission from the brightest and smallest of the three blobs, also the one with the flattest spectral index ($\alpha$ = +0.06), called the BO component in Middelberg et al. (2004), is taken into account. Reported peak values from VLBA and VLBI at 8.3, 5 and 1.7 GHz, some of them taken at multiple epochs, are all included in the SED.
In addition, PTI (Parkes Tidbinbilla Interferometer) data at 2.3 GHz from Sadler et al. (1995) from a 0.1
arcsec beam is also included.
In the mid-IR, the nuclear fluxes extracted from VLT / VISIR diffraction-limited data at 11.8 and 18.7
$\mu$m by Reunanen et al. (2009) are included. Additional fluxes at 6 and 9.6 $\mu$m were directly measured on the 6 -- 13 $\mu$m spectrum published in Siebenmorgen et al. (2004, their Fig. 15), which combines ESO/TIMMI2 and ISOPHOT data. Although the ESO/TIMMI2 data correspond to a
1.2 arcsec slit-width, they join the large aperture ISOPHOT spectrum perfectly; thus the measured fluxes should be genuinely representative of the pure nuclear emission. The near-IR
1 -- 4 $\mu$m data are extracted from the VLT / NACO adaptive optics images. As the images are dominated by the central source, with barely any detection of the host galaxy, the nuclear fluxes were integrated within aperture sizes of 0.5 arcsec in diameter.
Below 1 $\mu$m, the nucleus is undetected; an upper limit in the $R$-band
derived from \textit{HST} / WFPC2 archive images is set as a reference in the SED. In the X-rays, \textit{INTEGRAL} fluxes in the 2 -- 100 keV range from Beckmann et al. (2006) are included. The soft X-rays, 0.2 -- 4 keV, are covered with \textit{Einstein} data (Fabbiano et al. 1992).
For comparison purposes, large aperture data from the mid- to far- IR, collected with \textit{IRAS} (Sanders et al. 2003) are included in the SED.
There is no apparent nuclear variability in the IR over a time-scale of years. This follows from the agreement between ISOPHOT, \textit{IRAS}, ground based spectra taken in 2002 (Siebenmorgen et al. 2004), and VLT / VISIR data taken in 2006, all furthermore having very different spatial resolutions. The nucleus is, however, highly variable in the X-rays, by factors of
$\sim$ 2 on scales of a few minutes (Dewangan \& Griffiths 2005).
Fig. 2 shows a VLT / NACO $J$--$K$ colour image of the central 2.5 kpc region. The nucleus and diffraction rings are readily seen; these are further surrounded by a diffuse halo sharply declining in intensity.
Taking as a reference $J$--$K$ $\lesssim 1.8$, the average colour in the outermost regions of this halo ($\sim$ 150 pc radius), and $J$--$K$ $\sim 2.8$, that in the surroundings of the nucleus, the comparison of both yields a relative extinction towards the centre of $A_V \gtrsim$ 5 mag.
Due to the faintness of the galaxy, the true extinction around the nucleus might be much higher than that. Most probably, the colour of the halo is largely affected by the nucleus PSF wings. For comparison, the extinction derived from the depth of the silicate feature at 9.6 $\mu$m in Siebenmorgen et al.'s 1.2 arcsec slit-width spectrum is $A_V\sim$ 15 mag (Table 2). \\
{\it NGC 7582} \\
This is a Seyfert type 2 nucleus surrounded by a ring of star-forming regions. The east side of the galaxy is largely obscured by dust lanes, which fully obscure the nucleus and many of the star-forming regions at optical wavelengths. Most of them, and a very prominent nucleus, are revealed in seeing-limited VLT / ISAAC near-IR images (Prieto et al. 2002). The spatial resolution achieved in the current adaptive optics images allows very accurate astrometry of the nucleus position, taking as a reference the star-forming regions identified in both the IR and HST optical images. In this way, a weak optical counterpart at the IR nucleus location is found, along with a rich network of new star-forming regions, some as close as 0.5 arcsec (50 pc) from the centre, the furthest seen out to about 320 pc radius. The ages and masses of these regions are analysed in Fernandez-Ontiveros et al. (in preparation).
The nucleus is unresolved down to the best resolution achieved in these observations, which yields a FWHM $\sim$ 0.1 arcsec ($<$ 11 pc) at 2 $\mu$m. The adopted scale is 1 arcsec $\sim$ 105 pc (redshift taken from NED).
The SED is shown in Fig. 1. The published radio maps at 8.4 and 5 GHz obtained with the VLA-A array show a diffuse nuclear region (e.g. Thean et al. 2000).
In an attempt to improve the spatial resolution, an unpublished set of VLA-A array data at those frequencies was retrieved from the VLA archive and analysed by filtering out the antennas that provide the lowest resolution. In this way, a nuclear point-like source and some of the surrounding star-forming knots could be disentangled from the diffuse background emission. The final beam resolution corresponds to a FWHM = 0.65 $\times$ 0.15 arcsec$^2$ at 8.4 GHz, and 0.96 $\times$ 0.24 arcsec$^2$ at 5 GHz (Orienti \& Prieto 2009). The radio data used in the SED correspond to the nuclear fluxes extracted from these new radio maps. No further high-resolution radio data are available for this galaxy. \\
Mid-IR nuclear fluxes are taken from the analysis done on diffraction-limited VLT/VISIR images at 11.9 and 18.7
$\mu$m by Reunanen et al. (2009).
Additional measurements in the 8 -- 12 $\mu$m range are extracted from
an ESO / TIMMI2 nuclear spectrum taken with a 1.2 arcsec slit width (Siebenmorgen et
al. 2004). Although within this slit width the contribution from the nearest circumnuclear star-forming regions is certainly included, the derived fluxes follow the trend defined by the higher spatial resolution VISIR and NACO data, which indicates the relevance of the AGN light within at least 50 pc radius (0.6 arcsec) from the centre. In the near-IR,
1 -- 4 $\mu$m, the nuclear fluxes
are extracted from a \textit{HST} / NICMOS $H$-band image, the VLT / NACO narrow-band images at
2.06 $\mu$m and 4.05 $\mu$m, and the $L$-band image, using aperture diameters of 0.3 arcsec. This aperture is about twice the average FWHM resolution obtained in the NACO images. In the optical, NGC 7582's nucleus becomes heavily absorbed.
We find a weak optical counterpart to the $K$-band nucleus in the \textit{HST} / WFPC2 F606W image (Malkan, Gorjian \& Tam 1998). The estimated flux was derived by integrating the emission in an aperture of 0.3 arcsec diameter centred at the location of the IR nucleus. \\
A bright nuclear source becomes visible again at higher energies. The nuclear fluxes included in the SED are extracted from \textit{SAX} data in the 10 -- 100 keV band (Turner et al. 2000), and XMM data in the 2 -- 12 keV band (Dewangan \& Griffiths 2005). In the latter case, the reported absorption-corrected flux is used. An additional soft X-ray flux is extracted from the ASCA 0.25 -- 2 keV band integrated flux, not corrected for absorption, reported by Cardamone, Moran \& Kay (2007).
For comparison purposes, large aperture data in the mid- to far-IR from \textit{IRAS} (Sanders et al. 2003)
are also included.
NGC 7582's nucleus has shown variability in the optical, exhibiting broad emission lines. It is also variable in the X-rays by factors of up to 2 in intensity on time-scales of months to years (Turner et al. 2000). There is no reported variability in the IR. On the basis of the mid-IR data used in this work, the inferred nuclear fluxes from TIMMI2 and VISIR observations collected four years apart are in excellent agreement.
An $H$ -- 2.06 $\mu$m colour map is shown in Fig. 2. This is constructed from the
\textit{HST} / NICMOS $H$-band image and a VLT / NACO narrow-band image at 2.06 $\mu$m. The central kpc region shown in the map reveals multiple star-forming knots interlaced with dust. The average colour within a radius of 180 pc from the centre and outside the star-forming knots is $H$ -- 2.06 $\mu$m $\sim$ 1.6. Further out, beyond 400 pc radius and avoiding the dust filaments, the average colour is
$H$ -- 2.06 $\mu$m $\sim$ 0.9. The relative extinction towards the centre is estimated at $A_V \sim$ 9 mag (Table 2).
The depth of the silicate feature at 9.6 $\mu$m points to larger values: $A_V \sim$ 20 mag (Siebenmorgen et al. 2004). \\
{\it NGC 1566}\\
NGC 1566 is a Seyfert type 1 nucleus at a distance of 20 Mpc (Sandage \& Bedke 1994). Accordingly, the adopted scale is 1 arcsec $\sim$ 96 pc.
The HST / WFPC2 images (e.g. Malkan, Gorjian \& Tam 1998) show dust lanes circumscribing the nuclear region. Some of them can be followed up to the centre, where they seem to bend and spiral inward. The near-IR VLT / NACO diffraction-limited images reveal a smooth galaxy bulge, but some of the innermost dust lanes still leave their mark even at 2 $\mu$m. At both near- and mid-IR wavelengths, the nucleus is unresolved down to the best resolution achieved, FWHM $< 0.12$ arcsec in the $K$-band (11 pc).
The SED is shown in Fig. 1. The high spatial resolution radio data are taken from ATCA observations at 8.4 GHz with a beam resolution FWHM = 1.29 $\times$ 0.75 arcsec (Morganti et al. 1999), and from PTI at 2.3 and 1.7 GHz with a beam resolution FWHM $<$ 0.1 arcsec (Sadler et al. 1995).
In the mid-IR, the nuclear fluxes are extracted from VLT / VISIR diffraction-limited images at 11.9 and 18.7 $\mu$m (Reunanen et al. 2009). In the near-IR, they are extracted
from VLT / NACO adaptive optics images in the $J$-, $K$- and $L$-bands, within an aperture of 0.4 arcsec diameter.
UV fluxes, in the 1200 to 2100 \AA~range, were measured directly on the re-calibrated pre-COSTAR \textit{HST} / FOS nuclear spectra published by Evans \& Koratkar (2004). These fluxes were measured in the best available line-free spectral windows. The FOS spectra were collected with a circular aperture of 0.26 arcsec diameter. Optical nuclear fluxes were extracted from \textit{HST} / WFPC2 images with the filters F160BW, F336W, F547M, F555W and F814W.
The X-ray data are from \textit{BeppoSAX} in the 20 -- 100 keV range (Landi et al. 2005), and from \textit{Einstein} in the 0.2 -- 4 keV range (Fabbiano et al. 1992). In the latter case, an average of the two reported measurements is used.
Large aperture data in the mid- and far-IR are from \textit{IRAS} (Sanders et al. 2003); the flux at 160 $\mu$m is from \textit{Spitzer} (Dale et al. 2007).
This nucleus may be variable by about 70\% in the X-rays (see Landi et al. 2005) but seems quieter in the optical and the near-IR. Variability by a factor of at most 1.3 over a 3-year monitoring period is reported in the near-IR, with the optical following a similar pattern (Glass 2004).\\
A VLT / NACO $J$--$K$ colour map is presented in Fig. 2. On the basis of this map, the average colour in the surroundings of the nucleus is $J$--$K$ $\sim$ 1.5; the colours get bluer with increasing distance, and at about 300 pc radius the average colour in regions outside the dust filaments is $J$--$K$ $\sim$ 0.3. The comparison of the two implies an extinction towards the nucleus of $A_V \sim$ 7 mag (Table 2). \\
{\it NGC 3783}\\
NGC 3783 is a Seyfert type 1 nucleus in an SBa galaxy. The adopted scale is 1 arcsec $\sim$ 196 pc (redshift taken from NED). The optical \textit{HST} ACS images show a prominent nucleus surrounded by fingers of dust (also seen in the HST / WFPC2 F606W image by Malkan et al. 1998), a bulge and a star-forming ring at about 2.5 kpc radius from the centre. In the near-IR VLT / NACO images, the emission is dominated by an equally prominent nucleus within a smooth and symmetric bulge; the star-forming ring just falls outside the field of view of these images. An upper limit to the size of the nuclear region is FWHM $<0.08$ arcsec (16 pc) at 2 $\mu$m. First results with VLTI / MIDI interferometry, with spatial resolutions of $\sim$ 40 mas at 12 $\mu$m, indicate a partially resolved nuclear region (Beckert et al. 2008). Modelling of these data with a clumpy dusty disk indicates a central structure $\sim$ 14 pc in diameter at 12 $\mu$m. This result requires further confirmation with different baseline configurations.
The SED is shown in Fig. 1. It includes VLA data at 8.4 GHz (FWHM $\leq 0.25$ arcsec, Schmitt et al. 2001), at 4.8 GHz (FWHM $\sim$ 0.4 arcsec, Ulvestad \& Wilson 1984) and at 1.4 GHz (FWHM $\sim 1$ arcsec, Unger et al. 1987).
VLT / VISIR diffraction-limited data at 11.9 and 18.7 $\mu$m from Reunanen et al. (2009) cover the mid-IR range. In the near-IR, 1 to 4 $\mu$m, the nuclear fluxes are extracted from our VLT / NACO adaptive optics images in the $J$-, $K$- and $L$-bands using apertures of $\sim$ 0.4 arcsec diameter. In the optical, nuclear aperture photometry was extracted from archival \textit{HST} ACS images taken with the filters F547M and F550M and the WFPC2 image with the filter F814W. The UV range is covered with archival
\textit{HST} / STIS spectra in the 1100 -- 3200 \AA~region. Continuum fluxes were measured in the best available line-free regions. As NGC 3783's nucleus is very strong, we decided in this case to supplement the SED with additional UV measurements from larger aperture data: specifically, from the \textit{FUSE} spectrum published in Gabel et al. (2003), on which we measured a data point at $\sim$ 1040 \AA~from the estimated continuum flux level in their Fig. 1, and from a $U$-band measurement in a 9.6 arcsec aperture with the Las Campanas 0.6 m telescope reported in McAlary et al. (1983).
In the high energy range, nuclear fluxes at specific energies were extracted from observations with OSSE in the 50 -- 150 keV range (Zdziarski, Poutanen \& Johnson 2000), \textit{INTEGRAL} in the 17 -- 60 keV range (Sazonov et al. 2007), \textit{XMM} in the 2 -- 10 keV range, for which the reported flux from the summed spectrum of several observations in Yaqoob et al. (2005) was used, and \textit{Einstein} in the 0.2 -- 4 keV range (Fabbiano et al. 1992).
Large aperture data in the mid-to-far IR are from \textit{IRAS} (Moshir et al. 1990).
NGC 3783 is known to be variable in the optical and the near-IR by factors of up to 2.5 in intensity on time-scales of months (Glass 2004), and in the X-rays by a factor of 1.5 on time-scales of minutes (Netzer et al. 2003).
A VLT / NACO $J$--$K$ colour map of the central 1.5 kpc region is shown in Fig. 2. The nucleus is the most prominent feature; it appears surrounded by diffraction rings and atmospheric speckles. The colour distribution is rather flat across the galaxy: $J$--$K$ $\lesssim 1$. The inferred extinction towards the nucleus is moderate: $A_V <$ 0.5 mag.
In the VLTI / MIDI interferometric spectrum the silicate feature at 9.6 $\mu$m is absent (Beckert et al. 2008).\\
{\it NGC 7469}\\
This type 1 nucleus is the most distant source in the sample, 20 times more distant than Cen A. The adopted scale is 1 arcsec $\sim$ 330 pc (redshift taken from NED). The nucleus is surrounded by a star-forming ring that extends from 150 pc to about 500 pc radius (as seen in the HST / ACS UV image in Munoz-Marin et al. 2007). The nucleus is unresolved at near-IR wavelengths down to the best resolution achieved with the VLT / NACO adaptive optics images, that of the $H$-band, from which an upper limit to the nucleus size of FWHM $< 0.08$ arcsec (26 pc) is derived.
The SED for this source is shown in Fig. 1. This nucleus is known to have undergone a high state in the optical -- UV in the period 1996 -- 2000, followed by a slower return to a low state, reaching a minimum in 2004. Changes in the continuum intensity of up to a factor of 4 were measured (Scott et al. 2005). The nucleus is also variable in the X-rays by a factor of 2.5 (Shinozaki et al. 2006). On the other hand, monitoring of the source with IRAS pointed observations in 1983 over a period of 22 days indicates a stable source at the 5\% level (Edelson \& Malkan 1987).
Accordingly, special attention was paid to selecting data as contemporaneous as possible, with most of them taken from the year 2000 onwards. Thus, the SED is based on the following data sources.
The radio regime is covered with PTI data at 1.7 and 2.3 GHz, with a beam size of $\sim$ 0.1 arcsec (Sadler et al. 1995), MERLIN data at 5 GHz, with a beam size of $<$ 0.05 arcsec (Alberdi et al. 2007), and archival VLA data at 8.4 and 14 GHz, reanalysed in Orienti \& Prieto (2009), for which beam resolutions of $< 0.3$ arcsec and $< 0.14$ arcsec respectively are obtained.
In the IR, nuclear fluxes were extracted from VLT / VISIR diffraction-limited images at 11.9 and 18.7 $\mu$m collected in 2006 (Reunanen et al. 2009),
VLT / NACO adaptive optics images in the $J$-, $H$-, $K$- and $L$-bands and the narrow-band continuum filter at 4.05 $\mu$m, all collected from 2002 on (Table 12), and the HST / NICMOS narrow-band continuum image at 1.87 $\mu$m, collected in 2007.
In the optical and UV, the nuclear fluxes were extracted from the HST / ACS F330W (collected in 2002), F550M and F814W (both in 2006), and WFPC2 F218W (1999) and F547M (2000) images. The aperture size used in all cases, VLT-IR and HST-optical-UV, was 0.6 arcsec in diameter. This selection was a compromise between collecting most of the light in the nuclear PSF wings and avoiding the star-forming ring.
In the X-rays, nuclear fluxes at specific energies were extracted from observations with INTEGRAL in the 17 -- 60 keV band (Sazonov et al. 2007), XMM in the 2 -- 10 keV band (Shinozaki et al. 2006) and ROSAT in the 0.1 -- 2.4 keV band (Perez-Olea \& Colina 1996). In the case of XMM and ROSAT, the fluxes included in the SED were derived from the luminosities provided by the authors, and we thus assume they are intrinsic to the source, although this is not explicitly stated to be the case.
The ROSAT flux is nevertheless not purely nuclear, as it includes the contribution of the entire star-forming ring.\\
The SED is complemented with large aperture data in the IR, taken from IRAS (Sanders et al. 2003) and Spitzer (Weedman et al. 2005), and in the millimetre from SCUBA (Dunne et al. 2000) and the Caltech Submillimeter
Observatory (Yang \& Phillips 2007).
A VLT / NACO $J$--$H$ colour map of the central 4 kpc region is shown in Fig. 2. This was selected instead of the usual $J$--$K$ because of the better spatial resolution reached in the $H$-band. The map shows the nucleus and a ring of diffuse emission, with a radius of 500 pc, which just encloses the star-forming regions. Further out from the ring, the signal in the individual near-IR images drops dramatically, and
a reliable estimate of the intrinsic galaxy colours is not possible. Thus, an estimate of the relative extinction around the
nucleus is not provided in this case.
As in NGC 3783, the available VLTI / MIDI interferometric spectra do not show evidence for a silicate feature.
\section{Comparing with a Quasar: the SED of 3C 273 }
3C 273 is one of the most luminous quasars in the sky, reaching a luminosity of several
$10^{46}$ erg s$^{-1}$ in almost any energy band.
In gamma-rays, its luminosity reaches $10^{47}$ erg s$^{-1}$, which is suspected to be due to strong beaming at these energies.
3C 273 is a radio-loud source, while all the other AGN discussed in this work are radio quiet, with the possible exception of Cen A. It is also the most distant object considered in this work: the adopted scale is 1 arcsec $\sim$ 3.2 kpc (redshift taken from NED). The high power of 3C 273 makes its SED insensitive to the spatial resolution of the data used in almost any band, except in the low-frequency regime where some of the
jet components are comparable in strength to the core.
That, together with its excellent wavelength coverage, led us to use this SED as a reference for an unobscured AGN. 3C 273 is variable mainly at high energies, by factors of 3 to 4; from the optical to the radio, the reported variability is 20 to 40\% (Lichti et al. 1995; Turler et al. 1999), which might affect the SED in detail but has minimal impact on its overall shape.
The SED of 3C 273 is shown in Fig. 1. The data from 1 $\mu$m up to the gamma-rays are from Turler et al. (1999). These data are in fact an average of multiple observations at distinct epochs. Further in wavelength, the following data sets are used: for consistency with the rest of the work presented here, the data from 1 $\mu$m up to 5 $\mu$m are taken from our VLT / NACO adaptive optics images (collected in May 2003); the difference with the larger aperture data used in Turler et al. is less than 10\%. The 6 to 200 $\mu$m range is covered with \textit{ISO} (collected in 1996, Haas et al. 2003). In the millimetre and radio waves we used higher resolution data than those in Turler et al. to better isolate the core from the jet components. These are: VLBI observations at 3 mm from Lonsdale, Shepherd \& Roberts (1998), and at 147, 86, 15 and 5 GHz from Greve et al. (2002), Lobanov et al. (2000), Kellermann et al. (1998) and Shen et al. (1998) respectively. Additional VLBA measurements at 42 and 22 GHz (peak values) are taken from Marscher et al. (2002).
A VLT / NACO $J$--$K$ colour map is shown in Fig. 2. It shows only the central point-like source and various diffraction rings.
The SED of 3C 273 differs from all the others shown in this work mainly in the radio domain, presenting a flatter spectrum, as expected for a radio-loud source. In the optical to UV, the difference with type 2 nuclei is also important because of the dust absorption in the latter, but less so with type 1s.
For comparative purposes, an artificial extinction of $A_V = 15$ mag was applied to the SED of 3C 273 and the result is shown on top of its SED in Fig. 1. With extinction values in this range, which is on average what we measure from the near-IR colour maps in the galaxy sample, the UV to optical region in 3C 273 becomes fully absorbed, presenting a closer resemblance to that shown by the Seyfert type 2 nuclei.
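The reddening exercise above amounts to dimming each SED point by the extinction law evaluated at its wavelength. A minimal sketch follows; the $A_\lambda/A_V$ anchor values are assumed Galactic-like ratios at a few reference bands (the text does not specify the curve adopted), interpolated in log-wavelength.
\begin{verbatim}
# Sketch: apply an artificial foreground extinction (A_V = 15 mag)
# to a tabulated SED. The extinction-curve anchor points below are
# assumed, Galactic-like A_lam/A_V ratios, not taken from the paper.
import numpy as np

curve = np.array([[0.13, 3.1], [0.28, 1.9], [0.55, 1.00],
                  [1.25, 0.28], [2.2, 0.11], [3.5, 0.06]])

def redden(wavelength_um, flux, A_V=15.0):
    """Dim flux by 10**(-0.4 * A_V * A_lam/A_V(lambda))."""
    ratio = np.interp(np.log10(wavelength_um),
                      np.log10(curve[:, 0]), curve[:, 1])
    return flux * 10.0 ** (-0.4 * A_V * ratio)
\end{verbatim}
With $A_V = 15$ the UV and optical points are suppressed by tens of magnitudes, while the emission longward of a few $\mu$m is left nearly untouched, which is why the reddened 3C 273 resembles a type 2 SED.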
The comparison of 3C 273 with the type 1 nuclei is more illustrative. The average of the three genuine type 1 SEDs compiled in this work is shown together with that of 3C 273 in Fig. 3. The SEDs are normalised to the mean value of their respective power distributions; in this way they appear on a comparable scale around the optical region. In the optical to UV, all the SEDs show the characteristic blue-bump feature of type 1 sources and the usual inflexion point at about 1 $\mu$m. However, whereas the blue bump is stronger in 3C 273, indeed the dominant feature in the UV to IR region, it is weaker and softer in energy in the Seyfert 1s.
Conversely, in the IR the type 1 objects show a broad bump feature in the 10 -- 20 $\mu$m range, whereas
3C 273 shows a flatter distribution over the same spectral region, the general trend being a shallow decrease in power with decreasing frequency. \\
A weaker UV bump is an indication of more dust in the line of sight to these type 1 nuclei, but the lack of the IR bump in 3C 273, which is a key diagnostic of the existence of a central obscuring dust structure in any AGN, points to little dust in this nucleus. This absence may still be due to a strong non-thermal contribution (this is a flat-spectrum radio source) which would smear out any IR bump; however, this would make the dust contribution to the IR even lower. Some hot dust may still exist in 3C 273, as traced by the detection of silicate features in emission at 10 and 20 $\mu$m (Hao et al. 2005). Thus, the
overall evidence points to a dust-poor environment in this object as compared with those of lower luminosity AGN.
There are also first results with VLTI / MIDI and Keck interferometry on 3C 273 which indicate a slightly resolved nuclear structure at 10 and 2 $\mu$m respectively (Tristram et al. 2009; Pott 2009). However, it is more plausible that this structure is caused by the jet of 3C 273 rather than by dust.
At high energies, the comparison of the Seyferts' SED shape with that of 3C 273 is limited by the poorer spectral coverage of the former. Still, in the overlapping region, from the soft X-rays to $\sim$ 200 keV, the SEDs of all the Seyferts present a gentle rise in power with increasing frequency. This trend may point to the existence of a further emission bump at much higher energies, one that may escape detection with present facilities. A broad emission bump peaking in the MeV region is detected in both 3C 273 and Cen A, both of which have jets detected in the X-rays.
\begin{figure}[p]
\centering
\plotone{average_sy1.eps}
\caption{Average SED (in red) of the type 1 nuclei in this work: NGC 1566, NGC 3783 and NGC 7469.
3C 273 (in yellow) is shown for comparison. Prior to averaging, each SED was set to its rest frame and then normalised to the mean value of its $\nu F_\nu$ distribution. In the X-rays, the average is determined in the energy window common to the three objects, 1 to $\sim$ 60 keV. }
\label{Fig. 3}
\end{figure}
\section{Results: the new SED of nearby AGN }
\subsection{The SED shape}
The high spatial resolution SEDs (Fig. 1) are all characterised by two main features: an emission bump in the IR and an increasing trend in power at high energies.
The available high spatial resolution data span the UV, optical, and IR up to about 20 $\mu$m. Shortward of $\sim 1\,\mu$m, all the type 2 nuclei are undetected, or barely detected (e.g. NGC 7582). Longward of 20 $\mu$m, there is a wide data gap up to the radio frequencies due to the lack of data of comparable spatial resolution. Subject to these limitations, all type 2 nuclei are characterised by a sharp decay in power from 2 $\mu$m onwards to the optical wavelengths. Conversely,
type 1 objects also present a decay shortward of 2 $\mu$m, but this recovers at about 1 $\mu$m to give rise to the characteristic blue-bump feature seen in quasars and type 1 sources in general (e.g. Elvis et al. 1994).
The inflexion point at about
1 $\mu$m is also a well known feature in AGN, generally ascribed to a signature of dust emission at its limiting sublimation temperature (Sanders et al. 1989). The only LINER in the sample, NGC 1097, appears as an intermediate case between types 1 and 2: it is detected up to the UV wavelengths but there is no blue bump, its overall SED being more reminiscent of a type 2 nucleus.
Longward of 2 $\mu$m, all the SEDs tend to flatten, and there is some hint of a turnover towards lower power at about 20 $\mu$m in some objects. The large data gap from the far-IR to the millimetre wavelengths leaves an ambiguity in the exact shape and width of the IR bump. Because of the small physical region sampled in the near- to mid-IR, on scales of tens of parsec, an important contribution from cold dust at these radii, which would produce a secondary IR-millimetre bump, is not anticipated. Instead, a smooth decrease in $\nu F_\nu$ towards the radio frequencies is expected. This suggestion follows from the SED turnover beyond 20 $\mu$m shown by the objects for which large aperture data in the far-IR or at millimetre wavelengths could be included in their SEDs, namely NGC 3783, NGC 5506 and Cen A.
The complete galaxy sample is detected in the 0.2 -- 100 keV region, with the exception of NGC 1097, for which no reported observations beyond 10 keV were found. All show a general increase in power with frequency. At higher energies, Cen A is the only source detected up to the MeV region. Indeed, among low-power AGN, Cen A is so far the only source detected at gamma-rays (Schonfelder et al. 2000). 3C 273 is of course detected at these energies and, like Cen A, exhibits a rather broad bump peaking in the MeV region.
Fig. 4 shows the average of the type 2 SEDs, excluding Cen A because of its non-thermal nature (see Sect. 7), compared with that of the type 1s (the same average SED shown in Fig. 3). The same procedure used to produce the type 1 average is applied to the type 2s: each type 2 SED is normalised to the mean of its power distribution, and the resulting SEDs are then averaged. The resulting average templates for each type are plotted one on top of the other in the figure. It can be seen
that the most relevant feature in both SED types is the IR bump. This can be reconciled with emission from dust with an equivalent grey-body temperature of $\sim$ 300 K on average (Sect. 5.2). The location and shape of the bump longward of
2~$\mu$m is similar for both types; it is shortward of this wavelength where the difference arises: type 1s present a shallower 2 $\mu$m-to-optical spectrum which is then followed by the blue-bump emission. This dramatic difference suggests a clear sight line to hot dust in type 1s but an obscured one in type 2s. This is fully consistent with the torus model: the shallower spectrum reflects the contribution of much hotter dust from the inner region of the torus, which we are able to see directly in type 1s; in type 2s this innermost region is still fully absorbed by enshrouding colder dust.
The second important feature in both SED templates is the high energy spectrum. Within the common sampled energy band, 0.1 to $\sim$ 100 keV, this region appears rather similar in both AGN types, the general trend being a gentle increase in power with increasing frequency.
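The construction of the average templates in Figs. 3 and 4 can be summarised in a few lines of code. The sketch below assumes log-log interpolation onto a common rest-frame frequency grid and SED points sorted in ascending frequency; these implementation details are assumptions, as the text only specifies the rest-frame shift and the normalisation to the mean of each $\nu F_\nu$ distribution.
\begin{verbatim}
# Sketch: average several SEDs after shifting each to the rest
# frame and normalising it to the mean of its nu*F_nu distribution.
import numpy as np

def average_seds(seds, n_grid=60):
    """seds: list of (nu_obs [Hz], nuFnu, z); nu ascending."""
    rest = []
    for nu, nufnu, z in seds:
        nu_rest = nu * (1.0 + z)            # rest-frame frequencies
        norm = nufnu / np.mean(nufnu)       # normalise to the mean
        rest.append((np.log10(nu_rest), np.log10(norm)))
    lo = max(x.min() for x, _ in rest)      # common frequency window
    hi = min(x.max() for x, _ in rest)
    grid = np.linspace(lo, hi, n_grid)
    stack = [np.interp(grid, x, y) for x, y in rest]
    return 10.0 ** grid, 10.0 ** np.mean(stack, axis=0)
\end{verbatim}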
\begin{figure}[p]
\centering
\plotone{average_sy.eps}
\caption{Average SED template of type 1s (thick line: NGC 3783, NGC 1566 and NGC 7469) and of type 2s (dashed line: Circinus, NGC 1068, NGC 5506 and NGC 7582). Before averaging, each SED
was set to its rest frame and then normalised to the mean value of its $\nu F_\nu$ distribution. The average for type 2s in the X-rays is determined in the common energy window, 1 to 70 keV; that of type 1s in the 1 to 60 keV window. }
\label{Fig. 4}
\end{figure}
\subsection{Core luminosities}
Having compiled a more genuine SED for these AGN, a tighter estimate of
their true energy output can be derived by integrating the SED.
This is done over the two main features in the SED: the IR bump and the high-energy spectrum (Fig. 1). For the type 1 sources, a further integration was done over the blue bump. The estimated energies associated with each of these regions are listed in Table 1.
The procedure used is as follows.
The integration over the IR bump extends from the inflection point at about 1 $\mu$m in type 1s, and from the optical upper limit in type 2s, up to the radio frequencies. A direct numerical integration in the $F_\nu$ vs $\nu$ plane was performed. For those sources whose nuclear flux is independent of the aperture size, namely NGC 5506, NGC 3783 and 3C 273, the integration includes the large aperture IR data as well. For Cen A, the millimetre data were also included. For all other sources, a linear interpolation between 20 $\mu$m and the first radio frequency point was applied. Effectively, the proposed integration is equivalent to integrating over the 1 -- 20 $\mu$m range only, as this region dominates the total energy output by orders of magnitude in all the objects.
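In code, the integration step reduces to a trapezoidal sum over the tabulated fluxes; the sketch below is a minimal version of it (the trapezoidal rule implicitly provides the linear interpolation across the 20 $\mu$m-to-radio gap mentioned above).
\begin{verbatim}
# Sketch: direct numerical integration on the F_nu vs nu plane.
import numpy as np

def ir_luminosity(nu, fnu, distance_cm):
    """nu [Hz] ascending, fnu [erg s^-1 cm^-2 Hz^-1] -> erg s^-1."""
    flux = np.trapz(fnu, nu)                # erg s^-1 cm^-2
    return 4.0 * np.pi * distance_cm**2 * flux
\end{verbatim}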
As a control, a modified black body (BB) spectrum, $B_{\nu}\times\nu^{1.6}$, was fitted to this spectral range. The derived effective temperature converges to the 200 -- 400 K range in most cases, and the inferred IR luminosities are of the same order of magnitude as those derived by the integration procedure over the SED. Focussing on the objects whose nuclear fluxes are independent of the aperture size, NGC 3783 and NGC 5506, a further test was done by comparing their integrated luminosity in the 1 -- 20 $\mu$m range with that in the 1 -- 100 $\mu$m range, the latter including all the available data. The 1 -- 20 $\mu$m luminosity is found to be about a factor of 1.5 smaller. Thus, the IR luminosity is probably underestimated by at least this factor in all other sources.\\
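The control fit can be reproduced along the following lines; the free parameters (a logarithmic amplitude and the temperature) and the initial guesses are assumptions of this sketch, as the text fixes only the emissivity index of 1.6.
\begin{verbatim}
# Sketch: fit a modified black body, B_nu(T) * nu**1.6, to the
# 1-20 micron nuclear fluxes (cgs units throughout).
import numpy as np
from scipy.optimize import curve_fit

H, K, C = 6.626e-27, 1.381e-16, 2.998e10    # Planck, Boltzmann, c

def modified_bb(nu, log_amp, T):
    b_nu = (2 * H * nu**3 / C**2) / np.expm1(H * nu / (K * T))
    return 10.0 ** log_amp * b_nu * nu**1.6

def fit_temperature(nu, fnu):
    """Returns the effective temperature [K]; p0 may need tuning."""
    popt, _ = curve_fit(modified_bb, nu, fnu, p0=[-40.0, 300.0])
    return popt[1]
\end{verbatim}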
At high energies, due to the energy-band overlap between different satellites, a direct integration over the SED was avoided. Instead, we used X-ray luminosities reported in the literature, selecting those derived from the hardest possible energy band, usually the 20 -- 100 keV range.
X-ray luminosities above 20 keV are less subject to absorption and are thus expected to be a fair indication of the nuclear budget (with the possible exception of NGC 1068, due to the large X-ray column density, $N(\rm{H}) > 10^{25}$ cm$^{-2}$, inferred in this case; Matt et al. 1997). \\
For the Seyfert type 1 nuclei, the integration over the blue bump spans the 0.1 to 1 $\mu$m region.
The same integration procedures described above were applied to 3C 273 as well.
On the basis of these energy budgets, an estimate of the bolometric luminosity of these AGNs is taken as the sum of the IR and X-ray (above 20 keV) luminosities (Table 1). In doing so, it is implicitly assumed that the IR emission is a genuine measurement of the AGN energy output and accounts for most of the optical to UV to X-ray luminosity
generated in the accretion disk.
\section{The extinction gradient towards the centre}
Table 2 gives a comparison of nuclear extinction values inferred from different methods.
A first order estimate is derived from the near-IR colour maps presented in this work. With these maps, the colours in the surroundings of the nucleus are compared with those at larger galactocentric radii, usually at several hundred parsecs. Colour excesses are found to increase progressively towards the central region, an indication of an increasing dust density towards the nucleus. In some cases, however, the distribution of colours is rather flat, e.g. in NGC 1097 and NGC 3783. These central extinctions, in some cases inferred at distances of 30 -- 50 pc from the centre, are systematically lower, by factors of 2 -- 3, than those inferred from the silicate feature at 9.6 $\mu$m (Table 2).
Considering the very high spatial resolution of some of the silicate feature measurements, the difference is an indication that the distribution of absorbers in the nuclear region is not smooth but strongly peaked at the very centre.
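The colour-excess estimate itself is a one-line conversion once an extinction law is chosen. A minimal sketch follows, assuming Galactic $A_J/A_V$ and $A_K/A_V$ ratios (the text does not state which law was used):
\begin{verbatim}
# Sketch: relative A_V from a J-K colour excess between the nuclear
# surroundings and an outer reference region. The band-to-visual
# extinction ratios below are assumed Galactic values.
AJ_AV, AK_AV = 0.28, 0.11

def av_from_jk(jk_nucleus, jk_reference):
    return (jk_nucleus - jk_reference) / (AJ_AV - AK_AV)

# e.g. NGC 1566: J-K ~ 1.5 near the nucleus vs ~ 0.3 at ~300 pc
print(av_from_jk(1.5, 0.3))   # ~ 7 mag, consistent with Table 2
\end{verbatim}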
A further comparison with the optical extinction inferred from the X-ray column density, assuming the standard Galactic dust-to-gas ratio and extinction curve (Bohlin et al. 1978), is also given in Table 2. It is known that the extinctions derived this way are always very large. For the objects in this work, they are several factors, or even orders of magnitude in the Compton-thick cases, higher than those inferred from the silicate feature. Only in NGC 5506 do the values derived from the two methods agree. Such large discrepancies must be due to the inapplicability of the Galactic dust-to-gas ratio conversion in AGN environments (see Gaskell et al. 2004).
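For reference, the gas-to-dust conversion used in this comparison is sketched below, with the Bohlin et al. (1978) ratio and an assumed $R_V = 3.1$:
\begin{verbatim}
# Sketch: A_V from an X-ray column density via the Galactic
# dust-to-gas ratio, N_H / E(B-V) ~ 5.8e21 cm^-2 mag^-1
# (Bohlin et al. 1978), with R_V = 3.1 assumed.
def av_from_nh(nh_cm2, nh_over_ebv=5.8e21, r_v=3.1):
    return r_v * nh_cm2 / nh_over_ebv

# A Compton-thick column, N_H ~ 1e25 cm^-2, maps onto A_V of
# several thousand magnitudes, orders of magnitude above the
# silicate-feature estimates, as noted above.
print(av_from_nh(1e25))   # ~ 5.3e3 mag
\end{verbatim}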
\section{Discussion}
{\it The IR SED: large aperture vs high spatial resolution} \\
We have compiled spectral energy distributions at subarcsec scales for a sample of nearby, well known AGN. These SEDs reveal major differences in the IR region when compared with those based on IR satellite data. First,
the trend defined by the large aperture IR data (crosses in Fig. \ref{Fig. 1}) is different from that defined by
the high spatial resolution data (filled points in Fig. \ref{Fig. 1}). Second, the true AGN fluxes can be up to an order of magnitude lower than those inferred from
large aperture data; hence, bolometric luminosities based on IR satellite data can be overestimated by orders of magnitude.
The number of objects studied at these high spatial resolutions is small. They are, however, among the nearest AGN. Subject to this limitation, if we take the new SEDs as a reference for the Seyfert class, the above results have two further implications:
\noindent
1) the AGN contribution to the mid-to-far IR emission measured by e.g. \textit{IRAS}, \textit{ISO} and \textit{Spitzer} is minor; the bulk of the emission measured by these satellites comes from the host galaxy. This result fully confirms previous work by Ward et al. (1987), who pointed out the relevance of the host galaxy light in the IRAS fluxes even in type 1 AGNs. On this basis, the radio -- far-IR correlation followed by AGN and normal star-forming galaxies alike (Sopp \& Alexander 1991; Roy et al. 1998; Polletta et al. 2007) is understandable: the common trend indicates that the far-IR emission is unrelated to the AGN. The shape of the high spatial resolution SEDs of the sample AGN shows that the large aperture mid-IR is also unrelated in most cases.
\noindent
2) the selection or discrimination of AGN populations on the basis of mid-to-far IR colours may not be applicable on a general basis. For example, the criterion of de Grijp et al. (1987), based on the \textit{IRAS}
flux density ratio at 60 and 25 $\mu$m, $f_{60} / f_{25} > 0.2$, was proposed to find predominantly AGN. Recently, Sanders et al. (2007) proposed the use of the colour of the Spitzer 3.6 to 24 $\mu$m spectrum to help separate type 1 from type 2 AGN.
Considering the shape and luminosity of the high spatial resolution SEDs, it is somewhat surprising that large aperture mid-to-far IR colours may keep track of the existence of a central AGN in most galaxies. \\
However, the above criteria should apply to cases where the AGN dominates the IR galaxy light in a similar way as in quasars. Two of the high spatial resolution SEDs studied reflect this situation: NGC 5506, a type 2, and NGC 3783, a type 1 nucleus. In both, large aperture data remain representative of the nuclear emission, as is also the case for the quasar 3C 273 used here for comparative purposes. The SEDs of these objects show a smooth connection between the large aperture far- to mid-IR data and the high spatial resolution mid- to near-IR data (Fig. 1). In NGC 5506, the nucleus's high contrast in the IR is due to the low surface brightness and edge-on morphology of the host galaxy.
This is not the case, however, for NGC 3783, which resides in a bright face-on spiral. 3C 273 resides in an elliptical galaxy, but as an extremely powerful quasar it dominates the light of its host at any wavelength. The relevance of the AGN in these objects is more obvious in the $J$--$K$ colour maps shown in Fig. 2. These are dominated by the central source, and almost no trace of the host galaxy at larger radii from the centre is detected, indicating a flat and smooth galaxy light profile. This morphology is rather different from what is seen in the other AGN in the sample, whose colour maps highlight the central dust distribution. \\
NGC 5506 and NGC 3783 happen to be among the most powerful nuclei in the sample, besides 3C 273, with IR luminosities above $10^{44}$ erg s$^{-1}$. The nearest in power is NGC 1068 with
$L_{IR} \sim 8.6\times 10^{43}$ erg s$^{-1}$ (Table 1), a mere factor of 2 lower.
Thus, there is a tentative indication that AGN with luminosities above $10^{44}$ erg s$^{-1}$ may easily be traceable in the IR regardless of the aperture used. One would expect quasars to naturally comply with this criterion, but the generalisation is not that obvious. A third AGN in the sample, NGC 7469, has an IR core luminosity of $2\times 10^{44}$ erg s$^{-1}$, but its host galaxy dominates the total IR budget by a factor of seven (Table 1).\\
{\it Total energy budget: IR vs X-ray luminosities} \\
The derived nuclear luminosities in the IR should provide a tighter quantification of the dust-reprocessed optical to UV and X-ray photons produced by the AGN, i.e. the accretion luminosity and the X-ray corona. On that premise, an estimate of the total energy budget of these objects is evaluated as the sum of their IR and hard X-ray emissions ($L_{IR} + L_{> 20 keV}$ in Table 1). We compute the same number for type 1s as well, on the assumption that the blue-bump emission visible in these objects is accounted for in the IR reprocessed emission and thus should not be added to the total budget (following similar reasoning as in e.g. Vasudevan \& Fabian 2007). We account for the X-ray contribution at energies above 20 keV, as photons beyond this energy should provide a genuine representation of the nucleus plus jet emission.
Comparing the IR and X-ray luminosities in Table 1, the IR luminosity is found to dominate the total budget, by more than $\sim 70\%$, in seven out of the ten cases studied. Indeed, in most of these cases, the X-ray emission is a few percent of the total. The three exceptions are Cen A, a borderline case with an IR contribution of about 60\% of the total, and 3C 273 and NGC 1566, where the IR contribution reduces to less than 30\% of the total. Cen A and 3C 273 are the two sources in the sample with a strong jet, also seen in the X-rays.
As can be seen from Table 1, there is no dependence of the IR-to-X-ray ratio on the AGN luminosity, but the sample is small. This ratio is furthermore vulnerable to variability. As reported for each object in Sect. 3, variability in the X-rays is common in these objects, by a factor of 2 to 3 on average (up to a factor of 10 has been seen in Cen A), whereas variability in the IR is at most a factor of 2, so far only known for Cen A, NGC 1068 and NGC 3783. Thus, X-ray variability, which also occurs on faster time-scales, can modify the IR-to-X-ray ratios up and down by the same factors. Still, even accounting for an increase in the X-ray luminosity by these factors, the IR luminosity remains the dominant energy output in most cases. Just a factor of 2 decrease in the X-rays in the three objects with a reduced IR core, Cen A, 3C 273 and NGC 1566, would place their X-ray and IR luminosities at the same level.
Focusing on the Seyfert type 1 objects, all characterised by a blue-bump component in the SED, the luminosity associated with the observable part of this region is $\sim 15\%$ of the IR luminosity, whereas in 3C 273, integrating over the same spectral region, it is 90\% (Table 1). A fraction of the blue-bump energy is unobserved as it falls in the extreme-UV to soft X-ray data gap; still, the fact that there is almost a factor of six difference in these relative emissions between 3C 273 and the Seyfert type 1 nuclei is an indication that Seyfert AGN are seen through much more dust. This is in line with conclusions reached by Gaskell et al. (2004), who argue for the presence of additional reddening by dust in radio-quiet as compared with radio-loud AGNs. If this is the case, the IR bump luminosity may be one of the tightest measurements of the accretion luminosity in Seyfert galaxies. \\
{\it Centaurus A: a special case}\\
Among all the AGN analysed in this work, Centaurus A's SED stands out as a particular case.
The data points from the VLBA over the millimetre range to the high resolution IR measurements follow a rather continuous trend. This region can be fitted by a simple synchrotron model with spectral index $F_{\nu} \propto \nu^{-0.3}$, with the $\gamma$-ray emission still explained as inverse Compton scattering off the radio synchrotron electrons
(Prieto et al. 2007; see also Chiaberge et al. 2001). Such a flat synchrotron spectrum
has been suggested by Beckert \& Duschl (1997, and references therein) for low luminosity AGN, among which the Cen A nucleus can be counted. The available high spatial resolution IR observations indicate that most of the emission comes from a very compact source less than 1 pc in diameter (Meisenheimer et al. 2007). This, together with the apparent synchrotron nature of its SED, points to a rather torus-free nucleus. Cen A is one of the lowest power sources in the sample, with $L(IR)\sim10^{42}$ erg s$^{-1}$. On theoretical grounds, it has been argued that AGNs of this low power may be unable to support a torus structure and should thus show a bare nucleus at IR wavelengths (e.g. Hoenig \& Beckert 2007; Elitzur \& Shlosman 2006).
In other respects, Cen A's SED is similar to those of the type 2 nuclei in this study, in the sense that its optical to UV emission is totally obscured, but this may be caused by the large scale dust lanes crossing in front of its nucleus. The future availability of millimetre data of high spatial resolution will help to confirm the nature of this SED. \\
\section{Conclusions}
Sub-arcsec resolution data spanning the UV, optical, IR and radio have been used to construct spectral energy distributions of the central, several tens of parsec, region of some of the nearest and brightest active galactic nuclei. Most of these objects are Seyfert galaxies. \\
These high spatial resolution SEDs differ largely from those derived from large aperture data, in particular in the IR: the shape of the SED is different, and the true AGN luminosity can be overestimated by orders of magnitude if based on IR satellite data. These differences appear to be critical for AGN luminosities below $10^{44}$ erg s$^{-1}$, in which case large aperture data sample the host galaxy light in full. Above that limit, we find cases among these nearby Seyfert galaxies where the AGN behaves like the most powerful quasars, dominating the host galaxy light regardless of the integration aperture size used. \\
The high spatial resolution SEDs of these nearby AGNs are all characterised by two major features in their power distribution: an IR bump with a maximum in the 2 -- 10 $\mu$m range, and an increasing trend in X-ray power with frequency in the 1 to $\sim$ 200 keV region, i.e. up to the hardest energies that could be sampled. These dominant features are common to Seyfert type 1 and 2 objects alike. \\
The major difference between type 1s and 2s in these SEDs arises shortward of 2 $\mu$m. Type 2s are characterised by a sharp fall-off shortward of this wavelength, with no optical counterpart to the IR nucleus detected shortward of 1 -- 0.8 $\mu$m.
Type 1s also show a drop shortward of 2 $\mu$m, but this is more gentle (the spectrum is flatter) and recovers at about 1 $\mu$m to give rise to the characteristic blue-bump feature seen in quasars.
The flattening of the spectrum shortward of 2 $\mu$m is also an expected feature of type 1 AGNs. Interpreting the IR bump as AGN emission reprocessed by the nuclear dust, in type 1s the hotter dust nearest to the centre can be seen directly, hence the flattening of their spectrum, whereas in type 2s this hot dust is still fully obscured. \\
Longward of 2 $\mu$m, all the AGN types show very similar SEDs: the bulk of the IR emission starts from this wavelength on, and the shape of the IR bump is very similar in all the AGNs. This is compatible with an equivalent black-body temperature for the bulk of the dust in the 200 -- 400 K range on average. Although the current shape of the IR bump is limited by the availability of high angular resolution data beyond 20 $\mu$m for most objects, due to the small region sampled in these SEDs, of just a few parsecs in some galaxies, a major contribution from colder dust that would modify the IR bump is not expected. \\
It can thus be concluded that on scales of a few tens of parsec from the central engine, the bulk of the IR emission in either AGN type can be reconciled with pure dust emission. It follows that further contributions from a non-thermal synchrotron component and/or thermal free-free emission linked to the cooling of ionised gas are insufficient to overcome that of dust at these physical scales.
The detailed modelling of NGC 1068's SED, one of the most complete we have compiled, in which these three contributions (synchrotron, free-free and a dust torus) are taken into account, illustrates this premise, that is, the dominance of dust emission in the IR, even at the parsec-scale resolution achieved for this object in the mid-IR with interferometry (Hoenig et al. 2008).
Only the two most extreme objects in this analysis, Cen A at the low luminosity end and 3C 273 at the highest,
present an SED that is dominated not by dust but by a synchrotron component. We tend to believe this is due to a much reduced dust content in these nuclei. \\
Over the nine orders of magnitude in frequency covered by these SEDs, the power stored in the IR bump is by far the most energetic fraction of the total energy budget measured in these objects. Evaluating this total budget as the sum of the IR and hard X-ray (above 20 keV) luminosities, the IR part accounts for more than 70\% of the total in seven out of the ten AGN studied. In the three exceptions, the IR fraction reduces to $\lesssim 30\%$ (3C 273 and NGC 1566) and $\lesssim 60\%$ (Cen A). Even accounting for variability in the X-rays, by a factor of 2 to 3 on average, the IR emission remains dominant over the X-ray emission in all cases, or at least as important in the last three.
Compared with this, the observed blue-bump luminosity in the type 1 nuclei represents less than 15\% of the IR emission. Putting it all together, the IR bump energy from these high spatial resolution SEDs may represent the tightest measurement of the accretion luminosity in these Seyfert AGN. \\
The average high spatial resolution SED of the type 2 and of the type 1 nuclei analysed in this work, and presented in Fig. 4, can be retrieved from http://www.iac.es/project/parsec/main/seyfert-SED-template.
\\
This work was initiated and largely completed during the stay of K. Tristram, N. Neumayer and A. Prieto at the Max-Planck Institut fuer Astronomie in Heidelberg.
\section{Introduction}
Gamma ray bursts (GRBs) are transient events able to outshine
the $\gamma$-ray sky for a few seconds to a few minutes. The discovery of their
optical \citep{Paradijs97} and X-ray \citep{Costa97} long-lasting
counterparts represented a breakthrough for GRB science. Unfortunately, due
to technological limitations, the X-ray observations were able to track the afterglow
evolution starting hours after the trigger: only
after the launch of the \emph{Swift} satellite in 2004 \citep{Gehrels04}
was this gap between
the end of the prompt emission and several hours after the onset
of the explosion filled with X-ray observations. A canonical
picture was then established (see e.g., \citealt{Nousek06}), with
four different stages describing the overall structure of the
X-ray afterglows: an initial steep decay, a shallow-decay
phase, a normal decay and a jet-like decay stage. Erratic flares
are found to be superimposed mainly on the first and second stages of emission.
An interesting possibility is that the four light-curve phases instead belong
to only two different components of emission (see e.g., \citealt{Willingale07}):
the first, connected to the activity of the central engine giving rise
to the prompt emission, comprises the flares (\citealt{Chincarini07} and
references therein) and the steep-decay phase;
the second is instead related to the interaction of the outflow with the
external medium and manifests itself in the X-ray regime through the
shallow, normal and jet-like decay. Observations able
to further characterise the two components are therefore of particular
interest.
The smooth connection of the X-ray steep decay light-curve phase with the
prompt $\gamma$-ray emission strongly suggests a common physical origin
(\citealt{Tagliaferri05}; \citealt{Obrien06}): the high latitude
emission (HLE) model (\citealt{Fenimore96}; \citealt{Kumar00}) predicts
that steep decay photons originate from the delay in the arrival time
of prompt emission photons due to the longer path length from larger angles
relative to our line of sight, giving rise to the
$\alpha=\beta+2$ relation (where $\alpha$ is the light-curve
decay index and $\beta$ is the spectral energy index). No spectral evolution
is expected in the simplest formulation of the HLE effect in the
case of a simple power-law prompt spectrum. Observations
say the opposite: significant variations of the photon index have been
found in the majority of GRBs during the steep decay phase
(see e.g., \citealt{Zhang07}); more than this, the absorbed simple
power-law (SPL) has proved to be a poor description of the spectral energy
distribution of the steep decay phase for GRBs with the best statistics\footnote
{The limited $0.3 -10 \,\rm{keV}$ spectral
coverage of the Swift X-Ray Telescope, XRT \citep{Burrows05}, and the degeneracy
between the variables of the spectral fit can in principle lead to the
identification of an SPL behaviour in intrinsically non-SPL spectra with
poor statistics.}.
A careful analysis of these events has shown their spectra to be best
fit by an evolving Band function \citep{Band93}, establishing the link
between steep decay and prompt emission photons also from the spectral
point of view (see e.g., GRB\,060614, \citealt{Mangano07}; GRB\,070616,
\citealt{Starling08}): caused by the shift of the
Band spectrum, a temporal steep decay phase and a spectral softening
appear simultaneously (see e.g. \citealt{Zhang09}, \citealt{Qin09}).
In particular, the peak energy of the $\nu F_{\nu}$ spectrum is found to
evolve to lower values, from the $\gamma$-ray to the soft X-ray
energy range. Both the low-energy (as observed for GRB\,070616) and the high-energy
portions of the spectrum are likely to soften with time, but
no observation has been reported confirming the behaviour of the high-energy index
during the prompt and steep decay phases.
The observed spectral evolution with time is an invaluable footprint
of the physical mechanisms at work: observations able
to constrain the behaviour of the spectral parameters with time are
therefore of primary importance.
By contrast, no spectral evolution
is observed in the X-ray during the
shallow decay phase (see e.g. \citealt{Liang07}) experienced by most GRBs between
$\sim 10^2$ s and $10^3 - 10^4$ s. An unexpected discovery of the
\emph{Swift} mission, the shallow decay is the first light-curve
phase linked to the second emission component. A variety of
theoretical explanations have been put forward. The proposed
models include: energy injection (\citealt{Panaitescu06}; \citealt{Rees98};
\citealt{Granot06}; \citealt{Zhang06});
reverse shock (see e.g., \citealt{Genet07});
time dependent micro-physical parameters (see e.g. \citealt{Granot06b};
\citealt{Ioka06});
off-axis emission \citep{Eichler06}; dust scattering
\citep{Shao07}.
The predictions of all these models can only be compared
to observations tracking the flat and decay phase of the
second emission component, since its rise is usually missed
in the X-ray regime, being hidden by the tail of the prompt emission.
GRB\,081028 is the first and only event for which \emph{Swift}
was able to capture the rise of the second emission component\footnote
{There are a handful of long GRBs detected by \emph{Swift}
with a possible X-ray rise of non-flaring origin. Among them:
GRB\,070328, \cite{Markwardt07}; GRB\,080229A, \cite{Cannizzo08};
GRB\,080307, \cite{Page09}
(see \citealt{Page09} and references therein).
However, in none of these cases has an X-ray
steep decay been observed.
A smooth rise in the X-rays has been observed in the short GRB\,050724.}:
the time properties of its rising phase can be constrained for the
first time while contemporaneous optical observations
allow us to track the evolution of a break energy of the spectrum
through the optical band. GRB\,081028 is also one of the lucky cases
showing a spectrally evolving prompt emission where the evolution
of the spectral parameters can be studied from $\gamma$-rays to
X-rays, from the trigger time to $\sim 1000$ s. A hard to soft spectral
evolution is clearly taking place beginning with the prompt emission
and extending to the steep decay phase, as already found for other
\emph{Swift} GRBs (GRB\,060614, \citealt{Mangano07}; GRB\,070616,
\citealt{Starling08}, are showcases
in this respect). Notably, for GRB\,081028 a softening of the slope of a Band
function \citep{Band93} above $E_{\rm p}$ is also observed.
The paper is organised as follows: \emph{Swift} and ground-based
observations are described in Sect. \ref{sec:observations}; data
reduction and preliminary analysis are reported in Sect.
\ref{Sec:datared}, while in Sect. \ref{sec:analysis} the results
of a detailed spectral and temporal multi-wavelength analysis
are outlined and discussed in Sect. \ref{sec:discussion}.
Conclusions are drawn in Sect. \ref{sec:conclusion}.
The phenomenology of the burst is presented in the observer frame unless
otherwise stated. The convention $F_{\nu}(\nu,t)\propto \nu^{-\beta}t^{-\alpha}$
is followed, where $\beta$ is the spectral energy index, related to the
spectral photon
index $\Gamma$ by $\Gamma=\beta+1$.
All the quoted uncertainties are given at 68\% confidence
level (c.l.): a warning is added if it is not the case.
The convention
$Q_{x}=Q/10^{x}$ has been adopted in cgs units unless otherwise stated.
Standard cosmological quantities have been adopted:
$H_{0}=70\,\rm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{\Lambda }=0.7$,
$\Omega_{\rm{M}}=0.3$.\\
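For reference, the adopted cosmology fixes the luminosity distance used to convert fluxes and fluences into luminosities and energies. A minimal sketch follows; the redshift value anticipates the measurement reported in Sect. 2, and the function shown is a generic illustration rather than the analysis pipeline used here.
\begin{verbatim}
# Sketch: luminosity distance for the adopted cosmology and the
# isotropic-equivalent energy from a fluence (placeholder input).
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
z = 3.038                                    # Sect. 2
d_l = cosmo.luminosity_distance(z).to(u.cm)  # ~ 8e28 cm

def e_iso(fluence_cgs):
    """E_iso [erg] from a fluence [erg cm^-2]."""
    return 4 * np.pi * d_l.value**2 * fluence_cgs / (1 + z)
\end{verbatim}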
\section{Observations}
\label{sec:observations}
GRB\,081028 triggered the \emph{Swift} Burst Alert Telescope (BAT; \citealt{Barthelmy05})
on 2008-10-28 at 00:25:00 UT \citep{Guidorzi08}. The spacecraft immediately slewed to the
burst allowing the X-ray Telescope (XRT; \citealt{Burrows05}) to collect
photons starting at $T+191\,\rm{s}$ after the trigger: a bright and fading X-ray
afterglow was discovered. The UV/Optical Telescope (UVOT, \citealt{Roming05})
began observing at $T+210\,\rm{s}$. In the first orbit of observations, no
afterglow candidate was detected in any of the UVOT filters in either the individual
or co-added exposures. A careful re-analysis of the acquired data
revealed the presence of a source with a White band magnitude of $20.9$ at
$\sim T+270\,\rm{s}$ (this paper). A refined position was quickly available
thanks to the XRT-UVOT alignment procedure and the match of UVOT field
sources to the USNO-B1 catalogue (see \citealt{Goad07} for details):
R.A.(J2000)=$08^{\rm{h}}07^{\rm{m}}34.76^{\rm{s}}$,
Dec.(J2000)=$+02^{\circ}18'29.8\arcsec$ with a 90\% error radius of 1.5 arcsec \citep{Evans08}.
Starting at $\sim T+9\,\rm{ks}$ the X-ray light-curve shows a remarkable re-brightening
\citep{Guidorzi08b}, see Fig. \ref{Fig:plottot_lc}: this was later detected in
ground-based near-infrared (NIR) and optical observations. Preliminary analysis results for this burst
were reported in \cite{Guidorzi08c}.
The T\'elescope \`a Action Rapide pour les Objets Transitoires (TAROT;
\citealt{Klotz08}) began observing $566.4\,\rm{s}$ after the trigger
under poor weather conditions: no variable source was detected down to
$R\sim17.4$.
The optical afterglow was discovered by the Gamma-Ray Burst Optical and Near-Infrared
Detector (GROND; \citealt{Greiner08}). The observations started 20.9 ks after the
trigger: the afterglow was simultaneously detected in the $g'r'i'z'JHK$ bands
\citep{Clemens08} with the following preliminary magnitudes:
$g'= 19.9\pm{0.1}$; $r'=19.3\pm{0.1}$; $i'=19.2\pm{0.1}$; $z'=19.1\pm{0.1}$;
$J=19.0\pm{0.15}$; $H=18.7\pm{0.15}$; $K=19.0\pm{0.15}$, with a net exposure
of $264$ and $240\,\rm{s}$ for the $g'r'i'z'$ and the $JHK$ bands
respectively. Further GROND observations were reported by \cite{Clemens08b}
$113\,\rm{ks}$ after the trigger with $460\,\rm{s}$ of total exposures in
$g'r'i'z'$ and $480\,\rm{s}$ in $JHK$. Preliminary magnitudes are reported below:
$g'= 21.26\pm{0.05}$; $r'=20.49\pm{0.05}$; $i'=20.24\pm{0.05}$; $z'=19.99\pm{0.05}$;
$J=19.6\pm{0.1}$. The source showed a clear
fading with respect to the first epoch, confirming its nature as a GRB afterglow.
The Nordic Optical Telescope (NOT) imaged the field of GRB\,081028 $\sim6\,\rm{hr}$
after the trigger and independently confirmed the optical afterglow with a
magnitude $R\sim19.2$ \citep{Olofsson08}. Because of the very poor
sky conditions only 519 frames out of 9000 could be used, with a total exposure
of 51.9 s. The average time for the observations is estimated to be 05:53:00 UT.
Image reduction was carried out by following standard procedures.
An UV/optical re-brightening was discovered by the UVOT starting
$T+10\,\rm{ks}$, simultaneous to the X-ray re-brightening. The afterglow
was detected in the $v$, $b$ and $u$-band filters \citep{Shady08}.
The UVOT photometric data-set of GRB\,081028 is reported in Tab.
\ref{Tab:UVOTdata}. We refer to \cite{Poole08} for a detailed description
of the UVOT photometric system.
The rising optical afterglow was independently confirmed by the Crimean
Astrophysical Observatory (CrAO) and by the Peters Automated Infrared
Imaging Telescope (PAIRITEL; \citealt{Bloom06}). CrAO observations were carried out starting at
$\sim T+1\,\rm{ks}$ and revealed a sharply rising optical afterglow peaking after
$T+9.4\,\rm{ks}$: $R=21.62\pm0.07$ at $t=T+1.8\,\rm{ks}$; $I=21.32\pm0.09$ at
$t=T+3.6\,\rm{ks}$; $I=21.43\pm0.09$ at $t=T+5.5\,\rm{ks}$; $I=21.20\pm0.08$ at
$t=T+7.5\,\rm{ks}$; $I=20.66\pm0.05$ at $t=T+9.4\,\rm{ks}$ \citep{Rumyantsev08}.
PAIRITEL observations were carried out $40\,\rm{ks}$ after the trigger: the
afterglow was simultaneously detected in the $J$, $H$, and $K_{\rm{s}}$ filters with
a preliminary photometry $J=17.7\pm0.1$, $H =17.0\pm0.1$ and $K_{\rm{s}} = 16.1\pm 0.1$
\citep{Miller08}. A total of 472 individual 7.8 s exposures were obtained under bad conditions
(seeing $\ga$ 3$''$) for a total exposure time of $\sim$3682 s.
The data were reduced and analysed using the standard PAIRITEL pipeline
\citep{Bloom06}. Photometry calibration was done against the 2MASS system.
The resulting fluxes and magnitudes are consistent with the values
reported by \cite{Miller08}; however, the photometry presented here
supersedes those preliminary findings.
The ground-based photometric data-set of GRB\,081028 is reported in Tab.
\ref{Tab:OpticalData} while the photometric optical observations of GRB\,081028 are
portrayed in Fig. \ref{Fig:plottot_lc}.
A spectrum of the GRB\,081028 afterglow was taken with the Magellan Echellette Spectrograph (MagE)
on the Magellan/Clay 6.5-m telescope at $\sim T+27\,\rm{ks}$ for a total
integration time of $1.8\,\rm{ks}$. The identification of absorption features
including S\,II, N\,V, Si\,IV, C\,IV and Fe\,II allowed the measurement of the redshift $z=3.038$
together with the discovery of several intervening absorbers \citep{Berger08}.
According to \cite{Schlegel98} the Galactic reddening along the line of sight
of GRB\,081028 is $E(B-V)=0.03$.
\section{Swift Data Reduction and preliminary analysis}
\label{Sec:datared}
\label{SubSec:SwiftXRTdata}
\begin{figure}
\vskip -0.0 true cm
\centering
\includegraphics[scale=0.65]{BATphotonindex.eps}
\caption{Top panel: BAT 15-150 keV mask weighted light-curve (binning
time of 4.096 s). Solid blue line: $15-150\,\rm{keV}$ light-curve
best fit using Norris et al. (2005) profiles. The typical $1~\sigma$ error size
is also shown. Bottom panel: best fit photon index $\Gamma_{\gamma}$
as a function of time (errors are provided at the 90\% c.l.).}
\label{Fig:BATplindex}
\end{figure}
The BAT data have been processed using standard Swift-BAT analysis tools
within \textsc{heasoft} (v.6.6.1). The ground-refined coordinates
provided by \cite{Barthelmy08} have been adopted in the following analysis.
Standard filtering and screening criteria have been applied.
The mask-weighted, background-subtracted
$15-150\,\rm{keV}$ light-curve is shown in Fig. \ref{Fig:BATplindex}, top panel.
The mask-weighting procedure is also applied to produce weighted,
background subtracted counts spectra.
Before fitting the spectra, we group the
energy channels requiring a 3-$\sigma$ threshold on each group;
the threshold has been lowered to 2-$\sigma$ for spectra with
poor statistics. The spectra are fit within \textsc{Xspec} v.12.5
with a simple power-law with pegged normalisation (\textsc{pegpwrlw}).
The best fit photon indices resulting from this procedure are shown
in Fig. \ref{Fig:BATplindex}, bottom panel.
\begin{figure*}
\centering
\includegraphics[scale=0.7]{plottot.eps}
\caption{Complete data set for GRB\,081028
starting 200 s after the trigger including X-ray (XRT, flux density estimated at 1 keV),
UV/visible/NIR (UVOT, GROND, PAIRITEL, CrAO, NOT) observations.
The arrows indicate 3-$\sigma$ upper limits of UVOT observations.
The shaded regions indicate the time intervals of
extraction of the SEDs.}
\label{Fig:plottot_lc}
\end{figure*}
XRT data have also been processed with \textsc{heasoft}
(v. 6.6.1) and corresponding
calibration files: standard filtering and screening criteria have been applied.
The first orbit data were acquired entirely in Windowed Timing (WT) mode, reaching a maximum count rate of $\sim140
\,\rm{counts\,s^{-1}}$. We apply standard pile-up corrections following
the prescriptions of \cite{Romano06} when necessary. Starting from $\sim
10\,\rm{ks}$ \emph{Swift}-XRT switched to Photon Counting (PC) mode to follow the fading
of the source: events are then extracted using different region shapes
and sizes in order to maximize the signal-to-noise (SN) ratio. The background
is estimated from a source free portion of the sky. The resulting X-ray light-curve
is shown in Fig. \ref{Fig:plottot_lc}: the displayed data binning ensures
a minimum SN of 4 (10) for PC (WT) data. In this way the strong
variability of the WT data can be fully appreciated without losing
information on the late time behaviour. We perform automatic time resolved spectral
analysis, accumulating signal
over time intervals defined to contain a minimum of $\sim2000$ photons each.
The spectral channels have been grouped to provide a minimum of 20 counts per bin.
The Galactic column density in the direction of the burst is estimated to be
$3.96\times10^{20}\,\rm{cm^{-2}}$ (weighted average value from
the \citealt{Kalberla05} map). Spectral fitting is done within \textsc{Xspec}
(v.12.5) using a photo-electrically absorbed simple power law (SPL) model.
The Galactic
absorption component is frozen at the Galactic value together with the redshift,
while we leave the intrinsic column density free to vary during the first run of
the program. A count-to-flux conversion factor is worked out from the best
fit model for each time interval for which we are able to extract a spectrum.
This value is considered reliable if the respective $\chi^2/\rm{dof}$ (chi-square over
degrees of freedom) implies a P-value (probability of obtaining a result at least as
extreme as the one that is actually observed) higher than 5\%.
The discrete set of reliable count-to-flux conversion factors is then used to
produce a continuous count-to-flux conversion factor through interpolation.
This procedure produces flux and luminosity light-curves where the possible
spectral evolution of the source is properly taken into account
(Fig. \ref{Fig:plottot_lc}).
In the case of GRB\,081028 this is particularly important:
the simple power law photon index evolves from $\Gamma\sim1.2$ to $\Gamma\sim3$
during the steep decay phase (Fig. \ref{Fig:PhotonIndex_nhtot}), changing the
count-to-flux conversion factor by a multiplicative factor of $\sim1.7$.
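As an illustration of the interpolation step described above, a minimal
sketch (with invented numbers and variable names; log-log interpolation
is one possible choice, not necessarily the one adopted by the pipeline)
reads:
\begin{verbatim}
import numpy as np

# Count-to-flux conversion factors from the time resolved spectra
# (toy values) interpolated onto the light-curve time grid.
t_spec = np.array([250., 400., 700., 2e4, 1e5])        # s
cf_spec = np.array([4.0, 4.8, 6.1, 4.5, 4.4]) * 1e-11  # erg cm^-2 ct^-1

t_lc = np.logspace(np.log10(200.), 5, 50)              # s
rate = 100.0 * (t_lc / 200.0) ** -1.5                  # counts s^-1 (toy)

log_cf = np.interp(np.log10(t_lc), np.log10(t_spec),
                   np.log10(cf_spec))
flux = rate * 10.0 ** log_cf                           # erg cm^-2 s^-1
\end{verbatim}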
As a second run, we remove one degree of freedom from the spectral fitting
procedure, exploiting the absence of spectral evolution in the X-ray regime
during the re-brightening
(see Sect. \ref{SubSec:specXRT}). This allows us to obtain a reliable
estimate of the intrinsic neutral Hydrogen column density $N_{\rm{H,z}}$ of
GRB\,081028: the PC spectrum accumulated over the time interval
$10-652\,\rm{ks}$ can be adequately fit by an absorbed SPL model with best fit
photon index $\Gamma=2.09\pm0.07$ and
$N_{\rm{H,z}}=(0.52\pm0.25)\times10^{22}\,\rm{cm^{-2}}$ ($90\%$ c.l. uncertainties are provided).
The flux-luminosity calibration procedure is then re-run freezing the intrinsic absorption
component to this value.
The UVOT photometry was performed using standard tools
\citep{Poole08} and is detailed in Tab. \ref{Tab:UVOTdata}.
\section{Analysis and results}
\label{sec:analysis}
\subsection{Temporal analysis of BAT (15-150 keV) data}
\label{SubSec:taBAT}
\begin{table}\footnotesize
\begin{center}
\begin{tabular}{l|cc}
\hline
& Pulse 1& Pulse 2\\
\hline
$t_{\rm{peak}}$ (s)& $72.3\pm3.5$ & $202.7\pm3.3$\\
$t_{\rm{s}}$ (s)&$5.4\pm17.5$ &$125.6\pm18.1$\\
$t_{\rm{rise}}$ (s)& $32.6\pm3.7$&$36.4\pm4.1$\\
$t_{\rm{decay}}$ (s)& $63.4\pm8.1$&$70.0\pm5.2$\\
$w$ (s)& $96.0\pm7.9$&$105.4\pm6.2$\\
$k$& $0.32\pm0.09$&$0.31\pm0.07$\\
$A$ ($\rm{counts\,s^{-1}\,det^{-1}}$)& $(3.6\pm0.2)\times10^{-2}$ &$(3.5\pm0.2)\times10^{-2}$\\
Fluence ($\rm{erg\,cm^{-2}}$) & $(1.81\pm0.14)\times10^{-6}$ & $(1.83\pm0.11)\times10^{-6}$\\
\hline
$\chi^{2}/\rm{dof} $& \multicolumn{2}{c}{171/114}\\
\hline
\end{tabular}
\caption{Best fit parameters and related quantities resulting
from the modelling of the prompt 15-150 keV emission with two
Norris et al. (2005) profiles. From top to bottom: peak time,
start time, $1/e$ rise time, $1/e$ decay time, $1/e$ pulse
width, pulse asymmetry, peak count-rate, fluence and statistical information.
The $\chi^2$ value mainly reflects the partial failure of the
fitting function to adequately model the peaks of the pulses
(see Norris et al. 2005 for details).}
\label{Tab:BATta}
\end{center}
\end{table}
The mask-weighted light-curve consists of two main pulses peaking at
$T+70\,\rm{s}$ and $\sim T+200\,\rm{s}$ followed by a long lasting
tail out to $\sim T+400\,\rm{s}$.
In the time interval $T-100\,\rm{s}$ to $T+400\,\rm{s}$, the light-curve
can be fit by a combination of two \cite{Norris05} profiles
(Fig. \ref{Fig:BATplindex}, top panel), each profile being the product
of two exponentials, one with argument proportional to the inverse of
time (rising) and one with argument proportional to time (decaying). The best fit parameters and
related quantities are reported in Table \ref{Tab:BATta}: the parameters
are defined following \cite{Norris05}; we account for the entire
covariance matrix during the error propagation procedure. The GRB
prompt signal has a $T_{90}$ duration of $261.0\pm28.7$ s and a $T_{50}= 128.2 \pm 7.7 $ s.
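For reference, each \cite{Norris05} pulse profile has the analytic form
\begin{equation}
I(t) = A\,\lambda\,\exp\Big[-\frac{\tau_1}{t-t_{\rm{s}}}-\frac{t-t_{\rm{s}}}{\tau_2}\Big], \qquad t>t_{\rm{s}},
\end{equation}
with $\lambda=e^{2\mu}$ and $\mu=(\tau_1/\tau_2)^{1/2}$; following their
definitions, the derived quantities of Table \ref{Tab:BATta} then read
$t_{\rm{peak}}=t_{\rm{s}}+(\tau_1\tau_2)^{1/2}$, $w=\tau_2(1+4\mu)^{1/2}$
and $k=\tau_2/w$.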
The temporal variability of this burst has been characterised in two
different ways. First, following \cite{Rizzuto07} we compute
a variability measure
$\rm{Var}(15-150\,\rm{keV})=(5.0\pm0.14)\times 10^{-2}$. Second,
we adopt the power spectrum analysis in the time domain
(\citealt{Li01}; \citealt{Li02}): unlike the
Fourier spectrum, this is suitable to study the rms variations at
different time-scales. See \cite{Margutti08} and Margutti et al. in
prep. for details about the application of this technique to the GRB
prompt emission. In particular, we define the fractional power density
(fpd) as the ratio between the temporal power of the source signal
and the mean count rate
squared. This quantity has been shown to peak at the
characteristic variability time-scales of the signal. We assess the
significance of each fpd peak via Monte Carlo simulations.
The fpd of GRB\,081028 shows a clear peak around 70 s (time scale related
to the width of the two \citealt{Norris05} profiles). Below 70 s the
fpd shows a first peak at $\Delta t\sim2\,\rm{s}$ and then a second peak at
$\Delta t\sim6\,\rm{s}$, both at 1-$\sigma$ c.l. The signal shows power
in excess of the noise at 2-$\sigma$ c.l. significance for time scales
$\Delta t\geq 32\,\rm{s}$.
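For illustration only, a schematic of a noise-subtracted fractional power
at a given time-scale is sketched below; this is not the estimator
actually used, which follows \citealt{Li01} and \citealt{Li02}, and the
function name and normalisation are ours:
\begin{verbatim}
import numpy as np

def fractional_power(counts, dt_bin, dt_scale):
    # Rebin the counts to the time-scale of interest, then compare
    # the sample variance with the Poisson expectation (= mean for
    # counting noise) and normalise by the squared mean.
    k = max(1, int(round(dt_scale / dt_bin)))
    n = (len(counts) // k) * k
    c = counts[:n].reshape(-1, k).sum(axis=1)
    excess = np.var(c, ddof=1) - np.mean(c)
    return excess / np.mean(c) ** 2
\end{verbatim}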
\subsection{Spectral analysis of BAT (15-150 keV) data}
\label{SubSec:specBAT}
\begin{table*}\footnotesize
\begin{center}
\begin{tabular}{ccccccccc}
\hline
Interval & Model &$t_{\rm{start}}$ & $t_{\rm{stop}}$& $\Gamma,\alpha$& $E_{\rm{p}}$& Fluence& $\chi^{2}/\rm{dof}$ & P-value\\
& & (s) & (s) & & (keV) & ($\rm{erg\,cm^{-2}}$) & &\\
\hline
$T_{90}$ & Pl &52.9 & 317.2 &$1.82\pm0.09$ & -- &$(3.3\pm0.20)\times 10^{-6}$& $31.8/31$&$43\%$\\
& Cutpl &52.9 & 317.2 &$1.3\pm0.4$ & $65^{+42}_{-11}$& $(3.15\pm0.20)\times 10^{-6}$& $25.8/30$& $69\%$\\
\hline
Total & Pl & 0.0 & 400.0 & $1.89\pm0.09$& -- & $(3.7\pm0.20)\times 10^{-6}$&$37.4/32$&$23\%$\\
& Cutpl & 0.0 & 400.0 & $1.3\pm0.4$&$55^{+20}_{-9}$&$(3.45\pm0.19)\times 10^{-6}$&$30.1/31$&$51\%$\\
\hline
Pulse 1 & Pl & 10.0 & 150.0 &$1.91\pm0.13$&-- &$(1.60\pm0.12)\times 10^{-6}$&$18.0/24$&$80\%$\\
& Cutpl & 10.0 & 150.0 &$1.1\pm0.6$&$49^{+18}_{-9}$&$(1.47\pm0.11)\times 10^{-6}$&$12.0/23$&$97\%$\\
\hline
Pulse 2 & Pl & 150.0 & 290.0 &$1.77\pm0.11$&-- &$(1.79\pm0.11)\times 10^{-6}$&$33.8/29$&$25\%$\\
& Cutpl & 150.0 & 290.0 &$1.22\pm0.45$&$69^{+87}_{-14}$&$(1.47\pm0.11)\times 10^{-6}$&$29.3/28$&$40\%$\\
\hline
\end{tabular}
\caption{Best fit parameters derived from the spectral modelling of 15-150 keV data
using a power law with pegged normalisation (Pl, \textsc{pegpwrlw} within \textsc{Xspec})
and a cut-off power-law model with the peak energy of the $\nu F_{\nu}$ spectrum as free parameter
(Cutpl). From left to right: name of the interval
of the extraction of the spectrum we refer to throughout the paper; spectral model
used; start and stop times of extraction of the spectrum; best fit photon index $\Gamma$
for a Pl model or cutoff power-law index for a Cutpl model; best fit peak energy of
the $\nu F_{\nu}$ spectrum; fluence; statistical information about the fit. }
\label{Tab:BATspec}
\end{center}
\end{table*}
We extract several spectra in different time intervals and then fit the data using
different models to better constrain the spectral evolution of GRB\,081028
in the 15-150 keV energy band.
The first spectrum is extracted during the $T_{90}$ duration of the
burst; a second spectrum is accumulated during the entire duration
of the 15-150 keV emission; finally, the signal between 10 s and 290 s
from trigger has been split into two parts, taking 150 s as dividing time,
to characterise the spectral properties of the two prompt emission pulses.
The resulting spectra are then fit using a simple power-law and a
cut-off power-law models within \textsc{Xspec}. The results
are reported in Table \ref{Tab:BATspec}. The measured simple power law
photon index around 2 suggests that BAT observed a portion of
an intrinsically Band spectrum \citep{Band93}. Consistent with this
scenario, the cut-off power law model always provides a better fit and
is able to constrain the peak energy value ($E_{\rm{p}}$, peak of the
$\nu F_{\nu}$ spectrum) within the BAT energy range.
The best fit parameters of the cut-off power-law model applied to the
total spectrum of Table \ref{Tab:BATspec} imply
$E_{\rm{iso,\gamma}}(1-10^4\,\rm{keV})=(1.1\pm0.1)\times 10^{53}\,\rm{erg}$.
The respective rest frame peak energy is
$E_{\rm{p,i}}=(1+z)E_{\rm{p}}=222^{+81}_{-36}\,\rm{keV}$, placing GRB\,081028
within the 2-$\sigma$ region of the Amati relation \citep{Amati06}.
The burst is characterised by an isotropic rest frame $10^2-10^3$ keV
luminosity $L_{\rm{iso}}=(2.85\pm0.25)\times10^{51}\,\rm{erg\,s^{-1}}$. This information
together with the variability measure $\rm{Var}(15-150\,\rm{keV})=(5.0\pm0.14)\times 10^{-2}$
makes GRB\,081028 perfectly consistent with the luminosity variability relation
(see \citealt{Reichart01}; \citealt{Guidorzi05}; \citealt{Rizzuto07}).
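As a rough consistency check on the energetics, the sketch below computes
the isotropic-equivalent energy directly from the observed fluence; the
extrapolation of the best fit Cutpl model to the $1-10^4\,\rm{keV}$ rest
frame band is not applied here, so the result underestimates the quoted
$E_{\rm{iso,\gamma}}$:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)      # cosmology adopted here
z = 3.038
d_L = cosmo.luminosity_distance(z).to(u.cm).value
S = 3.45e-6                  # erg cm^-2, 15-150 keV Cutpl fluence
E_iso = 4 * np.pi * d_L**2 * S / (1 + z)   # ~7e52 erg
\end{verbatim}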
\subsection{Temporal analysis of XRT (0.3-10 keV) data}
\label{SubSec:taXRT}
\begin{table*}\footnotesize
\begin{center}
\begin{tabular}{rrrrrrrrrrrr}
\hline
$n_2$& $c$& $n_1$& $a$ & $b$ & $d_1$& $t_{\rm{br}_1}$& $n_3$ & $e$& $d_2$& $t_{\rm{br}_2}$ & $\chi^2/\rm{dof}$\\
& & & & & & (ks) & & & & (ks) & \\
\hline
$10^{18.3\pm5.5}$& $-5.2\pm1.5$& $1.2\pm1.1$& $-4.5\pm3.3$& $2.1\pm0.1$& $2.4\pm2.0$& $15.5\pm6.3$& $-$& $-$& $-$& $-$& $164.8/145$\\
$10^{29.9\pm6.5}$& $-7.60\pm1.8$ & $0.31\pm0.02$ & $-1.8\pm0.3$&$1.3\pm0.1$&$0.1$& $19.5\pm0.7$&$0.06$&$2.3\pm0.1$& $0.05$& 62& $147.1/143$\\
\hline
\end{tabular}
\caption{Best-fit parameters of the XRT light-curve modelling starting from 3 ks
after the trigger. The first
(second) line refers to Eq. \ref{Eq:plBeuermannfree} (Eq. \ref{Eq:rebfin}).}
\label{Tab:XRTrebfit}
\end{center}
\end{table*}
\begin{figure}
\centering
\includegraphics[scale=0.58]{Components.eps}
\caption{0.3-10 keV X-ray afterglow split into different components.
Green dot-dashed line: steep decay; purple long dashed line:
pre-rebrightening component; light grey region: first re-brightening component;
dark grey region: second re-brightening component; red solid line:
best fit model. See Sect. \ref{subsubsec:drop} for details.}
\label{Fig:XRTcomponents}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.435]{SteepDecay.eps}
\caption{Upper panel: steep decay portion of GRB\,081028
X-ray afterglow. The XRT signal has been split into
3 energy bands so that the different temporal
behaviour can be fully appreciated. Lower panel:
$(0.3-1\, \rm{keV})/(1-10\, \rm{keV})$ hardness ratio evolution with
time. The signal clearly softens with time.
In both panels, the vertical black
dashed lines mark the orbital data gap.}
\label{Fig:xrtsteepdecaybands}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.526]{XRTfitreb.eps}
\caption{XRT 0.3-10 keV count-rate light-curve of GRB\,081028
starting from 3 ks with best fit model superimposed (Eq. \ref{Eq:rebfin},
Table \ref{Tab:XRTrebfit}).
\emph{Inset:} residuals with respect to the best-fit model.}
\label{Fig:XRTfitreb}
\end{figure}
The XRT (0.3-10 keV) light-curve consists of two parts: a steep decay phase
with flares and variability superimposed ($100\,\rm{s}<t<7000 \,\rm{s}$),
followed by a remarkable re-brightening with smoothly rising and
decaying emission between 7 ks and $1000\,\rm{ks}$. The two light-curve
phases are studied separately.
GRB\,081028 is one of the rare cases in which the XRT caught the
prompt emission. The light-curve shows a flat phase up to
$t\sim 300\,\rm{s}$ followed by a steep decay. Starting from
$\sim690\,\rm{s}$ the light-curve is dominated by a flare which
peaks at 800 s but whose decaying phase is temporally coincident with
the orbital data gap. The steep decay behaviour before the flare is
inconsistent with the back-extrapolation of the post orbital
data gap power-law decay, as shown in Fig. \ref{Fig:XRTcomponents}.
The strong spectral evolution detected by the XRT (Sect. \ref{SubSec:specXRT})
requires a time resolved flux calibration of the light-curve before the
light-curve fitting procedure. In the time interval
$320\,\rm{s}<t<685\,\rm{s}$ the 0.3-10 keV light-curve best fit is given by a
simple power-law with $\alpha=3.6\pm0.1$ ($\chi^{2}/\rm{dof}=768.3/736$).
Figure \ref{Fig:xrtsteepdecaybands} shows the different temporal behaviour of the
detected signal when split into different energy bands: harder photons decay faster.
The 0.3-1 keV light-curve
decays following a power-law with index $\alpha \approx 2.5$; the decay steepens
to $\alpha\approx 3.5$ and $\alpha \approx 3.8$ for the 1-2 keV and
2-10 keV signal, respectively.
During the re-brightening there is no evidence for spectral evolution in the XRT energy band
(see Sect. \ref{SubSec:specXRT}). For this reason we model the count-rate
light-curve instead of the flux-calibrated one: this yields a fully
representative set of best fit parameters\footnote
{This is in general not true in cases of strong spectral evolution, as
shown in the first part of this section.} determined with the
highest precision, since the count-to-flux calibration would introduce
additional uncertainty inherited from the spectral fitting
procedure. Starting from 3 ks (the inclusion of the last part of the steep decay
is necessary to model the rising part of the re-brightening),
the count-rate light-curve can be modelled by a power-law plus Beuermann function
\citep{Beuermann99} where the smoothing parameter $d_1$ is left free to vary:
\begin{equation}
n_{2}\, t^{c}+n_1 \Big[\Big(\frac{t}{t_{\rm{br}_1}}\Big)^{\frac{a}{d_1}}+\Big(\frac{t}{t_{\rm{br}_1}}\Big)^{\frac{b}{d_1}}\Big]^{-d_1}
\label{Eq:plBeuermannfree}
\end{equation}
The best fit parameters are reported in Table \ref{Tab:XRTrebfit}.
The drawback of this model is that the best-fit slopes are
asymptotic values and, owing to the smooth transition between the rising
and decaying phases, do not represent the actual power-law slopes:
this makes the comparison between observations and model predictions difficult.
Freezing $d_1$ at 0.1 to have a sharp transition results in an
unacceptable fit (P-value$\sim10^{-4}$) and suggests a light-curve steepening
around 50 ks. The possibility of a break is investigated as follows: we select
data points starting from 20 ks and fit the data using a SPL
or a broken power-law (BPL) model. Given that the SPL and BPL models are
nested models and the possible values of the second model do not lie on the
boundary of definition of the first model parameters \citep{Protassov02}, we can
apply an F-test: with a probability of chance improvement $\sim1\%$, we find moderate
statistical evidence for a break in the light-curve at 62 ks.
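For reference, the probability of chance improvement for such nested
$\chi^2$ fits can be computed with the classical F-test; a minimal sketch
(illustrative function, not the actual pipeline) is:
\begin{verbatim}
from scipy.stats import f as f_dist

def chance_improvement(chi2_spl, dof_spl, chi2_bpl, dof_bpl):
    # F statistic for the extra free parameters of the nested model
    dnum = dof_spl - dof_bpl
    F = ((chi2_spl - chi2_bpl) / dnum) / (chi2_bpl / dof_bpl)
    return f_dist.sf(F, dnum, dof_bpl)  # chance improvement probability
\end{verbatim}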
The final fitting function is given by Eq. \ref{Eq:rebfin}:
\begin{equation}
\Bigg\{\begin{array}{ll}
n_2\, t^{c}+n_1\Big[\Big(\frac{t}{t_{\rm{br}_1}}\Big)^{\frac{a}{d_1}}+\Big(\frac{t}{t_{\rm{br}_1}}\Big)^{\frac{b}{d_1}}\Big]^{-d_1}&t<40\,\rm{ks}\\
f\cdot n_3\Big[\Big(\frac{t}{t_{\rm{br}_2}}\Big)^{\frac{b}{d_2}}+\Big(\frac{t}{t_{\rm{br}_2}}\Big)^{\frac{e}{d_2}}\Big]^{-d_2}&t>40\,\rm{ks}\\
\end{array}
\label{Eq:rebfin}
\end{equation}
where $f$ is a function of the other fitting variables and assures the continuity
of the fitting function at 40 ks.
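Explicitly, imposing continuity at $t_{\rm{c}}=40\,\rm{ks}$ fixes
\begin{equation}
f = \frac{n_2\, t_{\rm{c}}^{\,c}+n_1\Big[\Big(\frac{t_{\rm{c}}}{t_{\rm{br}_1}}\Big)^{\frac{a}{d_1}}+\Big(\frac{t_{\rm{c}}}{t_{\rm{br}_1}}\Big)^{\frac{b}{d_1}}\Big]^{-d_1}}{n_3\Big[\Big(\frac{t_{\rm{c}}}{t_{\rm{br}_2}}\Big)^{\frac{b}{d_2}}+\Big(\frac{t_{\rm{c}}}{t_{\rm{br}_2}}\Big)^{\frac{e}{d_2}}\Big]^{-d_2}}.
\end{equation}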
In this case the light-curve of GRB\,081028 is fit with $\chi^{2}/\rm{dof}=147.1/143$
and a P-value of $39\%$:
the best fit parameters are reported in Table \ref{Tab:XRTrebfit} while
a plot of the result is provided in Fig. \ref{Fig:XRTfitreb}.
The fit of the flux-calibrated light-curve gives completely consistent results.
The model predicts
$F_{\rm{X,p}}=(1.53\pm0.08)\times 10^{-11}\,\rm{erg\,cm^{-2}\,s^{-1}}$,
where $F_{\rm{X,p}}$ is the flux at the peak of the re-brightening.
\subsubsection{Count-rate drop around 250 ks}
\label{subsubsec:drop}
The drop of the count-rate around 250 ks is worth attention: the statistical
significance of this drop is discussed below. We select data with $t>60$ ks.
These data can be fit by a simple power-law with index
$\alpha=1.9\pm0.2$ ($\chi^2/\rm{dof}=11.0/12$, P-value=53\%). According to this model the
drop is not statistically significant (single trial significance of $\sim2.6~\sigma$). However,
this model under-predicts the observed rate for $t<60\,\rm{ks}$: an abrupt
drop of the count-rate during the orbital gap at 80 ks would be required in this
case. Alternatively, the source does not switch off during the orbital
gap and the flux around 80 ks joins smoothly onto the flux component at
$t<60$ ks, as portrayed in Fig. \ref{Fig:XRTfitreb}.
A careful inspection of the figure reveals the presence of a non random
distribution of the residuals of the last 14 points, with the points before
250 ks being systematically low and those after 250 ks being systematically
high. While this fit is
completely acceptable from the $\chi^2$ point of view, a runs test shows
that the chance probability of this configuration of residuals is less than
$0.1\%$. This would call for the introduction of a new component to model
the partial switch-off and re-brightening of the source around 250 ks.
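One standard implementation is the Wald--Wolfowitz runs test on the signs
of the best fit residuals; a minimal sketch (illustrative, not the actual
pipeline; it assumes the usual normal approximation for the number of
runs) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def runs_test_pvalue(residuals):
    s = np.sign(residuals)
    s = s[s != 0]
    n_pos, n_neg = np.sum(s > 0), np.sum(s < 0)
    n = n_pos + n_neg
    runs = 1 + np.sum(s[1:] != s[:-1])  # number of sign runs
    mu = 2.0 * n_pos * n_neg / n + 1.0  # expected number of runs
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)
    z = (runs - mu) / np.sqrt(var)
    return 2.0 * norm.sf(abs(z))        # two-sided p-value
\end{verbatim}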
A possible description of the light-curve behaviour for $t>20\,\rm{ks}$
(peak time of the main re-brightening) is represented by a Beuermann plus
Beuermann function with smoothing parameters frozen to give sharp
transitions; the first decaying power-law index is frozen to $b=1.3$
while the break time of the first Beuermann component is frozen to $t_{\rm{br_2}}=62\,\rm{ks}$
as reported in Table \ref{Tab:XRTrebfit}. The light-curve decays with
$\alpha_2=3.1\pm0.2$ ($\alpha_3=1.5\pm0.7$) for $60\,\rm{ks}<t<250\,\rm{ks}$
($t>316\,\rm{ks}$), see Fig. \ref{Fig:XRTcomponents}. This additional
component would account for $\sim10\%$ of the total re-brightening $0.3-10$ keV
energy which is $\sim1.1\times 10^{52}\,\rm erg$.
The temporal properties of the second re-brightening seem to point to
refreshed shocks (see e.g. \citealt{Kumar00b}; \citealt{Granot03}): the decaying
power-laws before and after
the drop are roughly consistent with each other but shifted upwards in the
count-rate axis. Since at this epoch the observed X-ray frequencies are above both
the cooling and the injection frequencies, in the standard afterglow scenario
the X-ray flux is $\propto E_{\rm iso}^{(p+2)/4}$ independent of
the external medium density profile (see e.g.
\citealt{Panaitescu00}, their appendices B and C): the observed jump in flux
would therefore require an increase of the energy in the forward shock
by a factor of $\sim3$. Given the marginal
statistical evidence, the properties of the second re-brightening
will not be discussed further.
\subsection{Spectral analysis of XRT (0.3-10 keV) data}
\label{SubSec:specXRT}
\begin{figure}
\centering
\includegraphics[scale=0.435]{PhotonIndex_t.eps}
\caption{0.3-10 keV light-curve (grey points, arbitrary units) with
best fit 0.3-10 keV photon index superimposed (black points). Each point comes from
the fit of a spectrum consisting of $\sim2000$
photons: the model \textsc{tbabs*ztbabs*pow} within \textsc{Xspec} with the
intrinsic column density $N_{\rm{H,z}}$ frozen to
$0.52\times10^{22}\,\rm{cm^{-2}}$ has been used. An exception is
represented by the first data point after the orbital gap: see
Sect. \ref{SubSec:specXRT} for details.
The vertical red dashed lines mark the time interval of the first
orbital gap. An abrupt change of the spectral properties
of the source temporally coincident with the onset of
the re-brightening is apparent. }
\label{Fig:PhotonIndex_nhtot}
\end{figure}
The very good statistics characterising the X-ray afterglow of GRB\,081028
allow us to perform a temporally resolved spectral analysis. Figure \ref{Fig:PhotonIndex_nhtot}
shows the dramatic evolution of the photo-electrically absorbed
simple power-law photon index with time during the first 1000 s of
observation, with $\Gamma$ evolving from $1.2$ to $2.7$. The intrinsic neutral
Hydrogen column density $N_{\rm{H,z}}$ has been frozen to
$0.52\times10^{22}\,\rm{cm^{-2}}$ for the reasons explained below.
If left free to vary, this parameter shows an unphysical rising and
decaying behaviour between 200 s and 600 s.
The temporal behaviour of the light-curve in the time interval $4-7.5$ ks
(after the orbital gap, see Fig. \ref{Fig:XRTcomponents}) physically connects these data
points with the steep decay phase. We test this link from the spectroscopic point
of view. The 0.3-10 keV spectrum extracted in this time interval contains 133
photons. Spectral channels have been grouped so as to have 5 counts per bin and then
weighted using the Churazov method \citep{Churazov96} within \textsc{Xspec}. A fit
with a photo-electrically absorbed power-law (\textsc{tbabs*ztbabs*pow} model)
gives $\Gamma=2.63\pm0.25$, (90\% c.l., $\chi^{2}/\rm{dof}=25.6/23$, P-value=32\%),
confirming
that this is the tail of the steep decay detected before the orbital gap
as shown by Fig. \ref{Fig:PhotonIndex_nhtot}.
The light-curve re-brightening around $7\,\rm{ks}$ translates into an
abrupt change of the 0.3-10 keV spectral properties (Fig. \ref{Fig:PhotonIndex_nhtot}),
with $\Gamma$ shifting from $2.7$ to $2$. The possibility of a spectral evolution
in the X-ray band during the re-brightening is investigated as follows:
we extracted three spectra in the time intervals 7-19.5 ks (spec1, rising phase);
19.5-62 ks (spec2, pre-break decaying phase); 62 ks- end of observations (spec3,
post-break decaying phase). A joint fit of these spectra with an absorbed simple
power-law model (\textsc{tbabs*ztbabs*pow} model) where the intrinsic Hydrogen
column density is frozen to $0.52\times10^{22}\,\rm{cm^{-2}}$ (see Sect.
\ref{SubSec:SwiftXRTdata}) and the photon index is tied to the same value,
gives $\Gamma=2.04\pm0.06$ with $\chi^2/\rm{dof}=118.0/167$.
Thawing the photon indices we obtain: $\Gamma_1=2.13^{+0.14}_{-0.14}$;
$\Gamma_2=2.03^{+0.07}_{-0.07}$; $\Gamma_3=2.00^{+0.13}_{-0.12}$
($\chi^2/\rm{dof}=115.8/165$). Uncertainties are quoted at the 90\% c.l.
The comparison of the two results
implies a chance probability of improvement
of 22\%: we conclude that there is no evidence for spectral evolution during the
re-brightening in the 0.3-10 keV energy range. The same conclusion is reached
from the study of the $(1-10\,\rm{keV})/(0.3-1\,\rm{keV})$ hardness ratio.
\subsection{Spectral energy distribution during the re-brightening:
evolution of the break frequency}
\label{subsec:SED}
\begin{figure*}
\centering
\includegraphics[scale=0.66]{sed.eps}
\caption{Observer-frame SED1, SED2, SED3 and SED4 from optical to X-ray
extracted at $t\sim10\,\rm{ks}$, $t\sim20\,\rm{ks}$, $t\sim41\,\rm{ks}$ and
$t\sim112\,\rm{ks}$, respectively. Red solid line: photo-electrically absorbed
model corresponding to Eq. \ref{Eq:specbreak}. This proved
to be the best fit model for SED1, SED2 and SED3. Blue dashed line:
photo-electrically absorbed simple power law. This is the best
fit model for SED4. For all SEDs an SMC extinction curve at the redshift of the source
is assumed. The best fit parameters are reported in Table \ref{Tab:SED}.}
\label{Fig:SED}
\end{figure*}
The re-brightening properties can be constrained through the study of the
temporal evolution of the spectral energy distribution (SED) from the optical
to the X-ray. We extract 4 SEDs, from the time intervals indicated by the shaded bands in Fig.
\ref{Fig:plottot_lc}:
\begin{enumerate}
\item SED 1 at $t\sim 10\,\rm{ks}$ corresponds to the rising portion of the
X-ray re-brightening and includes XRT and UVOT observations;
\item SED 2 is extracted at $t\sim20\,\rm{ks}$, peak of the X-ray re-brightening.
It includes XRT, UVOT, GROND and NOT observations;
\item SED 3 at $t\sim 41\,\rm{ks}$ describes the afterglow spectral energy
distribution during the decaying phase of the re-brightening, before the detected
light-curve break. It includes X-ray data from $\sim30$ ks to $\sim62$ ks,
UVOT and PAIRITEL observations;
\item SED 4 corresponds to the post-break decaying portion of the re-brightening,
at $t\sim112$ ks and includes XRT and GROND observations.
\end{enumerate}
When necessary, optical data have been interpolated to the time of extraction
of the SED. Uncertainties have been propagated accordingly.
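A minimal sketch of this interpolation (one possible choice, assuming a
local power law in time, i.e. linear interpolation in log-log space;
names are illustrative):
\begin{verbatim}
import numpy as np

def flux_at_epoch(t_sed, t_obs, f_obs, f_err):
    lt, lf = np.log10(t_obs), np.log10(f_obs)
    lf_sed = np.interp(np.log10(t_sed), lt, lf)
    # propagate the fractional errors linearly in the log
    le = np.interp(np.log10(t_sed), lt,
                   f_err / (f_obs * np.log(10.0)))
    f_sed = 10.0 ** lf_sed
    return f_sed, f_sed * np.log(10.0) * le
\end{verbatim}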
At a redshift of 3.038, we expect some contamination in the spectrum from absorption
systems located between the Earth and GRB\,081028 \citep{Madau95}. This means that
the $g'$ filter of GROND and all UVOT filters but the v band are marginally or
strongly affected by Lyman absorption: these filters are consequently excluded from the
following analysis.
The Galactic and intrinsic absorption at wavelengths shorter than the
Lyman edge are modelled using the photo-electric cross-sections of
\cite{Morrison83}. We adopt the analytical description of the Galactic
extinction by \cite{Pei92}, while the host galaxy absorption is assumed to
be modelled by a Small Magellanic Cloud-like law (from \citealt{Pei92}).
An absorbed SPL model from the optical to the X-ray range is not able
to account for SED 1, SED 2 and SED 3 (Fig. \ref{Fig:SED}), while it
gives the best fit model for SED 4.
For the first three SEDs a satisfactory fit is given by a
broken power-law with X-ray spectral index $\beta_x\sim1$; optical spectral index
$\beta_o\sim0.5$ and $N_{\rm{H,z}}$ consistent with the value reported in
Sect. \ref{SubSec:SwiftXRTdata} ($0.52\times10^{22}\,\rm{cm^{-2}}$).
The best fit break frequency is found to evolve with time to lower
values following a power-law evolution with index $\alpha\sim2$. This
evolution is faster than expected for the cooling frequency of a
synchrotron spectrum (see e.g. \citealt{Sari98}; \citealt{Granot02}): in the following,
we identify the break frequency with the injection frequency. We freeze the
Galactic contribution to give $E(B-V)=0.03$ \citep{Schlegel98}, while leaving
the intrinsic component free to vary.
The broken power-law model has then been refined as follows. \cite{Granot02}
showed that under the assumption of synchrotron
emission from a relativistic blast wave that accelerates the electrons
to a power law distribution of energies $N(\gamma_e)\propto \gamma_e^{-p}$,
it is possible to derive a physically
motivated shape of spectral breaks. Interpreting the break frequency
as the injection frequency in the fast cooling regime, the broken
power-law model reads (see \citealt{Granot02}, their Eq. 1):
\begin{equation}
F_{\nu}= F_{\rm{n}}\Big[\Big(\frac{\nu}{\nu_{b}}\Big)^{-s\beta_1} + \Big(\frac{\nu}{\nu_{b}}\Big)^{-s\beta_2}\Big]^{-1/s}
\label{Eq:specbreak}
\end{equation}
where $\nu_{\rm{b}}$ and $F_{\rm{n}}$ are the break frequency
and the normalisation, respectively;
$\beta_1= -0.5$ and $\beta_2=-p/2$ are the asymptotic spectral
indices below and above the break under the conditions above;
$s\equiv s(p)$ is the smoothing parameter: in particular,
for an interstellar (wind) medium $s=3.34-0.82p$ ($s=3.68-0.89p$)
(\citealt{Granot02}, their Table 2). The free parameters of the final
model are the following:
normalization of the spectrum $F_{\rm{n}}$, break frequency
$\nu_{\rm{b}}$, power-law index of the electron distribution $p$,
intrinsic neutral Hydrogen column density $N_{\rm{H,z}}$,
and host reddening. The ISM or wind environments give perfectly consistent results.
We choose to quote only ISM results for the sake of brevity. For SED 4
we use an absorbed simple power-law with spectral index $-p/2$.
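For reference, Eq. \ref{Eq:specbreak} with the \cite{Granot02} smoothing
transcribes directly into code (a sketch: the absorption and extinction
components are omitted):
\begin{verbatim}
import numpy as np

def sed_model(nu, F_n, nu_b, p, ism=True):
    # Smoothly broken power law, injection break in fast cooling;
    # s(p) from Granot & Sari (2002), Table 2, as quoted in the text.
    s = 3.34 - 0.82 * p if ism else 3.68 - 0.89 * p
    beta1, beta2 = -0.5, -p / 2.0
    x = nu / nu_b
    return F_n * (x**(-s * beta1) + x**(-s * beta2))**(-1.0 / s)
\end{verbatim}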
The four SEDs are first fit separately; as a second step we perform a joint fit
where only the spectral normalisation and break frequency are
free to take different values in different spectra. We find fully
consistent results with improved uncertainties thanks to the tighter
constraints imposed by the joint fit. The best fit results are
reported in Table \ref{Tab:SED} and portrayed in Fig. \ref{Fig:SED}.\\
\begin{table}\footnotesize
\begin{center}
\begin{tabular}{ccc}
\hline
SED & Parameter & Value\\
\hline
1,2,3,4 &p & $1.97\pm0.03$\\
1&$\rm{Log}_{10}(\nu_{\rm{b}}/10^{15}\rm{Hz})$ &$2.0\pm0.1$\\
2&$\rm{Log}_{10}(\nu_{\rm{b}}/10^{15}\rm{Hz})$ &$1.4\pm0.1$\\
3&$\rm{Log}_{10}(\nu_{\rm{b}}/10^{15}\rm{Hz})$ &$0.4\pm0.1$\\
\hline
$\chi^{2}/\rm{dof}$&\multicolumn{2}{c}{134.7/138}\\
P-value & \multicolumn{2}{c}{56\%}\\
\hline
\end{tabular}
\caption{Best fit parameters for the simultaneous fit of SED1, SED2,
SED3 and SED4. For SED1, SED2 and SED3 the emission model is expressed
by Eq. \ref{Eq:specbreak}, while for SED4 we used a simple power-law with
spectral index $-p/2$. The spectral normalisations and break
frequencies have been left free to take different values in different spectra.
The intrinsic neutral Hydrogen column value is found to be consistent with the
value inferred from the X-ray spectra.}
\label{Tab:SED}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.6]{BreakFrequency.eps}
\caption{Spectral break frequency (See Eq. \ref{Eq:specbreak}) evolution with time
as found from a simultaneous fit of SED1, SED2, SED3 and SED4 with
best fit models superimposed. Red solid line (blue dashed line): simple power-law
with zero time $t_0=0$ ks (2 ks) and best fit power-law index $\alpha=2.6$ (2.3).
The satisfactory fit of SED4 with a simple power-law provides the upper limit shown.}
\label{Fig:BreakFrequency}
\end{figure}
The spectral break frequency $\nu_{\rm{b}}$ evolves with time to lower
values, as shown in Fig. \ref{Fig:BreakFrequency}. The
consistency of SED4 optical and X-ray data with a simple power-law model with
index $-p/2$,
suggests that the break frequency has crossed the optical band by the
time of extraction of SED4. This translates into
$\rm{Log}_{10}(\nu_{\rm{b}}/10^{15}\rm{Hz})<-0.33$ for $t>112\,\rm{ks}$.
The decrease of the break frequency with time can be modelled by a simple
power-law function: this leads to an acceptable fit
($\chi^2/\rm{dof}=1.4/1$, P-value=24\%)
with best fit index $\alpha=2.6\pm0.2$. Using $t_{0}=2$ ks
as zero time of the power-law model we obtain: $\alpha=2.3\pm0.2$
($\chi^2/\rm{dof}=2.1/1$, P-value=15\%).
The fit implies a limited rest frame optical extinction
which turns out to be $E(B-V)_{\rm{z}}\sim 0.03$.
A $3-\sigma$ upper limit can be derived from the joint fit of
the four SEDs, leaving all the parameters but the one related to the optical
extinction free to vary. The upper limit is computed as the value which increases
the $\chi^2$ by the $\Delta\chi^2$ corresponding to the $3~\sigma$ c.l.
This procedure leads to: $A_{V,z}<0.22$.
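For a single parameter of interest the corresponding threshold is
$\Delta\chi^2\simeq9$, as can be verified directly (a one-line sketch):
\begin{verbatim}
from scipy.stats import chi2
dchi2 = chi2.ppf(0.9973, df=1)  # ~9.0 for a 3-sigma (two-sided) c.l.
\end{verbatim}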
\subsection{Peak energy evolution with time}
\label{SubSec:Epeak}
\begin{figure}
\centering
\includegraphics[scale=0.58]{Epeak.eps}
\caption{Time resolved combined analysis of XRT and BAT data.
Upper panel: BAT 15-150 keV and XRT 0.3-10 keV flux light-curves. No extrapolation
of the BAT data into the XRT energy range has been done. The vertical dashed lines
mark the intervals of extraction of the spectra: these are numbered
according to Table \ref{Tab:Epeak}, first column. Central panel: best fit photon
indices evolution with time. Lower panel: best fit $E_{\rm{p}}$ parameter as a
function of time. The decay has been fit with a simple power-law model starting
from 200 s from trigger: $E_{\rm{p}}(t)\propto (t-t_{0})^{\alpha}$.
Starting from 405 s $E_{\rm{p}}$ is likely to be outside the XRT energy range:
$E_{\rm{p}}<0.3\,\rm{keV}$ (solid green line).
}
\label{Fig:Epeak}
\end{figure}
\begin{table*}\footnotesize
\begin{center}
\begin{tabular}{cccccccccc}
\hline
Interval&$t_i$ & $t_f$ & & Model & $\alpha_{\rm{B}}$& $\beta_{\rm{B}}(\Gamma)$&$E_{\rm{p}}$&$\chi^2/\rm{dof}$ & P-value\\
&(s) & (s) & & & & & (keV) & & \\
\hline
3&203 & 222 & BAT+XRT& Cutpl & $1.19\pm0.05$& ---& $61.0^{+20.0}_{-11.9}$&100.4/112 & 77\%\\
& & & & Pl & ---&$1.37\pm0.02$& ---& 147.3/114& 2\%\\
\hline
4&222 & 247 & BAT+XRT& Cutpl & $1.28\pm0.06$& ---& $41.5^{+17.1}_{-9.4}$&80.5/101& 93\%\\
& & & & Pl & ---& $1.44\pm0.03$& ---& 108.2/102& 31\%$^{+}$\\
\hline
5&247 & 271 & BAT+XRT& Cutpl & $1.38\pm0.17$& ---& $16.1^{+9.6}_{-4.9}$ &77.4/88& 78\%\\
& & & & Pl & ---& $1.54\pm0.04$& ---& 96.5/89& 3\%$^{+}$\\
\hline
6&271 & 300 & BAT+XRT& Cutpl & $1.57\pm0.07$& ---& $12.5^{+4.5}_{-2.7}$ &129.8/91& 1\%\\
& & & & Pl & ---& $1.76\pm0.03$& ---& 158.6/92& 0.001\%\\
\hline
7&300 & 323 & XRT & Cutpl & $1.20\pm0.16$& ---& $5.2^{+3.7}_{-1.3}$ &77.1/81& 60\%\\
& & & & Pl & ---& $1.49\pm0.05$& ---& 87.6/82& 32\%\\
\hline
8&323 & 343 & XRT & Cutpl & $0.82\pm0.18$& ---& $2.9^{+0.3}_{-0.3}$& 78.7/83& 61\%\\
& & & & Pl & ---&$1.61\pm0.05$& ---& 149.8/84& 0.001\%\\
\hline
9&343 & 371 & XRT & Cutpl & $1.38\pm0.17$& ---& $2.0^{+0.3}_{-0.3}$& 94.3/84& 15\% \\
& & & & Pl & ---&$1.91\pm0.05$& ---& 131.2/82& 0.1\%\\
\hline
10&371 & 405 & XRT & Band & $\sim1.10$&$2.3^{+0.1}_{-0.2}$& $<1.1$& 82.4/77&31\%\\
& & & & Cutpl & $1.81\pm0.016$& ---&$1.0^{+0.3}_{-0.9}$&102.4/78& 3\%\\
& & & & Pl & ---& $2.07\pm0.06$& ---& 109.7/79& 1\%\\
\hline
11&405 & 456 & XRT & Pl & ---& $2.32\pm0.06$& ---& 100.1/78& 5\%\\
\hline
12&456 & 530 & XRT & Pl & ---& $2.34\pm0.06$& ---& 103.3/79& 3\%\\
\hline
13&530 & 664 & XRT & Pl & ---& $2.61\pm0.07$& ---& 98.1/76 & 5\%\\
\hline
14&664 & 838 & XRT & Pl & ---& $2.71\pm0.06$& ---& 89.3/73 & 7\%\\
\hline
15&838 & 851 & XRT & Pl & ---& $2.68\pm0.18$& ---& 15.7/10& 1\%\\
\hline
\end{tabular}
\caption{Best fit parameters derived from the spectral modelling of XRT and BAT
data using photo-electrically absorbed models (\textsc{tbabs*ztbabs} within
\textsc{Xspec}). The BAT and XRT normalisations are always tied to the same
value. Three different models have been used: a simple power-law (Pl);
a cut-off power law and a Band function both with the peak energy of the $\nu F_{\nu}$
spectrum as free parameter. From left to right: name of the interval
of the extraction of the spectrum we refer to throughout the paper (intervals 1 and 2
correspond to Pulse 1 and Pulse 2 of Table \ref{Tab:BATspec});
start and stop time of extraction of each spectrum; energy range of the fit: ``BAT+XRT''
stands for a joint BAT-XRT data fitting; model used; best fit low and high energy
photon indices for a Band or Cutpl
model, or best fit photon index $\Gamma$ for a Pl model; statistical information
about the fit. The $^{+}$ symbol indicates an apparent trend in the residuals of the fit.}
\label{Tab:Epeak}
\end{center}
\end{table*}
The consistency of the prompt BAT spectrum with a cut-off power-law
(Sect. \ref{SubSec:specBAT}, Table \ref{Tab:BATspec}) and the spectral variability detected in the
XRT energy range (Sect. \ref{SubSec:specXRT}, Fig. \ref{Fig:PhotonIndex_nhtot})
suggests that the peak of the $\nu F_{\nu}$ spectrum is moving through the
BAT+XRT bandpass. To follow the spectral evolution, we time slice the BAT and XRT
data into 14 bins covering the 10-851 s time interval. The spectra are then fit
within \textsc{Xspec} using a Band function (\textsc{ngrbep}) or a cut-off
power-law (\textsc{cutplep}) with $E_{\rm{p}}$ as free parameter; alternatively
a simple power law is used. Each model is absorbed by a Galactic
(hydrogen column density frozen to $3.96\times10^{20}\,\rm{cm^{-2}}$) and
intrinsic component ($N_{\rm{H,z}}$ frozen to $0.52\times10^{22}\,\rm{cm^{-2}}$,
see Sect. \ref{SubSec:specXRT}).
When possible we take advantage of the simultaneous BAT and XRT
observations, performing a joint BAT-XRT spectral fit. The normalisation for
each instrument is always tied to the same value.
The best fit parameters are reported in Table \ref{Tab:Epeak}:
the simple power law model gives a poor description of the spectra up to
$\sim400$ s, as the curvature of the spectra requires a cut-off power-law
or a Band function. In particular, this is the case when
the high energy slope enters the XRT bandpass.
The $E_{\rm{p}}$ parameter is well constrained and evolves to lower energies
with time; at the same time both the high and low energy photon indices
are observed to gradually vary, softening with time (Fig. \ref{Fig:Epeak}).
The $E_{\rm{p}}$ decay with time can be modelled by a simple
power-law starting $\sim200$ s after trigger: $E_{\rm{p}}\propto(t-t_0)^\alpha$.
The best fit parameters are reported in Table \ref{Tab:Epeakfit}.
The uncertainty in the inter-calibration of the BAT and the XRT has
been investigated as a possible source of the detected spectral evolution
as follows.
constant factor which is frozen to 1 for the BAT data. For XRT, this factor
is left free to vary between 0.9 and 1.1, conservatively allowing the XRT
calibration to agree within $10\%$ with the BAT calibration. The best fit
parameters found in this way are completely consistent with the ones listed
in Table \ref{Tab:Epeak}.
The inter-calibration is therefore unlikely to be the main source of the
observed evolution.
\begin{table}\footnotesize
\begin{center}
\begin{tabular}{cccc}
\hline
$t_0$ &$\alpha$ & $\chi^2/\rm{dof}$ &Model \\
\hline
0 (s)& $-7.1\pm0.7$&1.7/4&---\\
$109\pm89$ (s)&$-4.2\pm2.4$&2.5/5&---\\
$154\pm13$ (s)&$-3$ &3.2/4& Adiabatic cooling\\
200 (s)& $-1$ &42.1/4 &High latitude emission\\
\hline
\end{tabular}
\caption{Best fit parameters and statistical information
for a simple power-law fit to the $E_{\rm{p}}$
decay with time starting from 200 s after trigger:
$E_{\rm{p}}\propto(t-t_0)^\alpha$.}
\label{Tab:Epeakfit}
\end{center}
\end{table}
\section{Discussion}
\label{sec:discussion}
In GRB\,081028 we have the unique opportunity to observe a smoothly
rising X-ray afterglow after the steep decay: this is the
first (and, up to July 2009, the only) long GRB \emph{Swift}-XRT light-curve
where a rise with completely different properties from typical X-ray flares
(\citealt{Chincarini07}; \citealt{Falcone07}) is seen at $t\geq 10\,\rm{ks}$.
At this epoch,
canonical X-ray light-curves (e.g., \citealt{Nousek06}) typically show a
shallow decay behaviour with flares superimposed in a few cases
(Chincarini et al., in prep.): only in GRB\,051016B is a
rising feature detected at the end of the steep decay\footnote
{See the \emph{Swift}-XRT light-curve repository; \cite{Evans09} and
\cite{Evans07}.}.
In this case,
the sparseness of the data prevents us from drawing firm conclusions, so that a flare
origin of the re-brightening cannot be excluded.
The very good statistics of GRB\,081028 allows us to track the detailed
spectral evolution from $\gamma$-rays to X-rays, from the
prompt to the steep decay phase: this analysis fully qualifies the
steep decay as the tail of the prompt emission. At the same time,
it reveals that the steep decay and the following X-ray re-brightening
have completely different spectroscopic properties (Fig.
\ref{Fig:PhotonIndex_nhtot}): this, together with the temporal
behaviour, strongly suggests that we actually see two different emission
components overlapping for a small time interval, as was first suggested
by \cite{Nousek06}.
The small overlap in time of the two components is the key
ingredient that observationally allows the detection of the rising phase:
this can be produced by either a steeper than usual steep decay or a
delayed onset of the second component. We tested both possibilities
comparing GRB\,081028 properties against a sample of 32 XRT light-curves
of GRBs with known redshift and for which the steep-flat-normal decay
transitions can be easily identified. While 63\% of the GRBs have a
steep decay steeper than that of GRB\,081028 ($\alpha_1\sim2$), no GRB in the sample shows a
rest frame steep-to-flat transition time greater than $1\,\rm{ks}$,
confirming in this way the ``delayed-second-component'' scenario.
Alternatively, the peculiarity of GRB\,081028 could reside in a steeper
than usual rise of the second component: unfortunately this possibility
cannot be tested.
This section is organised as follows: in Sect. \ref{sebsec:specevolsteep}
we discuss the spectral evolution during the prompt and steep decay phases
in the context of different interpretations. The afterglow modelling of Sect.
\ref{sec:aftmodellingtot} favours an
off-axis geometry: however, this seems to suggest a different physical origin of the prompt
plus steep decay and late re-brightening components. This topic is further investigated from
the prompt efficiency perspective in Sect. \ref{subsec_prompt_eff}.
\subsection{Spectral evolution during the prompt and steep decay emission}
\label{sebsec:specevolsteep}
\begin{figure}
\centering
\includegraphics[scale=0.44]{BandEvolution.eps}
\caption{Qualitative description of the spectral evolution
with time detected in GRB\,081028 from the prompt to the steep decay
phase: the peak energy ($E_{\rm p}$) moves to lower energies while
both the high and low energy components soften with time. Arbitrary flux density
units are used.}
\label{Fig:BandEvolution}
\end{figure}
The evolution of the peak energy $E_{\rm p}$ of the $\nu F_{\nu}$ spectrum
from the $\gamma$-ray to the X-ray band described in Sect. \ref{SubSec:Epeak}
offers the opportunity to constrain the mechanism responsible
for the steep decay emission.
Spectral evolution through the prompt and steep decay phase has been
noted previously, with the $E_{\rm p}$ tracking both
the overall burst behaviour and individual prompt pulse structures
(see e.g., \citealt{Peng09} for a recent time resolved
spectral analysis of prompt pulses). In particular,
\cite{Yonetoku08} find $E_{\rm p}\propto t^{\sim-3}$ for GRB\,060904A;
\cite{Mangano07} model the prompt to steep decay transition of GRB\,060614
with a Band (or cut-off power-law) spectral model with $E_{\rm p}$ evolving
as $t^{\sim-2}$, while \cite{Godet07} and \cite{Goad07b} report on the
evolution of the $E_{\rm p}$ through the XRT energy band during single X-ray
flares in GRB\,050822 and GRB\,051117, respectively. A decaying $E_{\rm p}$
was also observed during the 0.3-10 keV emission of GRB\,070616
\citep{Starling08}.
The detection of strong spectral
evolution violates the prediction of the curvature effect in its simplest
formulation as found by \cite{Zhang07} in 75\% of the analysed GRBs tails:
this model assumes the instantaneous spectrum at the
end of the prompt emission to be a simple power-law of spectral index
$\beta$ and predicts the $\alpha=2+\beta$ relation, where $\beta$ is not supposed to vary
(see e.g., \citealt{Fenimore96}; \citealt{Kumar00}).
The curvature effect of a comoving Band spectrum predicts instead $E_{\rm p}\propto
t^{-1}$ and a time dependent $\alpha=2+\beta$ relation (see e.g.,
\citealt{Genet09}; \citealt{Zhang09}): from Fig. \ref{Fig:Epeak},
lower panel and Table \ref{Tab:Epeakfit} it is apparent that the observed
$E_{\rm p}\propto t^{-7.1\pm0.7}$
is inconsistent with the predicted behaviour even when we force the
zero time of the power-law fit model to be $t_0=200\,\rm s$,
peak time of the last pulse detected in the 15-150 keV energy range,
as prescribed by \cite{Liang06}. However, a more realistic version of the
high-latitude emission (HLE) model might still fit the data: a detailed
modelling is beyond the scope of this paper and will be explored
in a future work.
The adiabatic expansion cooling of the gamma-ray producing source,
which lies within an angle of $1/\gamma$ (where $\gamma$ is the
Lorentz factor of the fireball) to the observer line of sight,
has also been recently proposed as a possible mechanism responsible for
the steep decay \citep{Duran09}. This process gives a faster
temporal evolution of the break frequency as it passes through the X-ray
band: typically $E_{\rm p}\propto t^{-3}$.
Two fits to the data have been performed, the first fixing
the break evolution to $t^{-3}$ and the second leaving $t_0$ and the break
temporal evolution as free parameters. Both fits are consistent with the
adiabatic cooling expectation and set $t_0$ close to the beginning of the
last pulse in the BAT light-curve (see Table \ref{Tab:Epeakfit}).
However, the adiabatic expansion cooling of a thin ejecta predicts a light-curve decay
that is linked to the spectral index $\beta$ by the relation $\alpha=3\beta+3$,
where $\alpha$ is the index of the power-law decay. Since $\alpha_{\rm{obs}}
\sim 3.6$ this would imply $\beta\sim0.2$ which is much harder than
observed (Sect. \ref{SubSec:specXRT}). This makes the adiabatic cooling
explanation unlikely.
Both the curvature effect and the adiabatic model
assume an abrupt switch-off of the source after the end of the prompt
emission: the inconsistency of observations with both models
argues against this conclusion and favours models where the X-ray
steep decay emission receives an important contribution from the
continuation of the central engine activity. In this case, the steep decay
radiation reflects (at least partially) the decrease in power of the GRB jet.
An interesting possibility is given by a decrease of power originating
from a decrease in the mass accretion rate \citep{Kumar08}.
Alternatively, the observed spectral softening could be caused
by cooling of the plasma whose cooling frequency identified with $E_{\rm p}$
decreases with time as suggested by \cite{Zhang07}.
While the spectral peak is moving, we also observe a softening of the
spectrum at frequencies both below and above the peak when our data
allow us to constrain the low and high energy slopes of a comoving Band
spectrum. A softening of the low energy index in addition to the
$E_{\rm p}$ evolution has been already observed in the combined BAT+XRT
analysis of GRB\,070616 (\citealt{Starling08}, their Fig. 5).
This result is consistent with the finding that while short GRBs have a
low energy spectral component harder than long GRBs
(i.e., $|\alpha_{\rm B,short}|<|\alpha_{\rm B,long}|$, where
$\alpha_{\rm B}$ is the low energy photon index of the \citealt{Band93} function),
no difference is
found in the $\alpha_{\rm B}$ distribution of the two classes of GRBs
when only the first 1-2 s of long GRB prompt emission is considered
\citep{Ghirlanda09}: a soft evolution of the $\alpha_{\rm B}$ parameter
with time during the $\gamma$-ray prompt emission of long GRBs is therefore required.
Our analysis extends this result to the X-ray regime and
indicates the softening of both the high and low spectral components
from the prompt to the steep decay phase. The overall spectral evolution
is qualitatively represented in Fig. \ref{Fig:BandEvolution}.
\subsection{Afterglow modelling}
\label{sec:aftmodellingtot}
\subsubsection{Failure of the dust scattering, reverse shock and onset of the afterglow models}
\label{subsec_aftmod}
This subsection is devoted to the analysis of the X-ray re-brightening
in the framework of a number of different theories put forward
to explain the shallow decay phase of GRB afterglows.
According to the dust scattering model \citep{Shao07} the shallow
phase is due to prompt photons scattered by dust grains in
the burst surroundings: this model predicts a strong spectral
softening with time and a non-negligible amount of dust extinction
which are usually not observed \citep{Shen09}.
Both predictions are inconsistent with our data.
A spherical flow is expected
to give rise to a peak of emission when the
spectral peak enters the energy band of observation
(see e.g., \citealt{Granot02}): the SED analysis
of Sec. \ref{subsec:SED} clearly shows that $E_{\rm p}$ was already
below the X-ray band during the X-ray rising phase, well before the
peak, thus ruling out the passage of the break frequency through the
X-ray band as an explanation of the peak in the X-ray light-curve.
\cite{Sari99b} argue that the reverse shock has a much lower
temperature and is consequently expected to radiate at lower
frequencies than the forward shock, even if it contains an amount
of energy comparable to the GRB itself, making a reverse shock origin
of the X-ray re-brightening unlikely. However, following \cite{Genet07},
in the case of ejecta having a tail of Lorentz factor
decreasing to low values, if a large amount of the energy dissipated
in the shock ($\epsilon_e$ near its equipartition value) is transferred
to only a fraction of electrons (typically $\xi_e \sim 10^{-2}$), then
the reverse shock radiates in X-rays. In this case, it can also produce a plateau or
re-brightening, the latter being more often obtained in a constant
density external medium, that qualitatively agrees with the GRB\,081028 afterglow.
Alternatively, the detected light-curve peak could be the onset of the
afterglow: in this scenario, the rising (decaying) flux is to be
interpreted as pre-deceleration (post-deceleration) forward shock
synchrotron emission.
The observed break frequency scaling $\nu_{\rm b} \propto t^{-2.6
\pm 0.2}$ is inconsistent with the expected cooling frequency evolution
$\nu_{\rm c}\propto t^{-1/2}$ or $\nu_{\rm c}\propto t^{1/2}$ for an
ISM or a wind environment, respectively (see e.g. \citealt{Granot02}).
We therefore consider a fast cooling scenario where
$\nu_{\rm b}\equiv \nu_{\rm m}$.
The initial afterglow signal from a thick shell is likely to overlap
in time with the prompt emission \citep{Sari99b}, so that it would
have been difficult to see the smoothly rising X-ray re-brightening
of GRB\,081028. For this reason only the onset of the forward shock
produced by thin shells will be discussed.
Following \cite{Sari99b}, the observed peak of the X-ray
re-brightening implies a low initial fireball Lorentz factor
$\gamma_{0}\sim 75(n_0\epsilon_{\gamma,0.2})^{-1/8}$, where $n_0=n/(1\,
\rm{cm^{-3}})$ is the circumburst medium density and
$\epsilon_{\gamma,0.2}=\epsilon_{\gamma}/0.2$ is the radiative efficiency.
Since the X-ray frequencies are always above the injection frequency $\nu_{\rm m}$,
the X-ray light-curve should be proportional to $t^2\gamma(t)^{4+2p}$:
during the pre-deceleration phase this means $F_X\propto t^{2}$ for
an ISM and $F_X\propto t^{0}$ for a wind. The ISM scaling
is consistent with the observed power-law scaling $\propto t^{1.8\pm0.3}$
if a sharp transition between the rising and the decaying part of the
re-brightening is required. The asymptotic value of the power-law index during
the rising phase is instead steeper than 2, as indicated by the fit of the
re-brightening where the smoothing parameter is left free to vary: $\propto t^{4.5\pm3.3}$
(see Table \ref{Tab:XRTrebfit} for details).
The injection frequency is expected to scale as $\nu_m \propto \gamma(t)^{4-k} t^{-k/2}$,
where the density profile scales as $R^{-k}$.
This implies that for radii $R<R_{\gamma}$
(or $t<t_{\gamma}$) $\nu_{\rm m}\propto t^{0}$ for an ISM and
$\nu_{\rm m}\propto t^{-1}$ for a wind, while for $R>R_{\gamma}$
($t>t_{\gamma}$) the fireball experiences a self-similar deceleration
phase where $\gamma\propto t^{-3/8}$ for an ISM and $\gamma\propto t^{-1/4}$ for a wind,
and $\nu_{\rm m}\propto t^{-3/2}$ in both cases.
$R_{\gamma}$ is the radius where a surrounding mass smaller than the
shell rest frame mass by a factor $\gamma_{0}$ has been swept up;
$t_{\gamma}$ is the corresponding time: for GRB\,081028 $t_{\gamma}\sim
20\,\rm ks$ (observed peak of the re-brightening). While for $t>t_
{\gamma}$ the observed evolution of the break frequency
is marginally consistent with $t^{-3/2}$, it is hard to
reconcile the observed decay $\nu_{\rm m} \propto t^{-\alpha}$ with $\alpha \sim 2.4$--$2.6$
with the expected constant behaviour (ISM) or $\propto t^{-1}$ decay (wind) for $t<t_{\gamma}$.
This argument makes the interpretation of the re-brightening as onset of the
forward shock somewhat contrived. Moreover, the identification of
$t=20\,\rm{ks}$ with the deceleration time is also disfavoured by the
earlier very flat optical light-curve. An alternative explanation is discussed
in the next subsection.
\subsubsection{The off-axis scenario}
\label{subsec_aftmod_offaxis}
For a simple model of a point source at an angle of
$\theta$ from the line of sight, moving at a Lorentz factor $\gamma
\gg 1$ with $\gamma\propto R^{-m/2}$, where $R$ is its radius, the
observed time is given by:
\begin{equation}
t = \frac{R}{2c\gamma^2}\left(\frac{1}{1+m}+\gamma^2 \theta^2\right)\
\end{equation}
The peak in the light curve occurs when the beaming cone widens enough
to engulf the line of sight, $\gamma(t_{\rm{peak}}) \sim 1/\theta$, so that
before
the peak $t \approx R\theta^2/2c \propto R$. We consider an
external density that scales as $R^{-k}$ (with $k < 4$) for which $m =
3-k$. When the line of sight is outside the jet aperture, at an angle $
\theta$ {\it from the outer edge of the jet}, the
emission can be approximated to zeroth order as arising from a point
source located at an angle $\theta$ from the line of sight \citep{Granot02b}.
We have:
\begin{equation}\label{Eq:offaxis1}
\frac{t_0}{t} \sim \frac{\nu}{\nu_0}=\frac{1-\beta}{1-\beta \cos \theta}
\equiv a_{\rm aft} \approx \frac{1}{1+\gamma^2 \theta^2}
\end{equation}
where $\beta = (1-\gamma^{-2})^{1/2} = v/c$ and
the subscript $0$ indicates the $\theta = 0$ (on-axis) condition.
The observed flux is given by
\begin{equation}\label{eq_flux_theta}
F_{\nu}(\theta, t) \approx a_{\rm aft}^3 F_{\nu/a_{\rm aft}}(0,a_{\rm aft}t)\
\end{equation}
and peaks when $\gamma\sim 1/\theta$. In the following we use the notations
$a_{\rm aft} \approx 1/(1+\gamma^2 \theta^2)$; $a$ for the particular case where
$\gamma = \Gamma_0$ (where $\Gamma_0$ is the initial Lorentz factor of the
fireball): $a \approx 1/(1+\Gamma_0^2 \theta^2)$.
For $t\ll t_{\rm{peak}}$, $\gamma\theta \gg 1$ and therefore $a_{\rm{aft}}
\approx (\gamma\theta)^{-2} \propto \gamma^{-2} \propto R^{3-k} \propto
t^{3-k}$. In this condition the local emission from a spherically expanding
shell and a jet would be rather similar to each other, and
the usual scalings can be used for an on-axis viewing angle
(e.g., \citealt{Granot02}):
\begin{equation}\label{nu_01}
\nu_{m,0} \propto R^{-3(4-k)/2} \propto t^{-3/2}
\end{equation}
\begin{equation}\label{nu_02}
\nu_{c,0} \propto R^{(3k-4)/2} \propto t^{(3k-4)/(8-2k)}
\end{equation}
with respective off-axis frequencies:
\begin{equation}\label{nu_03}
\nu_m \approx a\,\nu_{m,0} \propto R^{(k-6)/2} \propto t^{(k-6)/2}
\end{equation}
\begin{equation}\label{nu_04}
\nu_c \approx a\,\nu_{c,0} \propto R^{(2+k)/2} \propto t^{(2+k)/2}
\end{equation}
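The exponents in Eqs.~(\ref{nu_01})-(\ref{nu_04}) follow from elementary
algebra once $\gamma\propto R^{-(3-k)/2}$, $a_{\rm aft}\propto\gamma^{-2}\propto R^{3-k}$
and $t\propto R$ (before the peak) are given. As a purely illustrative
cross-check, not part of the original derivation, the bookkeeping can be
automated:
\begin{verbatim}
# Symbolic cross-check of the off-axis exponents (illustrative sketch)
import sympy as sp

k = sp.symbols('k')
num0  = -sp.Rational(3, 2) * (4 - k)   # nu_{m,0} ~ R^{-3(4-k)/2}
nuc0  = (3 * k - 4) / 2                # nu_{c,0} ~ R^{(3k-4)/2}
a_exp = 3 - k                          # a_aft ~ R^{3-k} for t << t_peak

print(sp.expand(a_exp + num0))         # k/2 - 3, i.e. (k-6)/2
print(sp.expand(a_exp + nuc0))         # k/2 + 1, i.e. (k+2)/2
\end{verbatim}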
For $t>t_{\rm{peak}}$,~ $a_{\rm{aft}} \approx 1$ and $\nu \approx
\nu_0$, so that the break frequencies have their familiar temporal
scaling for a spherical flow (eq.~\ref{nu_01} and \ref{nu_02})\footnote{While
these expressions are derived for a spherical flow, they are
reasonably valid even after the jet break time $t_{\rm jet}$ as
long as there is relatively very little lateral expansion as shown
by numerical simulations (see e.g., \citealt{Granot01c};
\citealt{Zhang09b} and references therein).}.
For a uniform external medium ($k = 0$), $\nu_c \propto t$ and
$t^{-1/2}$ before and after the peak, respectively, while for a
stellar wind environment ($k = 2$) the corresponding temporal scalings
are $t^2$ and $t^{1/2}$. In both cases this is inconsistent with the
observed rapid decrease in the value of the break frequency
($\nu_{\rm{b}}\propto t^{-2.6}$) unless we require a very sharp
increase in the magnetic field within the emitting region due
to a large and sharp increase in the external density \citep{Nakar07}.
We consider this possibility unlikely (see Sect. \ref{subsec_aftmod}).
Alternatively, the break frequency could be $\nu_m$, for a fast
cooling spectrum where $\nu_c$ is both below $\nu_m$ and below the
optical. In this case, for $t<t_{\rm{peak}}$ we have $\nu_m\propto t^{-3}$
($t^{-2}$) for a $k = 0$ ($k= 2$) environment; after the peak
$\nu_m \propto t^{-3/2}$ independent of $k$. Since we observe
$\nu_{\rm b} \propto t^{-2.6 \pm 0.2}$ (or $\nu_{\rm b}\propto
(t-t_0)^{-2.3 \pm 0.1}$ with $t_0=2\,\rm ks$)
over about a decade in time around the light-curve peak, this is
consistent with the expectations for a reasonable value of $k$.
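To make the comparison quantitative, the following minimal numerical
sketch (illustrative only; the measured index is the one quoted above)
contrasts the predicted pre- and post-peak decay of $\nu_{\rm m}$ with the
observation:
\begin{verbatim}
# pre-peak: nu_m ~ t^{(k-6)/2};  post-peak: nu_m ~ t^{-3/2}
obs = -2.6                                  # observed index (+/- 0.2)
for k in (0.0, 2.0):
    print(f"k={k:.0f}: pre-peak {(k - 6.0) / 2.0:+.1f}, post-peak -1.5")
# the pre-peak slope alone would reproduce the observation for
print(f"k ~ {2.0 * obs + 6.0:.1f}")         # -> 0.8
# for k=0 the indices (-3, -1.5) bracket -2.6, so a fit mixing the
# two phases naturally yields an intermediate value
\end{verbatim}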
Constraints on the model parameters are derived as follows: given that
we see only one break frequency in our SEDs, which we identify with $\nu_{\rm
m}$, we must require $\nu_{\rm c}<\nu_{\rm opt}(\approx10^{15}\,\rm Hz)$.
The tightest constraints are derived at $t_{\rm peak}$, when $\nu_{\rm c}$
reaches its maximum value (it increases with time before
$t_{\rm peak}$ and decreases with time after $t_{\rm peak}$ for
$k<4/3$). From
\cite{Granot02},
their Table 2, spectral break 11, this means:
\begin{equation} \label{eq_cdition_nuc<nuopt}
\epsilon_B^{3/2} n_0 E_{k,54}^{1/2}(1+Y)^2 > 10^{-3}\
\end{equation}
where $\epsilon_{\rm{B}}$ is the fraction of the downstream (within the shocked region)
internal energy going into the magnetic field; $n_0 = n/(1\;{\rm cm^{-3}})$
is the
external medium density; $E_{k,54}=E_{{\rm k,iso}}/(10^{54}\;{\rm ergs})$ is
the isotropic kinetic energy; $Y$ is the Compton parameter which for fast
cooling
reads $Y \approx [(1+4\epsilon_e/\epsilon_B)^{1/2}-1]/2$,
\citep{Sari01}; $\epsilon_{\rm{e}}$ is the fraction of the internal energy that is
given just behind the shock front to relativistic electrons that form a power-law
distribution of energies: $N_{\rm e}\propto \gamma_{\rm e}^{-p}$ for
$\gamma_{\rm max}>\gamma_{\rm e}>\gamma_{\rm min}$.
Assuming equipartition ($\epsilon_e = \epsilon_B=1/3$), $Y \approx 0.62$, Eq.
\ref{eq_cdition_nuc<nuopt} translates into:
\begin{equation} \label{eq_cdition_nuc<nuopt2}
n_0 \gtrsim 2 \times 10^{-3} E_{k,54}^{-1/2}
\end{equation}
For an efficiency of conversion of kinetic energy into gamma-rays of
$\epsilon_{\gamma}=1\%$, the observed $E_{\rm \gamma,iso}=1.1\times
10^{53}\,\rm erg$
(see Sect. \ref{SubSec:specBAT}) implies:
$n_0 \gtrsim 6 \times 10^{-4}$.
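These numbers follow from direct substitution into
Eq. (\ref{eq_cdition_nuc<nuopt}); a minimal sketch (assuming the
equipartition values quoted above) reads:
\begin{verbatim}
eps_e = eps_B = 1.0 / 3.0
Y = ((1.0 + 4.0 * eps_e / eps_B) ** 0.5 - 1.0) / 2.0     # ~0.62
coeff = 1e-3 / (eps_B ** 1.5 * (1.0 + Y) ** 2)
print(f"Y ~ {Y:.2f};  n0 > {coeff:.1e} * E_k54^(-1/2)")  # ~2e-3
E_k54 = (1.1e53 / 0.01) / 1e54   # E_k = E_gamma / eps_gamma, eps_gamma = 1%
print(f"n0 > {coeff / E_k54 ** 0.5:.0e} cm^-3")          # ~6e-4
\end{verbatim}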
Using the best fit simple power-law
models for the break frequency evolution with time of Sect. \ref{subsec:SED}
we have $\nu_{\rm b}(112\,\rm ks)\sim 1.5\times 10^{14}\,\rm Hz$. Following
\cite{Granot02}, their Table 2, spectral break 9, this implies (adopting
a value that roughly agrees with the results for the range of $p$ values
derived below):
\begin{equation} \label{eq_cdition_num_sim_nuopt_p}
\left(\frac{\bar{\epsilon}_e}{\xi_e}\right)^2 \epsilon_B^{1/2}
\sim 2 \times 10^{-3} \, E_{k,54}^{-1/2}
\end{equation}
where $\bar{\epsilon}_e = \epsilon_e\gamma_m/\langle\gamma_e\rangle$ and
$\xi_{\rm{e}}$ is the fraction of accelerated electrons.
The value of $p$ is $p=1.97 \pm0.03$ with intrinsic reddening
$E(B-V)_{z}=0.03$ ($\chi^{2}/\rm{dof}=135/138$). Freezing the intrinsic reddening to
$E(B-V)_{z}=0.06$ gives $p=2.03\pm 0.02$ ($\chi^{2}/\rm{dof}=140.8/139$) while freezing
it to $E(B-V)_{z}=0.08$ gives $p=2.08\pm0.02$ ($\chi^{2}/\rm{dof}=158.6/139$). We thus
take $p=2.0\pm 0.1$. In particular, we calculate the range of values obtained
for the microphysical parameters in the three cases $p=2.1$, $p=2$ and $p=1.9$
since the expression of
$\bar{\epsilon}_e$ changes when $p>2$, $p=2$ and $p<2$ \citep{Granot06b}:
\begin{eqnarray} \label{eq_bar_epsilon_e}
\frac{\bar{\epsilon}_e}{\epsilon_e} = \left\{ \begin{array}{ll}
\approx (p-2)/(p-1) & p>2\\
1/\ln(\gamma_{\rm max}/\gamma_{\rm min}) & p=2\\
(2-p)/(p-1) (\gamma_{\rm min}/\gamma_{\rm max})^{2-p} & p<2
\end{array} \right.
\end{eqnarray}
$\gamma_{\rm max}$ is obtained by equating the acceleration and cooling
times of an electron, and is $\gamma_{\rm max} = \sqrt{3q_e/(\sigma_T B' (1+Y))}$.
Calculating the magnetic field value by $B' = \gamma_{\rm aft} c \sqrt
{32\pi\epsilon_{\rm B}n m_p}$ and assuming $n_0=1$, $\epsilon_e =
0.3$, $\epsilon_B = 0.1$ and $\gamma_{\rm aft}=30$ we obtain $\gamma_{\rm max}
\sim 10^7$. Taking $\gamma_{\rm min} \sim 500$ (obtained for $p\sim 2.1$), we have
$(\gamma_{\rm min}/\gamma_{\rm max}) \sim 5 \times 10^{-5}$ (given the way
this ratio appears in equation (\ref{eq_bar_epsilon_e}) - either in a
logarithm or with a power $2-p=0.1$ in our case - the dependence of
the ratio $\bar{\epsilon}_e/\epsilon_e$ on it is very weak, and variations
in its value have only a small effect). Then, since for $p=2.1$, $(p-2)/(p-1)
\sim 0.1$, and for $p=2$, $1/\ln(\gamma_{\rm max}/\gamma_{\rm min}) \sim 0.1$, for
$p\ge2$ we obtain $(\epsilon_e/\xi_e)^2 \epsilon_B^{1/2} \sim 0.2$. From the
equipartition value - giving the maximum possible values $\epsilon_e/\xi_e=
\epsilon_B=1/3$ - we obtain an upper limit on the fraction of accelerated
electrons: $\xi_e \lesssim 0.3$. For $p=1.9$ we have $(2-p)/(p-1) (\gamma_{
\rm min}/\gamma_{\rm max})^{2-p}\sim 0.04$, so that $(\epsilon_e/\xi_e)^2
\epsilon_B^{1/2} \sim 1.25$ and $\xi_e \lesssim 0.2$. Since the constraints
on the microphysical parameters are very similar in all cases, the exact
value of $p$ is not of primary importance and the approximation $p=2.0\pm0.1$
is consistent.
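The quoted values of $\gamma_{\rm max}$, $\bar{\epsilon}_e/\epsilon_e$ and
$(\epsilon_e/\xi_e)^2\epsilon_B^{1/2}$ can be reproduced with the short
numerical sketch below (illustrative only; Gaussian cgs units, and
$E_{k,54}=1$ is assumed in the last step):
\begin{verbatim}
import numpy as np

q_e, sigma_T, m_p, c = 4.80e-10, 6.65e-25, 1.67e-24, 3.0e10   # cgs
eps_e, eps_B, n, gamma_aft = 0.3, 0.1, 1.0, 30.0
Y = (np.sqrt(1.0 + 4.0 * eps_e / eps_B) - 1.0) / 2.0

B = gamma_aft * c * np.sqrt(32.0 * np.pi * eps_B * n * m_p)   # comoving field
gamma_max = np.sqrt(3.0 * q_e / (sigma_T * B * (1.0 + Y)))
print(f"gamma_max ~ {gamma_max:.1e}")                         # ~1e7

r = 500.0 / gamma_max                        # gamma_min / gamma_max ~ 5e-5
cases = {2.1: 0.1 / 1.1,                     # (p-2)/(p-1)
         2.0: 1.0 / np.log(1.0 / r),         # 1/ln(gamma_max/gamma_min)
         1.9: (0.1 / 0.9) * r ** 0.1}        # (2-p)/(p-1) r^{2-p}
for p, frac in cases.items():
    print(f"p={p}: bar_eps/eps ~ {frac:.2f}, "
          f"(eps_e/xi_e)^2 eps_B^0.5 ~ {2e-3 / frac ** 2:.2f}")
\end{verbatim}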
Since the evolution of the break frequency is consistent with an off-axis
interpretation of the afterglow, we further test this scenario by
deriving the viewing angle and the half-opening angle of the jet. The jet break
time is given by \cite{Sari99} for the ISM and by \cite{Chevalier00}
for the wind environment:
\begin{equation}
t_{\rm jet} \approx \left\{ \begin{array}{ll}
1.2\; (1+z)\left(\frac{E_{54}}{n_0}\right)^{1/3}
\left(\frac{\Delta \theta}{0.1} \right)^{8/3}\;{\rm days} & (k=0)\ \\
\\
6.25\; (1+z)\left(\frac{E_{54}}{A_*}\right)
\left(\frac{\Delta \theta}{0.1} \right)^{4}\;{\rm days} & (k=2)\
\end{array}\right.
\end{equation}
From Table \ref{Tab:XRTrebfit} we read a post-break power-law decay index
$b=2.1\pm0.1$ ($e=2.3\pm0.1$) if $t_{\rm jet}\sim t_{\rm peak}$
($t_{\rm jet}=t_{\rm br_{2}}$). Both are consistent with being post-jet
break decay indices. We therefore
conservatively assume $t_{\rm jet}<1\,\rm day$, which leads to:
\begin{equation}\label{theta2}
\Delta \theta <
\left\{ \begin{array}{ll}
0.055 \left( \frac{E_{54}}{n_0} \right)^{-1/8}\;{\rm rad} & \quad (k=0)\ \\
\\
0.045 \left( \frac{E_{54}}{A_*} \right)^{-1/4}\;{\rm rad} & \quad (k=2)\
\end{array} \right.
\end{equation}
Evaluating Eq. 9 of \cite{Nousek06} at $t=t_{\rm peak}$, when
$\gamma\sim 1/\theta$, we obtain:
\begin{equation}\label{theta1}
\frac{1}{\gamma(t_{\rm peak})}\approx\theta =\left\{ \begin{array}{ll}
0.03\left(\frac{E_{54}}{n_0} \right)^{-1/8}\rm{rad} &\quad (k = 0)\ \\
\\
0.03 \left(\frac{E_{54}}{A_*}\right)^{-1/4}\rm{rad} &\quad (k = 2)\
\end{array} \right.
\end{equation}
Using Eq. \ref{eq_cdition_nuc<nuopt2} for the ISM environment we
finally have $\theta>0.014E_{k,54}^{-3/16}\,\rm rad$. From the
comparison of Eq. \ref{theta1} and Eq. \ref{theta2} it is apparent that
$\theta>\Delta\theta/2$. Moreover, the slope of the rising part
of the re-brightening of the afterglow is $\sim 1.8$, which is
in rough agreement with the rising slope of the re-brightening obtained
from model 3 of \cite{Granot02b} - see their Fig. 2 - for
$\theta \sim 3 \Delta \theta$. This is consistent with $\theta> \Delta \theta/2$.
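For fiducial values $E_{54}=n_{0}=A_{*}=1$, Eqs.~(\ref{theta2}) and
(\ref{theta1}) translate into the following numbers (minimal illustrative
sketch):
\begin{verbatim}
E54, n0, Astar = 1.0, 1.0, 1.0
dtheta_max = {0: 0.055 * (E54 / n0) ** -0.125,
              2: 0.045 * (E54 / Astar) ** -0.25}   # upper limits on Dtheta
theta = {0: 0.03 * (E54 / n0) ** -0.125,
         2: 0.03 * (E54 / Astar) ** -0.25}
for k in (0, 2):
    print(f"k={k}: Delta_theta < {dtheta_max[k]:.3f} rad, "
          f"theta ~ {theta[k]:.3f} rad  -> theta > Delta_theta/2")
\end{verbatim}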
The off-axis interpretation implies that the value
of the observed gamma-ray isotropic energy $E_{\gamma,\rm{iso},\theta}$ corresponds
to an actual on-axis input of $E_{\gamma,\rm{iso},0} \approx a^{-2}E_{\gamma,
\rm{iso},\theta}$ if $\theta<\Delta \theta$
and $E_{\gamma,\rm{iso},0} \approx a^{-3}E_{\gamma,\rm{iso},\theta}$ if $\theta>\Delta \theta$.
Since $E_{\rm \gamma,iso,\theta} \sim 10^{53}\;$erg,
this may lead to a very high energy output for this burst, possibly an
unphysical one. It is therefore important to obtain limits on the
Lorentz factor of the prompt emission, since
$a^{-1} \approx 1+\Gamma_0^2\theta^2$.
Lower limits to $\Gamma_0$ can be obtained following \cite{Lithwick01},
requiring the medium to be optically thin to annihilation of photon pairs
(Eq. \ref{eq_gam_mins_theta_num})
and to scattering of photons by pair-created electrons and positrons
(Eq. \ref{eq_gam_mins_theta_num2})\footnote{See Appendix \ref{App1}
for a complete derivation of Eq. \ref{eq_gam_mins_theta_num} and
\ref{eq_gam_mins_theta_num2}.}:
\begin{eqnarray}\label{eq_gam_mins_theta_num}
\Gamma_{{\rm min, \gamma \gamma}} =
\frac{\widehat{\tau}_{\theta}^{1/(2\beta_{\rm B}+2)}\left(\frac{150\;{\rm
keV}}
{m_e c^2}\right)^{(\beta_{\rm B}-1)/(2\beta_{\rm B}+2)}}{(1+z)^{(1-\beta_{\rm B})/(\beta_{\rm B}+1)}}
\nonumber\\
\times\left\{
\begin{array}{ll}
a^{-1/2} & \theta <\Delta \theta\\
\left(a_*\right)^{1/(2\beta_{\rm B}+2)} a^{-(\beta_{\rm B}+2)/(2(\beta_{\rm B}+1))}
& \theta > \Delta \theta
\end{array}\right. \
\end{eqnarray}
\begin{eqnarray}\label{eq_gam_mins_theta_num2}
\Gamma_{{\rm min,e^{\pm}}} = \widehat{\tau}_{\theta}^{1/(\beta_{\rm B}+3)}
(1+z)^{(\beta_{\rm B}-1)/(\beta_{\rm B}+3)}
\nonumber \\
\times
\left\{ \begin{array}{ll}
a^{-2/(\beta_{\rm B}+3)} & \theta < \Delta \theta\\
\left(a_*\right)^{1/(\beta_{\rm B}+3)} a^{-3/(\beta_{\rm B}+3)}
& \theta > \Delta \theta
\end{array}\right. \
\end{eqnarray}
where $\beta_{\rm B}$ is the high energy photon index of the prompt
Band spectrum.
From \cite{Blandford76}, the Lorentz factor at the deceleration
radius and at the peak of the re-brightening can be related by $\gamma
(R_{\rm peak}) = \gamma(R_{\rm dec})(R_{\rm peak}/R_{\rm dec})^{-(3-k)/2}$.
The Lorentz factor at the deceleration radius is a factor $g<1$ of
the Lorentz factor of the prompt emission $\Gamma_0$. Combining this
with $a^{-1} = 1+\Gamma_0^2\theta^2$ and $\theta=1/\gamma(t_{\rm peak})$,
we obtain the following expression for the parameter $a$:
\begin{equation}
a^{-1} = 1+ g^{-2}\left(\frac{R_{\rm peak}}{R_{\rm dec}}\right)^{3-k}\ .
\end{equation}
Since $g \lesssim 1/2$, and $R_{\rm dec} \lesssim R_{\rm peak}$,
we have $a^{-1} \gtrsim 5$ which, when substituted
in equation \ref{eq_gam_mins_theta_num} and \ref{eq_gam_mins_theta_num2} and keeping
the strongest constraint, implies $\Gamma_0 \gtrsim 46$. To consider the
other extreme case, where the deceleration time is $\sim T_{\rm GRB}$,
one should be careful in translating the ratio of radii to ratio of times:
for a prompt emission with a single pulse, the duration of the GRB
$T_{\rm GRB}$ is the duration of the pulse, which changes with the
parameter $a$ from on-axis to off-axis, as then does $t_{\rm dec}$.
We can therefore use off-axis values of the time $t\sim R$ which means
$R_{\rm peak}/R_{\rm dec} \sim t_{\rm peak}/t_{\rm dec} \sim t_{\rm peak}/
T_{\rm GRB}$ in our case here. Since $t_{\rm peak} \sim 2\times10^{4}\;$s
and $T_{\rm GRB}=264.3\;$s (we identify the duration of the GRB with the
$T_{90}$ parameter), $a^{-1} \gtrsim 300$ for $k=2$ (then
$\Gamma_0 \gtrsim 230$) and $a^{-1} \gtrsim 1.7\times10^{6}$ ($\Gamma_0
\gtrsim 17\times10^3$) for $k=0$. In the case of a prompt emission with
several pulses, as is the case for GRB\,081028, each pulse duration
increases by a factor $a^{-1}$ from on-axis to off-axis, however the
total duration of the burst does not increase much, approximately by
a factor of order unity, since the enlargement of pulses is somewhat
cancelled by their overlapping. In this case, the GRB duration to
consider is the on-axis one, for which $t\propto R/\gamma^2\propto R^{4-k}$;
since $t_{\rm peak}$ is the limit between the on-axis and off-axis
cases we can use $t_{\rm peak}\propto R_{\rm peak}^{4-k}$ and then
$a^{-1} = 1+ g^{-2}\left(\frac{t_{\rm peak}}{T_{\rm GRB}}\right)^{
(3-k)/(4-k)} \gtrsim 100$ (or $\Gamma_0 \gtrsim 136$) for $k=0$
and $a^{-1} \gtrsim 36$ (or $\Gamma_0 \gtrsim 94$) for $k=2$.
The lower limit on the value of $a^{-1}$ thus ranges between\footnote{Since GRB\,081028 is composed of at least two pulses,
we consider the most relevant case, when the observed off-axis duration
of the prompt emission is close to the on-axis one.} $\sim 5$
and $\sim 10^2$: this implies values
of the isotropic on-axis gamma-ray energy output to range between
$E_{\rm \gamma,iso,0} \sim 3\times10^{54}\;$erg and $E_{\rm \gamma,iso,0} \sim 10^{57}\;$erg
if $\theta < \Delta \theta$ and even greater values for $\theta > \Delta \theta$:
between $E_{\rm \gamma,iso,0} \sim 1.4\times10^{55}\;$erg and $E_{\rm \gamma,iso,0} \sim 10^{59}\;$erg.
These very high values could suggest that the observed prompt emission is from a
different component than the observed afterglow emission. This possibility independently
arises from the prompt efficiency study: the next section is dedicated to an investigation
of this topic.
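The $a^{-1}$ estimates discussed above, and the on-axis energies they
imply, are summarized by the following sketch (illustrative only;
$g=1/2$, $t_{\rm peak}=2\times10^{4}\;$s and $T_{\rm GRB}=T_{90}=264.3\;$s as in the
text):
\begin{verbatim}
g, t_peak, T_grb, E_gamma = 0.5, 2.0e4, 264.3, 1.1e53

for k in (0, 2):
    # single pulse: off-axis times, t ~ R
    a_inv_single = 1.0 + g ** -2 * (t_peak / T_grb) ** (3 - k)
    # several overlapping pulses: on-axis duration, t ~ R^{4-k}
    a_inv_multi = 1.0 + g ** -2 * (t_peak / T_grb) ** ((3.0 - k) / (4.0 - k))
    print(f"k={k}: a^-1 ~ {a_inv_single:.1e} (single pulse), "
          f"~{a_inv_multi:.0f} (several pulses)")

for a_inv in (5.0, 100.0):     # adopted range of lower limits
    print(f"a^-1={a_inv:.0f}: E_iso,0 ~ {a_inv ** 2 * E_gamma:.1e} erg "
          f"(theta < Dtheta), ~{a_inv ** 3 * E_gamma:.1e} erg (theta > Dtheta)")
\end{verbatim}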
\subsection{Prompt efficiency}
\label{subsec_prompt_eff}
The study of the efficiency of the conversion of the total initial energy
into gamma-rays can in principle shed light on the physical mechanism at
work. In the particular case of GRB\,081028, this study helps us to understand
if the prompt and afterglow emission originated from physically different
regions. The first part of this sub-section is dedicated to the on-axis case;
the second part to the off-axis case.
Assuming that all energy not radiated in gamma-rays
ends up in the kinetic energy $E_{\rm{k}}$ of the afterglow,
the important parameters are the energy radiated in gamma-rays,
$E_{\gamma}$, the kinetic energy of the afterglow, $E_{\rm{k}}$ and
a parameter
$f\equiv E_{\rm k}(10\;{\rm{hr}})/E_{\rm k,0}$ (\citealt{Granot06b};
\citealt{Fan06}, hereafter FP06) that accounts for energy injection
during the shallow decay phase (since energy injection is the most common
explanation for this phase), where $E_{\rm k,0}$ is the initial kinetic
energy of the afterglow, before energy injection. Accounting for
energy injection, the efficiency of the
prompt emission reads:
\begin{equation}
\epsilon_{\gamma}\equiv \frac{E_{\gamma}}{E_{\rm k,0}+E_{\gamma}} = \frac{f\tilde{\epsilon_{\gamma}}}{1+(f-1)\tilde{\epsilon_{\gamma}}}
\end{equation}
where $\tilde{\epsilon_{\gamma}} \equiv E_{\gamma}/(fE_{\rm k,0}+E_{\gamma})
=E_{\gamma}/(E_{\rm k}(10\;{\rm{hr}})+E_{\gamma})$ is the prompt efficiency in
the case of no energy injection. All the listed
quantities are isotropic equivalent quantities.
The value of $\tilde{\epsilon_{\gamma}}$ can be calculated
from a good estimate of $E_{\rm k}(10\;{\rm{hr}})$, which can be obtained from
the X-ray luminosity at 10 hours if the X-ray frequency $\nu_X$ is above
both $\nu_m$ and $\nu_c$ (FP06; \citealt{Lloyd04}, hereafter LZ04).
This is the case for GRB\,081028 (see Sect. \ref{subsec_aftmod_offaxis})
which shows an isotropic X-ray luminosity of $L_{\rm{x,iso}}(10\,\rm{hr},\,\rm{obs})\sim(6.3\pm1.0)
\times 10^{47}\;\rm{erg\,s^{-1}}$. The calculation of the kinetic energy is done
following the prescriptions of FP06: unlike LZ04, they integrate their model over
the observed energy band $0.3-10\;$keV and consider the effect of inverse Compton
cooling. Equation (9) of FP06 gives the kinetic energy at ten hours:
\begin{eqnarray}
\label{Eq:Ek}
E_{\rm k}(10\;{\rm{hr}})=R\,L_{X,46}^{4/(p+2)}\Big(\frac{1+z}{2}\Big)^{(2-p)/(p+2)}\times
\nonumber
\\\epsilon_{\rm{B,-2}}^{-(p-2)/(p+2)}
\epsilon_{\rm{e,-1}}^{4(1-p)/(p+2)}
(1+Y)^{4/(p+2)}
\end{eqnarray}
where $R=9.2\times 10^{52}[t(10\,\rm{hr})/T_{90}]^{\frac{17\epsilon_{e}}{16}}\,\rm{erg}$.
This implies we need to make some assumptions on the microphysical parameters
$\epsilon_e$, $\epsilon_B$ and $Y$. For the
latter, as the afterglow is likely to be in fast cooling
(see Sect. \ref{subsec_aftmod_offaxis}), then $Y>1$ and we take $Y \sim
(\epsilon_e/\epsilon_B)^{1/2}$ following FP06. \cite{Medvedev06} showed that during the
prompt emission it is most likely that $\epsilon_e\approx\sqrt{\epsilon_B}$. The
values of the microphysical parameters being poorly constrained
(Sect. \ref{subsec_aftmod_offaxis}), we set $\epsilon_e=0.3$ and $\epsilon_B=0.1$,
which is consistent with the values obtained in subsection \ref{subsec_aftmod_offaxis}
(see eq. \ref{eq_cdition_num_sim_nuopt_p} when $\xi_e <1$ and eq.
\ref{eq_bar_epsilon_e} and the paragraph below it).
Taking $p\sim2$, we thus obtain
$E_{\rm k}(10\;{\rm{hr}}) = 1.3\times10^{55}\;$erg. Combined with the
observed isotropic gamma-ray energy of the prompt emission
$E_{\rm \gamma} = 1.1\times10^{53}\;$ergs,
we have
$\tilde{\epsilon_{\gamma}} = 8.6\times10^{-3}$ (corresponding to a ratio $E_{\rm k}(10\;{\rm{hr}})/E_{\rm \gamma} \approx 116$):
this is low, even compared to the values obtained by FP06 (their values being between
$0.89$ and $0.01$ - see their Table 1), which are already lower than previous estimates
by LZ04. Now returning to the efficiency including energy injection, we can obtain an
estimate of $E_{\rm k,0}$ by using the previous formula but at the peak of
the re-brightening and taking $R=9.2\times 10^{52}$ erg (thus ignoring
energy radiative losses since the end of the prompt emission), which with its peak
luminosity $L_{\rm{peak}} = 1.2 \times 10^{48}\;$erg s$^{-1}$ gives an initial kinetic
energy injected into the afterglow $E_{\rm k,0} = 1.16 \times 10^{55}\;$erg and
then an efficiency of the prompt emission which is as low as
$\epsilon_{\gamma}= 9.4\times 10^{-3}$.
This calculation assumes an on-axis geometry and accounts for energy injection.
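The arithmetic of this estimate is compactly summarized below (values as
quoted above; the small difference with respect to the quoted
$\tilde{\epsilon_{\gamma}}=8.6\times10^{-3}$ reflects the rounding of
$E_{\rm k}(10\;{\rm hr})$):
\begin{verbatim}
E_gamma  = 1.1e53    # erg, observed isotropic prompt energy
E_k_10hr = 1.3e55    # erg, from the FP06 relation above with p ~ 2
E_k0     = 1.16e55   # erg, initial kinetic energy (no radiative losses)

eps_tilde = E_gamma / (E_k_10hr + E_gamma)       # no energy injection
f = E_k_10hr / E_k0                              # injection parameter
eps_gamma = f * eps_tilde / (1.0 + (f - 1.0) * eps_tilde)
print(f"E_k(10hr)/E_gamma ~ {E_k_10hr / E_gamma:.0f}")
print(f"eps_tilde ~ {eps_tilde:.1e};  eps_gamma ~ {eps_gamma:.1e}")  # ~9.4e-3
\end{verbatim}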
\begin{figure}
\centering
\includegraphics[scale=0.44]{EisoLisoRest.eps}
\caption{Distribution of $E_{\gamma}/t L_{X}(t)$ with $t=10\,\rm{hr}$
rest frame, for the sample of 31 long GRBs detected by \emph{Swift} with $E_{\gamma}$ provided by Amati et al. (2008).
Black solid line: Gaussian best fit to the distribution. The dashed black line marks the position of GRB\,081028 in the distribution, while the
black solid arrow points in the direction of increasing radiative efficiency
parameter $\epsilon_{\gamma}$.}
\label{Fig:EisoLisoRest}
\end{figure}
To strengthen the result of the above paragraph that the efficiency of
GRB\,081028 when considered on-axis is low, we analyse its prompt
and afterglow fluences and compare them to a sample of Swift bursts from
\cite{Zhang07b}, since fluences require no assumptions to be obtained.
The prompt $1-10^4\,\rm{keV}$
gamma-ray fluence\footnote{Depending on the high energy
slope of the Band spectrum, we have $S_{\rm \gamma}\sim 6.6\times 10^{-6}\,\rm{erg\,
cm^{-2}}$ for $\beta_{\rm B}= -2.5$ and $S_{\rm \gamma}\sim 9.5\times 10^{-6}\,\rm{erg\,
cm^{-2}}$ for $\beta_{\rm B}= -2.1$.} of GRB\,081028 is $S_{\rm \gamma}\sim 8\times 10^{-6}\,\rm{erg\,
cm^{-2}}$ and its afterglow X-ray fluence, calculated by $S_{\rm X} \sim t_{\rm peak} F_{\nu}
(t_{\rm peak})$ to be consistent with the method of \cite{Zhang07b},
is $S_{\rm X} \approx 3\times 10^{-7}\,\rm{erg\,cm^{-2}}$, so that their
ratio is $S_{\gamma}/S_{\rm{X}} \approx 26.7$, placing GRB\,081028 in the lower
part of Fig. 6 of \cite{Zhang07b}. Compared to their sample of 31 {\it Swift} bursts, the $15-150\;$
keV fluence of GRB\,081028, which is $3.2\times 10^{-6}\,\rm{erg\,
cm^{-2}}$, is well within their range of values (spanning from $\approx
8\times 10^{-8}\,\rm{erg\,cm}^{-2}$ to $\approx 1.5\times 10^{-5}\,\rm{erg\,cm}^{-2}$; sixth column
of their Table 1), whereas its X-ray fluence is higher than most of them (see columns 6-9 of their
Table 2). This means that whereas GRB\,081028 released as much energy in its prompt
emission as most bursts, more kinetic energy was injected in its outflow.
This gives a lower efficiency than most of the GRBs analysed by \cite{Zhang07b},
consistent with the scenario above.
Figure \ref{Fig:EisoLisoRest} clearly shows that this is likely
to be extended to other \emph{Swift} long GRBs: at late afterglow epochs the X-ray band
is above the cooling frequency and the X-ray luminosity is a good probe of
the kinetic energy.
In particular $E_{\rm{k}}\propto L_{\rm{x,iso}}$ (see Eq. \ref{Eq:Ek}): this means that high (low)
values of the ratio $E_{\gamma}/L_{\rm{x,iso}}$ are linked to high (low) values of
radiative efficiency.
The afterglow modelling of the previous section favours an off-axis
geometry. In this case, considering that $E_{\rm k,iso} \sim 10^{55}\;$erg, for the lower
limit $a^{-1}\sim 5$ (see Sect. \ref{subsec_aftmod_offaxis}) the efficiency of the prompt emission
becomes $\epsilon_{\gamma} \sim 0.23$, which is a more usual value
(it is in the middle of the efficiency distribution of FP06).
However, the upper limit of the range of values for $a^{-1}$ gives an
efficiency of $99\%$ (when $\theta < \Delta \theta$, and thus an even
higher value for $\theta > \Delta \theta$), which is exceptionally high and
very hard to
reconcile with models of the prompt emission. This would suggest that
the observed prompt emission is from a different component than the
observed afterglow emission.
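A minimal sketch of the off-axis efficiency estimate (assuming
$\theta<\Delta\theta$ and $E_{\rm k,iso}\sim10^{55}\;$erg):
\begin{verbatim}
E_k, E_gamma_obs = 1.0e55, 1.1e53
for a_inv in (5.0, 100.0):
    E_gamma_0 = a_inv ** 2 * E_gamma_obs   # on-axis isotropic energy
    print(f"a^-1 = {a_inv:.0f}: eps_gamma ~ "
          f"{E_gamma_0 / (E_gamma_0 + E_k):.2f}")
# -> ~0.22 for a^-1 ~ 5 (a usual value) and ~0.99 for a^-1 ~ 100
\end{verbatim}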
An alternative way of achieving a more reasonable gamma-ray efficiency is if
the observed prompt gamma-ray emission is from material along our line of sight,
which has $E_{\rm{k,iso}} \sim E_{\rm{\gamma,iso}}$, while the peak in the X-ray and optical
light-curves at $\sim2\times 10^4\;$s is from a narrow jet-component pointed away from
us that has a significantly higher $E_{\rm{k,iso}}$. In this picture the afterglow
emission of this material along our line of sight (and possibly also between
our line of sight and the core of the off-axis jet component) could account for
the very flat (almost constant flux) early optical emission (from the white
light detection at $275\;$s, through the $R$-band detection at $1780\;$s, and
the I-band detections at several thousand seconds). This early optical emission
appears to be from a different origin than the contemporaneous X-ray emission,
and is most likely afterglow emission, regardless of the origin of the prompt emission:
the observed X-ray and optical emission in the time interval
$1.8\,\rm{ks}\leq t \leq9.5\,\rm{ks} $ implies a spectral index $|\beta_{OX}|<0.5$.
Conversely, assuming $\beta_{OX}=0.5$, the expected X-ray contribution of the on-axis
component at these times is $\approx 3\times 10^{-4}\,\rm{mJy}$ which is lower than the
observed X-ray flux for $t<9\,\rm{ks}$ and comparable to the observed one at
$t\sim 9\,\rm{ks}$.
\section{Summary and conclusions}
\label{sec:conclusion}
The 0.3-10 keV X-ray emission of GRB\,081028 consists of
a flat phase up to
$\sim300$ s (the XRT is likely to have captured the prompt emission in the
X-ray energy band) followed by a steep decay with flares superimposed extending
to $\sim7000$ s (component 1). The light-curve then shows a re-brightening which
starts to rise at $t\sim 8000\,\rm s$ and
peaks around $20\,\rm ks$ (component 2). The different spectral and temporal
properties strongly characterise the XRT signal as due to two distinct
\emph{emission} components.
However, their further characterisation as emission
coming from \emph{physically} distinct \emph{regions} is model dependent.
The strong hard-to-soft evolution characterising the prompt and steep
decay phase of GRB\,081028 from trigger time to 1000 s is
well modelled by a shifting Band function: the spectral peak energy
evolves to lower values, decaying as $E_{\rm peak}\propto t^{-7.1\pm0.7}$
or $E_{\rm peak}\propto (t-t_0)^{-4.2\pm2.4}$ when the zero-time of the
power-law is allowed to vary: the best fit constrains
this parameter to be $t_0=109\pm89$ s. In either case our results
are not consistent with the $\propto t^{-1}$ behaviour
predicted by the HLE in its simplest formulation. While a more
realistic version of this model might still account for the
observed $E_{\rm peak}$ evolution, other possibilities must be
investigated as well:
the adiabatic expansion cooling of the $\gamma$-ray source predicts
a steeper than observed light-curve decay and is therefore unlikely.
While the peak is moving, a softening of both the low and high-energy
portions of the spectrum is clearly detected.
The failure of both the curvature effect and the adiabatic cooling argues
against the abrupt switch-off of the GRB source after the prompt emission
and suggests the continuation of the central engine activity during the
steep decay. An off-axis explanation
may reconcile the high latitude emission or the adiabatic expansion
cooling models with the data. This will be explored in a future work.
GRB\,081028 has afforded us the unprecedented opportunity to track a
smoothly rising X-ray afterglow after the steep decay: the rising
phase of the emission component later accounting for the shallow
light-curve phase is usually missed, being hidden by the steep
decay which is the tail of the prompt emission both from the
spectral and from the temporal point of view. The peculiarity
of GRB\,081028 lies in a small overlap in time between the steep
decay and the following re-brightening caused by an unusual delay
of the onset of the second component of emission. Contemporaneous optical
data allow the evolution of the SED during the re-brightening to be constrained:
the spectral distribution is found to be best described by a
photo-electrically absorbed smoothly broken power-law with a break
frequency evolving from $1.6\times 10^{15}\,\rm Hz$ downward to the
optical band. The break frequency can be identified with the
injection frequency of a synchrotron spectrum in the fast cooling regime
evolving as $\nu_{\rm b}\propto t^{-2.6\pm0.2}$.
The intrinsic optical absorption is found to satisfy $A_{V,z}<0.22$.
The observed break frequency scaling is inconsistent with the
standard predictions of the onset of the forward shock emission even if
this model is able to account for the temporal properties
of the X-ray re-brightening (note that in this context
the delay of the second emission component is due to a lower
than usual fireball Lorentz factor or external medium density).
Alternative scenarios have therefore
been considered. While a dust scattering origin
of the X-ray emission is ruled out since we lack
observational evidence for a non-negligible dust extinction and
strong spectral softening, a reverse shock origin cannot be
excluded. However, this can be accomplished only by requiring
non-standard burst parameters: the ejecta should have a tail of
Lorentz factors decreasing to low values; $\epsilon_{e}$ should be
near equipartition; only a small fraction $\xi_{e}\sim 10^{-2}$
of electrons should contribute to the emission.
The predictions of the off-axis model have been discussed in detail:
according to this model a peak of emission is expected when the
beaming cone widens enough to engulf the line of sight.
The delayed onset of the second emission component is not a consequence
of unusual intrinsic properties of the GRB outflow but is
instead an observational artifact, due to the off-axis condition.
The observed evolution of $\nu_{\rm b}$ is consistent with the
expected evolution of the injection frequency of a fast cooling
synchrotron spectrum for $0\lesssim k\lesssim2$. We interpret the light-curve
properties as arising from an off-axis view, with $\theta\sim3\Delta
\theta$ and $\theta\sim0.03(\frac{E_{54}}{n_{0}})^{-1/8}$ for $k$=0
(or $\theta\sim0.03(\frac{E_{54}}{A_{*}})^{-1/4}$ for $k$=2),
$\theta$ being the angle from the outer edge of the jet and $\Delta\theta$
the jet opening angle.
In this scenario, the peculiarity of GRB\,081028, or the reason why we do
not observe more GRB\,081028-like events, may be explained as
follows. Since GRB\,081028 is a particularly bright (and therefore rare)
event when viewed on-axis (with high on-axis $E_{\rm{iso}}$ and $L_{\rm{iso}}$ values),
it is detectable by an off-axis observer even at the cosmological distance
implied by its redshift $z = 3.038$. In addition, GRB\,081028 appears to be
characterized by a particularly narrow jet, for which the ratio of the
detectable off-axis solid angle to on-axis solid angle is larger than for
wider (but otherwise similar) jets. Finally, GRB\,081028 might have a
peculiar angular structure that is not representative of most GRBs, which
would undermine the drawing of statistical conclusions under the
assumption of a similar angular structure for most or all GRB jets.
The radiative efficiency is one of the key parameters in GRB science:
a precise estimate of this parameter would allow one to distinguish
between different models put forward to explain the observed emission.
For the on-axis model, with $\epsilon_{\gamma}\sim 10^{-2}$, the GRB\,081028
efficiency turns out
to be lower than the values obtained by FP06 and LZ04
for a sample of pre-\emph{Swift} GRBs:
this directly implies that, while GRB\,081028 released as much energy
in the prompt emission as most bursts of the two samples, a much
greater kinetic energy was injected into its outflow.
Figure \ref{Fig:EisoLisoRest} clearly shows that this conclusion is
likely to be extended to other \emph{Swift} bursts with secure
$E_{\gamma,\rm{iso}}$ measurement. This picture changes if we
consider the off-axis interpretation: if the deceleration time is much
longer than the prompt duration the
prompt and afterglow emission are consistent with originating from
the same physical component and the efficiency of the burst is
comparable to most bursts; if instead the deceleration time is close to
the end of the prompt emission, then the on-axis isotropic energy output
would imply an extremely high efficiency of $99\%$ which is very hard to
explain. This suggests that the prompt and afterglow emission come
from different physical components.
GRB\,081028 demonstrates the evolution of GRB spectral properties
from the onset of the explosion to $\sim10^{6}\,\rm s$ after trigger
and shows that this is likely to be attributed to two distinct
emission components. These can be constrained only by prompt observations
with broad-band coverage and good time resolution.
\section*{Acknowledgments}
J.G. gratefully acknowledges a Royal Society Wolfson Research Merit Award.
Partly based on observations made with the Nordic Optical Telescope,
operated on the island of La Palma jointly by Denmark, Finland,
Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de
los Muchachos of the Instituto de Astrofisica de Canarias.
The Dark Cosmology Centre is funded by the Danish National Research
Foundation. The PAIRITEL work of J.S.B., A.A.M, and D.S. was partially
supported by NASA grant NNX09AQ66G.
This work is supported by ASI grant SWIFT I/011/07/0, by the
Ministry of University and Research of Italy (PRIN MIUR 2007TNYZXL), by
MAE and by the University of Milano Bicocca (Italy).
\section{Introduction}
\label{sec:intro}
The behavior of
ultracold superfluid Fermi gases
continues to attract attention from
the experimental and theoretical communities.
Through a Feshbach resonance
\cite{KetterleVarenna,Chinreview}
which tunes the strength of the attractive
interaction, these trapped gases can exhibit a crossover from BCS to
Bose Einstein condensation (BEC).
Unlike
the Bose superfluids, at the time of their discovery, the Fermi superfluids
are not associated with any ready-made theory (such as
Gross Pitaevskii, or Bogoliubov theory for bosons).
This provides
an opportunity for theorists to work hand in hand with
experimentalists to arrive at the same level of understanding
of the fermionic as was reached for the bosonic superfluids.
While the Fermi systems are harder to address than the
Bose counterparts, the payoff for progress is great.
Moreover,
there is a general belief that these systems may lead to
important insights into high temperature superconductors (HTSCs),
in part because the HTSCs exhibit an anomalously short coherence
length which suggests that they may be mid-way between BCS and
BEC
\cite{LeggettNature,Ourreview,Sawatzky,Chienlattice2}.
There is, as yet, no clear consensus about the theory which
underlies BCS-BEC crossover although there are three rather well
studied analytic many body theories which have emerged.
The goal of the present paper is to present a comparison and
assessment of these approaches with a particular focus on the two
of the three, which seem most reliable.
In addition to assessing the theoretical approaches,
we address physical consequences and how the theories may
be differentiated through the behavior of the
centrally important spectral function and related density of
states. We do this in the context of radio-frequency (RF) based probes.
What is among the most interesting features of BCS-BEC
crossover is the fact that the normal state (out of which
superfluidity forms) is different from the normal state (Fermi liquid)
associated with strict BCS theory. The
normal state of, for example, a unitary gas
consists of pre-formed pairs which persist below
$T_c$ in the form of non-condensed pair excitations of the condensate.
This excitation branch is in addition to the usual gapped
fermionic excitations.
The normal state is often said to exhibit a ``pseudogap" which has features
in common with the exotic normal state of the high temperature
superconductors. This pseudogap \cite{Ourreview,Strinaticuprates}
reflects the formation of quasi-bound
pairs which in turn require an energy input (called $\Delta$) in
order to break the pairs and create fermionic excitations.
Physically, what differs from one crossover theory to another
\cite{firstordertransitionpapers,Strinaticuprates,Ourreview,Drummond3}
is the nature
of these non-condensed or pre-formed pairs which, respectively,
appear below
and above
$T_c$.
Unlike the pair fluctuations of traditional superconductors (which
are associated with low dimensionality and impurity effects) these pairs
are present because of a stronger-than-BCS attractive interaction.
As a consequence, the pairing gap $\Delta$ persists to temperatures
which can be several times $T_c$ for the case of the unitary gases.
In this paper we address the
temperature dependence of the spectral function particularly
in the normal state.
The density of states (DOS), which
can be obtained from the spectral function, will also be presented. We
compare with experiments in the context of
RF spectra of both unpolarized and polarized
Fermi gases.
Quantum Monte Carlo simulations
\cite{QMCTc,MCpair} provide useful information such as the superfluid
transition temperature $T_c$, entropy, condensate fraction, etc.,
and have recently revealed evidence of non-condensed pairs \cite{Bulgacpairs}
along with a pseudogap \cite{BulgacPG} in the normal phase. Our focus
is on two different finite-temperature BCS-BEC crossover theories and
we present a detailed comparison of the results obtained from the two
theories as well as an assessment of other BCS-BEC crossover theories.
\subsubsection{Analysis of Different Crossover Theories}
A fair amount of controversy \cite{Ourreview,CompareReview,Drummond5,HaussmannRF}
has surfaced in the literature regarding the
three alternative analytic pairing fluctuation schemes. In this paper we
address some of these issues and clarify misleading claims.
At this early stage of understanding we do not believe it is suitable
to invoke (possibly fortuitous)
fits to particular experimental or Monte-Carlo derived
numbers to establish which of these theories is ``best". Rather
in line with the goal of this paper,
one has to look at the differences at a more general level.
One has, furthermore, to subject these theories to careful
consistency tests.
Each of the three many body approaches is associated with a different
ground state. Thus far, only one of these can be written down analytically.
In this context we note that one
can trace the historical origin of the BCS-BEC literature to the observation
that the simplest BCS-like wavefunction
\begin{equation}
\Psi_0=\Pi_{\bf k}(u_{\mathbf{k}}+v_{\mathbf{k}}
c_{\mathbf{k}\uparrow}^\dagger
c_{-\mathbf{k}\downarrow}^\dagger)
|0\rangle ,
\label{eq:1}
\end{equation}
is much more general than originally presumed
\cite{Leggett,Eagles,NSR}. To implement this generalization, all
that is required is that one
solve for the
two variational parameters $u_{\mathbf{k}}$ and $v_{\mathbf{k}}$
in concert with a self consistent condition on the fermionic
chemical potential $\mu$. As the attraction is increased,
$\mu$ becomes different from the
Fermi energy $E_F$, and in particular, in
the BEC regime, $\mu$ is
negative.
This ground state is often called the ``BCS-Leggett" state
and the two variational parameters $u_{\mathbf{k}}$ and $v_{\mathbf{k}}$ can
be converted to two more physically accessible parameters associated
with the zero temperature gap (or equivalently order parameter)
and $\mu$.
The three theories currently of interest can be related to a t-matrix
scheme.
Within a given t-matrix scheme one treats the fermionic
self energy and the pair-propagator using a coupled equation
scheme, but drops the contributions from higher order Green's
functions.
This t-matrix is called $t_{pg}(Q)$, where
$Q=(i\Omega_{l},{\bf q})$ is a four-vector and $\Omega_{l}$ denotes the boson Matsubara frequency; it characterizes the non-condensed pairs which are
described physically and formally in different ways in the different
theories.
Here the subscript $pg$ is associated with the pseudogap (pg)
whose presence is dependent on the non-condensed or pre-formed
pairs.
Quite generally we can write the t-matrix in a ladder-diagram series
as $$ t_{pg}(Q) = \frac{U}{1 + U \chi(Q)},$$ where $\chi(Q)$ is the
pair susceptibility and $U$ denotes the attractive coupling constant.
The
Nozi\`{e}res-Schmitt-Rink (NSR)
theory
\cite{NSR}
is associated with a pair susceptibility
$\chi(Q)$
which is a product
of two bare Green's functions.
The
fluctuation exchange or FLEX approach is associated
with two dressed Green's functions and
has been discussed by Haussmann \cite{Haussmann,
Zwerger}, Zwerger and their
collaborators in the context of the cold gases, and even
earlier in the context of the cuprates \cite{Tremblay2,Tchern,Micnas95}.
It is also called the
Luttinger-Ward formalism \cite{HaussmannRF}, or Galitskii-Feynman
theory \cite{Morawetz}.
Finally, it is well known
\cite{Kadanoff,Patton1971,Abrahams}
that BCS theory (and now
its BCS-BEC generalization) is associated with one bare and
one dressed Green's function in the pair susceptibility.
These differences would seem to be rather innocuous
and technical but they have led to significant qualitative
differences and concurrently strong claims by various
proponents.
We
stress that
while there are several variants, as we discuss below, the
version of the NSR scheme which seems to us most free of concerns is
that discussed in References~\cite{Strinaticuprates} which introduced
a more physical treatment of the number equation. This revision of
strict NSR theory was, in part, an answer to J. Serene \cite{Serene}
who raised a question about a central approximation in the theory in
which the number equation ($n = 2 \sum_K G(K)$, where $G(K)$ is the
single particle Green's function) is approximated by
$n=-\frac{\partial\Omega_{th}}{\partial\mu}$, where the
thermodynamical potential $\Omega_{th}$ is approximated by a
ladder-diagram series. It was shown that this amounts to taking the
leading order in a Dyson series for $G(K)$.
The present paper concentrates on the normal state behavior,
although all three classes of theories have been extended
below $T_c$. What is essential about these extensions is that
the non-condensed pair excitations associated with
$t_{pg}$ are gapless, as in
boson theories.
Indeed, it is in these $ T \leq T_c$
extensions that
a number of concerns have been raised.
In particular, in the leading order extended NSR theory (or so called
``Bogoliubov level" approach),
\cite{Strinati4,Drummond3,Randeriaab},
the gap
equation (which is assumed to take the usual BCS
form, rather than derived, for example,
variationally) does not contain explicit pairing fluctuation
contributions; these enter indirectly only via the fermion chemical
potential $\mu$. At this level, the number equation is the only way in which
explicit pairing
fluctuations are incorporated.
At the so called ``Popov level" calculation, the
gap equation is presumed to contain pair fluctuations
\cite{PS05}
but there is some complexity in ensuring the concomitant gaplessness of
the pair excitations. Similar issues arise with the FLEX
or Luttinger-Ward approach in which (\cite{HaussmannRF} and references therein)
gapless sound modes
must
be imposed somewhat artificially.
While the transition at $T_c$ is second order in
the BCS-Leggett scheme, it is first order \cite{firstordertransitionpapers,Drummond3}
in NSR-based approaches
(as well as in the fully renormalized pair susceptibility
scheme). This leads to unwanted features in the density profiles
\cite{Strinati4} and in the $T$-dependent superfluid density \cite{Griffingroup2},
$\rho_s(T)$.
Despite these unphysical aspects,
the NSR-based scheme captures the physics of Bogoliubov theory of weakly interacting bosons \cite{Strinati4}
and should, in principle,
provide the quantitatively better description of the low $T$ state, particularly in the
BEC limit. Nevertheless some issues have been identified \cite{Lerch}
which suggest the breakdown of true quasi-particles associated with
Bogoliubov-like theories for paired fermions. This, in turn,
derives from
the self consistent treatment of coupling between the non-condensed
pairs and the sound modes. Further analysis will be required to
establish if this is compatible with experimental or
theoretical constraints.
A very early concern about the so-called ``GG"
or FLEX approach was raised in a paper by Kadanoff and Martin \cite{Kadanoff}
in 1961:
``The similarity [to a Bethe-Salpeter equation] has
led several people to surmise that the symmetrical
equation [involving fully dressed G's everywhere] solved in
the same approximation would be more accurate. This surmise is
not correct. The Green's functions resulting from that equation
can be rejected in favor of those used by BCS by means of
a variational principle."
Importantly this
approach does not have a true pseudogap.
Despite claims by the Zwerger group \cite{HaussmannRF}
that theirs is a more fully ``consistent" theory,
and in this context appealing to Ref.~\cite{Moukouri},
the authors of Ref.~\cite{Moukouri} instead say:
``We thus conclude that ... approaches such as FLEX are
unreliable in the absence of a Migdal theorem and that there is
indeed a pseudogap."
Similar observations have appeared elsewhere in the literature
\cite{Moukouri,Tremblay2,Fujimoto,Micnas95}.
As noted in Ref.~\cite{Fujimoto} `` vertex corrections to the self
energy, which are discarded in the previous studies [of FLEX] are
crucially important for the pseudogap". Additional concerns have been
noted recently \cite{Morawetz} that in the FLEX (or $GG$ t-matrix) theory the propagator $G$ does not display
quasiparticle poles associated with
the gap. ``This is because the Dyson equation,
$G(k) = 1/(z-{\bf k}^2/2m-\Sigma(k))$, excludes identical poles
of $G$ and $\Sigma$ while the linear relation demands them''.
In recent work below $T_c$
\cite{Drummond2,Drummond3,Randeriaab}
a non-variational gap equation was used to derive an additional term in the number equation related to $\partial
\Omega_{th} / \partial \Delta_{sc} \neq 0$. Here $\Delta_{sc}$ is the
order parameter and, here, again, $\Omega_{th}$ is the thermodynamical
potential. This extra term means there is no variational free energy
functional, such as required by Ginzburg-Landau theory. Of concern are
arguments that by including $\partial \Omega_{th} / \partial
\Delta_{sc} \neq 0$, it is possible to capture the results of Petrov et
al \cite{Petrov} for the inter-boson scattering length. We see no
physical connection between the exact four-fermion calculations and
the non-variational component of the many body gap equation. It
should, moreover, be stressed that all other t-matrix schemes have
reported an effective pair-pair scattering length given by $a_B=2a$
which is larger than the value $a_B=0.6a$ obtained from a four-body
problem \cite{Petrov}. Here $a$ is the $s$-wave scattering length of
fermions. Indeed, our past work \cite{Shina2} and that of Reference
\cite{PS05} have shown that one needs to go beyond the simple t-matrix
theory to accommodate these four-fermion processes.
Additional concerns arise from the fact that an NSR-based scheme has
difficulty \cite{Parish,Hupolarized} accommodating polarization effects in the
unitary regime. As stated by the authors of Reference
\cite{Hupolarized}: ``Unfortunately, in a region around the unitary
limit we find that the NSR approach generally leads to a negative
population imbalance at a positive chemical potential difference
implying an unphysical compressibility''.
The central weakness of the BCS-Leggett approach (and its finite-$T$
extension) appears to be the fact it focuses principally on the
pairing channel and is not readily able to incorporate Hartree
effects. The evident simplicity of this ground state has raised
concern as well. Clearly, this is by no means the only ground state
to consider but, among all alternatives, it has been the most widely
applied by the cold gas community including the following notable
papers
\cite{Stringari,Stringaricv,Randeria2,Cote,SR06,Rice2,Kinnunen,Machida2,BECBCSvortex,StrinatiJosephson,Basu}.
The central strengths of the finite-$T$ extended BCS-Leggett
approach in comparison with others
are that (i) there
are no spurious first order transitions and (ii) the entire range of
temperatures is accessible.
(iii) Moreover, polarization effects may be readily included
\cite{ChienPRL,heyan},
(iv) as may
inhomogeneities which are generally treated using
Bogoliubov deGennes theory \cite{BECBCSvortex,StrinatiJosephson},
based on this ground state.
The above analysis leaves us with two theoretical schemes which we wish
to further explore: the NSR approach (which in the normal phase
follows directly from the original paper \cite{NSR}) and
the BCS-Leggett-based scheme, as extended away from
zero temperature, and in particular above
$T_c$.
As t-matrix approaches to many body theory,
these are similar in spirit, but different
in implementation.
It is clearest below $T_c$ that the two theories focus on different physics.
NSR approaches view the dominant processes as the coupling
of the order parameter collective modes to the non-condensed
pairs and the BCS-Leggett scheme focuses on the steady
state equilibrium between the gapped fermions and the non-condensed
pairs.
Thus NSR focuses more fully on the bosonic degrees
of freedom and BCS-Leggett focuses on
the fermionic degrees of freedom.
Above $T_c$, because the NSR scheme involves only bare
Green's functions, it is simpler. Thus, it has been studied
at a numerical level in a more systematic fashion. In the
literature, the BCS-Leggett
approach at $T \neq 0$,
has been addressed numerically \cite{Maly1,Maly2,Marsiglio,Morawetz},
assessed more theoretically \cite{Tremblay2},
as well as applied to different physical contexts
\cite{Torma1,Torma2,Micnaslattice}.
In this paper we apply an approximation based on prior
numerical work \cite{Maly1,Maly2} to simplify the calculations.
\subsubsection{The Fermionic Spectral Function}
A central way of characterizing these different BCS-BEC
crossover theories is through the behavior of the fermionic
spectral function, $A ({\bf k}, \omega)$. For the most part, here,
we restrict our consideration to the normal
state where $A ({\bf k}, \omega)$ should indicate the presence
or not of a pseudogap. A momentum integrated form
of the spectral function is reflected in
radio frequency studies-- both tomographic \cite{MITtomo}
or effectively homogeneous and trap averaged
\cite{Grimm4,KetterleRF}.
One of the principal observations of this paper is that these
momentum integrated probes are not, in general, sufficiently sensitive to pick up
more than gross differences between the three crossover theories.
However, there are now momentum resolved RF studies \cite{Jin6} which
probe the spectral function more directly, in a fashion similar to
angle-resolved photoemission spectroscopy (ARPES) probes of condensed
matter. A central aim of this paper is to show how these studies in
future will be able to differentiate more clearly between the
different crossover schools. Here we confine our attention to
homogeneous systems, although experiments are necessarily done for the
trapped case. In addition to RF spectroscopy, it was proposed
\cite{GeorgesSpectral} that the spectral function can also be measured
in Raman spectroscopy.
We note that for the HTSCs, ARPES studies have been centrally important
in revealing
information about the
superconducting as well as the pseudogap phases
\cite{arpesstanford_review}.
Indeed, the close relation between ARPES and radio frequency probes
has been discussed in our recent review \cite{RFlong}.
It was shown in Ref.~\cite{ANLPRL} that the spectral function of HTSCs
in the pseudogap phase appears to exhibit
dispersion features
similar to those
in the superconducting phase.
This spectral function is modeled \cite{Norman98,Maly1,Maly2}
by a broadened BCS form with
a self energy
\begin{equation}
\Sigma_{pg}(K) \approx
\frac{\Delta_{pg}({\bf k})^2}{\omega+\epsilon_k-\mu+i\gamma}
\label{eq:2a}
\end{equation}
Here $\Delta_{pg}({\bf k})$ is the ($s$ or $d$-wave) excitation gap of
the normal phase and $\gamma$ is a phenomenological damping.
Frequently, one adds an additional, structureless imaginary damping term
$i \Sigma_0$, as well.
High temperature superconductor experiments at temperatures
as high as $T \approx 1.5 T_c$ have reported
that in the regions of the Brillouin zone (where the pseudogap is
well established), the dispersion of the fermionic excitations
behaves like
\begin{equation}
E_{\bf k} \approx \pm \sqrt{ (\epsilon_{\bf k} - \mu) ^2 + \Delta_{pg}({\bf k})^2}
\label{eq:3a}
\end{equation}
Importantly Eq.~(\ref{eq:3a}) has also been used in the cold gas
studies \cite{Jin6} in the region near and above $T_c$ and implemented
phenomenologically below $T_c$ \cite{GeorgesSpectral}. This both
demonstrates the presence of pairing and ultimately provides
information about the size of the pairing gap. It has been shown that
Eq.~(\ref{eq:2a}) is reasonably robust in the extended BCS-Leggett
state above $T_c$, at least up to temperatures \cite{Maly1,Maly2} of
the order of $\approx 1.3T_c$. By contrast this approximate self
energy is not generally suitable to NSR theory
\cite{Maly1,Maly2}, although for $T/T_c = 1.001$ a fit to
Eq.~(\ref{eq:3a})
has been obtained. In a similar context we note that in the FLEX
approach, the spectral function and associated self energy is not of
the broadened BCS form. Mathematically, this BCS-like structure in the
self energy and fermionic dispersion (which is numerically obtained)
comes from the facts that the effective pair chemical potential
$\mu_{pair} \rightarrow 0$ at and below $T_c$, and that by having one
bare and one dressed Green's function in $\chi(Q)$ there is a gap in
the pair excitation spectrum so that the pairs are long lived; in this
way $\gamma$ (which scales with the inverse pair lifetime) is
small. Physically, we can say that this behavior reflects the
stability of low momentum pairs near $T_c$ and below.
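The two-branch structure implied by Eqs.~(\ref{eq:2a}) and (\ref{eq:3a})
is easy to visualize numerically. The following minimal sketch
(illustrative only; energies in units of $\Delta_{pg}$, with
phenomenological $\gamma$ and $\Sigma_0$) evaluates the broadened BCS
spectral function $A({\bf k},\omega)=-2\,\mathrm{Im}\,G({\bf k},\omega)$ and locates its
two quasiparticle-like peaks:
\begin{verbatim}
import numpy as np

def spectral(xi, w, Delta=1.0, gamma=0.1, Sigma0=0.05):
    # Sigma_pg of Eq. (2a) plus a structureless damping -i*Sigma0
    Sigma = Delta ** 2 / (w + xi + 1j * gamma) - 1j * Sigma0
    return -2.0 * (1.0 / (w - xi - Sigma)).imag

w = np.linspace(-3.0, 3.0, 1201)
for xi in (-0.5, 0.0, 0.5):              # xi = eps_k - mu
    A = spectral(xi, w)
    lo = w[w < 0][np.argmax(A[w < 0])]   # lower (occupied) branch
    hi = w[w > 0][np.argmax(A[w > 0])]   # upper branch
    print(f"xi={xi:+.1f}: peaks near w ~ {lo:+.2f}, {hi:+.2f}")
# the peaks track +/- E_k = +/- sqrt(xi^2 + Delta^2), cf. Eq. (3a)
\end{verbatim}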
These differences among the three crossover
theories become less apparent in the
momentum integrated
RF signals. In the BCS-Leggett approach at low temperatures
the dominant structure comes from pair
breaking of the condensate (which would be associated with the negative
root in Eq.~(\ref{eq:3a})). Despite the fact that their
fermionic dispersions are different, both other theories yield a very similar
``positive detuning branch" in the RF spectrum \cite{StoofRF,HaussmannRF}.
However, at higher temperatures, both for polarized and unpolarized gases,
there is theoretical
evidence of the ``negative detuning branch" arising from the
positive root in Eq.~(\ref{eq:3a}) in the BCS-Leggett
based approach \cite{momentumRF,RFlong}. This is absent in the two other
schemes, at least within the normal state. It also appears to be difficult
to see experimentally in the unpolarized case, although it is clearly
evident once even a small polarization \cite{Rice2,KetterleRF}
is present \cite{MITtomoimb}.
This paper is organized as follows. Sections~\ref{sec:NSRtheory} and
\ref{sec:G0Gtheory} briefly review NSR theory and BCS-Leggett theory
as extended to non-zero $T$. Section~\ref{sec:Spectral} addresses a
comparison of the spectral function at unitarity and on the BEC side
obtained from the two theories. In subsection~\ref{sec:DOS} we plot
a comparison of the related density of states at unitarity, and in
subsection~\ref{sec:RF} we address a comparison of RF spectra in the two
theories for an unpolarized Fermi gas, which also addresses
experimental data. Also included is a prediction of RF spectra on the
BEC side of resonance. The remaining sections (Section~\ref{sec:RFTc}
and Section~\ref{sec:RFpol}) do not focus on comparisons because the
issues discussed pertain to questions which only BCS-Leggett theory
has been able to address. Here we propose a subtle signature of the
superfluid transition in Section~\ref{sec:RFTc} which could be
addressed in future and in Section~\ref{sec:RFpol} we address the
theoretical RF spectrum of polarized Fermi gases at unitarity and its
comparison with experimental data. Section~\ref{sec:conclusion}
concludes this paper.
We remark that in this paper we study $s$-wave pairing in three
spatial dimensions which is more relevant to ultra-cold Fermi gases
while HTSCs should be modeled as $d$-wave pairing in quasi-two
dimensions. However, as one will see, there are many interesting
common features in these two systems.
\section{NSR Theory Above $T_c$}
\label{sec:NSRtheory}
The normal state treatment of NSR theory which we apply here
follows directly from the original paper
in Ref.~\cite{NSR}. Here the differences between the variants of NSR theory
introduced by different groups \cite{Drummond3,Drummond5,Strinati4,PS05}
are not important.
Although there is still the concern \cite{Serene}
that the number equation is only approximate, the numerics
are simpler if we follow the original approach; comparisons
with more
recent work in Ref.~\cite{OhashiNSR}
(based on the fully consistent number equation
$n=2\sum_{K}G(K)$)
seem to validate this
simplification.
This same more consistent number equation is used throughout the
work by the Camerino group \cite{Strinaticuprates}.
NSR theory builds on the fact that the fermion-fermion attraction
introduces a correction to the thermodynamic potential:
$\Delta\Omega_{th}=\Omega_{th}-\Omega_{f}=\sum_{Q}\ln[U^{-1}+\chi_{0}(Q)]$,
where $\Omega_{f}$ is the thermodynamic potential of a non-interacting
Fermi gas, $\chi_{0}(Q)$ is the NSR pair susceptibility, and
$\sum_{Q}=T\sum_{l}\sum_{\bf q}$. In the normal phase,
\begin{eqnarray}
\chi_{0}(Q)&=&\sum_{K}G_{0}(K)G_{0}(Q-K) \nonumber \\
&=&\sum_{\mathbf{k}}\frac{f(\xi_{\mathbf{k}+\mathbf{q}/2})+f(\xi_{\mathbf{k}-\mathbf{q}/2})-1}{i\Omega_{l}-(\xi_{\mathbf{k}+\mathbf{q}/2}+\xi_{\mathbf{k}-\mathbf{q}/2})}.
\end{eqnarray}
Here $K=(i\omega_{\nu},{\bf k})$ where $\omega_{\nu}$ is the fermion
Matsubara frequency, $\sum_{K}=T\sum_{\nu}\sum_{\bf k}$,
$G_{0}(K)=1/(i\omega_{\nu}-\xi_{\bf k})$ is the non-interacting
fermion Green's function, $\xi_{\bf k}=k^2/2m-\mu$ where $m$ and $\mu$
denote the mass and chemical potential of the fermion, and $f(x)$ is
the Fermi distribution function. We set $\hbar\equiv 1$ and
$k_{B}\equiv 1$.
NSR theory is constrained by the condition
$U^{-1}+\chi_{0}(0)>0$ since $U^{-1}+\chi_{0}(0)=0$ signals an
instability of the normal phase and the system becomes a superfluid as
temperature decreases.
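As an illustration of this instability condition, the sketch below locates the temperature at which $U^{-1}+\chi_{0}(0)=0$ for a contact interaction regularized via $U^{-1}=m/(4\pi a)-\sum_{\bf k}(m/k^{2})$ (this regularization is introduced in Section~\ref{sec:Spectral}). For simplicity $\mu$ is held fixed here, whereas the full theory determines it from the number equation discussed below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# units: hbar = k_B = 1 and 2m = k_F = 1, so E_F = 1 and epsilon_k = k^2
def gap_rhs(T, mu):
    # sum_k [ m/k^2 - tanh(xi_k/2T)/(2 xi_k) ]; setting this equal to
    # m/(4 pi a) is the Thouless condition U^{-1} + chi_0(0) = 0
    def integrand(k):
        xi = k * k - mu
        xi = xi if abs(xi) > 1e-12 else 1e-12  # tanh(x)/x is regular at x = 0
        return (0.5 - k * k * np.tanh(xi / (2 * T)) / (2 * xi)) / (2 * np.pi**2)
    return quad(integrand, 0.0, 80.0, points=[np.sqrt(mu)], limit=200)[0]

mu, inv_kFa = 0.6, 0.0    # mu fixed for illustration; 1/(k_F a) = 0 (unitarity)
Tc_mf = brentq(lambda T: gap_rhs(T, mu) - inv_kFa / (8 * np.pi), 0.05, 1.5)
print(Tc_mf)              # pairing-onset temperature, of order 0.5 T_F here
\end{verbatim}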
The fermion chemical potential is determined
via the NSR number equation
\begin{equation}\label{eq:NSRneq}
n=-\partial\Omega_{th}/\partial\mu,
\end{equation}
where $n$ is the total fermion
density.
As noted in \cite{NSR}, when $\mu<0$,
$U^{-1}+\chi_{0}(i\Omega_{l}\rightarrow \Omega+i0^{+},{\bf q})=0$ has
solutions which correspond to bound states. Those bound states
contribute terms proportional to $b(\Omega^{(0)}_{\bf q})$ to the total
density, where $b(x)$ is the Bose distribution function and
$\Omega^{(0)}_{\bf q}$ is the energy dispersion of the bound
states.
\begin{figure}
\includegraphics[width=3.in,clip]
{Fig1.eps}
\caption{Fermion self-energy and ladder diagrams in (a) NSR theory and (b) $GG_0$ t-matrix theory. The thin solid line, thick solid line, thick dashed line, dotted line, wavy line, and vertical thin dashed line denote $G_0$, $G$, $t_0$, $t_{sc}=-\Delta_{sc}^{2}\delta(Q)/T$, $t_{pg}$, and $U$, respectively.}
\label{fig:SE}
\end{figure}
The NSR Green's function is $G(K)=[G_{0}^{-1}(K)-\Sigma_{0}(K)]^{-1}$
and its retarded form is $G_{R}(\omega,{\bf k})
=G(i\omega_{\nu}\rightarrow\omega+i0^{+},{\bf k})$.
Following a t-matrix formalism, one can also consider the corrections
to the fermion self-energy
$\Sigma_{0}(K)=\sum_{Q}t_{0}(Q)G_{0}(Q-K)$. Figure~\ref{fig:SE}
illustrates the structure of fermion self-energy in NSR theory and in
the finite temperature theory associated with
the BCS-Leggett ground state, which will be summarized in the next
section. Here the t-matrix is given by
$t_{0}(Q)=1/[U^{-1}+\chi_{0}(Q)]$. The retarded form of the fermion
self-energy has the structure $\Sigma_{0}(i\omega_{\nu}\rightarrow
\omega+i0^{+},{\bf k})=\Sigma_{0}^{\prime}(\omega,{\bf
k})+i\Sigma_{0}^{\prime\prime}(\omega,{\bf k})$, where
$\Sigma_{0}^{\prime}$ and $\Sigma_{0}^{\prime\prime}$ correspond to
the real and imaginary part of the self-energy. We separate the
contribution of the bound states from the rest (called the continuum
contribution). The continuum contribution is
\begin{eqnarray}
\Sigma_{0c}^{\prime}(\mathbf{k},\omega)&=&\sum_{\mathbf{q}}\Big\{f(\xi_{\mathbf{q}})\mbox{Re}[t_{0R}(\mathbf{q}+\mathbf{k},\omega+\xi_{\mathbf{q}})]+ \nonumber \\
& &\mathcal{P}\int_{-\infty}^{\infty}\frac{d\Omega}{\pi}\frac{b(\Omega)}{\Omega-\omega-\xi_{\mathbf{q}}}\mbox{Im}[t_{0R}(\mathbf{q}+\mathbf{k},\Omega)]\Big\}, \nonumber \\
\Sigma_{0c}^{\prime\prime}(\mathbf{k},\omega)&=&\sum_{\mathbf{q}}[b(\omega+\xi_{\mathbf{q}})+f(\xi_{\mathbf{q}})]\mbox{Im}[t_{0R}(\mathbf{q}+\mathbf{k},\omega+\xi_{\mathbf{q}})]. \nonumber \\
\end{eqnarray}
Here $t_{0R}(\Omega,{\bf q})=t_{0}(i\Omega_{l}\rightarrow\Omega+i0^{+},{\bf q})$ and $\mathcal{P}$ denotes the Cauchy principal
value. In the presence of bound states, $t_{0R}$ has poles which result in bound state contributions to the fermion self-energy
\begin{eqnarray}
\Sigma_{0b}^{\prime}(\mathbf{k},\omega)&=&-\mathcal{P}\sum_{\mathbf{q}}b(\Omega_{bs})\frac{1}{\frac{\partial\mbox{Re}[t_{0R}^{-1}]}{\partial\Omega}\Big|_{\Omega_{bs}}}\left[\frac{1}{\Omega_{bs}-\omega-\xi_{\mathbf{q}-\mathbf{k}}}\right] \nonumber \\
\Sigma_{0b}^{\prime\prime}(\mathbf{k},\omega)&=&-\sum_{\mathbf{q}}\pi b(\Omega_{bs})\frac{1}{\frac{\partial\mbox{Re}[t_{0R}^{-1}]}{\partial\Omega}\Big|_{\Omega_{bs}}}\delta(\Omega_{bs}-\omega-\xi_{\mathbf{q}-\mathbf{k}}). \nonumber \\
\end{eqnarray}
Here $\Omega_{bs}=\Omega_{bs}({\bf q})$ denotes the location of the pole in $t_{0R}$.
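Numerically, the pole position and the weight factor $[\partial\mbox{Re}(t_{0R}^{-1})/\partial\Omega]^{-1}$ appearing above can be obtained from a root search plus a finite difference; in the schematic snippet below the two-body form of $\mbox{Re}[t_{0R}^{-1}]$ is only a toy stand-in for the full NSR t-matrix.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def bound_state_pole(re_tinv, q, bracket):
    # root of Re[t_0R^{-1}(Omega, q)] gives Omega_bs(q); the slope there
    # gives the weight entering the bound-state self energy
    Obs = brentq(lambda O: re_tinv(O, q), *bracket)
    h = 1e-6 * max(1.0, abs(Obs))
    slope = (re_tinv(Obs + h, q) - re_tinv(Obs - h, q)) / (2 * h)
    return Obs, 1.0 / slope

# toy stand-in (units 2m = 1): a molecule of binding energy E_b and mass 2m
# has a pole at Omega = q^2/2 - E_b, with unit weight
E_b = 1.0
re_tinv = lambda O, q: O - (0.5 * q * q - E_b)
print(bound_state_pole(re_tinv, q=0.5, bracket=(-3.0, 3.0)))  # (-0.875, 1.0)
\end{verbatim}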
\section{BCS-Leggett Theory: Broken Symmetry Phase}
\label{sec:G0Gtheory}
We first review BCS-Leggett theory as it has been applied in the
broken symmetry phase.
The first three equations below represent a t-matrix
approach to the derivation of the \textit{standard} BCS gap equation.
In this way we set up a machinery which is readily
generalized to include BCS-BEC crossover theory.
BCS theory can be viewed as
incorporating \textit{virtual} non-condensed pairs.
Because they
are in
equilibrium with the condensate, the non-condensed pairs
must have a vanishing
``pair chemical potential", $\mu_{pair} =0$. Stated alternatively
they must be gapless. The t-matrix can be derived from the ladder diagrams in the particle-particle channel (see Fig.~\ref{fig:SE}):
\begin{equation}
t_{pg} (Q) \equiv \frac {U} { 1 + U \sum_{K} G(K) G_0(-K+Q)},
\label{eq:3}
\end{equation}
with $t_{pg}(Q=0)\rightarrow \infty$, which is equivalent to $\mu_{pair} =0$, for $T \leq T_c$. Here $G$, and $G_0$ represent dressed and
bare Green's functions, respectively.
To be consistent with the BCS ground state of Eq.~(\ref{eq:1}),
the self energy is
\begin{eqnarray}
\Sigma_{sc} (K)&=& \sum_Q t_{sc}(Q) G_0(-K+Q) \nonumber \\
&=&-\sum_Q \frac {\Delta_{sc}^2}{T} \delta(Q) G_0 (-K+Q) \nonumber \\
&=& -\Delta_{sc}^2 G_0(-K).
\label{eq:4}
\end{eqnarray}
$\Delta_{sc}(T)$ is the
order parameter while $\Delta(T)$ is the pairing gap.
From this one can write down the full Green's function, $G(K)=[G_{0}^{-1}(K)-\Sigma_{sc} (K)]^{-1}$. Finally, Eq.~(\ref{eq:3})
with $\mu_{pair} = 0$ gives the BCS gap equation below $T_c$:
\begin{equation}
1 = -U \sum_{\bf k}
\frac{1 - 2 f (E_k^{sc})} { 2 E_k^{sc}}
\label{eq:5}
\end{equation}
with
$E_k^{sc} \equiv \sqrt{ (\epsilon_k- \mu)^2 + \Delta_{sc}^2}$.
We have, thus, used Eq.~(\ref{eq:3}) to
derive the standard BCS gap equation within a t-matrix
language and the result appears in Eq.~(\ref{eq:5}).
Eq.~(\ref{eq:3}) above can be viewed as representing
an extended version of the Thouless criterion of strict BCS which applies
for all $ T \leq T_c$.
This derivation
leads us to reaffirm the well known result \cite{Kadanoff,Patton1971,Abrahams}
that BCS theory
is associated with one bare and
one dressed Green's function in the pair susceptibility.
Next, \textit{to address BCS-BEC crossover, we
feed back the contribution of the non-condensed pairs,
which are no longer virtual as in the strict BCS theory above}.
Eq.~(\ref{eq:3}) is taken as a starting point. Equation~(\ref{eq:4}) is revised
to accommodate this feedback.
Throughout,
$K,Q$ denote four-vectors.
\begin{widetext}
\begin{equation}
\Sigma(K) = \sum_{Q} t(Q) G_0 (-K + Q) = \sum_Q [t_{sc}(Q) + t_{pg}(Q) ]
G_0 (-K + Q) = \Sigma_{sc}(K) + \Sigma_{pg}(K)
\label{eq:6}
\end{equation}
\begin{equation}
\mbox{Numerically},~
\Sigma_{pg}(K) \approx
\frac{\Delta_{pg}^2}{i\omega_{\nu}+\epsilon_{\bf k}-\mu+i\gamma}+i\Sigma_{0};
~\mbox{analytically}, ~~\Sigma_{sc}(K) =
\frac{\Delta_{sc}^2}{i\omega_{\nu}+\epsilon_{\bf k}-\mu}.
\label{eq:9}
\end{equation}
\begin{equation}
\gamma,\Sigma_{0} ~ \mbox{ small: } \Sigma(K) \approx - (\Delta_{sc}^2 + \Delta_{pg}^2) G_0(-K) \equiv
- \Delta^2 G_0(-K) ~\Rightarrow
\Delta_{pg}^2 \equiv -\sum_Q t_{pg}(Q)
\label{eq:7}
\end{equation}
\begin{equation}
t_{pg} (Q=0) = \infty \Rightarrow~~ 1 = -U
\sum_{\bf k} \frac{1 - 2 f(E_{\bf k})}{2 E_{\bf k}},~~~
E_{\bf k} \equiv \sqrt{ (\epsilon_{\bf k}- \mu)^2 + \Delta^2},~~
\label{eq:8}
\end{equation}
\end{widetext}
Note that
Eqs.~(\ref{eq:4}) and (\ref{eq:6}) introduce the self energy which is
incorporated into the fully dressed Green's function $G(K)$, appearing
in $t_{pg}$. Also note the number equation $n = 2 \sum_K G(K)$ is to
be solved consistently:
\begin{equation}
n = 2 \sum_K G(K) = \sum _{\bf k} \left[ 1 -\frac{\xi_{\bf k}}{E_{\bf k}}
+2\frac{\xi_{\bf k}}{E_{\bf k}}f(E_{\bf k}) \right]
\label{eq:neq}
\end{equation}
where $\xi_{\bf k} = \epsilon_{\bf k} - \mu$.
This leads to a closed set of equations for the pairing gap $\Delta(T)$,
and the pseudogap $\Delta_{pg}(T)$ (which can be derived from Eq.~(\ref{eq:7})). The BCS-Leggett approach with the dispersion shown in Eq.~(\ref{eq:8}) thus provides a microscopic derivation for the pseudogap model implemented in Ref.~\cite{GeorgesSpectral}.
To evaluate $\Delta_{pg}(T)$ numerically, we assume the main contribution to $t_{pg}(Q)$ is from non-condensed pairs with small $Q$, which is reasonable if the temperature is not too high \cite{Maly1,Maly2}. After analytical
continuation, $t_{pg}$ is approximated as $t_{pg}(\Omega,{\bf q})\approx [Z(\Omega -
\Omega^0_{\bf q}+\mu_{pair}) + i \Gamma^{}_Q]^{-1}$, where
$Z=(\partial\chi/\partial\Omega)|_{\Omega=0,q=0}$, $\Omega^0_{\bf
q}=q^2/(2M_{b})$ with the effective pair mass
$M_{b}^{-1}=(1/3Z)(\partial^{2}\chi/\partial q^{2})|_{\Omega=0,q=0}$
which takes account of the effect of pair-pair interactions. Near
$T_c$, $\Gamma^{}_Q \rightarrow 0$ faster than $q^2$ as $q\rightarrow
0$. Within this approximation, $\Delta_{pg}(T)$ essentially
vanishes in the ground state, where $\Delta = \Delta_{sc}$.
The entire derivation contains one simplifying
(but not fundamental) approximation. Detailed numerical calculations
\cite{Maly1,Maly2} show that
$\Sigma_{pg}$ can be written as in Eq.~(\ref{eq:2a}), which is the
same as that in Eq.~(\ref{eq:9}), with the
observation that as
$T_c$ is approached
from above, $\gamma$ and $\Sigma_{0}$, which appear in Eq.~(\ref{eq:9}),
become small.
To zeroth
order, then,
we drop
$\gamma$ and $\Sigma_{0}$
(as in Eq.~(\ref{eq:7})), and thereby can more readily solve the gap equations.
To first order we include this lifetime effect
as in Eq.~(\ref{eq:9}) in addressing spectral functions and
other correlations.
The actual value of $\gamma$ makes very little
qualitative difference and the previous numerical calculations
\cite{Maly1,Maly2} do not include
$d$-wave or trap effects so that we should
view $\gamma$ as a phenomenological parameter.
For the HTSCs,
the expression for $\Sigma_{pg}$ in Eq.(\ref{eq:2a}) is
standard in the field \cite{Norman98,Normanarcs,FermiArcs},
and we can use specific
heat jumps or angle resolved photoemission to deduce $\gamma$,
as others \cite{Norman98} have done.
For the cold gases
the
precise value of $\gamma$, and its $T$-dependence are not particularly
important, as long as it is non-zero at
finite $T$.
In this paper we will deduce reasonable values for
$\gamma(T)$ and $\Sigma_{0}$ from tomographic RF experiments.
\subsection{Extension Above $T_c$}
We can expand $t_{pg}(Q)$ at small $Q$ in the normal state
to find
\begin{equation}
t_{pg}^{-1} (0) \equiv Z \mu_{pair} = U^{-1} + \chi(0)
\end{equation}
where the residue $Z$ and the pair dispersion $\Omega_q$ (not shown)
are then determined \cite{heyan2}.
This is associated with the normal state gap equation
\begin{equation}
U^{-1} + \sum_{\bf k}
\frac{1-2 f(E_{\mathbf{k}})}{2 E_{\mathbf{k}}}= Z\mu_{pair} \,.
\label{eq:pggap}
\end{equation}
Similarly, above $T_c$, the pseudogap contribution to $\Delta^2(T) =
{\Delta}_{sc}^2(T) + \Delta_{pg}^2(T)$ is given by
\begin{equation}
\Delta_{pg}^2=\frac{1}{Z} \sum_{\bf q}\, b(\Omega_q -\mu_{pair}) \,\,.
\label{eq:1a}
\end{equation}
The number equation remains unchanged.
In summary, when the temperature is above $T_c$, the order parameter is
zero, and $\Delta=\Delta_{pg}$. Since there is no condensate,
$\mu_{pair}$ is nonzero, and the gap equation is modified.
From these
equations, one can determine $\mu$,
$\Delta$ and $\mu_{pair}$.
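For given pair parameters, Eq.~(\ref{eq:1a}) is a simple bosonic integral. A minimal numerical sketch, with $Z$, $M_{b}$ and $\mu_{pair}$ treated as inputs rather than computed self-consistently, is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def delta_pg_sq(Z, Mb, mu_pair, T):
    # (1/Z) sum_q b(Omega_q - mu_pair) with Omega_q = q^2/(2 Mb);
    # mu_pair < 0 above T_c and mu_pair -> 0 as T -> T_c from above
    bose = lambda x: 1.0 / np.expm1(x / T)
    integ = lambda q: q * q / (2 * np.pi**2) * bose(q * q / (2 * Mb) - mu_pair)
    qmax = 40.0 * np.sqrt(Mb * T)       # the integrand decays exponentially
    return quad(integ, 0.0, qmax, limit=200)[0] / Z

# consistency check: for mu_pair -> 0^- this approaches the free-boson
# result zeta(3/2) (Mb T / 2 pi)^{3/2} / Z, with zeta(3/2) ~ 2.612
print(delta_pg_sq(Z=1.0, Mb=4.0, mu_pair=-1e-6, T=0.25))
\end{verbatim}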
\subsection{Incorrect Criticism from the Drummond Group}
The Drummond group \cite{Drummond5} has made a number of
incorrect claims about our past work which we address here.
The authors claim to have numerically studied the behavior
associated with the three possible
pair susceptibilities.
We note that there is no underlying numerical data in their paper,
nor do they present details beyond their use of an
``adaptive step Fourier transform"
algorithm.
This should be compared with
work by the Tremblay group
\cite{Moukouri,Tremblay2,Tremblay3}
and others \cite{Fujimoto,Marsiglio}. It is hoped that in future they
will present plots of the t-matrix and self energy to the community,
to the same degree that we have shared the output of our numerical
schemes in References \cite{Maly1} and \cite{Maly2}. Important will
be their counterparts to Figs. 8a and 9 (lower inset) in
Ref.~\cite{Maly1}, which show how reliable the form in
Eq.~(\ref{eq:9}) is for the full $GG_0$ self energy. More
specifically: they have argued that the ``decomposition into $pg$ and
$sc$ contributions [see Eq.~(\ref{eq:6}) above], omits important
features of the full theory". This claim is incorrect and is based on
their Fig.~1 of Ref.~\cite{Drummond5} which can be seen to be
unrelated to the pg and sc decomposition, since their analysis of our
so-called ``pseudogap theory" is confined to the normal
phase. \textit{The decomposition only applies below $T_c$}.
Moreover, we refute the argument that the decomposition
into sc and pg terms shown in Eq.~(\ref{eq:6}) above is
unphysical. This decomposition is associated with the
fact that there are necessarily both condensed and non-condensed
pairs in the Fermi gases at unitarity. This break-up
is standard in studies of Bose gases. The
details of how to describe the $pg$ contribution,
but not its necessary presence in a decomposed
fashion,
are what varies from theory to theory.
Importantly, the
``discrepancies" associated with thermodynamical
plots based on our approach should be attributed to
the absence of a Hartree term, not to any deeper physics.
The reader can see that if the usual $\beta$ parameter
is changed from $-0.41$
to around $-0.6$ the BCS-Leggett curve will be aligned with
the others.
\section{Comparisons of the Spectral Function}
\label{sec:Spectral}
\begin{figure}
\begin{center}
\includegraphics[width=3.4in,clip]
{Fig2.eps}
\caption{(Color online) Spectral function obtained from NSR theory (left column) and from the extended BCS-Leggett theory (right column) at unitarity. Temperatures ($T/T_F$) from top to bottom are: (a) and (b) $0.24$, (c) and (d) $0.34$, (e) and (f) $0.55$. The ranges of $k/k_{F}$ and $\omega/E_F$ are $(0,2)$ and $(-2,3)$, respectively. }
\label{fig:Sp_Ia0}
\end{center}
\end{figure}
The fermionic spectral function is given by $A(\omega,{\bf k})
=-2\mbox{Im}[G_{R}(\omega,{\bf k})]$, where $G_{R}$ is the retarded
Green's function. In this section we want to explore its behavior both
as a function of wavevector ${\bf k}$ and of frequency $\omega$. Of
particular interest is the question of whether there is a pseudogap in
the spectral function. There are different criteria for arriving at
an answer. Importantly, depending on this choice the answer will be
different for NSR theory (and also, it appears, for the FLEX theory of
Ref.~\cite{HaussmannRF}). The following definitions come from
different measurements of HTSCs which are not internally
contradictory. Following Ref.~\cite{Timusk} and references therein,
we examine two criteria for the pseudogap in HTSCs.
\begin{enumerate}
\item One can define the existence of a pseudogap as
associated with the observation that
$A ({\bf k}, \omega)$ as a function of $\omega$ at $ k = k_F$
exhibits a two-peak structure in the normal state. This definition is particularly useful for spectroscopies such as ARPES which can probe the spectral function near the Fermi surface.
\item Alternatively, the existence of a pseudogap can be identified when the density of states (DOS) (which represents
an integral over ${\bf k}$ of the spectral function)
is depleted near the Fermi energy. This definition appeals to tunneling experiments where the DOS can be measured.
\end{enumerate}
In addressing these criteria it
is useful to refer to the spectral function of the BCS-Leggett
ground state given by
$A_{BCS}(\omega,{\bf
k})=u_{k}^{2}\delta(\omega-E_k)+v_{k}^{2}\delta(\omega+E_k)$, where
$u_{k}^{2},v_{k}^{2}=(1/2)[1\pm(\xi_{k}/E_{k})]$. As a function of
frequency, there are two
branches: the upper branch located at $\omega=E_k$ has weight
$u^{2}_{k}$ and the lower branch located at $\omega=-E_k$ has weight
$v^{2}_{k}$. Since $E_k\ge \Delta$, the spectral function is gapped at
all ${\bf k}$. One recognizes two features in $A_{BCS}$. First, there
is particle-hole mixing which results in the two branches. Second, there
is an upwardly dispersing and a downwardly dispersing symmetric
contribution to the spectral function arising from the $\pm$
signs in Eq.~(\ref{eq:3a}).
This is symmetric about
the non-interacting Fermi energy.
At finite temperatures, as one
will see, both NSR theory and the extended-BCS-Leggett theory show
particle-hole mixing in the sense that there are two
branches in the spectra. In contrast, the fermionic dispersion
in NSR theory does not lead to two symmetric upwardly and downwardly
dispersing branches.
The behavior of the finite $T$ spectral function associated with BCS-Leggett
theory, given in Eq.~(\ref{eq:2a}) is, however,
rather similar to its superfluid
analogue.
It is unlikely that Eq.~(\ref{eq:2a}) will be appropriate at sufficiently
high temperatures.
Indeed, one can see from Figure 3 in Reference \cite{Maly2} and
the surrounding discussion that numerical calculations show this
approximation is appropriate up to some temperatures of the order
of $T/T_c \approx 1.3$ for a system near unitarity and in the absence
of a trap.
In a fully consistent numerical calculation one expects that as
$T$ is raised the pseudogap will decrease so that the pair susceptibility
of the extended BCS-Leggett
theory should eventually evolve from $GG_0$ to $G_0G_0$. In this way
the fully numerical NSR scheme should be very reasonable at sufficiently
high $T$, where the pseudogap begins to break down. Physically
we can argue that the BCS-Leggett scheme is better suited to treating
pairs which have predominantly low momentum, and thus it should apply
closer to condensation.
For the purposes of comparison, in this section we apply Eq.~(\ref{eq:2a}) up
to somewhat higher temperatures than appears strictly feasible.
\begin{figure}
\includegraphics[width=3.4in,clip]
{Fig3.eps}
\caption{(Color online) The plot of the function $k^{2}f(\omega)A({\bf k},\omega)/2\pi^{2}$ calculated from NSR theory (left column) and from the extended BCS-Leggett theory (right column) at $1/k_{F}a=0$. Temperatures ($T/T_F$) from top to bottom are: (a) and (b) $0.24$, (c) and (d) $0.34$, (e) and (f) $0.55$. The ranges of $k/k_{F}$ and $\omega/E_F$ are $(0,2)$ and $(-3,3)$, respectively. }
\label{fig:kRF_Ia0}
\end{figure}
A very important physical distinction emerges between the different
models for the pair susceptibility which is then reflected in
the fermionic self energy and ultimately in the spectral function.
Because a dressed Green's function appears in BCS-Leggett theory,
the t-matrix $t(Q)$ at small $q$
has a notably different behavior, particularly
at low $\omega$ as compared with the NSR case. This is seen
most clearly by comparing Figure 2 and Figure 9 in Ref.\cite{Maly1}.
This difference can be seen as a gap in the $GG_0$ t-matrix which
serves to stabilize the pair excitations. In the normal state
the pairs live longer when a pseudogap is present because of
this feedback. As a result the behavior of the fermionic self energy is
different, leading to a reasonable fit to Eq.~(\ref{eq:2a})
in $GG_0$ theory as shown by the lower inset in Figure 9
of Reference \cite{Maly1}, as compared with the poorer
fit to Eq.~(\ref{eq:2a}) found in NSR theory and shown in
Figure 8a from Ref. \cite{Maly1}.
We will reach qualitatively similar conclusions in the next
section of the paper.
We summarize by noting that the extended
BCS-Leggett theory focuses on low $q$ pairs which
dominate near condensation.
NSR theory treats pairing without
singling out low $q$ only.
Each of these theories should be appropriate in different temperature
regimes of the normal state.
Concomitantly,
because of the enhanced stability of the pairs, the broadening of
the spectral peaks will be considerably smaller in BCS-Leggett theory
as compared with NSR theory.
To make a connection with experiments
on ultra-cold Fermi gases, we regularize the attractive coupling
constant via $U^{-1}=m/(4\pi a)-\sum_{\bf k}(m/k^{2})$.
We choose as our units the Fermi energy $E_F$, or, as appropriate
the Fermi temperature $T_F$, or
Fermi momentum $k_F$ of a non-interacting Fermi gas with the
same particle density.
The unitary point where $a$ diverges is of particular interest because
two-body bound states emerge. Since many-body effects renormalize the
coupling constant, the fermion chemical potential remains positive at
unitarity in both NSR theory and the BCS-Leggett theory. This implies
that bound states in a many-body sense have not fully
emerged.
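The positivity of $\mu$ at unitarity is easily checked in the simplest limit: at $T=0$ the gap equation (\ref{eq:5}) with the regularized coupling and the number equation (\ref{eq:neq}) reduce to two coupled integrals. The sketch below (our own illustration, units $\hbar=k_{B}=1$ and $2m=k_{F}=1$) reproduces the standard mean-field values $\mu\approx 0.59E_F$ and $\Delta\approx 0.69E_F$ of the BCS-Leggett ground state at unitarity:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def eqs(p, inv_kFa):
    mu, Delta = p
    Ek = lambda k: np.sqrt((k * k - mu) ** 2 + Delta ** 2)
    # T = 0 gap equation with U^{-1} = m/(4 pi a) - sum_k m/k^2
    gap = quad(lambda k: (0.5 - k * k / (2 * Ek(k))) / (2 * np.pi**2),
               0, 200, limit=200)[0] - inv_kFa / (8 * np.pi)
    # T = 0 number equation (f(E_k) = 0), with n = k_F^3/(3 pi^2) = 1/(3 pi^2)
    num = quad(lambda k: k * k * (1 - (k * k - mu) / Ek(k)) / (2 * np.pi**2),
               0, 200, limit=200)[0] - 1.0 / (3 * np.pi**2)
    return [gap, num]

mu, Delta = fsolve(eqs, x0=[0.5, 0.7], args=(0.0,))  # unitarity, 1/(k_F a) = 0
print(mu, Delta)   # ~0.59 and ~0.69: mu indeed remains positive at unitarity
\end{verbatim}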
In our numerics,
we choose $\gamma(T)$ to be very roughly consistent
with RF experiments. For the unpolarized case we set
$\gamma/E_F=0.12(T/T_c)$ at unitarity and included a small background
imaginary term
$\Sigma_0/E_F=0.05$.
\subsection{Comparison of Spectral functions via Contour plots}
Figure~\ref{fig:Sp_Ia0} presents a plot of the spectral function at
unitarity ($1/k_{F}a=0$) obtained from NSR theory (left column) and
from the BCS-Leggett t-matrix theory (right column) at selected
temperatures.
In the BCS-Leggett case we use
the approximation that the self energy associated
with non-condensed pairs is of a broadened BCS form (in Eq.~(\ref{eq:2a})).
The transition temperatures
$T_c/T_F=0.238$ and $T_c/T_F=0.26$ are obtained for the
NSR and BCS-Leggett cases respectively.
Both theories yield higher $T_c$ values
than found \cite{QMCTc} in quantum Monte Carlo simulations, where
$T_c/T_F\approx 0.15$ at unitarity. The $T_c$ curves in BCS-BEC crossover of NSR and the extended BCS-Leggett theories are shown in Ref.~\cite{CompareReview}.
In Fig.~\ref{fig:Sp_Ia0} the comparisons are made at three different
temperatures. The horizontal and vertical axes on each panel
correspond to wave number and frequency and what is plotted in the
contour plots is the fermionic spectral function for a
three-dimensional homogeneous gas. The white areas correspond to peaks
in the spectral function and they map out a dispersion for the
fermionic excitations. With the possible exception of the highest $T$
NSR case (lower left panel), the spectral functions in all cases shown
in Fig.~\ref{fig:Sp_Ia0} are gapped at small $k/k_F$, which indicates
the existence of particle-hole mixing. The lower branch of the spectral
function from the BCS-Leggett t-matrix theory clearly shows a downward
bending for $k>k_{F}$ which is associated with a broadened BCS-like
behavior. The spectral function of a phenomenological pseudogap model
presented in Ref.~\cite{GeorgesSpectral}, which \textit{can be derived
microscopically from the BCS-Leggett approach}, exhibits
similar contour plots as ours from the extended BCS-Leggett theory.
By contrast, in NSR theory the lower branch corresponds to very
broad and very small peaks when $k>k_{F}$ which are barely
observable. We see no clear evidence of a downward dispersing branch
even at the lowest temperature above $T_c$.
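The presence or absence of this downward bending can be quantified directly from the model self energy by tracking the negative-frequency peak of $A(\omega,{\bf k})$ as a function of $k$; the following schematic check uses illustrative parameters rather than the full calculation behind Fig.~\ref{fig:Sp_Ia0}:
\begin{verbatim}
import numpy as np

def A_pg(omega, xi, Dpg, gamma, S0):
    Sigma = Dpg**2 / (omega + xi + 1j * gamma) - 1j * S0
    return -2.0 * (1.0 / (omega - xi - Sigma)).imag

w = np.linspace(-3.0, -1e-3, 3000)       # scan negative frequencies only
for k in [0.8, 1.0, 1.2, 1.6]:           # units 2m = k_F = 1, mu = 1
    spec = A_pg(w, k * k - 1.0, 0.5, 0.12, 0.05)
    print(k, w[np.argmax(spec)])         # lower-branch peak, near -E_k
# the output is non-monotonic in k: the gap is smallest at k = k_F and the
# branch then bends back down for k > k_F, the BCS-like signature above
\end{verbatim}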
As a function of temperature, in the NSR case, the physics suggests a
smooth evolution with increasing $T$ towards a single upwards dispersing
branch with an almost Fermi liquid dispersion. It seems likely that as
$T$ is raised and the pairing gap becomes less important the pair
susceptibility of the BCS-Leggett state should cross from $GG_0$ to
$G_0G_0$ so that the two schemes merge. This means that our previous
simplification of the fermionic self energy $\Sigma$ (as a broadened
BCS form, see Eq.~(\ref{eq:9})) is no longer suitable in this high $T$
regime. There are not really two different types of pseudogap, but
rather the extended BCS-Leggett theory represents pairs which are
close to condensation-- and thus predominantly low momentum
pairs. [This is built into the approximation Eq.~(\ref{eq:9}), which
was used to model the pseudogap self energy.] By contrast the NSR
case considers pairs with a broad range of momenta.
One can see that the vanishing of the pseudogap as temperature
increases is different in the two theories.
In the extended BCS-Leggett theory, the two
branches approach each other and the gap closes while in NSR theory,
the spectral function fills in the gapped regime and its overall shape
evolves toward a single parabola.
We note that
our spectral function from NSR theory at unitarity looks identical to the results in Ref.~\cite{OhashiNSR}, even though the latter was computed with a more
self consistent number equation.
This
implies that the difference between the two number equations (Eq.~(\ref{eq:NSRneq}) and $n=2\sum_{K}G(K)$) has no qualitative impact.
Interestingly, the spectral function at the highest $T$
from NSR theory resembles that presented in Ref.~\cite{HaussmannRF} for the
near-$T_c$ normal phase of $GG$ t-matrix theory.
It should be noted that
this spectral function is not the quantity directly measured in
momentum resolved RF studies \cite{Jin6}. Rather what is measured
there is the function
$k^{2}f(\omega)A({\bf k},\omega)/2\pi^{2}$ which is plotted in
Figure~\ref{fig:kRF_Ia0}. This convolution preserves the large momentum
part of the lower branch of the spectral function and suppresses the
remainder. The downward bending behavior is clearly observed in the extended BCS-Leggett theory again (right column) for all three temperatures. In contrast,
only the lowest
temperature plot of NSR theory ($T/T_F=0.24$, slightly above $T_c$)
shows a weak downward bending at large momentum. This downward
dispersion,
however, cannot be observed at higher temperatures (left column). In
the actual momentum-resolved RF experiments of trapped Fermi
gases \cite{Jin6} trap averages enter so that the actual curves
are substantially broader \cite{momentumRF}.
A clear signature of downward dispersion in
experimental data will help determine whether the pseudogap phase with
noncondensed pairs behaves in a way which is similar to
the HTSCs, where this feature has been reported \cite{ANLPRL}.
It should be noted that it is this signature which has been
used in Ref.~\cite{Jin6} to arrive at an indication of
the presence of pairing.
\begin{figure}
\includegraphics[width=3.4in,clip]
{Fig4.eps}
\caption{Behavior of the frequency dependent spectral function at unitarity
in NSR case for (a) $T/T_F = 0.24$ and (b) $T/T_F = 0.34$ for various wave-vectors ${\bf k}$.
This Figure suggests that the two-peak structure near $k_F$ associated with
this crossover theory barely meets the (most restrictive)
definition for the presence of a pseudogap near $T_c$. Away from $T_c$, the two-peak structure near $k_F$ is virtually unobservable.}
\label{fig:4g}
\end{figure}
\begin{figure}
\includegraphics[width=3.4in,clip]
{Fig5.eps}
\caption{(Color online) Spectral function obtained from NSR theory (left column) and from the extended BCS-Leggett theory (right column) at $1/k_{F}a=1$. Temperatures ($T/T_F$) from top to bottom are: (a) and (b) $0.25$, (c) and (d) $0.34$, (e) and (f) $0.55$. The ranges of $k/k_{F}$ and $\omega/E_F$ are $(0,2)$ and $(-4,4)$, respectively. }
\label{fig:Sp_Iap1}
\end{figure}
The issue of what constitutes the proper definition of a pseudogap is
an important one and we turn next to the first definition we introduced
above in which one requires that the spectral
function $A (\omega,{\bf k})$ as a function of $\omega$ exhibits two
peaks around $k \approx k_F$.
It is clearly seen that in the BCS-Leggett approach
this definition is met for all three curves exhibited in
Figure~\ref{fig:Sp_Ia0}. Because it is more difficult to establish this
for the NSR results plotted in Fig.~\ref{fig:Sp_Ia0}, in Fig.~\ref{fig:4g}
we address this question more directly for two different temperatures
and a range of $k$ values near $k_F$.
It should be clear that at the lower temperature (which is slightly
above $T_c$), a pseudogap is seen in NSR theory, although we have
seen
that the peak dispersion is not well described by
Eq.~(\ref{eq:3a}). This pseudogap should not be viewed as a broadened
BCS like feature.
At the higher temperature shown by the bottom panel of Fig.~\ref{fig:4g} there appears
to be no indication of a pseudogap according to the first definition.
Only a single peak is found in the spectral function near $k_F$.
We next explore analogous curves in the BEC regime and thereby
investigate how many-body bound states affect the distribution of weight
in the
spectral function. Figure~\ref{fig:Sp_Iap1} illustrates the spectral
function on the BEC side of resonance with $1/k_{F}a=1$ obtained from NSR theory
(left column) and from the extended BCS-Leggett theory (right column) at
selected temperatures. Here $T_c/T_F=0.22$ for the NSR case and
$T_c/T_F=0.21$ in the BCS-Leggett scheme. There is noise at
small $k$ in the spectral function of NSR theory which is presumably a
numerical
artifact. In the presence of bound states
(in a many-body sense), the lower branch of the spectral function of
NSR theory shows a downward bending near $T_c$, but it can be
seen that this behavior
rapidly evolves
to
an upward dispersion as $T$ increases (Fig.~\ref{fig:Sp_Iap1}(e)). In
contrast, the spectral function from the extended BCS-Leggett theory exhibits
the downward dispersion at all $T$ indicated, rather similar to
the behavior in the superfluid phase.
One can further
see from the figure that in the extended BCS-Leggett theory the lower branch has
a much weaker spectral weight compared to that of the upper branch. This
derives from the same phenomenon as in the
BCS-Leggett ground state, where the coefficient $v^{2}_{k}$ becomes
negligibly small in the BEC regime.
The behavior of the NSR spectral function is rather different
from its counterpart in BCS-Leggett
theory at all three temperatures. If one plots the spectral function at
$k=k_F$ as a function of $\omega$ two peaks are present with the
upper peak much sharper and narrower than the lower, as also
reported in
Ref.~\cite{OhashiNSR}.
\subsection{Comparison of Density of States}
\label{sec:DOS}
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig6.eps}
\caption{(Color online) DOS at unitarity from (a) NSR theory and (b) the extended BCS-Leggett theory. (Black) solid line, (red) dashed line, and (green) dot-dash line correspond to $T/T_F=0.24$, $0.34$, and $0.55$.}
\label{fig:DOS}
\end{figure}
We turn now to the DOS which, when depleted around
the Fermi energy, provides a second
criterion
for the existence of the pseudogap.
The DOS is given by
\begin{equation}
N(\omega)=\sum_{\bf k}A(\omega,{\bf k}).
\end{equation}
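With the broadened BCS form of the self energy the $k$ sum can be performed numerically; the sketch below (illustrative parameters, units $2m=k_{F}=1$) exhibits the characteristic depletion of $N(\omega)$ near $\omega=0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def dos(omega, mu, Dpg, gamma, S0):
    def A(k):                            # broadened-BCS spectral function
        xi = k * k - mu
        Sigma = Dpg**2 / (omega + xi + 1j * gamma) - 1j * S0
        return -2.0 * (1.0 / (omega - xi - Sigma)).imag
    return quad(lambda k: k * k * A(k) / (2 * np.pi**2), 0, 10, limit=200)[0]

for w in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(w, dos(w, mu=0.8, Dpg=0.5, gamma=0.12, S0=0.05))
# N(omega) is depleted around omega = 0: a pseudogap by the second criterion
\end{verbatim}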
In the HTSCs, an above-$T_c$ depletion in the DOS around the Fermi energy,
measured in tunneling
experiments, provided
a clear
signature of the pseudogap (\cite{Timusk} and references therein).
By contrast, in the ultra-cold
Fermi gases the DOS has not been directly measured, although it is
useful to discuss it here in an abstract sense.
Figure~\ref{fig:DOS} shows the
DOS from the two theories at unitarity and
at selected temperatures.
The DOS based on the extended BCS-Leggett theory clearly shows a pseudogap
at all three selected temperatures. Similarly, the DOS from NSR theory
show a clear depletion around the Fermi energy ($\omega=0$) at $T/T_F=0.24$.
This depletion is barely visible at
$0.34$. At higher temperature ($T/T_F=0.55$), the depletion does
not appear, and one sees only an
asymmetric background. We note that
our NSR
results are similar to those in Ref.~\cite{OhashiNSR} although
different number equations were employed. With this criterion for
a pseudogap one would conclude that
NSR theory does have a pseudogap --- at least at $T\approx T_c$.
It is somewhat unlikely that in the FLEX scheme (which at
$T_c$ seems to behave similarly to the highest $T$ NSR figures)
a pseudogap would be present via this second definition.
\subsection{Comparison of RF Spectra of Unpolarized Fermi Gases}
\label{sec:RF}
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig7.eps}
\caption{(Color online) RF spectrum at $1/k_{F}a=0$. The (black) solid dots, (red) solid lines, and (green) dashed lines correspond to the RF currents obtained from the experimental data (\cite{MITtomo}, supplemental materials), the extended BCS-Leggett theory, and NSR theory. The values of $T/T_F$ are (a)$0.2$, (b)$0.22$ ($0.24$ for the curve from NSR theory), (c)$0.34$, and (d)$0.55$. }
\label{fig:RF_Ia0}
\end{figure}
The RF current at detuning $\nu$
also depends on an integral involving the
fermionic spectral function. The current obtained from
linear response theory is given by \cite{heyan}
\begin{eqnarray}
I_0^{RF}(\nu)
&=& \sum_{\bf k} \left. \frac{|T_k|^2}{2\pi} A(\omega,{\bf k})
f(\omega)\right|_{\omega=\xi_{\bf k} -\nu},
\label{RFc0}
\end{eqnarray}
where, for the present purposes, we ignore the complications
from final-state effects \cite{Baym2,ourRF3}, as would be reasonable for the
so-called ``13" superfluid of $^{6}$Li. Here $|T_k|^2$ is a tunneling
matrix element which is taken to be a constant. The data points in
Figure~\ref{fig:RF_Ia0} correspond to measured tomographic spectra
from Ref.~\cite{MITtomo} in units of the local Fermi energy, (see the
Supplemental Materials). The results from NSR theory are indicated by
the dashed lines and from the extended BCS-Leggett theory ($GG_0$) by
the solid (red) lines. To compare and contrast the spectra, we
normalize each curve by its maximum and align the maxima on the
horizontal axis, so as to effectively include Hartree shifts. The
experimental data were taken at $T/T_F=0.2, 0.22, 0.34, 0.55$. The RF
spectra from the extended BCS-Leggett theory are calculated at the same set of
temperatures. The RF spectra from NSR theory, which are restricted to
the normal phase correspond to the three higher temperatures:
$T/T_F=0.24, 0.34, 0.55$.
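For orientation, Eq.~(\ref{RFc0}) can be evaluated with the broadened BCS model spectral function in a few lines; the parameters below are purely illustrative and no trap average is performed:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import expit          # expit(-x/T) is the Fermi function

def rf_current(nu, mu, Dpg, gamma, S0, T):
    # I(nu) = sum_k |T_k|^2/(2 pi) A(xi_k - nu, k) f(xi_k - nu), |T_k|^2 = 1
    def integrand(k):
        xi = k * k - mu
        w = xi - nu
        Sigma = Dpg**2 / (w + xi + 1j * gamma) - 1j * S0
        A = -2.0 * (1.0 / (w - xi - Sigma)).imag
        return k * k / (2 * np.pi**2) * A * expit(-w / T) / (2 * np.pi)
    return quad(integrand, 0.0, 10.0, limit=200)[0]

for nu in np.linspace(-1.0, 2.0, 13):    # units 2m = k_F = 1
    print(f"{nu:+.2f}  {rf_current(nu, 0.8, 0.5, 0.12, 0.05, 0.3):.4f}")
# a thermal branch at negative detuning and a pair-breaking branch at
# positive detuning, as discussed in the text
\end{verbatim}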
The RF spectra from the extended BCS-Leggett theory at high
temperatures indicate a double-peak structure, which was addressed in
Ref.~\cite{ourRF3}.
This peak at negative RF detuning emerges at finite temperatures in
BCS-Leggett theory as a result of thermally excited
quasiparticles. With increasing $T$, the weight under this peak
increases although the peak-to-peak separation will decrease,
following the temperature dependent pairing gap, as seen in the
figure. When temperature increases, the peak at negative RF detuning
grows and nearly merges with the peak at positive RF detuning so that
it may not be resolved experimentally.
By contrast, the RF spectra in NSR theory show a single peak which is
broader than the experimental RF spectra. This is to be expected based
on our analysis of the NSR spectral function in the previous section,
where we saw that the symmetrical upward and downward dispersing
branches of BCS theory were not present. The RF spectra presented in
Ref.~\cite{HaussmannRF} using $GG$ t-matrix (FLEX) theory also shows a
broader (than experiment) single peak.
In view of the contrast between the BCS-Leggett curves and experiment,
it
is natural to ask why there is no indication of the
negative detuning peak in these unpolarized experiments.
One can contemplate whether this stems from the
fact that (owing to large $\gamma$)
the two peaks simply aren't resolved. This would yield a figure
closer to that obtained from NSR-based calculations, which
is associated with a rather broad peak structure. As a result
it would not lead to a more satisfying fit to experiment.
At this stage we have no clear answer, but it will be important
to investigate, as we do below,
very slightly polarized gases to gain some insight.
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig8.eps}
\caption{(Color online) RF spectrum at $1/k_{F}a=1$. The (black) dot-dash line and (red) dashed line correspond to the RF currents obtained from the extended BCS-Leggett ($GG_0$) theory and NSR theory at $T/T_F=0.34$.}
\label{fig:RF_Iap1}
\end{figure}
In Figure~\ref{fig:RF_Iap1} we present the comparison on
the BEC side of resonance.
Here $1/k_{F}a=1$
and $T/T_F=0.34$. In this case both spectra show a double-peak
structure. For convenience,
we have scaled both spectra to their maxima and aligned the
maxima. For the NSR case,
the double-peaked feature reflects the negative fermionic
chemical potential which is associated with
bound states. It similarly reflects the
stronger spectral weight of the upper branch in the spectral
function which can also be examined in Figure~\ref{fig:Sp_Iap1}. Our RF spectra in NSR theory are
consistent with
those presented in Ref.~\cite{StoofRF}.
\section{Extended BCS-Leggett Theory: Signature of $T_c$ in RF Spectrum}
\label{sec:RFTc}
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig9.eps}
\caption{(Color online) (a) RF current as a function of $T$ at $1/k_{F}a=0$ for $\nu_1/E_F=0.15$ (black dashed line) and $\nu_2/E_F=0.1$ (red solid line) obtained from the extended BCS-Leggett theory. Inset: RF currents as a function of detuning at $T/T_F=0.22$ (dot-dash line), $0.26$ (dashed line), $0.3$ (solid line). The two arrows indicate $\nu_1$ and $\nu_2$. (b) The slopes of the RF currents from (a). The vertical dashed line indicates $T_c$.}
\label{fig:RF_Tc_Ia0}
\end{figure}
We have tried in the paper to emphasize comparisons whenever possible,
but there are instances where the other crossover theories (besides
the BCS-Leggett theory) have no counterpart. In the first of these we
investigate the signature of the second order transition which should
be a subtle, but nevertheless thermodynamically required feature of
any crossover theory. The experimental RF spectra in
Refs.~\cite{Grimm4,MITtomo} imply that the RF spectrum is more
sensitive to the existence of pairing rather than to
superfluidity. That it evolves smoothly across $T_c$ is due to the
presence of noncondensed pairs. The extended BCS-Leggett theory has the
important advantage in that it describes a smooth transition across
$T_c$ and should be a suitable theory for investigating this
question. In contrast, NSR theory and its generalization below $T_c$,
as well as the FLEX or Luttinger-Ward approach \cite{HaussmannRF}
encounter unphysical discontinuities.
In the following we search for signatures of $T_c$ in the RF
spectrum as obtained from BCS-Leggett theory near $T_c$.
Here, in contrast to
Ref.~\cite{RFlong}, we use constraints provided by
our semi-quantitative fits to RF spectra (associated with
the estimated size of $\gamma(T)$) to obtain a more direct
assessment of how important these superfluid signatures should
be.
Figure~\ref{fig:RF_Tc_Ia0}
presents a plot suggesting how one
might expect to see signatures of coherence in a tomographic
(but momentum integrated) RF probe,
such as pioneered by the MIT group \cite{MITtomo}.
It shows the RF current versus temperature
at two different detuning frequencies. The inset plots the
RF characteristics, indicating where the frequencies are chosen.
One can see that there is a feature at $T_c$, as expected.
This shows up more clearly in the lower figure which
plots the temperature derivative.
The same sort of feature has to be contained in the
specific heat \cite{ThermoScience}
which represents an integral over the spectral function.
What does it come from, since RF is not a phase sensitive
probe? The feature comes from the presence of a condensate
below $T_c$. What distinguishes condensed from non-condensed
pairs is their self energy contribution. In the HTSCs \cite{Normanarcs}
and
also in the BCS-Leggett formulation the self energy from
the non-condensed pairs is taken to be
of a broadened BCS form in Eq.~(\ref{eq:9}).
By contrast, the condensed pairs live infinitely long and so have no
damping $\gamma$. These are the effects which are represented
in
Figure~\ref{fig:RF_Tc_Ia0}.
In this way the
figure shows that there are features at $T_c$ which can in
principle help to distinguish the ordered state from the normal
pseudogap phase.
\section{Extended BCS-Leggett Theory: RF Spectrum of Polarized Fermi Gases}
\label{sec:RFpol}
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig10.eps}
\caption{(Color online) RF spectra of polarized Fermi gases at unitarity. (Black) dots and (red) solid lines correspond to the experimental data from Ref.~\cite{MITtomo} and results from the extended BCS-Leggett theory. The left (right) column shows the RF spectra for the majority (minority) species. The local temperature $T/T_{F\uparrow}$ and local polarization $p$ for the experimental data ($ex$) and the theoretical results ($th$), $(T_{ex}/T_{F\uparrow}, T_{th}/T_{F\uparrow}, p_{ex}, p_{th})$, are: $(0.05, 0.04, -0.04, 0)$ for (a) and (d); $(0.06, 0.13, 0.03, 0.07)$ for (b) and (e); $(0.06, 0.15, 0.19, 0.23)$ for (c) and (f).
}
\label{fig:RFImb_Ia0}
\end{figure}
Another strength of the BCS-Leggett approach is that it can
address polarized gases at unitarity, which are not as readily
treated
\cite{Parish,Hupolarized} in the alternative crossover theories.
In Figure~\ref{fig:RFImb_Ia0} we plot the RF spectra from the extended
BCS-Leggett theory and the experimental RF spectra from
Ref.~\cite{MITtomo}. Since the experimental RF spectra were obtained
from RF tomography of trapped polarized Fermi gases, we follow a
similar procedure to extract our RF spectra at varying, but comparable
locations from a similar trap profile. Also indicated are the
polarizations $p$. If we make fewer restrictions on the choice of
radial variable, the agreement is better as is shown in
Ref.~\cite{RFlong}. To compare the results, we normalize the maxima
and align the spectra, thereby introducing a fit to Hartree
contributions. The left (right) column shows the RF spectra of the
majority (minority). Here we set $\gamma/E_F=0.05$ and
$\Sigma_0/E_F=0.02$.
The experimental data points from the left hand
column can be compared with those in
Figure~\ref{fig:RF_Ia0}
which are for the
$p \equiv 0$ case, and it is seen that even at very
small polarizations (say $p \approx 0.03$) the negative detuning
peak becomes visible.
Indeed, it appears here to be larger than the theoretically
estimated negative detuning peak height.
A possible
explanation for why the
double-peak structure can be resolved experimentally in polarized
but not in unpolarized Fermi gases is that the
existence of excess majority fermions causes a negative RF-detuning
peak even at low temperatures. At these lower $T$
the separation between the two
peaks can be large in the experimental RF spectra of polarized Fermi
gases. In contrast, for an unpolarized Fermi gas the negative
RF-detuning peak due to thermally excited quasiparticles only becomes
significant at high temperatures around $T_c$
and above. Here the separation between the two
peaks may not be as readily resolved. As expected, at low
temperatures there is only a single peak in the RF spectra of the
minority. We notice that at extremely high polarization, polaron-like
behavior has been observed in RF experiments \cite{Zwierleinpolaron},
whose explanation has attracted a great deal of attention
in the theoretical community \cite{Lobo,Chevy2}.
These effects have not been incorporated in our BCS-Leggett
formalism, where the normal state has been assumed to be
strictly non-interacting \cite{ChienPRL}.
\begin{figure}
\includegraphics[width=3.2in,clip]
{Fig11.eps}
\caption{(Color online) Reproduction of Fig.5 in the Supplemental Materials of Ref.~\cite{MITtomoimb}. Red curves correspond to experimental data from Fig.~\ref{fig:RFImb_Ia0} (a) and (b). Black curves are RF currents calculated from BCS-Leggett theory.
}
\label{fig:MITBCSfit}
\end{figure}
The Ketterle group \cite{MITtomoimb} has argued that it should be
possible
to extract the pairing gap size from RF spectroscopy in
polarized gases at very low temperatures.
In Fig.~\ref{fig:MITBCSfit} we present a plot from their
paper (Supplementary Materials) which relates to their procedure.
This figure presents a fit to a generalized BCS-Leggett ground
state in the presence of polarization. The red curves correspond to
the actual data and the black curves are obtained from this
theory. An additional resolution broadening is included in the
theory and one can see that this theoretical approach appears
to be in quite reasonable agreement with experiment.
In this way there is some support for this simplest of ground
states --- at least in the polarized case.
\section{Conclusion}
\label{sec:conclusion}
The goal of this paper is to communicate that BCS-BEC crossover
theories are very exciting. They are currently being clarified and
developed hand in hand with experiment. For the Fermi superfluids,
unlike their Bose counterparts, we have no ready-made theory. In this
paper we confine our attention to the normal phase, although we have
presented a discussion of some of the controversial issues which have
surfaced in the literature below $T_c$. We view the principal value of
this paper as the presentation of comparisons of two different
crossover theories and the identification of (mostly future)
experiments which can help distinguish them. The two theories we
consider are the extended BCS-Leggett theory and that of Nozieres and
Schmitt-Rink. We chose not to discuss the FLEX or Luttinger-Ward
scheme in any detail because it is discussed elsewhere
\cite{HaussmannRF}, and because there are concerns that, by ignoring
vertex corrections, this approach has omitted the important physics
associated with the pseudogap. These concerns are longstanding
\cite{Moukouri,Tremblay2,Fujimoto,Micnas95}.
Here we have argued that the extended BCS-Leggett theory is the one
theory which preserves (broadened) BCS features into the normal state
over a significant range of temperatures. Even above $T_c$ one finds
that the fermionic excitations have an (albeit, smeared out)
dispersion of the form $E_{\bf k} \approx \pm \sqrt{ (\epsilon_{\bf k}
- \mu) ^2 + \Delta_{pg}^2}$ in the normal state.
We find that
NSR theory does not have this dispersion, although it has a pseudogap
by all other measures. Interestingly high $T_c$ superconductors have
been shown to have this dispersion in their normal state \cite{ANLPRL}
and it is generally believed
\cite{Norman98,Normanarcs,FermiArcs}
that their fermionic self energy can be fit to a broadened ($d$-wave)
BCS form $\Sigma_{pg}(K) \approx
\Delta_{pg}({\bf k})^2/(\omega+\epsilon_{\bf k}-\mu+i\gamma)$.
In this paper we show that one can identify both physically and
mathematically the difference between the two normal states of the
different crossover theories. Mathematically, because BCS theory
involves one dressed Green's function in the pair susceptibility, it
leads to a low frequency gap in the t-matrix or pair propagator (at
low $q$). Physically this serves to stabilize low momentum pairs. This
helps us to understand that the pseudogap of NSR theory does not
incorporate primarily low momentum pairs, but rather pairs of all
momenta, and that it should be better further from condensation.
Indeed, this is reinforced by our observation that at higher $T$,
feedback effects which distinguish the two theories become less and
less important and the BCS-Leggett pair susceptibility, $GG_0$,
crosses over to something closer to $G_0G_0$ as in NSR theory. Our
simplest approximation for the self energy in Eq.~(\ref{eq:2a}) is no
longer suitable once temperature exceeds, say $T/T_c \approx
1.5$. Indeed this is reinforced by earlier numerical observations
\cite{Maly1,Maly2}.
As a result, we believe that both theories are right but in different temperature
regimes.
Moreover, this serves to elucidate another concern about NSR theory
(and FLEX theory)-- that they are associated with an unphysical first order
transition. Both theories change discontinuously in going from
above to below $T_c$. In the superfluid phase the coupling which
is included in all other theories is between the non-condensed pairs
and the collective modes of the condensate, even though in the
normal state one couples the fermions and the non-condensed
pairs. In the extended BCS-Leggett theory, (as seems reasonable,
in the vicinity of $T_c$ both above and below), the dominant
coupling is, indeed,
between non-condensed pairs and fermions. These
(effectively, pseudogap) effects
will behave smoothly across $T_c$. The Goldstone modes which turn on
at $T_c$ are
highly damped in its vicinity, where the condensate is weak. Only
at lower $T$ should their coupling become more important.
In summary, a central conclusion of this study of the spectral
functions of the extended BCS-Leggett theory and NSR theory is that
one may expect that the former is suitable near $T_c$ due to its
similarity to BCS theory while the latter better describes the normal
phase at much higher $T$ as the system approaches a Fermi liquid, and
concomitantly,
the pseudogap begins to disappear. In the course of this work we
have found that the theoretical RF spectra from both theories agree
(only semi-quantitatively) to about the same extent with experimental
data at unitarity. The BCS-Leggett approach has the advantage that it
can address the RF spectrum of generally polarized Fermi gases without
the problems which have been noted \cite{Hupolarized} for the NSR
approach. However, momentum resolved experiments \cite{Jin6} may be
the ultimate way of distinguishing experimentally between different
theories.
\section*{Acknowledgement}
This work was supported by Grant Nos. NSF PHY-0555325 and NSF-MRSEC
DMR-0213745. We thank Prof.~E.~J.~Mueller for helping substantially with the numerical calculations of NSR theory and Prof.~Q.~J.~Chen for useful discussions.
\vspace*{-1ex}
\bibliographystyle{apsrev}
\section{Introduction}
Jet quenching, i.e.\ the energy loss of hard partons created in the first moments of an A-A collision due to interactions with the surrounding soft medium, has long been regarded as a promising tool to study properties of the soft bulk medium created along with the hard process \cite{Jet1,Jet2,Jet3,Jet4,Jet5,Jet6}. The effect of the medium is apparent from a comparison of high $P_T$ hadron observables measured in A-A collisions with the same observables in p-p collisions. The current range of such observables includes the suppression in single inclusive hard hadron spectra $R_{AA}$ \cite{PHENIX_R_AA}, the suppression of back-to-back correlations \cite{Dijets1,Dijets2} and single hadron suppression as a function of the emission angle with the reaction plane \cite{PHENIX-RP}. Recently also preliminary measurements of fully reconstructed jets have become available \cite{STARJET}.
Single hadron observables and back-to-back correlations above 6 GeV (where hadron production is dominated by hard processes) are well described in detailed model calculations using the concept of energy loss \cite{HydroJet1,Dihadron1,Dihadron2}, i.e. under the assumption that the process can be described by a medium-induced shift of the leading parton energy by an amount $\Delta E$ where the probability of energy loss is governed by a distribution $P(\Delta E)$, followed by a fragmentation process using vacuum fragmentation of a parton with the reduced energy. This can be cast into the form of a modified fragmentation function (MMFF). If the vacuum fragmentation function, i.e. the distribution of hadrons produced from a parton at fractional momentum $z$ given a hadronization scale $\mu^2$ is $D(z,\mu^2)$, then the MMFF given the medium induced energy loss probability $P(\Delta E)$ can be written as
\begin{equation}
\label{E-ModF}
\tilde{D}(z,\mu^2) = \int_0^E d \Delta E P(\Delta E) \frac{D\left( \frac{z}{1-\Delta E/E},\mu^2\right)}{1-\Delta E/E}.
\end{equation}
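A toy numerical evaluation of Eq.~(\ref{E-ModF}) may be useful for orientation; the vacuum fragmentation function shape and the Gaussian energy loss distribution below are illustrative stand-ins rather than tuned inputs.
\begin{verbatim}
import numpy as np

def modified_FF(z, E, D_vac, P_of_dE, n=2000):
    # fold the vacuum fragmentation function with P(Delta E) on [0, E]
    dE = np.linspace(0.0, E * (1.0 - 1e-9), n)
    step = dE[1] - dE[0]
    P = P_of_dE(dE)
    P = P / (P.sum() * step)             # normalize P(Delta E)
    frac = 1.0 - dE / E                  # fraction of the energy kept
    zz = np.minimum(z / frac, 1.0)
    vals = np.where(z / frac < 1.0, D_vac(zz) / frac, 0.0)
    return (P * vals).sum() * step

D_vac = lambda x: (1.0 - x) ** 3 / x     # schematic vacuum FF shape
P_of_dE = lambda dE: np.exp(-0.5 * ((dE - 10.0) / 5.0) ** 2)  # <dE> ~ 10 GeV
print(modified_FF(z=0.5, E=40.0, D_vac=D_vac, P_of_dE=P_of_dE))
\end{verbatim}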
Beyond the leading parton approximation in which energy loss and fragmentation factorize, one has to solve the full partonic shower evolution equations in the medium while assuming that the non-perturbative hadronization takes place outside the medium. At least for light subleading hadrons in a shower, factorizing hadronization from the medium-modified parton shower is a reasonable assumption at both RHIC and LHC kinematics. There are several calculations which utilize such medium-modified showers analytically \cite{HydroJet2,HydroJet3,Dihadron3}. Recently, also Monte Carlo (MC) codes for in-medium shower evolution have become available \cite{JEWEL,YAS,YAS2,Carlos,Carlos2,Martini} which are based on MC shower simulations developed for hadronic collisions, such as PYTHIA \cite{PYTHIA} or HERWIG \cite{HERWIG}. These have, unlike current analytical computations, full energy-momentum conservation enforced at each branching vertex. In these calculations the MMFF is obtained directly rather than from an expression like Eq.~(\ref{E-ModF}).
So far, the different pictures for the parton-medium interaction have been explored and compared with data over a rather limited kinematical window with $P_T < 20$ GeV. There is a widespread expectation that if the $P_T$ range of the measurement could be extended, either at RHIC or at LHC, one would eventually observe the disappearance of the medium effect. The origin of this expectation is that the medium is able to modify the hard parton kinematics at a typical scale set by its temperature $T$, whereas the parton dynamics takes place at a partonic hard scale $p_T$, and if $p_T \gg T$ the hard kinematics should be essentially unchanged, which can be realized for large hadronic $P_T$. For example, in the case of the nuclear suppression factor, this expectation would imply that $R_{AA}(P_T)$ approaches unity for $P_T \gg T$. It is the aim of this paper to discuss the physics contained in the shape of $R_{AA}(P_T)$ and to present what current models, both based on the energy-loss concept and on the in-medium parton shower concept, predict for the shape of $R_{AA}$ at very large momenta at both RHIC and LHC.
\section{Nuclear suppression in the energy loss picture}
\subsection{Qualitative estimates}
A qualitative argument why $R_{AA}$ should increase with $P_T$ can be made as follows: Parton spectra can be approximated by a power law as $dN/dp_T = const./p_T^n$ where $n\approx 7$. Assume that one can approximate the effect of the medium by the mean value energy loss $\langle \Delta E \rangle$ (for realistic energy loss models, this is not a good approximation, as fluctuations around the mean turn out to be large). In this case, the energy loss shifts the spectrum. This can be described by the replacement $p_T \rightarrow p_T + \langle \Delta E \rangle$ in the expression for the parton spectrum. $R_{AA}(p_T)$ can then be approximated by the ratio of the parton spectra before and after energy loss as
\begin{equation}
\label{E-RAAApprox1}
R_{AA}(p_T) \approx \left(\frac{p_T}{p_T + \langle \Delta E \rangle}\right)^n = \left(1 - \frac{\langle \Delta E \rangle}{p_T + \langle \Delta E \rangle}\right)^n
\end{equation}
and it is easily seen that this expression approaches unity for $p_T \gg \langle \Delta E\rangle$. However, it is not readily obvious under what conditions the limit is reached even if the medium properties are known. Parametrically, the medium temperature $T$ governs both the medium density and the typical medium momentum scale, but the total energy loss represents the cumulative effect of the medium, i.e. a line integral of medium properties along the path of the partons; furthermore, the physics of medium-induced radiation is rather complicated, interference between different radiation graphs plays a significant role, and therefore the mean energy loss is not simply $\sim T$. Thus, in realistic calculations the mean energy loss at RHIC conditions is $\langle \Delta E \rangle \approx O(10)$ GeV even for $T < 0.35$ GeV \cite{Dihadron2}, and hence it can be understood that current data are relatively far from the limit.
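As a simple numerical illustration of this estimate, the Python sketch below evaluates Eq.~(\ref{E-RAAApprox1}) with order-of-magnitude values for $n$ and $\langle \Delta E \rangle$ (illustrative choices, not fitted parameters):
\begin{verbatim}
# Spectrum-shift approximation for R_AA, Eq. (E-RAAApprox1).
n = 7.0         # power-law index of the parton spectrum (illustrative)
mean_dE = 10.0  # mean energy loss in GeV (order of magnitude at RHIC)

def raa_shift(pT, n=n, dE=mean_dE):
    """R_AA for a power-law spectrum shifted by a constant <Delta E>."""
    return (pT / (pT + dE)) ** n

for pT in (10.0, 20.0, 50.0, 100.0, 500.0):
    print(pT, raa_shift(pT))
# R_AA approaches unity only for pT >> <Delta E>.
\end{verbatim}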
There are five main points which may be raised against the approximation Eq.~(\ref{E-RAAApprox1}):
\begin{itemize}
\item the estimate holds for partons and does not take into account fragmentation:
This, however, is not a crucial issue for the question at hand. The fragmentation function $D(z,\mu^2)$ is steeply falling with $z$ and as a result, fragmentation processes at low $z$ are preferred. However, for a given momentum scale of the hadron spectrum, low $z$ implies high parton momentum, and this is suppressed because the parton spectrum is also steeply falling with $p_T$. As a result, there is some typical intermediate $\langle z \rangle$ (dependent on hadron and parton type) which relates hadron and parton momentum; for quarks fragmenting into light hadrons at RHIC kinematics $\langle z \rangle \approx 0.5-0.7$. This means that hadronic $R_{AA}$ is to first approximation simply scaled by this factor as compared to partonic $R_{AA}$. Fluctuations around the average tend to smear out structures in the partonic $R_{AA}$ through the hadronization process, but do not alter the shape of $R_{AA}(P_T)$ beyond that. Thus, qualitatively Eq.~(\ref{E-RAAApprox1}) holds also on the hadronic level.
\item the estimate does not distinguish between quarks and gluons:
This is moderately important, as energy loss is expected to be stronger for gluons by a factor 9/4 (the ratio of the Casimir color factors). At low $P_T$, hadron production is driven by gluonic processes as gluons are copiously available in the low $x$ region in parton distribution functions (PDFs) \cite{CTEQ1,CTEQ2,NPDF,EKS98,EPS09}. However, hadron production at higher $P_T$ probes higher and higher $x$ in the parton distributions, and eventually valence quark scattering dominates. The hadronic $R_{AA}$ should therefore show a rise from gluonic $R_{AA}$ to the larger value of quark $R_{AA}$ which corresponds to the transition from gluon- to quark-dominated hadroproduction. As shown in \cite{RAA_Proton}, this is likely to be the mechanism underlying the rising trend observed in $R_{AA}$ at RHIC. For asymptotically high energies, however, the mechanism is not relevant, as this is always a quark-dominated regime.
\item the estimate neglects fluctuations around the average energy loss:
In the presence of fluctuations, $P(\Delta E)$ can be written as the sum of three terms, corresponding to transmission without energy loss, shift of the parton energy by a finite energy loss, or parton absorption, as
\begin{equation}
P(\Delta E) = \tilde{T} \delta(\Delta E) + \tilde{S} \cdot \tilde{P}(\Delta E) + \tilde{A} \cdot \delta(\Delta E - E)
\end{equation}
where $\tilde{P}(\Delta E)$ is a normalized probability distribution and $\tilde{T}+\tilde{S}+\tilde{A}=1$. Inserting this form into Eq.~(\ref{E-RAAApprox1}) and averaging with the proper weights, one finds
\begin{equation}
R_{AA} \approx \tilde{T} + \int d\Delta E \cdot \tilde{S} \cdot \tilde{P}(\Delta E) \left(1-\frac{\Delta E}{p_T + \Delta E}\right)^n.
\end{equation}
It follows that $R_{AA}$ obtained with this expression is always bounded by $\tilde{T}$ from below (if a fraction of partons escapes unmodified, no amount of modification to the rest will alter this) and by $(1-\tilde{A})$ from above (if partons are absorbed {\em independent of their energy}, $R_{AA}$ will never approach unity). In many calculations, $\tilde{A}$ is determined by the condition that a parton is absorbed whenever its calculated energy loss exceeds its energy, i.e. $\tilde{A}$ and $\tilde{S}$ are dependent on the initial parton energy. In particular, in the ASW formalism \cite{QuenchingWeights}, the energy loss can be formally larger than the initial energy, since the formalism is derived for asymptotically high energies $E\rightarrow \infty$ and small energy of radiated gluons $\Delta E \ll E$, but is commonly applied to kinematic situations in which these conditions are not fulfilled.
$R_{AA}$ at given $p_T$ is then equal to the transmission term $\tilde{T}$ plus a contribution which is proportional to the integral of $\tilde{P}(\Delta E)$ from zero up to the energy scale $E_{max}$ of the parton, {\it seen through the filter} of the steeply falling parton spectrum. Thus, in the presence of fluctuations, $R_{AA}$ is dominated by fluctuations towards low $\Delta E$, and $R_{AA}$ grows with $p_T$ since $E_{max}$ grows linearly with $p_T$. If $P(\Delta E)$ includes fluctuations up to a maximal energy loss $\Delta E_{max}$, then for $p_T \gg \Delta E_{max}$ the original argument made for constant energy loss applies and $R_{AA}$ approaches unity. In practice this may not be observable: the energy loss probability for RHIC kinematics may be substantial up to scales of $O(100)$ GeV \cite{Dihadron2}, i.e. of the order of the kinematic limit. (A numerical sketch of these bounds is given after this list.)
\item the pQCD parton spectrum is not a power law:
\begin{figure}
\epsfig{file=parton_powerlaw_f.eps, width=7.8cm}
\caption{\label{F-Power}(Color online) The LO pQCD parton spectrum for $\sqrt{s}=200$ GeV compared with a power law fit to the region from 20 to 50 GeV.}
\end{figure}
While in a limited kinematic range the pQCD parton spectrum is approximated well by a power law, at about $\sqrt{s}/4$ the power law fit becomes a bad description of the spectrum. This is shown in Fig.~\ref{F-Power}. Close to the kinematic limit at $\sqrt{s}/2$, the parton spectrum falls very steeply. If one attempted a local power law fit in this region, the region of validity for the fit would be small and $n$ very large. One can readily see from Eq.~(\ref{E-RAAApprox1}) that even for $p_T \gg \langle \Delta E \rangle$, $R_{AA}$ does not approach unity when at the same time $n \rightarrow \infty$. In other words, close to the kinematic limit, even a small $\Delta E$ causes a massive suppression simply because there are no partons available higher up in the spectrum which could be shifted down. For this reason, close to the kinematic limit $R_{AA} \rightarrow 1$ cannot be expected; rather (dependent on the details of modelling), something like $R_{AA} \rightarrow \tilde{T}$ should be expected.
However, note also that the validity of factorization into a hard process and a fragmentation function has been assumed for hadron production up to the kinematic limit. This may not be true: Higher Twist mechanisms like direct hadron production in the hard subprocess (in the context of heavy-ion collisions, see e.g. \cite{DirectHP}) may represent a different contribution which, due to color transparency, remains unaffected by the medium at all $P_T$ and which may be significantly stronger than fragmentation close to the kinematic limit. This could be effectively absorbed into a modified coefficient $\tilde{T}$ which however ceases to have a probabilistic interpretation.
\item the nuclear initial state effects have not been taken into account:
The initial state nuclear effects, i.e. the difference in nucleon \cite{CTEQ1,CTEQ2} and nuclear \cite{NPDF,EKS98,EPS09} PDFs, are often thought to be a small correction to the final state medium effects. Over a large kinematic range, that is quite true. However, as one approaches the kinematic limit and forces the distributions into the $x \rightarrow 1$ valence quark distributions, one probes the Fermi motion region in the nuclear parton distributions where the difference to nucleon PDFs is sizeable.
\begin{figure}
\epsfig{file=RAA_RHIC_NPDF_f.eps, width=7.8cm}
\caption{\label{F-NPDF}(Color online) $R_{AA}(P_T)$ calculated with nuclear {\em initial state} effects only, obtained from the NPDF set \cite{NPDF} and the EKS98 \cite{EKS98} set of nuclear parton distributions calculated for the whole kinematic range at RHIC.}
\end{figure}
In Fig.~\ref{F-NPDF}, $R_{AA}(P_T)$ is shown for RHIC kinematics taking into account only the nuclear initial state effects with two different sets of nuclear PDFs, but no final state medium induced energy loss. It is readily apparent that over most of the kinematic range, $R_{AA}(P_T) \approx 1$, but that there is a strong enhancement visible above 80 GeV.
\end{itemize}
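As a numerical illustration of the bounds discussed in the list above, the sketch below averages Eq.~(\ref{E-RAAApprox1}) over a toy three-term $P(\Delta E)$; the exponential shape of $\tilde{P}(\Delta E)$ and the weights $\tilde{T},\tilde{S},\tilde{A}$ are illustrative assumptions, not model output:
\begin{verbatim}
# Fluctuation-averaged R_AA with transmission (T), shift (S) and
# absorption (A) terms; the exponential P~(Delta E) is a toy choice.
import numpy as np

n = 7.0
T, S, A = 0.2, 0.7, 0.1      # toy weights, T + S + A = 1
dE_scale = 10.0              # GeV, width of the toy P~(Delta E)

def raa(pT, n_points=4000):
    dE, step = np.linspace(0.0, 10.0 * dE_scale, n_points, retstep=True)
    P = np.exp(-dE / dE_scale) / dE_scale     # normalized P~(Delta E)
    shift = np.sum(P * (pT / (pT + dE)) ** n) * step
    return T + S * shift                      # absorbed partons give zero

for pT in (10.0, 50.0, 200.0):
    print(pT, raa(pT))       # always between T and 1 - A
\end{verbatim}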
\subsection{Detailed calculation}
The detailed calculation of $R_{AA}$ in the energy loss models presented here follows the Baier-Dokshitzer-Mueller-Peigne-Schiff (BDMPS) formalism for radiative energy loss \cite{Jet2} using quenching weights as introduced by Salgado and Wiedemann \cite{QuenchingWeights}, commonly referred to as the Armesto-Salgado-Wiedemann (ASW) formalism.
The probability density $P(x_0, y_0)$ for finding a hard vertex at the transverse position ${\bf r_0} = (x_0,y_0)$ and impact parameter ${\bf b}$ is given by the product of the nuclear profile functions as
\begin{equation}
\label{E-Profile}
P(x_0,y_0) = \frac{T_{A}({\bf r_0 + b/2}) T_A(\bf r_0 - b/2)}{T_{AA}({\bf b})},
\end{equation}
where the thickness function is given in terms of the Woods-Saxon nuclear density
$\rho_{A}({\bf r},z)$ as $T_{A}({\bf r})=\int dz \rho_{A}({\bf r},z)$ and $T_{AA}({\bf b})$ is the standard nuclear overlap function $T_{AA}({\bf b}) = \int d^2 {\bf s}\, T_A({\bf s}) T_A({\bf s}-{\bf b})$.
If the angle between outgoing parton and the reaction plane is $\phi$, the path of a given parton through the medium $\zeta(\tau)$, i.e. its trajectory $\zeta$ as a function of proper medium evolution time $\tau$ is determined in an eikonal approximation by its initial position ${\bf r_0}$ and the angle $\phi$ as $\zeta(\tau) = \left(x_0 + \tau \cos(\phi), y_0 + \tau \sin(\phi)\right)$ where the parton is assumed to move with the speed of light $c=1$ and the $x$-direction is chosen to be in the reaction plane. The energy loss probability $P(\Delta E)_{path}$ for this path can be obtained by evaluating the line integrals along the eikonal parton path
\begin{equation}
\label{E-omega}
\omega_c({\bf r_0}, \phi) = \int_0^\infty \negthickspace d \zeta \zeta \hat{q}(\zeta) \quad \text{and} \quad \langle\hat{q}L\rangle ({\bf r_0}, \phi) = \int_0^\infty \negthickspace d \zeta \hat{q}(\zeta)
\end{equation}
with the relation
\begin{equation}
\label{E-qhat}
\hat{q}(\zeta) = K \cdot 2 \cdot \epsilon^{3/4}(\zeta) (\cosh \rho - \sinh \rho \cos\alpha)
\end{equation}
assumed between the local transport coefficient $\hat{q}(\zeta)$ (specifying the quenching power of the medium), the energy density $\epsilon$ and the local flow rapidity $\rho$ with angle $\alpha$ between flow and parton trajectory \cite{Flow1,Flow2}. The medium parameters $\epsilon$ and $\rho$ are obtained from a 2+1-d hydrodynamical simulation of bulk matter evolution \cite{Hydro}, chosen so as to have the RHIC and the LHC medium described within the same framework. $\omega_c$ is the characteristic gluon frequency, setting the scale of the energy loss probability distribution, and $\langle \hat{q} L\rangle$ is a measure of the path-length weighted by the local quenching power. The parameter $K$ is seen as a tool to account for the uncertainty in the selection of the strong coupling $\alpha_s$ and possible non-perturbative effects increasing the quenching power of the medium (see the discussion in \cite{Dijets2}), and is adjusted such that the pionic $R_{AA}$ for central Au-Au collisions is described at one value of $P_T$.
Using the numerical results of \cite{QuenchingWeights} and the definitions above, the energy loss probability distribution given a parton trajectory can now be obtained as a function of the initial vertex and direction $({\bf r_0},\phi)$ as $P(\Delta E; \omega_c({\bf r},\phi), R({\bf r},\phi))_{path} \equiv P(\Delta E)_{path}$ for $\omega_c$ and $R=2\omega_c^2/\langle\hat{q}L\rangle$. From the energy loss distribution given a single path, one can define the averaged energy loss probability distribution $\langle P(\Delta E)\rangle_{T_{AA}}$ as
\begin{equation}
\label{E-P_TAA}
\langle P(\Delta E)\rangle_{T_{AA}} \negthickspace = \negthickspace \frac{1}{2\pi} \int_0^{2\pi}
\negthickspace \negthickspace \negthickspace d\phi
\int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dx_0
\int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dy_0 P(x_0,y_0)
P(\Delta E)_{path}.
\end{equation}
The energy loss probability $P(\Delta E)_{path}$ is derived in the limit of infinite parton energy \cite{QuenchingWeights}; however, in the following the formalism is applied to finite kinematics. In order to account for the finite energy $E$ of the partons, $\langle P(\Delta E) \rangle_{T_{AA}}$ is truncated
at $\Delta E = E$ and $\delta(\Delta E-E) \int^\infty_{E} d\Delta E \,P(\Delta E)$ is added to the truncated distribution to ensure proper normalization. The physical meaning of this correction is that all partons are considered absorbed if their energy loss is formally larger than their initial energy. The momentum spectrum of hard partons is calculated in leading order perturbative Quantum Chromodynamics (LO pQCD) (explicit expressions are given in \cite{Dijets2} and references therein). The medium-modified perturbative production of hadrons can then be computed from the expression
\begin{equation}
d\sigma_{med}^{AA\rightarrow h+X} \negthickspace \negthickspace = \sum_f d\sigma_{vac}^{AA \rightarrow f +X} \otimes \langle P(\Delta E)\rangle_{T_{AA}} \otimes
D^{f \rightarrow h}(z, \mu^2)
\end{equation}
where $d\sigma_{vac}^{AA \rightarrow f +X}$ is the partonic cross section for the inclusive production of a parton $f$, $D^{f \rightarrow h}(z, \mu^2)$ the vacuum fragmentation function for the hadronization of a parton $f$ into a hadron $h$ with momentum fraction $z$ and hadronization scale $\mu$ and from this the nuclear modification factor $R_{AA}$ follows as
\begin{equation}
\label{E-RAA}
R_{AA}(p_T,y) = \frac{dN^h_{AA}/dp_Tdy }{T_{AA}({\bf b}) d\sigma^{pp}/dp_Tdy}.
\end{equation}
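The finite-energy truncation described above can be sketched in a few lines; the input distribution here is a toy stand-in for the actual quenching weights:
\begin{verbatim}
# Finite-energy truncation of P(Delta E): probability beyond the
# parton energy E is collected into an absorption weight at dE = E.
import numpy as np

def truncate(dE, P, E, step):
    """Truncated distribution plus absorption weight for dE > E."""
    keep = dE <= E
    absorbed = float(np.sum(P[~keep]) * step)
    return dE[keep], P[keep], absorbed

dE, step = np.linspace(0.0, 100.0, 2001, retstep=True)
P = np.exp(-dE / 10.0) / 10.0              # toy stand-in distribution
dE_t, P_t, A = truncate(dE, P, 30.0, step)
print("absorption weight:", A)
print("normalization:", np.sum(P_t) * step + A)   # ~1 by construction
\end{verbatim}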
\section{Nuclear suppression in the medium-modified shower picture}
\subsection{Qualitative arguments}
In a medium-modified shower picture, the whole partonic in-medium evolution of a parton shower following a hard process is studied, leading to a modification of the fragmentation function (FF) which is more general than Eq.~(\ref{E-ModF}). In this framework, $R_{AA} \rightarrow 1$ is realized if the MMFF becomes sufficiently similar to the vacuum FF. A qualitative argument why the MMFF should approach the vacuum FF for $P_T \gg T$ can be made by considering for example the RAD (radiative energy loss) scenario of the MC code YaJEM (Yet another Jet Energy-loss Model). This model is described in detail in Refs. \cite{YAS,YAS2}.
The parton shower developing from a highly virtual initial hard parton in this model is described as a series of $1\rightarrow 2$ splittings $a \rightarrow bc$ where the virtuality scale decreases in each splitting, i.e. $Q_a > Q_b,Q_c$, and the energy is shared among the daughter partons $b,c$ as $E_b = z E_a$ and $E_c = (1-z) E_a$. The splitting probabilities for a parton $a$ in terms of $Q_a, E_a$ are calculable in pQCD and the resulting shower is computed event by event in a MC framework.
In the absence of a medium, the evolution of the shower is obtained using the PYSHOW routine \cite{PYSHOW} which is part of the PYTHIA package \cite{PYTHIA}.
In the presence of a medium, the main assumption of YaJEM is that the parton kinematics or the splitting probability is modified. In the RAD scenario, the relevant modification is a virtuality gain
\begin{equation}
\label{E-Qgain}
\Delta Q_a^2 = \int_{\tau_a^0}^{\tau_a^0 + \tau_a} d\zeta \hat{q}(\zeta)
\end{equation}
of a parton during its lifetime through the interaction with the medium. In order to evaluate Eq.~(\ref{E-Qgain}) during the shower evolution, the momentum space variables of the shower evolution equations need to be linked with a spacetime position in the medium. This is done via the uncertainty relation for the average formation time as
\begin{equation}
\label{E-Lifetime}
\langle \tau_b \rangle = \frac{E_b}{Q_b^2} - \frac{E_b}{Q_a^2}
\end{equation}
and randomized, splitting by splitting, by sampling from the distribution
\begin{equation}
\label{E-RLifetime}
P(\tau_b) = \exp\left[- \frac{\tau_b}{\langle \tau_b \rangle} \right].
\end{equation}
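A minimal sketch of this formation-time assignment (assuming natural units, so that $\tau$ carries GeV$^{-1}$; the kinematic values are placeholders):
\begin{verbatim}
# Formation time: mean value from parton kinematics, actual value
# sampled splitting by splitting from an exponential distribution.
import random

def mean_lifetime(E_b, Q_b, Q_a):
    """<tau_b> = E_b/Q_b^2 - E_b/Q_a^2 in natural units (GeV^-1)."""
    return E_b / Q_b ** 2 - E_b / Q_a ** 2

def sample_lifetime(E_b, Q_b, Q_a):
    tau = mean_lifetime(E_b, Q_b, Q_a)
    return random.expovariate(1.0 / tau)   # P(t) ~ exp(-t/<tau>)

print(sample_lifetime(E_b=50.0, Q_b=2.0, Q_a=10.0))
\end{verbatim}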
The limit in which the medium modification is unimportant is then given by $Q^2 \gg \Delta Q^2$, i.e. if the influence of the medium on the parton virtuality is small compared with the virtuality itself, the evolution of the shower takes place as in vacuum. Note that there is always a kinematical region in which the condition can never be fulfilled: the region $z \rightarrow 1$ in the fragmentation function represents showers in which there has been essentially no splitting. Since the initial virtuality determines the amount of branchings in the shower, this means one probes events in which the initial virtuality $Q_0$ is not (as in typical events) of the order of the initial parton energy $E_0$, but rather $Q_0 \sim m_h$ where $m_h$ is a hadron mass. Since $m_h$ is, at least for light hadrons, of the order of the medium temperature, $Q^2 \gg \Delta Q^2$ cannot be fulfilled in the region $z \approx 1$ of the MMFF --- here the medium effect is always visible.
\begin{figure}
\epsfig{file=ffratio_new_f.eps, width=7.8cm}
\caption{\label{F-FFnew}(Color online) Ratio of medium-modified over vacuum quark fragmentation function into charged hadrons obtained in YaJEM for a constant medium with 5 fm length and $\hat{q}=2$ GeV$^2$/fm. Shown are the results for different initial quark energies $E$.}
\end{figure}
This is illustrated in Fig.~\ref{F-FFnew}. Here the ratio of medium-modified over vacuum fragmentation function $D^{q\rightarrow h^-}(z,\mu_p^2)$ as obtained in YaJEM is shown for a constant medium for different initial partonic scales $\mu_p \equiv E$. For a low initial scale of $E=20$ GeV, one observes that the whole range between $z=0.2$ and $z=1$ is suppressed in the medium, whereas the region below $z=0.1$ shows enhancement due to the hadronization of the additional medium-induced radiation. For larger initial scales, the region of enhancement becomes confined to smaller and smaller $z$ and the fragmentation function ratio approaches unity across a large range. However, in the region $z \approx 1$ suppression due to the medium always persists as expected.
As a consequence, one can expect $R_{AA} \rightarrow 1$ for $P_T \gg T$ (where $\Delta Q^2$ is assumed to be parametrically $O(T^2)$) except near the kinematic limit $P_T \approx \sqrt{s}/2$ where the region $z\approx 1$ of the MMFF is probed and suppression is expected to persist.
\subsection{Detailed calculation}
The detailed computation of $R_{AA}$ within YaJEM is outlined in \cite{YAS2}. It shares many steps with the computation within the energy loss picture as described above, in particular the medium averaging procedure.
The basic quantity to compute is the MMFF, given a path through the medium. Due to an approximate scaling law identified in \cite{YAS}, it is sufficient to compute the line integral
\begin{equation}
\label{E-Qsq}
\Delta Q^2_{tot} = \int d \zeta \hat{q}(\zeta)
\end{equation}
in the medium to obtain the full MMFF $D_{MM}(z, \mu_p^2,\zeta)$ from a YaJEM simulation for a given eikonal path of the shower-initiating parton, where $\mu_p^2$ is the {\em partonic} scale. The link between $\hat{q}$ and medium parameters is given as previously by Eq.~(\ref{E-qhat}), albeit with a different numerical value for $K$. The medium-averaged MMFF is then computed as
\begin{equation}
\label{E-D_TAA}
\begin{split}
\langle D_{MM}&(z,\mu_p^2)\rangle_{T_{AA}} \negthickspace =\\ &\negthickspace \frac{1}{2\pi} \int_0^{2\pi}
\negthickspace \negthickspace \negthickspace d\phi
\int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dx_0
\int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dy_0 P(x_0,y_0)
D_{MM}(z, \mu_p^2,\zeta).
\end{split}
\end{equation}
From this, the medium-modified production of hadrons is obtained from
\begin{equation}
\label{E-Conv}
d\sigma_{med}^{AA\rightarrow h+X} \negthickspace \negthickspace = \sum_f d\sigma_{vac}^{AA \rightarrow f +X} \otimes \langle D_{MM}(z,\mu_p^2)\rangle_{T_{AA}}
\end{equation}
and finally $R_{AA}$ via Eq.~(\ref{E-RAA}). A crucial issue when computing $R_{AA}$ for a large momentum range is that YaJEM provides the MMFF for a given {\em partonic} scale whereas a factorized QCD expression like Eq.~(\ref{E-Conv}) utilizes a fragmentation function at a given {\em hadronic} scale. In previous publications \cite{YAS,YAS2}, the problem has been commented on, but not addressed, as the variation in momentum scale for current observables is not substantial. In this paper, the matching between partonic and hadronic scale is done as follows:
For several partonic scales, $\langle D_{MM}(z,\mu_p^2)\rangle_{T_{AA}}$ is computed, and the exponent $n$ of a power law fit to the parton spectrum at scale $\mu_p$ is determined. The maximum of $z^n \langle D_{MM}(z,\mu_p^2)\rangle_{T_{AA}}$ corresponds to the most likely value $\tilde{z}$ in the fragmentation process, and thus the partonic scale choice is best for a hadronic scale $P_T = \tilde{z}\mu_p$. The hadronic $R_{AA}$ is then computed by interpolation between different optimal scale choices from runs with different partonic scales. Finally, in the region $P_T \rightarrow \sqrt{s}/2$, $\langle D_{MM}(z,s/4)\rangle_{T_{AA}}$ is always the dominant contribution to hadron production.
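A minimal sketch of this matching step, with a hypothetical analytic $D(z)$ standing in for the tabulated MMFF:
\begin{verbatim}
# Most likely fragmentation fraction z~: maximize z^n D(z) for the
# local power-law index n of the parton spectrum at scale mu_p.
import numpy as np

def z_tilde(D, n):
    z = np.linspace(0.01, 0.99, 981)
    return z[np.argmax(z ** n * D(z))]

D_toy = lambda z: (1.0 - z) ** 3 / z   # toy FF shape, not a fit
n = 7.0
zt = z_tilde(D_toy, n)
print("z~ =", zt, "; hadronic scale P_T = z~ * mu_p")
\end{verbatim}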
The matching procedure between hadronic and partonic scale choice also leads to a significant improvement in the description of $R_{AA}$ in the measured momentum range at RHIC as compared to previous results \cite{YAS,YAS2}.
\section{Results for RHIC}
The nuclear suppression factor for 200 AGeV central Au-Au collisions at RHIC, calculated both in the energy loss picture (represented by the ASW model) and the medium-modified shower picture (represented by the MC code YaJEM), is shown over the full kinematic range in Fig.~\ref{F-RAARHIC} and compared with PHENIX data \cite{PHENIX_R_AA}. For the ASW calculation, the partonic $R_{AA}$ is also indicated separately for quarks and gluons.
\begin{figure}
\epsfig{file=R_AA_RHIC_limit_f.eps, width=7.8cm}
\caption{\label{F-RAARHIC}(Color online) The nuclear suppression factor $R_{AA}$ at RHIC across the full kinematic range in 10\% central 200 AGeV Au-Au collisions. Shown are PHENIX data \cite{PHENIX_R_AA}, a calculation in the energy loss picture (ASW) with quark and gluon result shown separately, and a calculation in the medium-modified parton shower picture (YaJEM).}
\end{figure}
Before discussing details of the plot, let us recapitulate the main differences between the energy loss picture as exemplified by the ASW model and the medium-modified shower picture as represented by YaJEM:
\begin{itemize}
\item ASW is derived for infinite parton energy, hence $P(\Delta E)$ is independent of the initial parton energy and the only energy dependence arises from the prescription to assign contributions where $\Delta E > E$ to an absorption term $\tilde{A}$. In contrast, YaJEM is a finite energy framework where the MMFF explicitly depends on the initial parton energy. In particular, within ASW there is an energy-independent transmission probability $\tilde{T}$ which bounds $R_{AA}$ from below.
\item In the energy loss picture, it is not specified what happens to the lost energy. In contrast, within YaJEM the energy lost from the leading shower partons is recovered explicitly in a low $z$ enhancement of the MMFF.
\end{itemize}
In Fig.~\ref{F-RAARHIC}, these differences are apparent as follows: In the lowest $P_T$ region from 6 GeV and above, there is a small rise of $R_{AA}$ with $P_T$ observed in ASW which is not seen in YaJEM. As apparent from the comparison of the ASW result for pions to the result for quarks and gluons, the rise in this region in the ASW model is driven by the transition from a gluon-dominated to a quark-dominated regime --- the ASW hadronic result subsequently approaches the quark result for larger momenta. This transition is also present in YaJEM, however it is masked by the onset of the low $P_T$ enhancement, which just starts to become significant below 6 GeV and corresponds to a decreasing trend of $R_{AA}$ with increasing $P_T$. As a result, the two opposing effects roughly cancel and the YaJEM result appears flatter than the ASW result between 6 and 25 GeV.
For higher $P_T$, there follows a region up to 50 GeV in which both the ASW and the YaJEM result decrease slightly. This can be traced back to the fact that the pQCD spectrum is not a power law, and that local power law fits result in increasing $n$ for higher $p_T$. The two curves run in parallel until $\approx 75$ GeV; beyond this, the predictions of the two models are strikingly different.
The ASW curve turns upward beyond $P_T = 75$ GeV. A comparison with Fig.~\ref{F-NPDF} shows that this has nothing to do with the final state energy loss, but reflects the Fermi motion region in the nuclear PDFs. At the kinematic boundary, the curve finally turns over to reach the transmission probability $\tilde{T}$, as all shifts in the spectrum at the kinematic boundary result in substantial suppression and the only remaining contribution is from unmodified partons. In contrast, the YaJEM result shows a strong suppression from 75 GeV to the kinematic limit. This corresponds to the region $z \rightarrow 1$ in the MMFF in which suppression was always observed in Fig.~\ref{F-FFnew}, regardless of the initial energy. This suppression is strong enough to mask the enhancement from the nuclear PDF. In contrast to the ASW model, YaJEM does not include an $E$-independent transmission term, thus $R_{AA}$ becomes very small towards the kinematic limit. In this, the finite-energy nature of the suppression in YaJEM is apparent. Note that the YaJEM result cannot be computed all the way to the kinematic limit due to lack of statistics in the MC results at $z \rightarrow 1$.
It is also clear that there is no region throughout the whole kinematic range at RHIC in either model for which $R_{AA} \rightarrow 1$ could be observed.
\section{Results for LHC}
It is then natural to expect that $R_{AA} \rightarrow 1$ could be realized by probing even higher momenta beyond the RHIC kinematic limit, e.g. by studying $R_{AA}$ at the LHC. However, in going to collisions at larger $\sqrt{s}$, not only the kinematic limit is changed, but also the production of bulk matter is increased, i.e. higher $\sqrt{s}$ corresponds to a modification of both hard probe {\em and} medium. There is however reason to expect that eventually one will find a region in which $P_T \gg T, \Delta E_{max}, \sqrt{\Delta Q^2}$ and $R_{AA} \rightarrow 1$ can be realized: the kinematic limit $\sqrt{s}/2$ increases linearly with $\sqrt{s}$. However, the medium density does not. There are different models which try to extrapolate how the rapidity density of produced matter increases with $\sqrt{s}$. The Eskola-Kajantie-Ruuskanen-Tuominen (EKRT) model is among the models with the strongest predicted increase, and has the scaling $\frac{dN}{dy} \approx \left(\sqrt{s}\right)^{0.574}$. Thus, the rapidity density of bulk matter increases significantly more slowly than the kinematic limit for increasing $\sqrt{s}$.
Although the medium lifetime may increase substantially with $\sqrt{s}$ as well, the more relevant scale is the transverse size of the medium, as high $p_T$ partons move with the speed of light and will exit the medium once they reach its edge. However, the transverse size of the medium is approximately given by the overlap of the two nuclei, and hence to zeroth approximation independent of $\sqrt{s}$ (beyond this, there is of course the weak logarithmic growth of all total cross sections with $\sqrt{s}$). Thus, for asymptotically high energies, the integration limits for a line integral along the parton path through the medium will not grow arbitrarily large, and the integrand, i.e. the density distribution, will be the main change. All these arguments indicate that $R_{AA} \rightarrow 1$ can thus be realized for large $\sqrt{s}$ despite the increased production of bulk matter. The remaining question is whether the LHC energy $\sqrt{s} = 5.5$ ATeV is large enough.
The result of the detailed calculation shown in Fig.~\ref{F-RAALHC} indicates that this is not the case. For this calculation, a hydrodynamical evolution based on an extrapolation of RHIC results using the EKRT saturation model \cite{Hydro} has been used to account for the increased medium density and lifetime. All other differences to the RHIC result are either plain kinematics, or can be traced back to the scale evolution of the MMFF.
\begin{figure}
\epsfig{file=R_AA_LHC_limit_f.eps, width=7.8cm}
\caption{\label{F-RAALHC}(Color online) The nuclear suppression factor $R_{AA}$ at LHC across the full kinematic range in 10\% central 5.5 ATeV Pb-Pb collisions. Shown are results in the energy loss picture (ASW) with the quark and gluon result shown separately, and a calculation in the medium-modified parton shower picture (YaJEM).}
\end{figure}
As far as the shape of $R_{AA}(P_T)$ is concerned, the LHC predictions for both ASW and YaJEM agree; quantitatively, however, they differ substantially. At the heart of this difference is that ASW is an infinite energy formalism in which the larger $\sqrt{s}$ of LHC as compared to RHIC is chiefly reflected in the harder slope of the parton spectra, but not directly in $P(\Delta E)$. In contrast, within YaJEM, in addition to the harder slope of the parton spectrum, there is an explicit scale evolution of the medium effect in the MMFF (see Fig.~\ref{F-FFnew}). Since both mechanisms tend to increase $R_{AA}$, the combined effect of scale evolution and parton spectrum slope leads, all things considered, to less final state suppression in the YaJEM result.
The shape of $R_{AA}(P_T)$ can be understood by the mechanisms also observed in the RHIC case. The initial steep rise and subsequent flattening reflects the changing slope of the parton spectra. Note that the transition from gluon dominated to quark-dominated hadron production is not an issue over most of the LHC kinematical range. The final enhancement above 2 TeV is again driven by the Fermi motion region in the nuclear PDFs. Unlike in the RHIC case, at LHC kinematics the suppression obtained from YaJEM for this region is not strong enough to mask the enhancement. Finally, close to the kinematic limit, a small $R_{AA}$ is obtained.
These results indicate that there is no reason to expect that the limit $R_{AA} \rightarrow 1$ can be observed even with LHC kinematics. However, the general trend towards larger $R_{AA}$ observed in the transition from RHIC to LHC indicates that the limit could be reached for asymptotically high energies over a large kinematic range, though not close to the kinematic boundary.
\section{Discussion}
So far, the nuclear suppression factor $R_{AA}$ has been observed experimentally only in a very limited kinematical region. In this region, no strong $P_T$ dependence has been observed. The main expectation of how $R_{AA}$ changes if observed over a larger kinematical range is that the suppression should eventually vanish and $R_{AA}$ approach unity. The results presented here show that this expectation is too simplistic.
In particular, it is wrong to think of the shape of $R_{AA}(P_T)$ as the result of any single cause. Instead, many effects, among them the slope change of the pQCD parton spectrum, the scale evolution of the medium modification effect, the transition from gluon-dominated to quark-dominated hadron production and also the initial state nuclear effects, all influence $R_{AA}(P_T)$ in a characteristic way. Moreover, it is not sufficient to think of going to higher $P_T$ to see the lessening of the suppression --- it matters how one approaches higher $P_T$, in particular whether one pushes a measurement further up in $P_T$ with higher statistics, or measures a different system at higher $\sqrt{s}$. Based on the results presented above, it appears unlikely that the simple limit $R_{AA} \rightarrow 1$ for sufficiently high $P_T$ can be reached even at LHC kinematics.
These findings may be of little practical value due to the impossibility of reaching out to a substantial fraction of the kinematic limit experimentally. However, theoretically they serve well to illustrate that even a hard probe observable like $R_{AA}$ is never ``simple'' in the sense that it reflects directly tomographic properties of the medium; rather, it is a convolution of many different effects which all need to be understood and discussed carefully. In particular, $R_{AA}$ cannot be interpreted as an observable reflecting solely the properties of the medium causing a final state effect. The shape of the underlying parton spectrum and initial state effects are equally important for understanding $R_{AA}$.
\begin{acknowledgments}
Discussions with Will Horowitz, Kari Eskola and Hannah Petersen are gratefully acknowledged. This work was supported by an Academy Research Fellowship from the Finnish Academy (Project 130472) and from Academy Project 115262. The numerical computations were carried out with generous support by Helen Caines on the {\bf bulldogk} cluster at Yale University.
\end{acknowledgments}
\section{Introduction}
Understanding the formation and evolution of galaxies is one of the
central themes in observational cosmology. To address this issue
considerable work is underway to construct complete galaxy samples
over a broad redshift range using many diverse ground and space-based
facilities. The ultimate aim of these studies is to construct
distribution functions of galaxy properties such as stellar mass, star
formation rate, morphology and nuclear activity as a function of
redshift and environment and to use these measurements in conjunction
with theoretical models to establish the evolutionary links from one
cosmic epoch to another. Essentially we would like to understand how
the galaxy population at distant times became the galaxies we see
around us today and to identify the physical mechanisms driving this
transformation. Improvements in both modelling and observations are the
ultimate drivers in achieving this understanding.
The cold dark matter (CDM) model of structure formation remains our
best description of how structures grow on large scales from minute
fluctuations in the cosmic microwave background
\citep{1996MNRAS.282..347M,Springel:2006p8432}. In this picture,
structures grow ``hierarchically'', with the smallest objects forming
first. Unfortunately, CDM only tells us about the underlying
collisionless dark component of the universe; what we observe are
luminous galaxies and stars. At large scales, the relationship between
dark matter and luminous objects (usually codified as the ``bias'') is
simple, but at small scales (less than a few megaparsecs) the effects
of baryonic physics intervene to make the relationship between
luminous and non-luminous components particularly
complex. Understanding how luminous objects ``light up'' in dense
haloes of dark matter is essentially the problem of
understanding galaxy formation. ``Semi-analytic'' models avoid
computationally intensive hydrodynamic simulations by utilising a set
of scaling relations which connect dark matter to luminous objects (as
will be explained later in this paper), and these models are now
capable of predicting how a host of galaxy properties, mass assembly and
star-formation rate evolve as a function of redshift.
Observationally, at redshifts less than one, various spectroscopic
surveys such as the VVDS
\citep{2005A&A...439..845L,2005A&A...439..863I,Pozzetti:2007p100}
DEEP2 \citep{Faber:2007p5801,Noeske:2007p5802} and zCOSMOS-bright
\citep{Lilly:2007p1799,Silverman:2009p5804,Mignoli:2009p5805} have
mapped the evolution of galaxy and active galactic nuclei (AGNs)
populations over fairly wide areas on the sky. There is now general
agreement that star formation in the Universe peaks at $1<z<2$ and
that $\sim 50\%-70\%$ of mass assembly took place in the redshift
range $1<z<3$
\citep{Connolly:1997p8767,Dickinson:2003p8724,Arnouts:2007p3665,Pozzetti:2007p100,Noeske:2007p5802,PerezGonzalez:2008p8736}. Alternatively
stated, half of today's stellar mass appears to be in place by
$z\sim1$ \citep{Drory:2005p8438,Fontana:2004p8883}. This is largely at
odds with the predictions of hierarchical structure formation models
which have difficulty in accounting for the large number of evolved
systems at relatively early times in cosmic history
\citep{Fontana:2006p3228}. Furthermore, there is some evidence that
around half the stellar mass in evolved or ``passive'' galaxies
assembled relatively recently \citep{Bell:2004p8521}. It is thus of
paramount importance to gather the largest sample of galaxies possible
at this redshift range.
In the redshift range $1.4 < z < 3.0$, identifiable
spectral features move out of the optical wave bands and so
near-infrared imaging and spectroscopy become essential. The role of
environment and large-scale structure at these redshifts is largely
unexplored \citep{Renzini:2009p5799}. It is also worth mentioning that
in addition to making it possible to select galaxies in this important
range, near-infrared galaxy samples offer several advantages compared
to purely optical selections (see, for example
\cite{Cowie:1994p1284}). They allow us to select $z>1$ galaxies in
the rest-frame optical, correspond more closely to a
stellar-mass-selected sample and are less prone to dust extinction.
As $k-$ corrections in $K-$ band are insensitive to galaxy type over a
wide redshift range, near-infrared-selected samples provide a fairly
unbiased census of galaxy populations at high redshifts (providing
that the extinction is not too high, as in the case of some submillimeter
galaxies). Such samples represent the ideal input catalogues from
which to extract targets for spectroscopic surveys as well as for
determining accurate photometric redshifts.
\cite{Cowie:1996p8471} carried out one of the first extremely deep,
complete $K-$ selected surveys and made the important discovery that
star-forming galaxies at low redshifts have smaller masses than
actively star-forming galaxies at $z\sim1$, a phenomenon known as
``downsizing''. Stated another way, the sites of star-formation
``migrate'' from higher-mass systems at high redshift to lower-mass
systems at lower redshifts. More recent $K$-selected surveys include
the K20 survey \citep{Cimatti:2002p3013}, reaching $\hbox{$K_{\rm s}$}\simeq 21.8$,
and the GDDS survey \citep{Abraham:2004p2980}, which reached
$\hbox{$K_{\rm s}$}\simeq 22.4$; both provide further evidence for this
picture. The areas covered by these surveys were small,
comprising only $\sim 55$ arcmin$^2$ and $\sim 30$ arcmin$^2$ for K20 and
GDDS respectively. While
\cite{Glazebrook:2004p3320} and \cite{Cimatti:2008p1800} provided spectroscopic
confirmation of evolved systems at $z>1.4$ and further evidence
for the downsizing picture \citep{Juneau:2005p8688}, their limited
coverage made them highly susceptible to the effects of cosmic
variance. It became increasingly clear that much larger samples of
passively evolving galaxies were necessary.
At $K<20$ the number of passive galaxies at $z\sim2$ is
small and spectroscopic followup of a complete magnitude-limited
sample can be time-consuming. For this reason a number of groups have
proposed and validated techniques based on applying cuts in
colour-colour space to isolate populations in certain redshift
ranges. Starting with the Lyman-break selection at $z\sim3$
\citep{Steidel:1996p9040}, similar techniques have been applied at
intermediate redshifts to select extremely red objects (EROs;
\cite{Hu:1994p9049}) or distant red galaxies (DRGs;
\cite{Franx:2003p310}) and the ``BzK'' technique used in this paper
\citep{Daddi:2004p76}. The advantage of these methods is that they are
easy to apply requiring at most only three or four photometric bands;
their disadvantage being that the relationships between each object
class is complicated and some selection classes contain galaxies with
a broad range of intrinsic properties
\citep{Daddi:2004p76,Lane:2007p295,Grazian:2007p398}. The relationship
to the underlying complete galaxy population can also be difficult to
interpret \citep{LeFevre:2005p8609}. Ideally, one would like to make
complete mass-selected samples at a range of redshifts, but such
calculations require coverage in many wave bands and can depend
sensitively on the template set
\citep{Pozzetti:2007p100,Longhetti:2009p9068}. Moreover, for redder
populations the mass uncertainties can be even larger;
\cite{Conroy:2009p9107} estimate errors as large as 0.6 dex at $z\sim
2$.
At $z\sim1.4$ \cite{Daddi:2004p76} used spectroscopic data from the
K20 survey in combination with stellar evolutionary tracks to define
their ``BzK'' technique. They demonstrated that in the $(B-z)$ versus
$(z-K)$ colour-colour plane, star-forming galaxies and evolved systems
are well separated at $z>1.4$, making it possible to accumulate larger
samples of passive galaxies at intermediate redshifts than was
possible previously with a simple one-colour criterion.
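For reference, a schematic implementation of these cuts (the numerical values are the standard ones of \cite{Daddi:2004p76}, as we quote them here; all magnitudes AB):
\begin{verbatim}
# Schematic BzK classification (Daddi et al. 2004), AB magnitudes.
def bzk_class(B, z, K):
    bzk = (z - K) - (B - z)
    if bzk >= -0.2:
        return "sBzK"    # candidate star-forming galaxy at z > 1.4
    if (z - K) > 2.5:
        return "pBzK"    # candidate passive galaxy at z > 1.4
    return "other"       # stars and galaxies at z < 1.4

print(bzk_class(B=25.0, z=23.0, K=20.0))   # -> "sBzK"
\end{verbatim}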
Subsequently, several other surveys have applied these techniques to
larger samples of near-infrared selected galaxies. In one of the
widest surveys to date, \cite{Kong:2006p294} constructed $K$-band
selected samples over a $\sim 920$ arcmin$^2$ field reaching
$\hbox{$K_{\rm s}$}\simeq 20.8$, and $\hbox{$K_{\rm s}$}\simeq 21.8$ over a 320 arcmin$^2$
sub-field. The exploration of a field of this size made it possible to
measure the clustering properties of star-forming and passive galaxy
samples and to establish that passively evolving galaxies in this
redshift range are substantially more strongly clustered than
star-forming ones, indicating that a galaxy type--density relation
reminiscent of the local morphology--density relation must already be
in place at $z\ifmmode{\mathrel{\mathpalette\@versim>} 1.4$.
The UKIDSS survey reaches $\hbox{$K_{\rm s}$}\sim22.5$ over a $\sim 0.62$-deg$^2$
area included in the Subaru-\textit{XMM Newton} Deep Survey and
\cite{Lane:2007p295} used this data set to investigate the different
commonly-used selected techniques at intermediate redshifts,
concluding that most bright DRG galaxies have spectral energy distributions
consistent with dusty star-forming galaxies or AGNs at $z\sim2$. They
observe a turn-over in the number counts of passive $BzK$ galaxies.
Other recent works include the MUSYC/ECDFS survey covering $\sim 900$
arcmin$^2$ to $\hbox{$K_{\rm s}$}\sim 22.5$ over the CDF South field
\citep{Taylor:2008p3500}, not to be confused with the GOODS-MUSIC
catalog of \cite{Fontana:2006p3228}, which covers 160 arcmin$^2$ of
the GOODS-South field to $\hbox{$K_{\rm s}$}\sim 23.8$. This $K$-band selected
catalogue, as well as the FIREWORKS catalog by \cite{Wuyts:2008p5806},
are based on the ESO Imaging Survey coverage of the GOODS-South
field\footnote{http://www.eso.org/science/goods/releases/20050930./}
These studies have investigated, amongst other topics, the evolution
of the mass function at $z\sim2$ and the number of red sequence
galaxies already in place at $z\sim2$.
Finally, one should mention that measuring the distribution of a
``tracer'' population, either red passive galaxies or normal field
galaxies, can provide useful additional information on the galaxy
formation process. In particular one can estimate the mass of the dark
matter haloes hosting the tracer population and, given a suitable
model for halo evolution, identify the present-day descendants of the
tracer population, as has been done for Lyman break galaxies at
$z\sim3$. A few studies have attempted this for passive galaxies at
$z\sim2$, but small fields of view have made these studies somewhat
sensitive to the effects of cosmic variance. The ``COSMOS'' project
\citep{2007ApJS..172....1S} comprising a contiguous $2~\deg^2$
equatorial field with extensive multi-wavelength coverage, is well
suited to probing the universe at intermediate redshift.
In this paper we describe a $K_{\rm s}$-band survey covering the
entire $\sim 1.9\deg^2$ COSMOS field carried out with WIRCam at
the Canada-France-Hawaii Telescope (CFHT). The addition of deep, high
resolution $K_{\rm s}$-band data to the COSMOS field enables us to address many of the
scientific issues outlined in this introduction, in particular to
address the nature of the massive galaxy population in the redshift
range $1<z<2$.
Our principal aims in this paper are to (1) present a catalogue of
$BzK$-selected galaxies in the COSMOS field; (2) present the number
counts and clustering properties of this sample in order principally
to establish the catalogue reliability; (3) present the COSMOS $K-$
imaging data for the benefit of other papers which make indirect use
of this dataset (for example, in the computation of photometric
redshifts and stellar masses).
Several papers in preparation or in press make use of the data
presented here. Notably, \cite{Ilbert:2009p7351} combine this data
with IRAC and optical data to investigate the evolution of the galaxy
mass function. The deep part of the zCOSMOS survey
\citep{Lilly:2007p5770} is currently collecting large numbers of galaxy
spectra at $z>1$ in the central part of the COSMOS field using a
colour-selection based on the $K$- band data set described here.
Throughout the paper we use a flat lambda cosmology ($\Omega_m~=~0.3$,
$\Omega_\Lambda~=~0.7$) with
$h~=~H_{\rm0}/100$~km~s$^{-1}$~Mpc$^{-1}$. All magnitudes are given in the
AB system, unless otherwise stated. The stacked $K_{\rm s}$ image
presented in this paper will be made publicly available at
IRSA.\footnote{\url{http://irsa.ipac.caltech.edu/Missions/cosmos.html}}
\section{Observations and data reductions}
\label{sec:observ-data-reduct}
\subsection{Observations}
\label{sec:observations}
WIRCam \citep{Puget:2004p4595} is a wide-field near-infrared detector
at the 3.6m CFHT which consists of four $2048\times2048$ cryogenically
cooled HgCdTe arrays. The pixel scale is $0.3\arcsec$ giving a field
of view at prime focus of $21\arcmin\times21\arcmin$. The data
described in this paper were taken in a series of observing runs
between 2005 and 2007. A list of these observations and the total
amount of on-sky integration time for each run can be found in
Table~\ref{tab:observations}.
Observing targets were arranged in a set of pointing centres distributed
in a grid across the COSMOS field. At each pointing centre
observations were shifted using the predefined ``DP10'' WIRCAM
dithering pattern, in which each observing cube of four micro-dithered
observations is offset by $1\arcmin 12\arcsec$ in RA and $18\arcsec$
in decl. Our overall observing grid was selected so as to fill all
gaps between detectors and provide a uniform exposure time per pixel
across the entire field and to ensure that the WIRCam focal plane was
adequately sampled to aid in the computation of the astrometric
solution. All observations were carried out in queue-scheduled
observing mode. Our program constraints demanded a seeing better than
$0.8\arcsec$ and an air mass less than 1.2. Observations were only
validated by CFHT if these conditions were met. In practice, a few
validated images were outside these specifications (usually due to
short-term changes in observing conditions), and were rejected at
subsequent processing steps.
Observations were made in the $K_{\rm s}$ filter (``$K$-short'';
\cite{Skrutskie:2006p3517}), which has a bluer cutoff than the
standard $K$ filter, and unlike the $K'$ filter
\citep{1992AJ....103..332W} has a ``cut-on'' wavelength close to
standard $K$. This reduces the thermal background and for typical
galaxy spectra decreases the amount of observing time needed to reach
a given signal-to-noise ratio (S/N) compared to the standard $K-$
filter. A plot of the WIRCam $K_{\rm s}$ filter is available from
CFHT.\footnote{\url{http://www.cfht.hawaii.edu/Instruments/Filters/wircam.html}}
\subsection{Pre-reductions}
\label{sec:pre-reductions}
WIRCam images were pre-reduced with IRAF\footnote{\url
{http://iraf.noao.edu/}} using a two-pass method. To reduce the data
volume the four micro-dithers were collapsed into a single frame at
the beginning of the reduction process. This marginally reduced the
image quality (by less than $0.1\arcsec$ FWHM) but made the reduction
process manageable on a single computer. After stacking the sub-frames
the data were bias subtracted and flat fielded using the bias and flat
field frames provided by the CFHT WIRCam queue observing team. A
global bad pixel mask was generated using the flat to identify the
dead pixels; the dark frames were used to identify hot pixels. A
median sky was then created and subtracted from the images using all
images in a given dither pattern. The images were then stacked using
integer pixel offsets and the WCS headers with IRAF's
\texttt{imcombine} task. These initial stacks of the science data are
used to generate relatively deep object masks through SExtractor's
\texttt{CHECKIMAGE\_TYPE = OBJECTS} output file. These object masks are used
to explicitly mask objects when re-generating the sky in the second
pass reduction. Supplementary masks for individual images are made to
mask out satellites or other bad regions not included in the global
bad pixel mask on a frame-by-frame basis.
In the second pass reduction, we individually sky-subtract each
science frame using the object masked images and any residual
variations in the sky are removed by subtracting a constant to yield a
zero mean sky level. These individual sky-subtracted images of a
single science frame are then averaged with both sigma-clipping to
remove cosmic rays and masking using a combination of the object mask
and any supplementary mask to remove the real sources and bad regions
in the sky frames. The region around the on-chip guide star is also
masked. These images are further cleaned of any non-constant residual
gradients as needed by fitting to the fully masked (object $+$
supplementary $+$ global bad pixel masks) background on a line-by-line
basis. Any frames with poor sky subtraction or other artifacts after
this step were rejected.
Next, amplifier cross-talk is removed for bright Two Micron All Sky
Survey (2MASS) stars; this cross-talk creates ``donut''-shaped ghosts at
regularly spaced intervals. These ghosts are unfortunately variable, with
their shape and level depending on the brightness of the star and the
amplifier the star falls on. We proceed in three steps: we first build
for each star a median image of the potential cross-talk pattern it
could generate. This is done by taking the median of $13\times13$
pixel sub-frames at 64-pixel intervals above and below the star
position, taking into account the full bad pixel mask. Next, we check
whether the cross-talk is significant by comparing the level of the
median image to the expected noise. If cross-talk is detected, we
subtract it by fitting, at every position where it could occur, the median
shape determined in the preceding steps.
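A simplified sketch of the template-building step (masking and the significance test are omitted; the frame and coordinates are synthetic placeholders):
\begin{verbatim}
# Median 13x13 cross-talk template from 64-pixel offsets around a star.
import numpy as np

def crosstalk_template(image, x, y, half=6, step=64):
    stamps = []
    ny, nx = image.shape
    for dy in range(step, ny, step):
        for sign in (+1, -1):
            yy = y + sign * dy
            if half <= yy < ny - half and half <= x < nx - half:
                stamps.append(image[yy - half:yy + half + 1,
                                    x - half:x + half + 1])
    return np.median(np.stack(stamps), axis=0) if stamps else None

frame = np.random.normal(0.0, 1.0, (2048, 2048))  # stand-in frame
print(crosstalk_template(frame, x=1000, y=1000).shape)  # (13, 13)
\end{verbatim}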
After pre-reduction, the TERAPIX tool \texttt{QualityFITS} was used to
create weight-maps, catalogues, and quality assessment Web pages for
each image. These quality assessment pages provide information on the
instrument point-spread function (PSF), galaxy counts, star counts,
background maps and a host of other information. Using this
information, images with focus or electronic problems were
rejected. \texttt{QualityFITS} also produces a weight-map for each
WIRCam image; this weight map is computed from the bad pixel mask and
the image flat-field. Observations with seeing FWHM greater than
$1.1\arcsec$ were rejected.
\begin{deluxetable}{ccc}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecolumns{3}
\small
\tabletypesize{\scriptsize}
\tablecaption{COSMOS-WIRCam observations\label{tab:observations}}
\tablewidth{0pt}
\tablehead{
\colhead{Year} & \colhead{RunID} &\colhead{total integration time (hrs)}
}
\startdata
2005 &BH36 & 8.1\\
2005 &BH89 & 3.5\\
2006 &AH97 & 12.8\\
2006 &AC99 & 2.1\\
2006 &BF97 & 12.7\\
2006 &BH22 & 13.3\\
2007 &AF34 & 6\\
2007 &AC20 & 6.1\\
2007 &AH34 & 17\\
\enddata
\end{deluxetable}
\subsection{Astrometric and photometric solutions}
\label{sec:astr-solut}
In the next processing step the TERAPIX software tool \texttt{Scamp}
\citep{2006ASPC..351..112B} was used to compute a global astrometric
solution using the COSMOS $i^*$ catalogue \cite{Capak:2007p267} as
an astrometric reference.
\texttt{Scamp} calculates a global astrometric solution. For each
astrometric ``context'' (defined by the QRUNID image keyword supplied
by CFHT) we derive a two-dimensional third-order polynomial which
minimises the weighted quadratic sum of differences in positions
between the overlapping sources and the COSMOS reference astrometric
catalogue derived from interferometric radio observations. Each
observation with the same context has the same astrometric
solution. Note that we use the \texttt{STABILITY\_MODE}
``instrument'' parameter setting which assumes that the derived
polynomial terms are identical exposure to exposure within a given
context but allows for anamorphosis induced by atmospheric
refraction. This provides reliable, robust astrometric solutions for
large numbers of images. The relative positions of each WIRCam
detector are supplied by a precomputed header file, which allows us to
use the initial, approximate astrometric solution supplied by CFHT
as a first guess.
Our internal and external astrometric accuracies are $\sim0.2$ and
$\sim0.1$ arcseconds respectively. \texttt{Scamp} produces an XML summary file
containing all details of the astrometric solution, and any image
showing a large reduced $\chi^2$ is flagged and rejected.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f1.ps}}
\caption{The difference between COSMOS-WIRCam total magnitudes and
2MASS magnitudes as a function of COSMOS-WIRCam total magnitude.}
\label{fig:2mass_comparison}
\end{figure}
We do not use \texttt{Scamp} to compute our photometric solutions (the
version we used (1.2.11MP) assumes that the relative gains between
the WIRCam detectors are fixed). Instead, we first use the astrometric
solution computed by \texttt{scamp} to match 2MASS stars with objects
in each WIRCam image. We then compute the zero-point of each WIRCam
detector by comparing the fluxes of 2MASS sources with the ones
measured by \texttt{SExtractor} in a two-pass process. First, saturated objects
brighter than $K_{\rm AB}=13.84$ magnitudes or objects where the
combined photometric error between SExtractor and 2MASS is greater
than 0.2 magnitudes are rejected. An initial estimate of the zero-point is
produced by computing the median of the difference between the two
cleaned catalogues. Any object where this initial estimate differs by
more than $3\sigma$ from the median is rejected, and the final
difference is computed using an error-weighted mean. The difference
between 2MASS and COSMOS catalogues in the stacked image is shown in
Figure~\ref{fig:2mass_comparison}. Note that the main source of
scatter at these magnitudes ($13.84 < K_{\rm AB} < 17$) comes from
uncertainties in 2MASS photometry. The magnitude range with good
sources in common between the 2MASS and WIRCam catalogues is quite narrow
(around two magnitudes), so setting these parameters is quite
important.
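The two-pass estimate can be summarised by the following sketch, using synthetic placeholder catalogues in place of our measured magnitudes:
\begin{verbatim}
# Two-pass zero-point: cuts, 3-sigma clipping around the median,
# then an error-weighted mean of the magnitude differences.
import numpy as np

def zero_point(m_wircam, m_2mass, err):
    good = (m_2mass > 13.84) & (err < 0.2)   # cuts described above
    d, e = (m_2mass - m_wircam)[good], err[good]
    med, sig = np.median(d), np.std(d)
    keep = np.abs(d - med) < 3.0 * sig       # reject outliers
    w = 1.0 / e[keep] ** 2
    return np.sum(w * d[keep]) / np.sum(w)

rng = np.random.default_rng(0)
m2 = rng.uniform(14.0, 17.0, 200)             # synthetic 2MASS mags
mw = m2 - 0.03 + rng.normal(0.0, 0.05, 200)   # synthetic WIRCam mags
print(zero_point(mw, m2, np.full(200, 0.05))) # ~0.03
\end{verbatim}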
Finally, all images and weight-maps were combined using \texttt{Swarp}
\citep{Bertin:2002p5282}. The tangent point used in this paper was
$10^{\mathrm h} 00^{\mathrm m} 15^{\mathrm s}$,$+02\degr 17\arcmin
34.6\arcsec$ (J2000) with a pixel scale of $0.15\arcsec$/pixel to match
COSMOS observations in other filters. The final image has an effective
exposure time of one second and a zero point of $31.40$ AB
magnitudes. This data will be publically available from the IRSA web
site. The seeing on the final stack is excellent, around $0.7\arcsec$
FWHM. Thanks to rigorous seeing constraints imposed during
queue-scheduled observations, seeing variation over the final stack is
small, less than $\sim5\%$.
\subsection{Complementary datasets}
\label{sec:compl-datas}
We also add Subaru Suprime-Cam $B_J$, $i^+$ and $z^+$ imaging data
(following the notation in \cite{Capak:2007p267}). We downloaded the
image tiles from IRSA\footnote{\url{http://irsa.ipac.caltech.edu/}}
and recombined them with \texttt{Swarp} to produce a
single large image astrometrically matched to the $K_{\rm s}$-band WIRCam
image (the astrometric solutions for the images at IRSA were
calculated using the same astrometric reference catalogue as our
current $K_{\rm s}$ image, and they share the same tangent point). Catalogues
were extracted using \texttt{SExtractor} \citep{1996A&AS..117..393B}
in dual-image mode, using the $K_{\rm s}$-band image as a detection image. An
additional complication arises from the fact that the $B$-band Subaru
images saturate at $B\sim19$. To account for this, we use the TERAPIX
tool \texttt{Weightwatcher} to create ``flag-map'' images in which all
saturated pixels are indicated (the saturation limit was determined
interactively by examining bright stars in the images). During the
subsequent scientific analysis, all objects which have flagged pixels
are discarded. We also manually masked all bright objects by defining
polygon region files, and automatically masked regions at fixed
intervals from each bright star to remove positive crosstalk in the
$K_{\rm s}$ image. The final catalogue covers a total area of
$1.9\deg^2$ after masking.
\section{Catalogue preparation and photometric calibration}
\label{sec:catal-prep}
\subsection{Computing colours}
\label{sec:comp-total-colo}
We used \texttt{SExtractor} in dual-image mode with the $K_{\rm
s}$-band as a reference image to extract our catalogues. For this
$K_{\rm s}$-selected catalogue, we measured $K_{\rm s}$-band total
magnitudes using \texttt{SExtractor}'s \texttt{MAG\_AUTO}
measurement, and aperture colours. Our aperture magnitudes are
measured in a diameter of $2\arcsec$, and we compute a correction to
``total'' magnitudes by comparing the flux of point-like sources in
this small aperture with measurements in a larger $6\arcsec$ diameter
aperture. We verified that for the $B_J$ and $z^+$ Subaru Suprime-Cam
images the difference between these apertures varies by less than
$0.05$ magnitudes, indicating that seeing variations are small across
the images; this was confirmed by an analysis of the variation of the
best-fitting Gaussian FWHM for the $B$, $z$ and $K$ images over the
full $2\deg^2$ field.
For extended bright objects this colour measurement will obviously be
dominated by the object nucleus; however, as the majority of the
$z\sim2$ objects studied in this paper are unresolved, distant
galaxies, we neglect this effect. We verified that for these objects,
variable-aperture colours computed using \texttt{MAG\_AUTO} gave
results very similar (within 0.1~mag) to these corrected aperture
colours.
Based on these considerations, we apply the following corrections to
our aperture measurements to compute colours:
\begin{equation}
i^+_{\rm tot} = i^+-0.1375
\end{equation}
\begin{equation}
K_{\rm tot} = K'-0.1568
\end{equation}
\begin{equation}
B_{\rm tot} = B_J - 0.1093
\end{equation}
For the $z^+$-band our corrections are more involved.
As noted in \cite{Capak:2007p267} the Subaru $z$-band images were
taken over several nights with variable seeing. To mitigate the
effects of seeing variation on the stacked image PSF individual
exposures were smoothed to the same (worst) FWHM with a Gaussian
before image combination. This works well at faint magnitudes where
many exposures were taken so the non-Gaussian wings of the PSF average
out. However, at bright magnitudes ($z^+\sim20$) the majority of
longer exposures are saturated, so the non-Gaussian wings of the PSF
in the few remaining exposures can bias aperture photometry. To
correct for this non-linear effect we apply a magnitude-dependent
aperture correction over the transition range $19<z^+<20$. We first
apply the correction to total magnitude,
\begin{equation}
z^+{_{\rm tot}} = z^+ - 0.1093
\end{equation}
We then apply a further correction: for $z^+<19.0$, $z^+_{\rm
tot}=z^+_{\rm tot}-0.023$, and for $z^+>20.0$, $z^+_{\rm tot}=z^+_{\rm tot}+0.1$. For
$19<z^+<20$,
\begin{equation}
z^+_{\rm tot} = z^+_{\rm tot}+(z^+_{\rm tot}-19.0) \times 0.077+0.023
\end{equation}
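The full $z^+$ correction can be summarised, for clarity, by the
following sketch (an illustrative function of our own; the text leaves
it ambiguous whether the bright/faint cuts apply to the aperture or
the total magnitude, so we assume the latter here):
\begin{verbatim}
def z_total(z_ap):
    """Total z+ (AB) magnitude from the aperture magnitude, with
    the magnitude-dependent aperture correction described above."""
    z_tot = z_ap - 0.1093        # aperture-to-total correction
    if z_tot < 19.0:             # assumed cut on corrected magnitude
        return z_tot - 0.023
    if z_tot > 20.0:
        return z_tot + 0.1
    # transition regime 19 < z+ < 20
    return z_tot + (z_tot - 19.0) * 0.077 + 0.023
\end{verbatim}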
Flux errors in \texttt{SExtractor} are underestimated (in part due to
correlated noise in the stacked images) and must be corrected. For a
given $4000\times4000$ image section, we compute a correction factor
as the ratio of the $1\sigma$ flux dispersion measured in a series of
$2\arcsec$ apertures placed on empty regions of the sky to the median
\texttt{SExtractor} error for all objects in that section. The mean
correction factor is derived from several such regions. For the $B$,
$i$ and $z$ images we multiply our flux errors by 1.5; for the
$K_{\rm s}$ image we apply a correction factor of $2.0$.
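Schematically, the correction factor can be estimated as below (an
illustrative sketch assuming a background-subtracted image and a
boolean object mask marking detected pixels; all names are our own):
\begin{verbatim}
import numpy as np

def error_correction_factor(image, objmask, sex_fluxerr,
                            n_aper=500, r_pix=6.7, seed=0):
    """Ratio of the empty-aperture flux rms (2-arcsec-diameter
    apertures at 0.15 arcsec/pixel) to the median SExtractor
    flux error."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    r = int(np.ceil(r_pix))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    circle = (xx**2 + yy**2) <= r_pix**2
    fluxes = []
    while len(fluxes) < n_aper:
        x = rng.integers(r, nx - r)
        y = rng.integers(r, ny - r)
        cut = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
        if objmask[cut][circle].any():   # aperture touches an object
            continue
        fluxes.append(image[cut][circle].sum())
    return np.std(fluxes) / np.median(sex_fluxerr)
\end{verbatim}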
\subsection{Catalogue completeness and limiting magnitude}
\label{sec:catal-compl-limit}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f2.ps}}
\caption{Recovery fraction for point-like sources, bulges and disks
for the central region of the COSMOS-WIRCam stack. Note that the
slight decline in completeness at relatively bright magnitudes ($K_{
\rm {AB}} \sim 22$) is due to confusion.}
\label{fig:compsim}
\end{figure}
We conducted an extensive set of realistic simulations to determine
the limiting magnitude as a function of object magnitude and profile.
In our simulations, we first created a noiseless image containing a
realistic mix of stars, disk-dominated and bulge-dominated galaxies
using the TERAPIX software \texttt{stuff} and
\texttt{skymaker}. Type-dependent luminosity functions and
their evolution with redshift are taken from the VVDS survey
\citep{Zucca:2006p7671,Ilbert:2006p3432}. The spectral energy
distribution (SED) of each galaxy type was modelled using the
empirical templates of \cite{1980ApJS...43..393C}. The disk size
distribution was modelled using the fitting formula and parameters
presented in \cite{deJong:2000p7685}.
Next, we used \texttt{SExtractor} to detect all objects on the
stacked image (using the same configuration used to detect objects
for the real catalogues) and to produce a \texttt{CHECKIMAGE} in
which all these objects were removed (keyword
\texttt{CHECKIMAGE\_TYPE -OBJECTS}). In the next step this empty
background was added to the simulated image, and \texttt{SExtractor}
was run again in ``assoc-mode'', in which a match is attempted between
each detected galaxy and the simulated galaxy catalogue
produced by \texttt{stuff}. In the last step, the magnitude
histogram of the recovered galaxies is compared to the
magnitude histogram of the input catalogue; this ratio gives the
completeness function for each type of object. In total
around 30,000 objects (galaxies and stars) in one image were used in
these simulations. The results from these simulations are shown
in Figure~\ref{fig:compsim}. The solid line shows the completeness
curve for stars and the dotted line for disks. The completeness
fraction is $70\%$ for disks and $90\%$ for stars and bulges at
$K_{\rm s}\sim23$.
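The final step of the simulation reduces to a simple histogram ratio,
sketched below (illustrative only; the bin edges and array names are
assumptions):
\begin{verbatim}
import numpy as np

def completeness(mag_input, mag_recovered, bins):
    """Completeness per magnitude bin: ratio of the histogram of
    recovered (assoc-matched) simulated objects to the input one."""
    n_in, _ = np.histogram(mag_input, bins=bins)
    n_out, _ = np.histogram(mag_recovered, bins=bins)
    # Guard against empty input bins
    return np.where(n_in > 0, n_out / np.maximum(n_in, 1), np.nan)
\end{verbatim}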
In addition to these simulations, we compute upper limits for each
filter based on simple noise statistics in apertures of $2\arcsec$
(after applying the noise correction factors listed above). The
$1\sigma$, $2\arcsec$ limiting magnitudes for our data at the centre of
the field are $29.1$, $27.0$ and $25.4$~AB magnitudes in $B$, $z$ and
$K_{\rm s}$ respectively.
A related issue is the uniformity of the limiting magnitude over the
full image. Figure~\ref{fig:depthmap} shows the limiting magnitude as
a function of position for the $K_{\rm s}$ stack. This was created by
converting the weight map to an rms error map, scaling this error map
from $1\sigma$ per pixel to units of $5\sigma$ in the effective area
of an optimally weighted $0.7\arcsec$ aperture, and then converting
this flux to units of AB magnitude. This depth map agrees well with
the completeness limit for point sources shown in
Figure~\ref{fig:compsim}. Note that future COSMOS-WIRCam observations
(which will be available in around one year's time) are expected to
further reduce the depth variations across the survey area.
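For reference, the conversion from weight map to limiting magnitude
can be sketched as follows (the effective aperture area in pixels,
\texttt{npix\_eff}, is an assumed input; the zero-point is that of the
stack):
\begin{verbatim}
import numpy as np

def depth_map(weight, npix_eff, zeropoint=31.40, nsigma=5.0):
    """Convert an inverse-variance weight map into a map of the
    n-sigma limiting AB magnitude in an effective aperture area
    of npix_eff pixels."""
    rms = 1.0 / np.sqrt(weight)                  # 1-sigma per pixel
    flux_lim = nsigma * rms * np.sqrt(npix_eff)  # n-sigma flux
    return zeropoint - 2.5 * np.log10(flux_lim)
\end{verbatim}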
\begin{figure}
\plotone{f3.ps}
\caption{Depth map for the COSMOS-WIRCam survey. This map was
constructed from the weight-map from the final stacked image. The
grey-scale corresponds to the magnitude at which a point source is
detected at $5\sigma$ in a $2\arcsec$ aperture.}
\label{fig:depthmap}
\end{figure}
Based on the considerations outlined in this section, we adopt $K_{\rm
s}=23$ as the limiting magnitude for our catalogues. At this limit
our catalogue is greater than $90\%$ and $70\%$ complete for
point sources and galaxies respectively. At this magnitude the number
of spurious sources (based on carrying out detections on an image of
the $K_{\rm s}$ stack multiplied by $-1$) is less than $1\%$ of the total.
\subsection{The $BzK$ selection}
\label{sec:bzk-selection}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f4.ps}}
\caption{The $(B-z)_{\rm AB}$ vs $(z-K)_{\rm AB}$ diagram for all
galaxies in the COSMOS field. Four distinct regions are shown: stars
(lower part of the diagram), galaxies (middle), star-forming
galaxies (left) and passively-evolving galaxies (top right). The solid
line shows the colours of stars in the $BzK$ filter set of
\citeauthor{Daddi:2004p76}, computed using the models of
\cite{Lejeune:1997p4534}.}
\label{fig:bzk_diagram}
\end{figure}
One of the principal objectives of this paper is to produce a
reliable catalogue of objects at $z\sim2$ using a colour-colour
selection technique. A number of different methods now exist to select
galaxies in colour-colour space. For instance, the ``dropout''
technique \citep{Steidel:1996p9040} makes use of the Lyman-break
spectral feature and the opacity of the high-redshift universe to
ultraviolet photons to select star-forming galaxies at $z\sim3$,
provided they are not too heavily reddened. Similar techniques can be
used at $1<z<3$ \citep{Adelberger:2004p9303,Erb:2003p9269}, and large
samples of UV-selected star-forming galaxies now exist at these
redshifts. At intermediate redshifts, spectroscopy has shown that
``ERO'' galaxies, selected according to their red optical-infrared
colours, comprise a mix of old passive galaxies and dusty
star-forming systems in the redshift range $0.8<z<2$
\citep{Cimatti:2002p3013,Yan:2004p9408}. The ``DRG'' criterion, which
selects galaxies with $(J-K)_{\rm Vega}>2.3$, is affected by similar
difficulties \citep{Papovich:2006p9480,Kriek:2006p7151}. On the other
hand, the ``$BzK$'' criterion introduced by \cite{Daddi:2004p76} can
reliably select galaxies in the redshift range $1.4\ifmmode{\mathrel{\mathpalette\@versim<} z\ifmmode{\mathrel{\mathpalette\@versim<} 2.5$
with relatively high completeness and low contamination. Based on the
location of star-forming and reddened systems from a spectroscopic
control sample in the $(B-z)$ versus $(z-K)$ plane, together with
considerations of galaxy evolutionary tracks, it has been adopted and
tested in several subsequent studies
\citep[e.g.,][]{Kong:2006p294,Lane:2007p295,Hayashi:2007p3726,Blanc:2008p3635,Dunne:2009p6812,Popesso:2009p6632}. $BzK$-selected
galaxies are estimated to have masses of $\sim 10^{11}M_\odot$ at
$z\sim2$ \citep{Daddi:2004p76,Kong:2006p294}.
Compared to other colour criteria, it offers the advantage of
distinguishing between actively star-forming and passively-evolving
galaxies at intermediate redshifts. It also sharply separates stars
from galaxies, and is especially efficient for $z>1.4$ galaxies. The
criterion was originally designed using the redshift evolution in the
$BzK$ diagram of various star-forming and passively-evolving template
galaxies (i.e., synthetic stellar populations) located over a wide
redshift interval. \citeauthor{Daddi:2004p76} carried out extensive
verifications of their selection criteria using spectroscopic
redshifts.
To make comparison with previous studies possible, we wanted our
photometric selection criterion to match as closely as possible the
original ``$BzK$'' selection proposed by \citeauthor{Daddi:2004p76} and
adopted by the authors cited above. As our filter set is not the same
as theirs, we applied small offsets (based on the tracks of
synthetic stars), following a procedure similar to that outlined in
\cite{Kong:2006p294}.
To account for the differences between our Subaru $B$ filter and the
VLT $B$ filter used by \cite{Daddi:2004p76}, we use an
empirically-derived transformation. We define $bz=B_{J_{\rm
total}}-z^+_{\rm tot}$; then for blue objects with $bz<2.5$,
\begin{equation}
bz_{\rm cosmos}=bz+0.0833\times bz+0.053
\end{equation}
otherwise, for objects with $bz>2.5$,
\begin{equation}
bz_{\rm cosmos}= bz + 0.27
\end{equation}
This ``$bz_{\rm cosmos}$'' quantity is the actual corrected
$(B-z)_{\rm AB}$
colour which we use in this paper.
Finally, we divide our catalogue into galaxies at $z<1.4$, stars,
star-forming galaxies and passively evolving galaxies at $1.4<z<2.5$,
by first defining the $BzK$ quantity introduced in
\cite{Daddi:2004p76}:
\begin{equation}
BzK\equiv(z-K)-(B-z)
\end{equation}
Star-forming galaxies at $z>1.4$ (hereafter $sBzK$) are selected as
those objects with $BzK>-0.2$. One should also note that the reddening
vector in the $BzK$ plane is approximately parallel to the $sBzK$
selection boundary, which ensures that the selection is not biased
against heavily reddened dusty galaxies.
Old, passively
evolving galaxies (hereafter $pBzK$) can be selected as those objects
which have
\begin{equation}
BzK<-0.2, (z-K)>2.5.
\end{equation}
Stars are selected using the criterion:
\begin{equation}
(z-K) < -0.5+(B-z) \times 0.3
\end{equation}
Finally, the full galaxy sample consists simply of objects which do
\textit{not} fulfill this stellarity criterion. The result of this
division is illustrated in Figure~\ref{fig:bzk_diagram}. The solid
line represents the colours of stars in the $BzK$ filter set of
\citeauthor{Daddi:2004p76} using the empirically corrected spectra
presented in \cite{Lejeune:1997p4534}, and it agrees very well with our
corrected stellar locus.
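For convenience, the complete division of the catalogue can be
summarised by the following sketch (an illustrative function of our
own, taking corrected total $B$, $z$ and $K$ AB magnitudes and
applying the \citeauthor{Daddi:2004p76} criteria as used here):
\begin{verbatim}
def classify_bzk(b, z, k):
    """Classify an object following the BzK criteria applied in
    this paper."""
    bz, zk = b - z, z - k
    if zk < -0.5 + 0.3 * bz:
        return 'star'
    bzk = zk - bz                # BzK = (z-K) - (B-z)
    if bzk > -0.2:
        return 'sBzK'            # star-forming galaxy, z > 1.4
    if zk > 2.5:
        return 'pBzK'            # passively evolving, 1.4 < z < 2.5
    return 'galaxy'              # field galaxy, z < 1.4
\end{verbatim}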
\section{Source counts}
\label{sec:source-number-counts}
We now present number counts of the three populations selected in the
previous Section.
\subsection{Star and galaxy counts}
\label{sec:galaxy-counts}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f5.ps}}
\caption{$K_{\rm s}-$ selected galaxy and star counts from the COSMOS survey
(open circles and stars respectively) compared to measurements from
recent wide-field near-infrared surveys.}
\label{fig:counts_all}
\end{figure}
\begin{deluxetable*}{ccccccc}
\tabletypesize{\scriptsize}
\tablecolumns{7}
\tablewidth{0pt}
\small
\tablecaption{Differential Number Counts\\ of Stars, Galaxies and
Star-Forming galaxies per Half-magnitude Bin.\label{tab:countsgals}}
\tablehead{
&\multicolumn{2}{c}{Galaxies} &
\multicolumn{2}{c}{Stars} &\multicolumn{2}{c}{$sBzK's$} \\
\colhead{$K_{\rm AB}$}&\colhead{$N_{\rm gal}$}&\colhead{$\log(N_{\rm gal})$~$\deg^{-2}$}&\colhead{$N_{\rm stars}$}&\colhead{$\log(N_{\rm stars})$~$\deg^{-2}$}&\colhead{$N_{\mathit sBzK}$}&\colhead{$\log(N_{\mathit sBzK})$~$\deg^{-2}$}}
\startdata
16.25& 102& 1.73&\nodata &\nodata &\nodata&\nodata\\
16.75& 204& 2.03&\nodata&\nodata &\nodata&\nodata\\
17.25& 487& 2.41&\nodata & \nodata&\nodata&\nodata\\
17.75& 838& 2.65&\nodata & \nodata&\nodata&\nodata\\
18.25& 1479& 2.89& 750& 2.60 &\nodata&\nodata\\
18.75& 2588& 3.14& 928& 2.69 &\nodata&\nodata\\
19.25& 4073& 3.33& 1038& 2.74& 24& 1.10\\
19.75& 6410& 3.53& 1138& 2.78 & 61& 1.51\\
20.25& 9433& 3.70& 1257& 2.82& 195& 2.01\\
20.75& 12987& 3.84& 1397& 2.87& 710& 2.57\\
21.25& 17027& 3.95& 1425& 2.88& 1982& 3.02\\
21.75& 22453& 4.07& 1586& 2.92& 4191& 3.35\\
22.25& 29502& 4.19& 1504& 2.90& 7684& 3.61\\
22.75& 36623& 4.29& 1596& 2.93& 11109& 3.77\\
\enddata
\tablecomments{Logarithmic counts are normalised to the effective area
of our survey, $1.89\deg^2$.}
\end{deluxetable*}
Figure~\ref{fig:counts_all} shows our differential galaxy number
counts compared to a selection of measurements from the literature. We
note that at intermediate magnitudes ($20<K_{\rm s}<22$) counts from the
four surveys presented here are remarkably consistent
\citep{Elston:2006p1644,1997ApJ...476...12H,Hartley:2008p5290}. At
$16<K_{\rm s}<20$, discrepancies between different groups concerning the
measurement of total magnitudes and star-galaxy separation lead to an
increased scatter. At these magnitudes, shot noise and
large-scale structure begin to dominate the number count errors.
The COSMOS-WIRCam survey is currently the only work to provide
unbroken coverage over the range $16 < K_{\rm s}<23$. In addition, our
colour-selected star-galaxy separation provides a very robust way to
reject stars from our faint galaxy sample. These stellar counts are
shown by the asterisks in Figure~\ref{fig:counts_all}. We note that at
magnitudes brighter than $K_{\rm s}\sim18.0$ our stellar number counts
become incomplete because of saturation in the Subaru $B$ image (our
catalogues exclude any objects with saturated pixels, which
preferentially removes point-like sources). Our galaxy and star number
counts are reported in Table~\ref{tab:countsgals}.
\subsection{$sBzK$ and $pBzK$ counts}
\label{sec:sbzk-pbzk-counts}
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f6.ps}}
\caption{Number counts for star-forming $BzK$ galaxies in the
COSMOS-WIRCam survey (open circles) compared to measurements from
the literature and the predictions of the model of
\citeauthor{Kitzbichler:2007p3449} (dashed line).}
\label{fig:counts_sbzk}
\end{figure}
Figure~\ref{fig:counts_sbzk} shows the counts of star-forming $BzK$
galaxies compared to measurements from the literature. These counts
are summarised in Table~\ref{tab:countsgals}. We note an excellent
agreement with the counts in \cite{Kong:2006p294} and the counts
presented by the MUSYC collaboration \citep{Blanc:2008p3635}. However,
the counts presented by the UKIDSS-UDS group
\citep{Lane:2007p295,Hartley:2008p5290} are significantly offset
compared to our counts at bright magnitudes, and only become
consistent with ours by $K_{\rm s}\sim 22$. These authors attribute the
discrepancy to cosmic variance, but we find photometric offsets a more
likely explanation (see below).
Figure~\ref{fig:pbzk_newsel} shows in more detail the zone occupied by
passive galaxies in Figure~\ref{fig:bzk_diagram}. Left of the diagonal
line are objects classified as star-forming $BzK$ galaxies. Objects
not detected in $B$ are plotted as right-pointing arrows, with colours
computed from the upper limit of their $B$ magnitudes. An
object is considered undetected if the flux in a $2\arcsec$ aperture
is less than the corrected $1\sigma$ noise limit; for the $B$ band
this corresponds to approximately 29.1 mag. This criterion
means that, in addition to the galaxies already in the $pBzK$ selection
box, fainter $sBzK$ galaxies with $B$-band non-detections (shown with
the green arrows) may be scattered rightward into the $pBzK$ region.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f7.ps}}
\caption{Selection diagram for the passive $BzK$ population. Objects
with rightward-pointing arrows are galaxies plotted at the lower
limit of their $(B-z)_{\rm AB}$ colours. Circles are objects
selected as $pBzK$ galaxies; normal $sBzK$ galaxies are not shown.}
\label{fig:pbzk_newsel}
\end{figure}
Counts for our passive galaxy population including these
``additional'' objects are represented by the hatched region in
Figure~\ref{fig:counts_pbzk}. The upper limit for the source counts in
this figure represents the case in which \textit{all} the $(z-K) >
2.5$ sources undetected in $B$ are scattered into the $pBzK$
region. Even accounting for these additional objects, we unambiguously
observe a flattening and subsequent turnover in the passive galaxy
counts at around $K_{\rm s}\sim22$, well above the completeness limit of
either our $K_{\rm s}$- or $B$-band data, in agreement with
\cite{Hartley:2008p5290}.
This upper limit, however, is a conservative estimate. We have made a
better estimate by carrying out a stacking analysis of the objects not
detected in $B$ in both the passive and star-forming regions of the
$BzK$ diagram. For each apparent $K_{\rm s}$ magnitude bin we
median-combine Subaru $B$-band postage stamps for objects with no
$B$-band detection, producing separate stacks for the star-forming and
passive regions of the $BzK$ diagram. In both cases, objects below our
detection limit are clearly visible (better than a $3\sigma$
detection) in our stacked images in each magnitude bin to $K_{\rm
s}\sim23$. By taking the mean $B$ magnitude of the stacked source to
be the average magnitude of our undetected sources, we can compute the
average $(B-z)$ colour of our undetected sources, and reassign their
location in the $BzK$ diagram if necessary. This experiment shows that
at most only $15\%$ of the star-forming $BzK$ galaxies undetected in
$B$ move to the passive $BzK$ region.
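The stacking measurement itself is straightforward; a minimal sketch
(assuming background-subtracted postage stamps of identical size, and
an aperture radius and zero-point that are our own illustrative
choices) is:
\begin{verbatim}
import numpy as np

def stacked_mag(cutouts, zeropoint=31.40, r_pix=6.7):
    """Median-combine B-band postage stamps and measure the AB
    magnitude of the stacked source in a circular aperture."""
    stack = np.median(np.stack(cutouts), axis=0)
    ny, nx = stack.shape
    yy, xx = np.mgrid[:ny, :nx]
    circle = (xx - nx // 2)**2 + (yy - ny // 2)**2 <= r_pix**2
    flux = stack[circle].sum()
    return zeropoint - 2.5 * np.log10(flux)
\end{verbatim}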
Our number counts are summarised in Table~\ref{tab:countspbzk}, which
also indicates the upper count limits based on $B-$ band
observations. As before, our counts are in good agreement with those
presented in \cite{Kong:2006p294} and \cite{Blanc:2008p3635} but
are above the counts in \cite{Hartley:2008p5290}.
To investigate the origin of this discrepancy, we compared our $BzK$
diagram with \citeauthor{Hartley:2008p5290}'s, which should also be in
the \citeauthor{Daddi:2004p76} filter set. We superposed our $BzK$
diagram on that of \citeauthor{Hartley:2008p5290} and found that the
\citeauthor{Hartley:2008p5290} stellar locus is bluer by $\sim0.1$ in
both $(B-z)$ and $(z-K)$ compared to our measurements\footnote{Since
the first draft of this manuscript was prepared, a communication
with W. Hartley has confirmed that the transformations to the Daddi
et al. system were incorrectly computed in their work.}. We have
already seen that our stellar locus agrees well with the theoretical
stellar sequence computed using the \cite{Lejeune:1997p4534} synthetic
spectra in the \citeauthor{Daddi:2004p76} filter set and also with the
stellar locus presented in \citeauthor{Daddi:2004p76} and
\citeauthor{Kong:2006p294}. We conclude therefore that the number
counts discrepancies arise from an incorrect transformation to the
\citeauthor{Daddi:2004p76} filter set.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f8.ps}}
\caption {Differential number counts for the passive $BzK$ population
in the COSMOS-WIRCam survey (open circles) compared to measurements
from the literature and the predictions of the model of
\citeauthor{Kitzbichler:2007p3449} (dashed line). The shaded
region represents an upper limit on the number counts of passive
$BzKs$ if all star-forming BzKs in Figure~\ref{fig:pbzk_newsel} were
moved into the region of the figure occupied by the passively
evolving population. }
\label{fig:counts_pbzk}
\end{figure}
\begin{deluxetable}{ccccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablecaption{Differential Number Counts for the passive $BzK$ population.\label{tab:countspbzk}}
\tablehead{
&\multicolumn{2}{c}{{Passive $BzK$}}
&\multicolumn{2}{c}{{{Passive $BzK$ (upper limits)}}\tablenotemark{a}} \\
\colhead{$K_{\rm AB}$} & \colhead{$N_{\mathit pBzK}$}& \colhead{$\log(N_{\mathit pBzK})$~$\deg^{-2}$} &
\colhead{$N_{\mathit pBzK}$} & \colhead{$\log(N_{\mathit pBzK})$~$\deg^{-2}$}}
\startdata
19.25&13&0.84&13&0.84\\
19.75&69&1.56&69&1.56\\
20.25&265&2.15&280&2.17\\
20.75&553&2.47&621&2.52\\
21.25&837&2.65&1015&2.73\\
21.75&963&2.71&1285&2.83\\
22.25&757&2.60&1229&2.81\\
22.75&475&2.40&984&2.72\\
\enddata
\tablecomments{{Logarithmic counts are normalised to the effective area
of our survey, $1.89\deg^2$.}}
\tablenotetext{a}{The upper limit to the $pBzK$ was computed by including
all the $sBzK$ galaxies undetected in $B$ with $(z-K)>2.5$}
\end{deluxetable}
\subsection{Comparison with the Semi-analytic Model of~\citeauthor{Kitzbichler:2007p3449}}
\label{sec:comparison-with-semi}
In Figures \ref{fig:counts_all},~\ref{fig:counts_sbzk}
and~\ref{fig:counts_pbzk} we show counts of galaxies extracted from
the semi-analytical model presented in
\cite{Kitzbichler:2007p3449}.
Semi-analytic models start from either an analytic ``merger tree'' of
dark matter haloes or, in the case of the model used here, merger
trees derived from a numerical simulation, the Millennium simulation
\citep{2005Natur.435..629S}. Galaxies are ``painted'' onto dark
matter haloes using a variety of analytical recipes which include
treatments of gas cooling, star-formation, supernovae feedback, and
black hole growth by accretion and merging. An important recent
advance has been the addition of ``radio mode'' AGN feedback
\citep{Bower:2006p7511,2007MNRAS.374.1303C}, which helps provide a
better fit to observed galaxy luminosity functions. The \citeauthor
{Kitzbichler:2007p3449} model is derived from the work presented by
\cite{Croton:2006p7487} and further refined by
\cite{DeLucia:2007p7510}. It differs from these papers only in the
inclusion of a refined dust model. We refer the reader to these works
for further details. An extensive review of semi-analytic modelling
techniques can be found in \cite{Baugh:2006p782}.
To derive counts of quiescent galaxies, we follow the approach of
\cite{Daddi:2007p2924} and select all galaxies at $z>1.4$ in the
star-formation rate--mass plane (Figure 18 of
\citeauthor{Daddi:2007p2924}) which have star-formation rates less
than three times the median value for a given mass. Star-forming
objects were defined as those galaxies in this redshift range which do
\textit{not} obey this criterion. (Unfortunately the publicly
available data do not contain all the COSMOS bands, so we cannot
directly apply the $BzK$ selection criterion to them.)
In all three plots, the models over-predict the number of faint
galaxies, an effect already observed for the $K-$ selected samples
investigated in \citeauthor{Kitzbichler:2007p3449}. We also
note that adding an upper redshift cut to the model catalogues to
match our photometric redshift distributions (see later) does not
appreciably change the number of predicted galaxies.
Considering in more detail the counts of quiescent galaxies, we find
that at $20<\hbox{$K_{\rm s}$}<20.5$ the models are below the observations by a
factor of two, whereas at $22.5<\hbox{$K_{\rm s}$}<23.0$ the model counts are in
excess of the observations by around a factor of 1.5. Given the narrow
redshift range of our passive galaxy population, apparent \hbox{$K_{\rm s}$}
magnitude is a good proxy for absolute \hbox{$K_{\rm s}$}~magnitude, which can
itself be directly related to the underlying stellar mass
\citep{Daddi:2004p76}. This implies that these models predict too many
small, low-mass passively-evolving galaxies and too few large,
high-mass passively-evolving galaxies at $z\sim1.4$.
It is instructive to compare our results with Figure 7 from
\citeauthor{Kitzbichler:2007p3449}, which shows the stellar mass
function for their models. At $z\sim2$, the models both under-predict
the number of massive objects and over-predict the number of less
massive objects, an effect mirroring the overabundance of luminous
$pBzK$ objects with respect to the \citeauthor{Kitzbichler:2007p3449}
model seen in our data.
A similar conclusion was drawn by \cite{Fontana:2006p3228}, who
compared predictions for the galaxy stellar mass function from a
variety of models with observations of massive galaxies up to
$z\sim4$ in the GOODS field. They also concluded that models
incorporating AGN feedback similar to that of
\citeauthor{Kitzbichler:2007p3449} under-predict the number of
high-mass galaxies.
Thanks to the wide-area, deep $B-$ band data available in the COSMOS
field, we are able to make reliable measurements of the number of
faint passive $BzK$ galaxies. Reassuringly, the turnover in counts of
passive galaxies observed in our data is qualitatively in agreement
with the measurements of the faint end of the mass function of
quiescent galaxies at $1.5<z<2$ made in \cite{Ilbert:2009p7351}.
\section{Photometric redshifts for the $pBzK$ and $sBzK$ population}
\label{sec:phot-redsh-pbzk}
For many years studies of galaxy clustering at $z\sim2$ have been
hindered by our imperfect knowledge of the source redshift
distribution and small survey fields. Coverage of the $2\deg^2$ COSMOS
field in thirty broad, intermediate and narrow photometric bands has
enabled the computation of very precise photometric redshifts
\citep{Ilbert:2009p4457}.
These photometric redshifts were computed using the deep Subaru data
described in \cite{Capak:2007p267} combined with the intermediate-band
data, the $K_{\rm s}$ data presented in this paper, $J$-band data from
the near-infrared camera WFCAM at the United Kingdom Infrared
Telescope, and IRAC data from the Spitzer-COSMOS survey
\citep[sCOSMOS;][]{Sanders:2007p3108}. These near- and mid-infrared
bandpasses are an essential ingredient for computing accurate
photometric redshifts in the redshift range $1.4<z<2.5$, in particular
because they permit the location of the $4000$~\AA~break to be
determined accurately. Moreover, spectroscopic redshifts of 148 $sBzK$
galaxies with $\bar z\sim 2.2$ from the early zCOSMOS survey
\citep{Lilly:2007p1799} have been used to check and train these
photometric redshifts in this important redshift range.
A set of templates generated by \cite{Polletta:2007p6857} using the
GRASIL code \citep{Silva:1998p6890} is used. The nine galaxy
templates of \citeauthor{Polletta:2007p6857} include three SEDs of
elliptical galaxies and six spiral galaxy templates (S0, Sa, Sb, Sc,
Sd, Sdm). This library is complemented with 12 additional blue
templates generated using the models of \cite{Bruzual:2003p963}.
The photometric redshifts are computed using a standard $\chi^2$
template-fitting procedure (using the ``Le Phare'' code). Biases in
the photometric redshifts are removed by iterative calibration of the
photometric band zero-points. This calibration is based on 4148
spectroscopic redshifts at $i^+_{\rm AB}<22.5$ from the zCOSMOS survey
\citep{Lilly:2007p1799}. As suggested by the data, two different dust
extinction laws \citep{Prevot:1984p6814,Calzetti:2000p6839} were
applied, specific to the different SED templates. A new method to
account for emission lines was implemented, using relations between the
UV continuum and the emission line fluxes associated with star
formation activity.
Based on a comparison between photometric redshifts and 4148
spectroscopic redshifts from zCOSMOS, we estimate an accuracy of
$\sigma_{\Delta z/(1+z)}=0.007$ for galaxies brighter than
$i^+_{\rm AB}=22.5$. We extrapolate this result to fainter magnitudes
based on an analysis of the $1\sigma$ errors on the photometric
redshifts. At $z<1.25$, we estimate accuracies of $\sigma_z=0.02$ and
$\sigma_z=0.07$ at $i^+_{\rm AB}\sim 24$ and $i^+_{\rm AB}<25.5$
respectively.
\subsection{Photometric redshift distributions}
\label{sec:phot-redsh-distr}
\begin{figure}\resizebox{\hsize}{!}{\includegraphics{f9.ps}}
\caption{The redshift distribution of field galaxies (top
panel), $sBzK$ galaxies (middle panel) and $pBzK$ galaxies (bottom
panel), computed using the 30-band photometric redshifts presented in
\cite{Ilbert:2009p4457}.}
\label{fig:zeddist}
\end{figure}
We have cross-correlated our catalogue with these photometric
redshifts to derive redshift selection functions for each
photometrically-defined galaxy population. Note that although the
photometric redshifts are based on an optically-selected catalogue,
this catalogue is very deep ($i'<26.5$) and contains almost all the
objects present in the $K_{\rm s}$-band selected catalogue. At $K_{\rm
s}<23.0$, 138,376 objects were successfully assigned photometric
redshifts, representing $96\%$ of the total galaxy population.
Figure~\ref{fig:zeddist} shows the redshift distribution for all
$K_{\rm s}$-selected galaxies, as well as for $BzK$-selected
passively-evolving and star-forming galaxies in the magnitude range
$18.0<\hbox{$K_{\rm s}$}<23.0$. We have computed the redshift selection function in
several magnitude bins and found that the effective redshift $z_{\rm
eff}$ does not depend significantly on apparent magnitude for the
$sBzK$ and $pBzK$ populations.
Because it uses only the blue grism of the VIMOS spectrograph at the
VLT, the zCOSMOS-Deep survey is not designed to target $pBzK$
galaxies, and so no spectroscopic redshifts were available to train
the photometric redshifts of these objects over the COSMOS field. At
these redshifts the main spectral features of $pBzK$ galaxies, namely
Ca II H \& K and the 4000 \AA\ break, have moved to the near
infrared. Hence, optical spectroscopy can only deliver redshifts based
on identifying the so-called Mg-UV feature at around 2800~\AA~in the
rest frame. All in all, spectroscopic redshifts of passive galaxies at
$z>1.4$ are now available for only a few dozen objects
\citep{Glazebrook:2004p3320,Daddi:2005p68,Kriek:2006p7151,McGrath:2007p7049,Cimatti:2008p1800}. We note
that the average spectroscopic redshift of these objects ($\bar
z\sim1.7$) indicates that the average photometric redshift of $\bar z
\sim 1.4$ of our $pBzK$ galaxies to the same $K_{\rm s}$-band limit may be
systematically underestimated. For the medium to short term, one has
unavoidably to rely on photometric redshifts when redshifts are needed
for large numbers of passive galaxies at $z>1.4$.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f10.ps}}
\caption{Fraction of the total galaxy population of DRG, $pBzK$ and
$sBzK$ galaxy classifications. Poisson error bars in each bin have
been offset slightly for clarity. The thick solid line represents the sum
of $pBzK$ and $sBzK$ components.}
\label{fig:rel_dndz}
\end{figure}
From Figure~\ref{fig:zeddist} we can estimate the relative
contribution of each classification type to the total number of
galaxies out to at least $z\sim2$. This is shown in
Figure~\ref{fig:rel_dndz}, where we show for each bin in photometric
redshift the number of each selection class as a fraction of the
total number of galaxies. Upper and lower confidence limits are
computed using Poisson statistics and the small-number approximation
of \cite{Gehrels:1986p1797}; in general we can make reliable
measurements to $z\sim2$. In the redshift range $1<z<3$ at
$\hbox{$K_{\rm s}$}\sim22$ the $pBzK$ population represents $\sim20\%$ of
the total number of galaxies, in contrast to $\sim70\%$ for
$sBzK$-selected galaxies. The sum of both components represents at
most $\sim 80\%$ of the total population at $z\sim2$.
We estimate the fraction of DRG galaxies using the $J$-band data
described in \cite{Capak:2007p267}. DRG-selected galaxies remain an
important fraction of the total galaxy population, reaching
$\sim 50\%$ of the total at $z\sim2$. This is in contrast with
\cite{Reddy:2005p2062}, who found no significant overlap in the
spectroscopic redshift distributions of $pBzK$ and DRG galaxies.
Our work confirms that \textit{most} bright passive-$BzK$
galaxies lie in a narrower redshift range than either the $sBzK$ or
the DRG selection. We also see that the distribution in photometric
redshifts for the DRG galaxies is quite broad. Similar conclusions
were reached by \cite{Grazian:2007p398} and \cite{Daddi:2004p76}
using much smaller, fainter samples of galaxies.
\section{Clustering properties}
\label{sec:clust-prop-full}
\subsection{Methods}
\label{sec:meth}
For each object class we measure $w$, the angular correlation
function, using the standard \citet{1993ApJ...412...64L} estimator:
\begin{equation}
w ( \theta) ={\mbox{DD} - 2\mbox{DR} + \mbox{RR}\over \mbox{RR}}
\label{eq:1.ls}
\end{equation}
where $DD$, $DR$ and $RR$ are the number of data--data, data--random
and random--random pairs with separations between $\theta$ and
$\theta+\delta\theta$. These pair counts are appropriately
normalised; we typically generate random catalogues with ten times
more points than the input galaxy catalogues. We compute $w$ at
a range of angular separations in logarithmically spaced bins from
$\log(\theta)=-3.2$ to $\log(\theta)=-0.2$ with
$\delta\log(\theta)=0.2$, where $\theta$ is in degrees. At each
angular bin we use bootstrap errors to estimate the errors in
$w$. Although these are not in general a perfect substitute for a full
estimate of cosmic variance (e.g., using an ensemble of numerical
simulations), they should give the correct magnitude of the
uncertainty \citep{1992ApJ...392..452M}.
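In practice the estimator reduces to a normalised combination of pair
counts; a minimal sketch (taking raw, unnormalised pair-count
histograms as inputs; the names are our own) is:
\begin{verbatim}
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_random):
    """Landy-Szalay estimator from raw pair counts per angular bin."""
    DD = dd / (n_data * (n_data - 1) / 2.0)  # normalised pair counts
    DR = dr / (n_data * n_random)
    RR = rr / (n_random * (n_random - 1) / 2.0)
    return (DD - 2.0 * DR + RR) / RR
\end{verbatim}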
We use a sorted linked-list estimator to minimise the computation time
required. The fitted amplitudes quoted in this paper assume a
power-law slope for the galaxy correlation function,
$w(\theta)=A_w\theta^{1-\gamma}$; however, this amplitude
must be adjusted for the ``integral constraint'' correction, arising
from the need to estimate the mean galaxy density from the sample
itself. This can be estimated as \citep[e.g.][]{2005ApJ...619..697A}
\begin{equation}
C = {1 \over {\Omega^2}} \int\!\!\! \int w(\theta)\, d\Omega_1\, d\Omega_2.
\label{eq:5}
\end{equation}
Our quoted fitted amplitudes are corrected for this integral
constraint, i.e., we fit
\begin{equation}
w(\theta)=A_w(\theta^{1-\gamma}-C).
\end{equation}
For the COSMOS field, $C=1.42$ for $\gamma=1.8$. An added complication
is that the integral constraint correction depends weakly on the
slope, $\gamma$; when fitting $\gamma$ and $A_w$ simultaneously we use
an interpolated look-up table of values for $C$ in our minimisation
procedure.
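The corrected fit at fixed slope can be sketched as follows (an
illustrative sketch using \texttt{scipy}; when $\gamma$ is also left
free, $C$ must be looked up for each trial slope as described above):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_amplitude(theta, w, werr, C=1.42, gamma=1.8):
    """Fit w(theta) = A_w (theta^(1-gamma) - C) at fixed slope;
    theta in degrees, C appropriate to the field geometry."""
    model = lambda t, A: A * (t**(1.0 - gamma) - C)
    popt, pcov = curve_fit(model, theta, w, sigma=werr, p0=[1e-3])
    return popt[0], float(np.sqrt(pcov[0, 0]))
\end{verbatim}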
Finally, it should be mentioned that in recent years it has become
increasingly clear that the power-law approximation for $w(\theta)$ is
no longer appropriate \citep[see for example][]{Zehavi:2004p856}. In
reality, the observed $w(\theta)$ is the sum of the contributions of
galaxy pairs in separate dark matter haloes and of pairs within the
same halo; it is only in a few fortuitous circumstances that the
observed $w(\theta)$ is well approximated by a power law of slope
$\gamma=1.8$. We defer a detailed investigation of the shape of
$w(\theta)$ in terms of these ``halo occupation models'' to a second
paper, but these points should be borne in mind in the forthcoming
analysis.
\subsection{Clustering of galaxies and stars}
\label{sec:galaxies-stars}
To verify the stability and homogeneity of our photometric calibration
over the full $2~\deg^2$ of the COSMOS field, we first compute
the correlation function for stellar sources. These stars, primarily
residing in the Galactic halo, should be
unclustered. They are identified as objects below the diagonal line in
Figure~\ref{fig:bzk_diagram}. This classification technique is more
robust than the usual size or compactness criteria, which can include
unresolved galaxies.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f11.ps}}
\caption{Clustering amplitude for stars and galaxies (open and filled
circles respectively) in our catalogue selected with
$18.0<K_{\rm AB}<23.0$ as a function of angular separation $\theta$. The
inset shows measurements at large scales. The clustering amplitude
of stars is consistent with zero at all angular scales.}
\label{fig:w_stars}
\end{figure}
The amplitude of $w(\theta)$ as a function of angular scale for stars
and faint galaxies is shown in Figure~\ref{fig:w_stars}. For
comparison we have also plotted the clustering amplitude for our
faintest $K_{\rm s}$-selected galaxy sample. The inset plot shows a
zoom on measurements at large scales, where the amplitude of $w$
is very low. At each angular bin our stellar correlation function is
consistent with zero out to degree scales, down to a limiting magnitude
of $K_{\rm s}=23$. If we fit a power-law correlation function of slope
0.8 to our stellar clustering measurements we find $A_w=(1.7\pm
1.7)\times 10^{-4}$ at $1^\circ$; in comparison, the faintest
galaxy correlation function signal we measure is
$A_w=(9.9\pm1.5)\times10^{-4}$, around six times larger.
Figure~\ref{fig:w_slices} shows $w(\theta)$ for galaxies
in three magnitude slices. It is clear that the slope
of $w$ becomes shallower at fainter magnitudes. At small
separations (less than $1\arcsec$) $w$ decreases due to object
blending. Our fitted correlation amplitudes and slopes for field
galaxies are reported in Table~\ref{tab:fitw}.
\begin{deluxetable}{ccc}
\tablecaption{Angular Correlation Amplitudes\label{tab:fitw}}
\tablehead{
\colhead{$K_{\rm AB}$} &\colhead{$A_w(1\arcmin)\times10^{-2}$} & \colhead{$\gamma$}
}
\tablewidth{0pt}
\startdata
18.5 & $16.70\pm 4.03$ &$ 1.75\pm 0.06$\\
19.0 & $12.10\pm 2.39$ &$ 1.74\pm 0.05$\\
19.5 & $ 9.86\pm 1.43$ &$ 1.76\pm 0.03$\\
20.0 & $ 7.83\pm 0.98$ &$ 1.72\pm 0.03$\\
20.5 & $ 6.77\pm 0.70$ &$ 1.67\pm 0.02$\\
21.0 & $ 5.69\pm 0.54$ &$ 1.61\pm 0.02$\\
21.5 & $ 4.71\pm 0.42$ &$ 1.59\pm 0.02$\\
22.0 & $ 3.81\pm 0.32$ &$ 1.59\pm 0.02$\\
22.5 & $ 3.10\pm 0.26$ &$ 1.59\pm 0.02$\\
23.0 & $ 2.57\pm 0.20$ &$ 1.55\pm 0.02$\\
\enddata
\end{deluxetable}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f12.ps}}
\caption{Clustering amplitude $w$ for galaxies in three slices of
apparent magnitude. The dotted line shows a fit to a slope
$\gamma=1.8$ with an integral constraint appropriate to the size of
our field applied.}
\label{fig:w_slices}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f13.ps}}
\caption{Lower panel: clustering amplitude at 1\arcmin~as a function
of $K_{\rm s}$ limiting magnitude for the full galaxy sample. Upper
panel: best-fitting slope over the entire angular range of our survey
($-3.2<\log(\theta)<-0.2$). }
\label{fig:scaling_w}
\end{figure}
In Figure~\ref{fig:scaling_w} we investigate further the dependence of
the slope $\gamma$ on the $K_{\rm s}$ limiting magnitude. Here we fit
for the slope and amplitude simultaneously for all slices. At bright
magnitudes the slope corresponds to the canonical value of $\sim 1.8$;
towards intermediate magnitudes it becomes steeper, and at fainter
magnitudes progressively flatter. It is interesting to compare this
Figure with the COSMOS optical correlation function presented in
Figure 3 of \cite{2007ApJS..172..314M}, which also showed that the
slope of the angular correlation function becomes progressively
shallower at fainter magnitudes. One possible interpretation of this
behaviour is that at bright magnitudes our $K_{\rm s}$-selected samples
are dominated by bright, red galaxies which have an intrinsically
steeper correlation function slope, while our fainter samples are
predominantly bluer, intrinsically fainter objects with a shallower
intrinsic correlation function slope.
Finally, it is instructive to compare our field galaxy clustering
amplitudes with literature measurements, as our survey is by far the
largest at these magnitude limits. Figure~\ref{fig:scaling_w_comp}
shows the scaling of the correlation amplitude at one degree as a
function of limiting $K_{\rm s}$ magnitude, compared to a compilation
of measurements from the literature. To make this comparison, we have
assumed a fixed slope of $\gamma=1.8$ and converted the limiting
magnitude of each of our catalogues to Vega magnitudes.
In general our results are within the $1\sigma$
error bars of most measurements, although it does appear that the
COSMOS field is slightly more clustered than other fields in the
literature, as we have discussed previously \citep{2007ApJS..172..314M}.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f14.ps}}
\caption{Fitted clustering amplitude at 1 degree as a function of
$K_{\rm VEGA}$
limiting magnitude (connected open circles), compared to values from
the literature.}
\label{fig:scaling_w_comp}
\end{figure}
\nocite{1996MNRAS.283L..15B}
\nocite{Iovino:2005p17}
\nocite{1998MNRAS.295..946R}
\nocite{Daddi:2000p677}
\subsection{Galaxy clustering at $z\gtrsim 1.4$}
\label{sec:galaxy-clustering-at}
In the previous Sections we have demonstrated the reliability of our
estimates of $w$ and our general agreement with preceding
literature measurements for magnitude-limited samples. We now
investigate the clustering properties of passive and star-forming
galaxy candidates at $z\sim2$ selected using our $BzK$
diagram. Figure~\ref{fig:xyplot} shows the spatial distribution of the
$pBzK$ galaxies in our sample; a large amount of small-scale
clustering is evident.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f15.ps}}
\caption{Angular distribution of~$18<K_{\rm s}<23$~$pBzK$ sources in the
COSMOS-WIRCam survey. A large amount of small-scale clustering
is clearly visible.}
\label{fig:xyplot}
\end{figure}
\begin{deluxetable}{ccccc}
\tablecaption{Angular correlation amplitudes.\label{tab:fitw_pbzksbzk}}
\tablewidth{0pt}
\tablehead{
&\multicolumn{2}{c}{Passive $BzK$}
&\multicolumn{2}{c}{Star-forming $BzK$} \\
\colhead{$K_{\rm s}$} &\colhead{$A_w(1\arcmin)\times10^{-2}$} & \colhead{$\gamma$}
&\colhead{$A_w(1\arcmin)\times10^{-2}$} & \colhead{$\gamma$}}
\tablewidth{0pt}
\startdata
22.0 & $ 8.41\pm 4.15$ &$ 2.32\pm 0.10$& $ 5.62\pm 1.72$ &$ 1.80\pm 0.07$\\
23.0 & $ 6.23\pm 3.06$ &$ 2.50\pm 0.09$& $ 3.37\pm 0.62$ &$ 1.80\pm 0.04$\\
\enddata
\end{deluxetable}
The upper panel of Figure~\ref{fig:pbzk_w} shows the angular correlation functions
for our $pBzK$ galaxies, our $sBzK$ galaxies and all galaxies. In each
case we apply an $18.0<K_{\rm s}<23.0$ magnitude cut. For comparison we show the
clustering amplitude of dark matter computed using the redshift
selection functions presented in Section~\ref{sec:phot-redsh-pbzk} and
the non-linear power spectrum approximation given in
\cite{2003MNRAS.341.1311S}. At
intermediate to large scales, the clustering amplitude of field
galaxies and the $sBzK$ population follows very well the underlying
dark matter.
The lower panel of Figure~\ref{fig:pbzk_w} shows the bias $b$ as a
function of scale, computed simply as $b(\theta)=\sqrt{w_{\rm
gal}(\theta)/w_{\rm dm}(\theta)}$. Dashed, dotted and solid lines show
$b$ values for $pBzK$, $sBzK$ and field galaxies respectively (in
this case our $w$ measurements have been corrected for the integral
constraint). The bias for the faint field galaxy population is
1.2 at $1\arcmin$, indicating that the faint
$K_{\rm s}$-selected galaxy population traces well the underlying
dark matter. In comparison, at the same scales, the bias values for the
passive $BzK$ and star-forming $BzK$ galaxies are 2.5 and 2.1 respectively.
Our best-fitting $\gamma$ and amplitudes (quoted at $1\arcmin$) for
$pBzK$ and $sBzK$ galaxies are reported in
Table~\ref{tab:fitw_pbzksbzk}. Given that for the $sBzK$ galaxies
$\gamma=1.8$, we may compare with previous authors, who generally
assume a fixed slope $\gamma=1.8$ for all measurements. At $K_{\rm VEGA}<20$,
corresponding to $K_{\rm AB}\sim22$, \cite{Kong:2006p294} find
$(4.95\pm0.52)\times10^{-3}$ whereas (at $1\deg$) we measure
$(2.1\pm0.6)\times10^{-3}$, closer to the value of
$(3.14\pm1.12)\times 10^{-3}$ found by \cite{Blanc:2008p3635}. We note
that both \cite{Hayashi:2007p3726} and \cite{2005ApJ...619..697A} also
investigated the luminosity dependence of galaxy clustering at
$z\sim2$, although with considerably smaller samples than those
presented here. It is plausible that field-to-field variation and
large scale structure are the cause of the discrepancy between these surveys.
The best-fitting slope for our $pBzK$ population is $\gamma
\sim2.3$, considerably steeper than that of the field galaxy
population (no previous works have attempted to fit both slope and
amplitude simultaneously for the $pBzK$ population, due to small
sample sizes). In the next section we will derive the spatial
clustering properties of both populations.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f16.ps}}
\caption{Top panel: amplitude of the galaxy correlation function $w$ for
field galaxies, star-forming $BzK$ galaxies and passive $BzK$
galaxies with $18<K_{\rm s,AB}<23$ (squares, triangles and circles). The
lines show the predictions for the non-linear clustering amplitudes
of dark matter computed using the non-linear power spectrum. Bottom
panel: bias $b$ for $pBzK$, $sBzK$ and field galaxies (dashed,
dotted and solid lines respectively).}
\label{fig:pbzk_w}
\end{figure}
\subsection{Spatial clustering}
\label{sec:spatial-clustering}
To de-project our measured clustering amplitudes and calculate the
comoving correlation lengths at the effective redshifts of our survey
slices we use the photometric redshift distributions presented in
Section~\ref{sec:phot-redsh-pbzk}.
Given a redshift interval $z_1,z_2$ and a redshift distribution
$dN/dz$, we define the effective redshift $z_{\rm eff}$ in the usual
way, namely
\begin{equation}
z_{\rm eff} ={\int_{z_1}^{z_2}z(dN/dz)dz/{\int_{z_1}^{z_2}(dN/dz)dz}}.
\end{equation}
Using these redshift distributions together with the fitted
correlation amplitudes presented in
Sections~\ref{sec:galaxies-stars} and~\ref{sec:galaxy-clustering-at},
we can derive the comoving correlation length $r_0$ of each galaxy
population at its effective redshift using the usual
\cite{1953Apj...117..134,Peebles:1980p5506} inversion. We assume that $r_0$ does not
change over the redshift interval probed.
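Although we do not reproduce the full calculation here, it may help
the reader to recall the form of this inversion. Under the assumptions
just stated (a non-evolving power-law $\xi(r)=(r/r_0)^{-\gamma}$ and
the small-angle approximation), the standard Limber equation relates
the fitted amplitude (for $\theta$ in radians) to $r_0$ schematically
as
\begin{equation}
A_w = r_0^{\gamma}\,\sqrt{\pi}\,
\frac{\Gamma[(\gamma-1)/2]}{\Gamma(\gamma/2)}\,
\frac{\int_{z_1}^{z_2} (dN/dz)^2\, x(z)^{1-\gamma}\, (dx/dz)^{-1}\, dz}
{\left[\int_{z_1}^{z_2} (dN/dz)\, dz\right]^2},
\end{equation}
where $x(z)$ is the comoving distance; inverting this relation for the
fitted $A_w$ and $\gamma$ yields $r_0$.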
It is clear that our use of photometric redshifts introduces an
additional uncertainty in $r_0$. We attempted to estimate this
uncertainty by using the probability distribution
functions associated with each photometric redshift to compute an
ensemble of $r_0$ values, each estimated with a different $n(z)$. The
resulting error in $r_0$ from these many realisations is actually
quite small, $\sim 0.02$ for the $pBzK$ population. Of course,
systematic errors in the photometric redshifts could well be much
higher than this. Figure 9 of \citeauthor{Ilbert:2009p4457} shows the
$1\sigma$ error in the photometric redshifts as a function of
magnitude and redshift. Although all galaxy types are combined there,
we can see that the approximate $1\sigma$ error in the photometric
redshifts between $1<z<2$ is $\sim0.1$. Our estimate of the
correlation length is primarily sensitive to the median redshift and
the width of the redshift distribution; an error of $\sim0.1$ in
redshift translates into an error of $\sim 0.1$ in $r_0$. We conclude
that, for our $pBzK$ and $sBzK$ measurements, the dominant source of
uncertainty in $r_0$ comes from our errors on $w$.
We note that previous investigations of the clustering of passive
galaxies always assumed a fixed $\gamma=1.8$; from
Table~\ref{tab:fitw_pbzksbzk} it is clear that our measured slope is
much steeper. These surveys, however, fitted over a smaller range of
angular scales and therefore could not make an accurate determination
of the slope for the $pBzK$ population. In all cases we fit for both
$\gamma$ and $A_w$.
Our spatial correlation amplitudes for $pBzK$ and $sBzK$ galaxies are
summarised in Table~\ref{tab:fit_r0}. Because of the degeneracy
between $r_0$ and $\gamma$ we also quote clustering measurements as
$r_0^{\gamma/1.8}$. These measurements are plotted in
Figure~\ref{fig:r0g_zed}. At lower redshifts, our field galaxy samples
are in good agreement with measurements for optically-selected redder
galaxies from the CFHTLS and VVDS surveys
\citep{2006A&A...452..387M,2008A&A...479..321M}. At higher redshifts,
our clustering measurements for $pBzK$ and $sBzK$ galaxies are in
approximate agreement with the measurements of
\cite{Blanc:2008p3635}. We note that part of the difference with the
measurements of \citeauthor{Blanc:2008p3635} arises from their
approximation of the redshift distribution of passive $BzK$ galaxies
by a simple Gaussian distribution.
Interestingly, a steep slope $\gamma$ for optically-selected
passive galaxies has already been reported in lower-redshift surveys;
for example, \cite{2003MNRAS.344..847M} found that passive galaxies
had a much steeper slope than active galaxies in the 2dF galaxy
redshift survey.
The highly biased nature of the $pBzK$
galaxy population indicates that these objects reside in more massive
dark matter haloes than either the field galaxy population or the
$sBzK$ population. We intend to present a more detailed
discussion of the spatial clustering of each galaxy sample in the
framework of halo occupation models in a future paper.
\begin{deluxetable*}{ccccccc}
\tablecaption{Spatial Correlation Amplitudes.\label{tab:fit_r0}}
\tablehead{
&\multicolumn{3}{c}{Passive $BzK$ galaxies} &
\multicolumn{3}{c}{Star-forming $BzK$ galaxies}\\
\colhead{$K_{\rm s}$} &\colhead{$z_{\rm eff}$} & \colhead{$r_0$}&\colhead{$r_0^{\gamma/1.8}$}&
\colhead{$z_{\rm eff}$} & \colhead{$r_0$}&\colhead{$r_0^{\gamma/1.8}$}}
\tablewidth{0pt}
\startdata
22.0 & 1.41 & $4.55 \pm 0.97$ & $7.05 \pm 0.51$ & 1.61 & $ 4.69 \pm 0.80$ & $ 4.69 \pm 0.23$\\
23.0 & 1.41 & $3.71 \pm 0.73$ & $6.18 \pm 0.40$ & 1.71 & $4.25 \pm 0.43$ & $4.25 \pm 0.11$\\
\enddata
\end{deluxetable*}
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{f17.ps}}
\caption{The rescaled comoving correlation length $r_0^{\gamma/1.8}$
as a function of redshift for $K_{\rm s}-$ selected field galaxies (filled
squares), $sBzK$ galaxies (filled triangles) and $pBzK$ galaxies
(filled circles). Also shown are results from lower-redshift optically
selected red galaxies and higher redshift $K_{\rm s}-$ selected samples. }
\label{fig:r0g_zed}
\end{figure}
\nocite{2006A&A...452..387M}
\section{Summary and conclusions}
\label{sec:summary}
We have presented counts, colours and clustering properties for a
large sample of $K$-selected galaxies in the $2\deg^2$ COSMOS-WIRCam
survey. This represents the largest sample of galaxies to date at this
magnitude limit. By adding deep Subaru $B$- and $z$-band data we are able
to classify our catalogue into star-forming and quiescent/passive
objects using the selection criterion proposed by
\cite{Daddi:2004p76}. To $K_{\rm s}<23.0$ our catalogue comprises
$143,466$ galaxies, of which $3931$ are classified
as passive galaxies and $25,757$ as star-forming
galaxies. We have also identified a large sample of $13,254$ faint
stars.
Counts of field galaxies and star-forming galaxies change slope at
$K_{\rm s}\sim22$. Our number counts of quiescent galaxies turn over
at $\hbox{$K_{\rm s}$}\sim22$, confirming an observation previously made in shallower
surveys \citep{Lane:2007p295}. This effect cannot be explained by
incompleteness in any of our very deep optical bands. Our number
counts of passive, star-forming and field galaxies agree well with
surveys with brighter magnitude limits.
We have compared our counts to objects selected in a semi-analytic
model of galaxy formation. For simple magnitude-limited samples the
\cite{Kitzbichler:2007p3449} model reproduces very well the galaxy
counts in the range $16<\hbox{$K_{\rm s}$}<20$. However, at fainter magnitudes
\citeauthor{Kitzbichler:2007p3449}'s model predicts many more objects
than are observed.
Comparing this model with predictions of passive galaxy counts, we
find that at $20<\hbox{$K_{\rm s}$}<20.5$ model counts are below observations by a
factor of 2, whereas at $22.5<\hbox{$K_{\rm s}$}<23.0$ model counts are in excess
of observations by around a factor of 1.5. This implies that the
\citeauthor{Kitzbichler:2007p3449} model predicts too many small,
low-mass passively-evolving galaxies and too few large high-mass
passively evolving galaxies at $z\sim1.4$. In these models,
bulge formation takes place by mergers. At $\hbox{$K_{\rm s}$}\sim 22$, passive
galaxies in the millennium simulation have stellar masses of $\sim
10^{11}M_\odot$, similar to spectroscopic measurements of passive
galaxies \citep{Kriek:2006p7151}. This suggests that the difference
between models and observations is linked to the amount of ``late
merging'' taking place \citep{DeLucia:2007p7510}. The exact choice of
the AGN feedback model can also sensitively affect the amount
star-formation in massive systems
\citep{DeLucia:2006p9608,Bower:2006p7511}. It is clear that
observations of the abundance of massive galaxies can now provide
insight into physical processes occurring in galaxies at intermediate
redshifts. For the time being it remains a challenge for these models to reproduce both
these observations at high redshift and lower-redshift reference samples.
Our results complement determinations of the galaxy stellar
mass function at intermediate redshifts which show that total mass in
stars formed in semi-analytic models is too low at $z\sim2$ compared
to models \citep{Fontana:2006p3228}. We note that
convolution with standard uncertainties of $\sim0.25$ dex in mass
function estimates at $z\sim2$ can make a significant difference in the mass
function, as can be seen in Figure~14 of \cite{Wang:2008p9639} who show
detailed comparisons between semi-analytic models and
observations. The discrepancy between our observations and models
cannot be explained in this way.
We have cross-matched our catalogue with precise 30-band photometric
redshifts calculated by \citeauthor{Ilbert:2009p4457} and have used
this to derive the redshift distributions for each galaxy population.
At \hbox{$K_{\rm s}$}$\sim 22$ our passive galaxies have a redshift distribution
with $z_{\rm {med}}\sim1.4$, in approximate agreement with similar
spectroscopic surveys comprising smaller numbers of objects.
Most of our $pBzK$ galaxies have $z_p<2.0$, in contrast with the
redshift distribution for $sBzK$ galaxies and for the general field
galaxy population which extend to much higher redshifts at this
magnitude limit. In the redshift range $1<z<3$ at $\hbox{$K_{\rm s}$}\sim22$ the
$pBzK$ population represents $\sim20\%$ of the total number of
galaxies, in contrast to $\sim80\%$ for $sBzK$-selected
galaxies. DRG-selected galaxies remain an important fraction of the
total galaxy population, reaching $\sim 50\%$ of the total at
$z\sim2$. Our work confirms that most galaxies satisfying the
passive-$BzK$ selection criteria lie in a narrower redshift range than
either $sBzK$- or DRG-selected objects. Interestingly, a few passive
$BzK$ galaxies in our survey have $z_p>2.0$, and it is tempting to
associate these objects with higher-redshift evolved galaxies detected in
spectroscopic surveys \citep{Kriek:2008p9702}.
We have investigated the clustering properties of our catalogues for
which the $2\deg^2$ field of view of the COSMOS survey
provides a unique probe of the distant universe. Our stellar
correlation function is consistent with zero at all angular scales to $K_{\rm s}\sim23$,
demonstrating the photometric homogeneity and stability of our
catalogues. For $K_{\rm s}-$ selected samples, the clustering amplitude
declines monotonically toward fainter magnitudes. However, the
slope of the best-fitting angular correlation function becomes
progressively shallower at fainter magnitudes, an effect already seen
in the COSMOS optical catalogues.
At the faintest magnitude slices, the field galaxy population
(all objects with $18.0<\hbox{$K_{\rm s}$}<23.0$) is only slightly more
clustered than the underlying dark matter distribution, indicating
that $K_{\rm s}-$ selected samples are excellent tracers of the
underlying mass. On the other hand, star-forming and passive galaxy
candidates are more clustered than the field galaxy population. At
arcminute scales and smaller the passive BzK population is strongly
biased with respect to the dark matter distribution with bias values
of 2.5 and higher, depending on scale.
Using our photometric redshift distributions, we have derived the
comoving correlation length $r_0$ for each galaxy class. Fitting
simultaneously for slope and amplitude we find a comoving correlation
length $r_0^{\gamma/1.8}$ of $\sim7 h^{-1}$~Mpc for the passive $BzK$
population and $\sim 5 h^{-1}$~Mpc for the star-forming $BzK$ galaxies
at $K_{\rm s}<22$. Our field galaxy clustering amplitudes are in
approximate agreement with optically-selected red galaxies at lower
redshifts.
High bias values are consistent with a picture in which
$pBzK$ galaxies inhabit relatively massive dark matter haloes of order
$\sim10^{12}$M$_\odot$, compared to the $sBzK$ and field galaxy population. We will
return to this point in future papers, where we will interpret these
measurements in terms of the halo model.
To measure spectroscopic redshifts for a major fraction of the 3000
$pBzK$ galaxies in the COSMOS field, one will have to wait for the
advent of large-throughput, wide-field near-infrared ($J$-band)
spectrographs on 8-10m class telescopes, such as FMOS at Subaru
(Kimura et al. 2006). Smaller field, cryogenic, multi-object
spectrographs such as MOIRCS at Subaru \citep{Ichikawa:2006p4968},
EMIR at the GTC telescope \citep{Garzon:2006p4837}, and Lucifer at the
LBT \citep{Mandel:2006p4846} should prove effective in producing high
S/N spectra for a relatively small fraction of the $pBzK$ galaxies in
the COSMOS field.
Since this paper was prepared, additional COSMOS-WIRCam $K_{\rm s}$
observations have been taken which will increase the total exposure
time by $\sim30\%$. In addition, new $H$-band observations have also
been made. Both these data products will be made publicly available
in around one year from the publication of this article. In the longer
term, the COSMOS field will be observed as part of the UltraVISTA deep
near-infrared survey which will provide extremely deep $JHK$ observations
over the central part of the field.
\section{Acknowledgements}
\label{sec:acknowledgement}
This work is based in part on data products produced at TERAPIX at the
Institut d'Astrophysique de Paris. H.J.~McC acknowledges the use of
TERAPIX computing facilities and the hospitality of the IfA, Honolulu,
where this paper was finished. M. L. Kilbinger is acknowledged for
help with dark matter models in Section 6 and N. V. Asari for the
stacking analysis in Section 4. This research has made use of the
VizieR catalogue access tool provided by the CDS, Strasbourg,
France. This research was supported by ANR grant
``ANR-07-BLAN-0228''. ED and CM also acknowledge support from
``ANR-08-JCJC-0008''. JPK acknowledges support from the CNRS. We
thank the referee for an extensive commentary on an earlier version of
this paper.
\bibliographystyle{apj}
\section{Introduction}
Let $F$ be a fixed graph or hypergraph. Say that a (hyper)graph is $F$-free if it contains
no copy of $F$ as a (not necessarily induced) sub(hyper)graph. Beginning with a result of
Erd\H os-Kleitman-Rothschild \cite{EKR}, there has been much work
concerning the number and structure of $F$-free graphs with vertex
set $[n]$ (see, e.g. \cite{EFR, KPR, PS1, BBS1, BBS2, BBS3, BSAM}). The
strongest of these results essentially state that for a large class of graphs $F$, most
of the $F$-free graphs with vertex set $[n]$ have a similar
structure to the $F$-free graph with the maximum number of edges.
Many of these results use the Szemer\'edi regularity lemma.
With the development of the hypergraph regularity Lemma, these problems can be attacked for hypergraphs.
For brevity, we refer to a $3$-uniform hypergraph as a triple system or $3$-graph.
{\bf Definition.}
{\em For a $3$-graph $F$ let $Forb(n, F)$ denote the set of (labeled) $F$-free
$3$-graphs on vertex set $[n]$. }
The first result in this direction was due to Nagle and R\"odl \cite{NR} who proved that for a fixed 3-graph $F$,
$$|Forb(n, F)| \le 2^{{\rm ex}(n, F) + o(n^3)},$$
where ex$(n,F)$ is the maximum number of edges in an $F$-free triple system on $n$ vertices.
Since there is no extremal result for hypergraphs similar to Tur\'an's theorem for graphs,
one cannot expect a general result that characterizes the structure of almost all $F$-free triple systems for large classes of $F$.
Nevertheless, much is known about the extremal numbers for a few specific 3-graphs $F$ and one could hope
to obtain characterizations for these $F$. Recently, Person and Schacht \cite{PSch} proved the first result of this kind, by showing that almost all
triple systems on $[n]$ not containing a Fano configuration are $2$-colorable.
The key property that they used was the linearity of the Fano plane,
namely the fact that every two edges of the Fano plane share at most one vertex.
This enabled them to apply the (weak) $3$-graph regularity lemma, which is almost
identical to Szemer\'edi's regularity lemma. They then proved an embedding lemma for linear
hypergraphs essentially following ideas from Kohayakawa-Nagle-R\"odl-Schacht \cite{KNRS}.
It is well-known that such an embedding lemma fails to hold for
non-linear $3$-graphs unless one uses the (strong) $3$-graph
regularity lemma, and operating in this environment is more
complicated. In this paper, we address the situation for a
particular non-linear $F$ using this approach.
A triple system is tripartite or $3$-partite if it has a vertex
partition into three parts such that every edge has exactly one
point in each part. Denote by $T(n)$ the number of $3$-partite
$3$-graphs on $[n]$. Let
$$s(n):=
\left\lfloor\frac{n}{3}\right
\rfloor \left\lfloor\frac{n+1}{3}\right\rfloor
\left\lfloor\frac{n+2}{3}\right\rfloor\sim \frac{n^3}{27}$$
be the maximum number
of edges in a $3$-partite triple system with $n$ vertices. A triple system is cancellative if $A \cup B = A \cup C$ implies that $B=C$ for edges $A,B,C$.
Every tripartite triple system is cancellative. Katona conjectured, and Bollob\'as \cite{bollobas:74}
proved, that the maximum number of edges in a cancellative triple system with $n$ vertices is $s(n)$.
It is easy to see that a cancellative triple system is one that contains no copy of
$$F_5=\{123, 124, 345\} \quad \hbox{ and } \quad
K_4^-=\{123, 124, 234\}.$$ Later Frankl and F\"uredi \cite{frankl+furedi:83} sharpened Bollob\'as'
theorem by proving that ex$(n, F_5)=s(n)$ for $n>3000$ (this was improved to $n>33$ in \cite{KM}).
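Both the cancellative property and containment of $F_5$ or $K_4^-$ are
straightforward to test by brute force on small examples. The following
Python sketch (ours, purely illustrative) confirms on random triple
systems that cancellative systems contain neither configuration:
\begin{verbatim}
# Sanity check (ours): cancellative triple systems contain neither F_5
# nor K_4^-, matching the characterisation quoted above.
from itertools import combinations, permutations
import random

F5  = [(1, 2, 3), (1, 2, 4), (3, 4, 5)]
K4m = [(1, 2, 3), (1, 2, 4), (2, 3, 4)]

def contains(edges, pattern):
    verts = sorted({v for e in edges for v in e})
    labels = sorted({v for t in pattern for v in t})
    for img in permutations(verts, len(labels)):
        m = dict(zip(labels, img))
        if all(frozenset(m[v] for v in t) in edges for t in pattern):
            return True
    return False

def is_cancellative(edges):
    return not any(A | B == A | C
                   for A in edges for B in edges for C in edges if B != C)

rng = random.Random(0)
triples = [frozenset(t) for t in combinations(range(6), 3)]
for _ in range(200):
    H = {t for t in triples if rng.random() < 0.3}
    if is_cancellative(H):
        assert not contains(H, F5) and not contains(H, K4m)
print("cancellative => F_5-free and K_4^--free on all sampled systems")
\end{verbatim}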
Our main result is the following.
\begin{theorem}\label{mainf}
Almost all $F_5$-free $3$-graphs on $[n]$ are $3$-partite. More
precisely there is a constant $C$ such that
\begin{equation}\label{induction}
|Forb(n,F_5)|< \left(1+ 2^{Cn-\frac{2n^2}{45}}\right) T(n).\end{equation}
\end{theorem}
Theorem \ref{mainf} clearly implies the same result for
cancellative $3$-graphs which is stated in the abstract.
As
mentioned before, the proof of Theorem \ref{mainf} uses the strong
hypergraph regularity lemma, and stability theorems.
Using the fact that
$$\frac{4\cdot 3^n}{n^2} 2^{s(n)}<T(n)<3^n2^{s(n)}$$ (see Lemma \ref{tnincreasing}), we get the following improvement over the general result of
Nagle and R\"odl~\cite{NR} which only implies that
$|Forb(n,F_5)|<2^{s(n)+o(n^3)}$.
\begin{corollary} As $n \rightarrow \infty$,
$$\log_2|Forb(n,F_5)|=s(n)+n\log_2 3 +\Theta(\log n).$$
\end{corollary}
In a
forthcoming paper \cite{t5}, we shall characterize the structure of
almost all $F$-free $3$-graphs, where $F=\{123,124,125,345\}$. Note
that such a fine statement as Theorem~\ref{mainf} is rare even for
graphs: Pr\"omel and Steger \cite{PS1} characterized the structure
of almost all $F$-free graphs when $F$ has a color-critical edge,
and Balogh, Bollob\'as and Simonovits~\cite {BBS3} when
$F=K(2,2,2)$.
\section{Stability}
The key idea in the proof of Theorem \ref{mainf} is to reduce the
problem to $3$-graphs that are almost $3$-partite. We associate a hypergraph with its edge set.
For a triple system $\mathcal{H}$
with a $3$-partition $P$ of its vertices, say that an edge is {\em crossing} if it has exactly one point in each part,
otherwise say that it is {\em non-crossing}. Let $D_P$ be the set of non-crossing edges.
An {\it
optimal partition} $X \cup Y \cup Z$ of a triple system $\mathcal{H}$ is a $3$-partition of the
vertices of $\mathcal{H}$ which minimizes the number of non-crossing edges.
Let $D=D_{\mathcal{H}}$ be the number of non-crossing ({\it bad}) edges in an
optimal partition $X \cup Y \cup Z$. Define
$$Forb(n, F_5, \eta):=\{\mathcal{H} \subset [n]^3: F_5 \not\subset \mathcal{H}\hbox{ and } D_{\mathcal{H}}\le \eta n^3\}.$$
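For very small $n$, the optimal partition and the quantity $D_{\mathcal{H}}$
can be computed by exhaustive search over all $3$-partitions. A brief
Python sketch (ours; only feasible for toy sizes):
\begin{verbatim}
# Sketch (ours): compute D_H, the minimum number of non-crossing edges
# over all 3-partitions, by exhaustive search (toy sizes only).
from itertools import product

def optimal_partition_defect(n, edges):
    best = None
    for colouring in product(range(3), repeat=n):
        bad = sum(1 for (a, b, c) in edges
                  if len({colouring[a], colouring[b], colouring[c]}) < 3)
        best = bad if best is None else min(best, bad)
    return best

# toy example: three crossing edges plus one edge forcing a defect
edges = [(0, 2, 4), (0, 3, 5), (1, 2, 5), (0, 1, 2)]
print(optimal_partition_defect(6, edges))   # prints 1
\end{verbatim}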
The first part of the proof of Theorem \ref{mainf} is the following
result, which we will prove in Section~\ref{mostrip}.
\begin{theorem} \label{stablef}
For every $\eta>0$, there exists $\nu>0$ and $n_0$ such that
if $n>n_0$, then $$|Forb(n,F_5)-Forb(n, F_5, \eta)|<2^{(1-\nu)\frac{n^3}{27}}.$$
\end{theorem}
\section{Hypergraph Regularity}\label{hypreg}
In this section, we quickly define the notions required to state the
hypergraph regularity Lemma. These concepts will be used in
Section~\ref{mostrip} to prove Theorem \ref{stablef}. Further
details can be found in \cite{FR} or \cite{NR}. As mentioned before
we associate a hypergraph with its edge set.
A $k$-{\it partite cylinder} is a $k$-partite graph $G$ with
$k$-partition $V_1, \ldots, V_k$, and we write $G=\cup_{i<j}
G^{ij}$, where $G^{ij}=G[V_i \cup V_j]$ is the bipartite subgraph
of $G$ with parts $V_i$ and $V_j$.
For $B \in [k]^3$, the $3$-partite cylinder $G(B)=\cup_{\{i,j\} \in [B]^2} G^{ij}$ is called a {\it triad}.
For a $2$-partite cylinder $G$,
the {\it density} of the pair $V_1, V_2$ with respect to $G$ is $d_G(V_1, V_2)=\frac{|G^{12}|}{|V_1||V_2|}$.
Given an integer $l>0$ and real $\epsilon>0$, a $k$-partite
cylinder $G$ is called an $(l, \epsilon, k)$-{\it cylinder} if for
every $i<j$, $G^{ij}$ is $\epsilon$-regular with density $1/l$.
For a $k$-partite cylinder $G$, let ${\cal K}_3(G)$ denote the
$3$-graph on $V(G)$ whose edges correspond to triangles of $G$. An easy
consequence of these definitions is the following fact.
\begin{lemma} {\bf (Triangle Counting Lemma)} \label{tlemma} For integer $l>0$ and real $\theta>0$,
there exists $\epsilon>0$ such that every $(l,\epsilon,3)$-cylinder $G$ with $|V_i|=m$ for all $i$ satisfies
$$|{\cal K}_3(G)| =(1\pm \theta)\frac{m^3}{l^3}.$$
\end{lemma}
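The lemma can be illustrated numerically: random bipartite graphs of
density $1/l$ are $\epsilon$-regular with high probability, and the
triangle count of the resulting triad concentrates near $m^3/l^3$. A
sketch (ours, with arbitrarily chosen parameters):
\begin{verbatim}
# Illustration (ours): in a random triad with parts of size m and edge
# density 1/l between parts, the triangle count is close to m^3/l^3.
import random

rng = random.Random(1)
m, l = 60, 3
def bipartite():
    return {(u, v) for u in range(m) for v in range(m)
            if rng.random() < 1.0 / l}
g12, g23, g13 = bipartite(), bipartite(), bipartite()
triangles = sum(1 for u in range(m) for v in range(m) if (u, v) in g12
                for w in range(m) if (v, w) in g23 and (u, w) in g13)
print(triangles, "triangles vs m^3/l^3 =", m ** 3 / l ** 3)
\end{verbatim}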
We now move on to $3$-graph definitions. A $k$-{\it partite
$3$-cylinder} is a $k$-partite 3-graph ${\cal H}$ with $k$-partition
$V_1, \ldots, V_k$. Here $k$-partite means that every edge of ${\cal H}$
has at most one point in each $V_i$. Often we will say that these
edges are crossing, and the edges that have at least two points in
some $V_i$ are non-crossing. Given $B \in [k]^3$, let
${\cal H}(B)={\cal H}[\cup_{i \in B}V_i]$. Given $k$-partite cylinder $G$ and
$k$-partite $3$-cylinder ${\cal H}$ with the same vertex partition, say
that $G$ {\it underlies} ${\cal H}$ if ${\cal H} \subset {\cal K}_3(G)$. In other
words, ${\cal H}$ consists only of triangles in $G$. Define the density
$d_{{\cal H}}(G(B))$ of ${\cal H}$ with respect to the triad $G(B)$ as the
proportion of edges of ${\cal H}$ on top of triangles of $G(B)$, if the
latter quantity is positive, and zero otherwise. This definition
leads to the more complicated definition of ${\cal H}$ being $(\delta,
r)$-regular with respect to $G(B)$, where $r>0$ is an integer
and $\delta>0$. If in addition $d_{{\cal H}}(G(B))=\alpha \pm \delta$, then
say that ${\cal H}$ is $(\alpha, \delta, r)$-{\it regular} with respect
to $G(B)$. We will not give the precise definitions of $(\alpha,
\delta, r)$-regularity, and it suffices to take this definition as a
``black box" that will be used later.
For a vertex set $V$, an $(l, t, \gamma, \epsilon)$-partition
${\cal P}$ of $[V]^2$ is a partition $V=V_0 \cup V_1 \cup \cdots \cup V_t$
together with a collection of edge disjoint bipartite graphs $P_{a}^{ij}$,
where $1\le i<j\le t, 0\le a\le l_{ij} \le l$ that satisfy the following properties:
(i) $|V_0|<t$ and $|V_i|=\lfloor \frac{n}{t} \rfloor:=m$ for each
$i>0$,
(ii) $\cup_{a=0}^{l_{ij}}P_{a}^{ij}=K(V_i, V_j)$ for all
$1\le i<j\le t$, where $K(V_i, V_j)$ is the complete bipartite graph
with parts $V_i, V_j$,
(iii) all but $\gamma{t \choose 2}$ pairs $\{v_i, v_j\}$, $v_i \in V_i, v_j \in V_j$,
are edges of $\epsilon$-regular bipartite graphs $P_{a}^{ij}$, and
(iv) for all but $\gamma{t \choose 2}$ pairs $\{i,j\} \in [t]^2$,
we have $|P_0^{ij}|\le \gamma m^2$ and $d_{ P_{a}^{ij} }(V_i, V_j)=(1\pm \epsilon)\frac{1}{l}$
for all $a \in [l_{ij}]$.
Finally, suppose that ${\cal H} \subset [n]^3$ is a $3$-graph and ${\cal P}$
is an $(l, t, \gamma, \epsilon)$-partition of $[n]^2$ with
$m_{{\cal P}}=|V_1|$. For each triad $P \in {\cal P}$, let
$\mu_P=\frac{|{\cal K}_3(P)|}{m_{{\cal P}}^3}$. Then ${\cal P}$ is $(\delta,
r)$-regular if
$$\sum\{\mu_P: \hbox{$P$ is a $(\delta, r)$-irregular triad of ${\cal P}$}\} <\delta\left(\frac{n}{m_{{\cal P}}}\right)^3.$$
We can now state the Regularity Lemma due to Frankl and R\"odl
\cite{FR}.
\begin{theorem} {\bf (Regularity Lemma)} \label{rl}
For every $\delta, \gamma$ with $0<\gamma\le 2\delta^4$, for all
integers $t_0, l_0$ and for all integer-valued functions $r=r(t, l)$
and all functions $\epsilon(l)$, there exist $T_0, L_0, N_0$ such
that every $3$-graph ${\cal H} \subset [n]^3$ with $n\ge N_0$ admits a
$(\delta, r(t,l))$-regular $(l, t, \gamma, \epsilon(l))$-partition
for some $t,l$ satisfying $t_0 \le t<T_0$ and $l_0\le l<L_0$.
\end{theorem}
To apply the Regularity Lemma above, we need to define a cluster
hypergraph and state an accompanying embedding Lemma, sometimes
called the Key Lemma. Given a $3$-graph ${\cal J}$, let ${\cal J}^2$ be the
set of pairs that lie in an edge of ${\cal J}$.
{\bf Cluster $3$-graph.} For given constants $k, \delta, l, r,
\epsilon$ and sets $\{\alpha_B: B \in [k]^3\}$ of nonnegative
reals, let ${\cal H}$ be a $k$-partite 3-cylinder with parts $V_1,
\ldots, V_k$, each of size $m$. Let $G$ be a graph, and ${\cal J}
\subset [k]^3$ be a $3$-graph such that the following conditions are
satisfied.
(i) $G=\cup_{\{i,j\} \in {\cal J}^2} G^{ij}$ is an underlying cylinder of
${\cal H}$ such that for all $\{i,j\} \in {\cal J}^2$, $G^{ij}$ is an $(l,
\epsilon, 2)$-cylinder.
(ii) For each $B \in {\cal J}$, ${\cal H}(B)$ is $(\alpha_B, \delta, r)$-regular with respect to the triad $G(B)$.
Then we say that ${\cal J}$ is the {\it cluster $3$-graph} of ${\cal H}$.
\begin{lemma} {\bf (Embedding Lemma)} \label{elemma} Let $k \ge 4$ be fixed.
For all $\alpha>0$, there exists $\delta>0$ such that for $l>\frac{1}{\delta}$, there exists
$r, \epsilon$ such that the following holds: Suppose that ${\cal J}$ is the cluster $3$-graph
of ${\cal H}$ with underlying cylinder $G$ and parameters $k, \delta, l, r, \epsilon, \{\alpha_B: B \in [k]^3\}$
where $\alpha_B \ge \alpha$ for all $B \in {\cal J}$. Then ${\cal J} \subset {\cal H}$.
\end{lemma}
For a proof of the Embedding Lemma, see \cite{NR}.
\section{Most $F_5$-free triple systems are almost tripartite}\label{mostrip}
In this section we will prove Theorem \ref{stablef}. We will need
the following stability result proved in \cite{KM}. The constants
have been adjusted for later use.
\begin{theorem} {\bf (Keevash-Mubayi \cite{KM})} \label{km}
For every $\nu''>0$, there exist $\nu', t_2$ such that every
$F_5$-free $3$-graph on $t>t_2$ vertices and at least
$(1-2\nu')\frac{t^3}{27}$ edges has a $3$-partition for which the
number of non-crossing edges is at most $\nu'' t^3$.
\end{theorem}
Given $\eta>0$, our constants will obey the following hierarchy:
$$\eta\gg \nu''\gg \nu' \gg \nu \gg \sigma, \theta
\gg \alpha_0, \frac{1}{t_0} \gg \delta \gg \gamma >\frac{1}{l_0} \gg\frac{1}{r}, \epsilon \gg \frac{1}{n_0}.$$
Before proceeding with further details regarding our constants,
we define the {\it binary entropy function} $H(x):=
-x\log_2 x- (1-x)\log_2 (1-x).$
We use the fact that for $0<x< 0.5$ we
have $$\binom{n}{xn}<2^{H(x)n}.$$ Additionally, if $x$ is sufficiently small
then
\begin{equation} \label{x} \sum_{i=0}^{xn} \binom{n}{i}<2^{H(x)n}.\end{equation}
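Both bounds are easily confirmed numerically; a quick check (ours) for
one admissible choice of $n$ and $x$:
\begin{verbatim}
# Quick numerical check (ours) of the two entropy bounds for one (n, x).
from math import comb, log2

def H(x):
    return -x * log2(x) - (1 - x) * log2(1 - x)

n, x = 200, 0.1
k = int(x * n)
assert comb(n, k) < 2 ** (H(x) * n)
assert sum(comb(n, i) for i in range(k + 1)) < 2 ** (H(x) * n)
print("entropy bounds hold for n =", n, "and x =", x)
\end{verbatim}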
{\bf Detailed definition of constants.}
Set
\begin{equation} \label{nu''def}\nu''=\frac{\eta}{1000}\end{equation}
and suppose that $\nu'_1$ and $t_2$ are the outputs of Theorem \ref{km} with input $\nu''$. Put
\begin{equation} \label{nu'}
\nu'=\min\{\nu'_1, \nu''\} \quad \hbox{ and } \quad \nu=(\nu')^4.\end{equation}
We choose
\begin{equation} \label{theta}
\theta=\frac{\nu}{4(1-\nu)}.\end{equation}
Choose $\sigma_1$ small enough so that
\begin{equation} \label{sigma}
\left(1-\frac{\nu}{2}\right)\frac{n^3}{27}+o(n^3)+H(\sigma)n^3\le \left(1-\frac{\nu}{3}\right)\frac{n^3}{27}\end{equation}
holds for sufficiently large $n$. In fact the function denoted by
$o(n^3)$ will actually be seen to be of order $O(n^2)$ so
(\ref{sigma}) will hold for sufficiently large $n$. Choose $\sigma_2$ small enough so that (\ref{x}) holds for $\sigma_2$. Let
$$\sigma=\min\{\sigma_1, \sigma_2\}.$$
Next we consider the Triangle Counting Lemma (Lemma \ref{tlemma}) which provides an
$\epsilon$ for each $\theta$ and $l$. Since $\theta$ is fixed, we may let
$\epsilon_1=\epsilon_1(l)$ be the output of Lemma \ref{tlemma} for each integer $l$.
For $\sigma$ defined above, set
\begin{equation} \label{alpha}\delta_1=\alpha_0=\frac{\sigma}{100} \quad
\hbox{ and } \quad t_1=\left\lceil \frac{1}{\delta_1} \right\rceil.\end{equation}
Let
$$t_0=\max\{t_1, t_2, 33\}.$$
Now consider the Embedding Lemma (Lemma \ref{elemma}) with inputs $k=5$ and $\alpha_0$
defined above. The Embedding Lemma gives $\delta_2=\delta_2(\alpha_0)$, and
we set
\begin{equation} \label{delta} \delta=\min\{\delta_1, \delta_2\}, \quad
\quad \gamma=\delta^4, \quad \quad l_0=\frac{2}{\delta}.\end{equation}
For each integer $l>\frac{1}{\delta}$, let $r=r(l)$ and
$\epsilon_2=\epsilon_2(l)$ be the outputs of Lemma~\ref{elemma}. Set
\begin{equation} \label{epsilonl}\epsilon=\epsilon(l)=\min\{\epsilon_1(l), \epsilon_2(l)\}.\end{equation}
With these constants, the Regularity Lemma (Theorem \ref{rl}) outputs $N_0$. We choose
$n_0$ such that $n_0>N_0$ and every $n>n_0$ satisfies
(\ref{sigma}).
\medskip
{\bf Proof of Theorem \ref{stablef}.}
We will prove that $$|Forb(n,F_5)-Forb(n, F_5, \eta)|<2^{(1-\frac{
\nu}{3})\frac{n^3}{27}}.$$ This is of course equivalent to Theorem
\ref{stablef}.
For each ${{\cal G}} \in Forb(n,F_5)-Forb(n, F_5, \eta)$, we use the
Hypergraph Regularity Lemma, Theorem~\ref{rl}, to obtain a $(\delta,
r)$-regular $(l, t, \gamma, \epsilon)$-partition ${\cal P}={\cal P}_{{{\cal G}}}$.
The input constants for Theorem~\ref{rl} are as defined above and
then Theorem \ref{rl} guarantees constants $T_0, L_0, N_0$ so that
every $3$-graph ${{\cal G}}$ on $n>N_0$ vertices admits a $(\delta,
r)$-regular $(l, t, \gamma, \epsilon)$-partition ${\cal P}$ where $t_0
\le t \le T_0$ and $l_0\le l \le L_0$. To this partition ${\cal P}$,
associate a {\em density vector} $s=(s_{\{i,j,k\}_{a,b,c}})$ where $1
\le i<j<k\le t$ and $1\le a,b,c \le l$ and
$$d_{{{\cal G}}}(P_a^{ij} \cup P_b^{jk} \cup P_c^{ik})\in [s_{\{i,j,k\}_{a,b,c}}\delta, (s_{\{i,j,k\}_{a,b,c}}+1)\delta].$$
For each ${{\cal G}} \in
Forb(n,F_5)-Forb(n, F_5, \eta)$, choose one $(\delta, r)$-regular $(l, t,
\gamma, \epsilon)$-partition ${\cal P}_{{{\cal G}}}$ guaranteed by
Theorem~\ref{rl}, and let ${\cal P}=\{{\cal P}_1, \ldots, {\cal P}_p\}$ be the set
of all such partitions over the family $Forb(n,F_5)-Forb(n, F_5, \eta)$.
Define
an equivalence relation on $Forb(n,F_5)-Forb(n, F_5, \eta)$ by letting ${{\cal G}}\sim
{{\cal G}}'$ iff
1) ${\cal P}_{{{\cal G}}}={\cal P}_{{{\cal G}}'}$ and
2) ${{\cal G}}$ and ${{\cal G}}'$ have the same density vector.
The number of equivalence classes $q$ is the number of partitions
times the number of density vectors. Consequently,
$$q\le \left({T_0 +1\choose 2}(L_0+1)\right)^{n \choose 2}
\left(\frac{1}{\delta}\right)^{{T_0+1 \choose 2}(L_0+1)^3}<2^{O(n^2)}.$$
We will show that each equivalence class $C({\cal P}_i, s)$ satisfies
\begin{equation} \label{C} |C({\cal P}_i, s)|\le 2^{(1-\frac{\nu}{2})\frac{n^3}{27}+H(\sigma)n^3}.
\end{equation}
Combined with the upper bound for $q$ and (\ref{sigma}), we obtain
$$|Forb(n,F_5)-Forb(n, F_5, \eta)|\le 2^{O(n^2)}2^{(1-\frac{\nu}{2})\frac{n^3}{27}+H(\sigma)n^3}\le
2^{(1-\frac{\nu}{3})\frac{n^3}{27}}.$$
For the rest of the proof, we fix an equivalence class $C=C({\cal P}, s)$ and we will show the upper bound in (\ref{C}).
We may assume that ${\cal P}$ has vertex partition
$[n]=V_0\cup V_1\cup \cdots \cup V_t$, $|V_i|=m=\lfloor \frac{n}{t}\rfloor$ for all $i\ge 1$,
and system of bipartite graphs $P_{a}^{ij}$, where $1\le i<j\le t, 0\le a\le l_{ij} \le l$.
Fix ${{\cal G}} \in C$. Let ${\cal E}_0\subset {{\cal G}}$ be the set of triples that either
(i) intersect $V_0$, or
(ii) have at least two points in some
$V_i, i\ge 1$, or
(iii) contain a pair in $P_0^{ij}$ for some $i,j$, or
(iv) contain a pair in some $P_{a}^{ij}$ that is not $\epsilon$-regular with density $\frac1l$.
Then
$$|{\cal E}_0|\le tn^2+t\left(\frac{n}{t}\right)^2 n +\gamma{t \choose 2}n+2\gamma{t \choose 2}\left(\frac{n}{t}\right)^2 n.$$
Let ${\cal E}_1 \subset {{\cal G}}-{\cal E}_0$ be the set of triples $\{v_i, v_j, v_k\}$ such that either
(i) the three bipartite graphs of ${\cal P}$ associated with the pairs within the triple form
a triad $P$ that is not $(\delta, r)$-regular with respect to ${{\cal G}}(\{i,j,k\})$, or
(ii) the density $d_{{{\cal G}}}(P)<\alpha_0$.
Then
$$|{\cal E}_1|\le 2\delta t^3\left(\frac{n}{t}\right)^3(1+\theta) +\alpha_0
{t \choose 3}l^3\left(\frac{n}{t}\right)^3 \frac{1}{l^3}.$$
Let ${\cal E}_{{{\cal G}}}={\cal E}_0\cup {\cal E}_1$.
Now (\ref{alpha}) and (\ref{delta}) imply that
$$|{\cal E}_{{{\cal G}}}|\le \sigma n^3.$$
Set ${{\cal G}}'={{\cal G}}-{\cal E}_{{{\cal G}}}$.
Next we define ${\cal J}^C={\cal J}^C({{\cal G}})\subset [t]^3 \times [l] \times [l] \times [l]$ as follows:
For $1 \le i <j <k \le t,\ 1 \le a,b,c \le l$, we have $\{i,j,k\}_{a,b,c} \in {\cal J}^C$ if and only if
(i) $P=P_a^{ij} \cup P_b^{jk} \cup P_c^{ik}$ is an $(l, \epsilon,
3)$-cylinder, and
(ii) ${{\cal G}}'(\{i,j,k\})$ is $(\overline{\alpha}, \delta, r)$-regular with respect to $P$, where
$\overline{\alpha}\ge \alpha_0$.
We view ${\cal J}^C$ as a multiset of triples on $[t]$. For each
$\phi:{[t]\choose 2} \rightarrow [l]$, let ${\cal J}_{\phi}\subset {\cal J}^C$ be the
$3$-graph on $[t]$ corresponding to the function $\phi$ (without
parallel edges). In other words, $\{i,j,k\} \in {\cal J}_{\phi}$ iff the triples of ${{\cal G}}$
that lie on top of the triangles of $P_{a}^{ij} \cup P_{b}^{jk} \cup
P_{c}^{ik}\ $, $a=\phi(ij),\ b=\phi(jk),\ c=\phi(ik)$,
are $(\overline{\alpha}, \delta, r)$-regular and the underlying bipartite graphs $P_{a}^{ij}, P_{b}^{jk}, P_{c}^{ik}$
are all $\epsilon$-regular with density $1/l$.
By our choice of the constants in (\ref{delta}) and (\ref{epsilonl}),
we see that any ${\cal F}\subset {\cal J}_{\phi}$ with five vertices is a cluster $3$-graph for ${{\cal G}}$, and hence by the Embedding Lemma ${\cal F} \subset {{\cal G}}$.
Since $F_5 \not\subset {{\cal G}}$, we conclude that $F_5 \not\subset {\cal J}_{\phi}$.
It was shown in \cite{KM} that for $t \ge 33$, we have ex$(t, F_5)\le \frac{t^3}{27}$.
Since we know that $t \ge 33$, we conclude that
$$|{\cal J}_{\phi}|\le \hbox{ex} (t, F_5) \le \frac{t^3}{27}$$ for each $\phi :{[t]\choose 2} \rightarrow [l]$. Recall from (\ref{nu'}) that $\nu'=\nu^{1/4}$.
\begin{lemma}\label{markov}
Suppose that $|{\cal J}^C|>(1-\nu)\frac{l^3t^3}{27}$. Then for at least $(1-\nu') l^{{t \choose 2}}$ of the
functions $\phi:{[t]\choose 2} \rightarrow [l]$ we have
$$|{\cal J}_{\phi}| \ge (1-\nu')\frac{|{\cal J}^C|}{l^3}.$$
\end{lemma}
\begin{proof} Form the following bipartite graph: the vertex partition is
$\Phi \cup {\cal J}^C$, where
$$\Phi=\left\{\phi: {[t]\choose 2} \rightarrow [l]\right\}$$
and the edges are of the form $\{\phi, \{i,j,k\}_{abc}\}$ if and only if $\phi \in \Phi$,
$\{i,j,k\}_{abc}\in {\cal J}^C$ where $\phi(\{i,j\})=a,\ \phi(\{j,k\})=b,\ \phi(\{i,k\})=c$.
Let $E$ denote the number of edges in this bipartite graph. Since each $\{i,j,k\}_{abc} \in {\cal J}^C$ has degree precisely
$l^{{t \choose 2}-3}$, we have
$$E=|{\cal J}^C| l^{{t \choose 2}-3}.$$
Note that the degree of $\phi$ is $|{\cal J}_{\phi}|$.
Suppose for contradiction that the number of $\phi$ for which
$|{\cal J}_{\phi}| \ge (1-\nu')\frac{|{\cal J}^C|}{l^3}$ is less than $(1-\nu') l^{{t \choose 2}}$.
Then since $|{\cal J}_{\xi}|\le \frac{t^3}{27}$ for each $\xi\in \Phi$, we obtain the upper bound
$$E\le (1-\nu') l^{{t \choose 2}}\frac{t^3}{27} + \nu'l^{{t \choose 2}}(1-\nu')\frac{|{\cal J}^C|}{l^3}.$$
Dividing by $l^{{t \choose 2}-3}$ then yields
$$|{\cal J}^C| \le (1-\nu')l^3\frac{t^3}{27} + \nu'(1-\nu')|{\cal J}^C|.$$
Simplifying, we obtain
$$(1-\nu'(1-\nu'))|{\cal J}^C|\le (1-\nu')l^3\frac{t^3}{27}.$$
The lower bound $|{\cal J}^C|>(1-\nu)\frac{l^3t^3}{27}$ then gives
$$(1-\nu'(1-\nu'))(1-\nu)< 1-\nu'.$$
Since $\nu'=\nu^{1/4}$, the left hand side expands to
$$1-\nu'+\nu^{1/2}-\nu+\nu^{5/4}-\nu^{3/2}>1-\nu'.$$
This contradiction completes the proof.\end{proof}
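The final inequality can also be checked numerically; the following
sketch (ours) confirms that $(1-\nu'(1-\nu'))(1-\nu)>1-\nu'$ for small
$\nu>0$ when $\nu'=\nu^{1/4}$, so the derived inequality indeed fails:
\begin{verbatim}
# Sanity check (ours): with nu' = nu^(1/4), the product
# (1 - nu'(1 - nu'))(1 - nu) expands to
# 1 - nu' + nu^(1/2) - nu + nu^(5/4) - nu^(3/2) > 1 - nu'.
for nu in [1e-2, 1e-4, 1e-8]:
    nup = nu ** 0.25
    lhs = (1 - nup * (1 - nup)) * (1 - nu)
    assert lhs > 1 - nup
print("(1 - nu'(1 - nu'))(1 - nu) > 1 - nu' for all tested nu")
\end{verbatim}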
{\bf Claim 1.}
$$|{\cal J}^C|\le (1-\nu)\frac{l^3 t^3}{27}.$$
Once we have proved Claim 1, the proof is complete by following an argument which is very
similar to that in \cite{NR}. Define
$$S^C=\bigcup_{ \{i,j,k\}_{abc} \in {\cal J}^C} {\cal K}_3(P_a^{ij} \cup P_b^{jk} \cup P_c^{ik}).$$
The Triangle Counting Lemma implies that $|{\cal K}_3(P_a^{ij} \cup P_b^{jk} \cup P_c^{ik})| <\frac{m^3}{l^3}(1+\theta)$.
Now Claim 1 and $(\ref{theta})$ give
$$|S^C| \le \frac{m^3}{l^3}(1+\theta)|{\cal J}^C|\le m^3(1+\theta)(1-\nu)\frac{t^3}{27}<m^3\frac{t^3}{27}
\left(1-\frac{\nu}{2}\right)\le \frac{n^3}{27}\left(1-\frac{\nu}{2}\right).$$
Since ${{\cal G}}' \subset S^C$ for every ${{\cal G}} \in C$,
$$|\{{{\cal G}}': {{\cal G}} \in C\}|\le 2^{(1-\frac{\nu}{2})\frac{n^3}{27}}.$$
Each ${{\cal G}} \in C$ can be written as ${{\cal G}}={{\cal G}}' \cup {\cal E}_{{{\cal G}}}$. In view of (\ref{x}) and
$|{\cal E}_{{{\cal G}}}|\le \sigma n^3$, the number of ${\cal E}_{{{\cal G}}}$ with ${{\cal G}} \in
C$ is at most $\sum_{i\le \sigma n^3} {n^3 \choose i}\le
2^{H(\sigma)n^3}$.
Consequently,
$$|C| \le 2^{(1-\frac{\nu}{2})\frac{n^3}{27}+H(\sigma)n^3}
$$
and we are done.
{\bf Proof of Claim 1.} Suppose to the contrary that $|{\cal J}^C|>
(1-\nu)\frac{l^3 t^3}{27}$.
We apply Lemma~\ref{markov} and conclude that for most functions $\phi$ the corresponding triple
system ${\cal J}_{\phi}$ satisfies
$$|{\cal J}_{\phi}| \ge (1-\nu')\frac{|{\cal J}^C|}{l^3} > (1-\nu')(1-\nu)\frac{t^3}{27}>(1-2\nu')\frac{t^3}{27}.$$
By Theorem \ref{km}, we conclude that for all of these $\phi$, the
triple system ${\cal J}_{\phi}$ has a $3$-partition where the number of
non-crossing edges is at most $\nu'' t^3$. We also conclude that
the number of crossing triples that are not edges of ${\cal J}_{\phi}$ is
at most
\begin{equation} \label{nu''} \left(\frac{2\nu'}{27}+\nu''\right)t^3<\frac{5}{3}\nu''t^3. \end{equation}
Fix one such $\phi$ and let the optimal partition of ${\cal J}_{\phi}$ be $P_{\phi}=X \cup Y \cup Z$.
Let $P=V_X \cup V_Y \cup V_Z$ be the corresponding vertex partition of $[n]$.
In other words, $V_X$ consists of the union of all those parts $V_i$ for which $i \in X$ etc.
We will show that $P$ is a partition of $[n]$ where the number of
non-crossing edges $|D_P|$ is fewer than $\eta n^3$. This contradicts the fact that
${{\cal G}} \in Forb(n,F_5)-Forb(n, F_5, \eta)$ and completes the proof of Theorem \ref{stablef}.
We have argued earlier that $|{\cal E}_{{{\cal G}}}|\le \sigma n^3 \le
\frac{\eta}{2}n^3$ so it suffices to prove that $|D_P -{\cal E}_{{{\cal G}}}|\le
\frac{\eta}{2}n^3$.
Call a $\xi: {[t]\choose 2} \rightarrow [l]$ {\it good} if it satisfies the
conclusion of Lemma~\ref{markov}, otherwise call it {\it bad}. For
each $\xi$ and edge $\{i,j,k\} \in {\cal J}_{\xi}$, we have $a,b,c$
defined by $a=\xi(\{i,j\})$ etc. Let ${{\cal G}}_{\xi}$ be the union, over
all $\{i,j,k\} \in {\cal J}_{\xi}$, of the edges of ${{\cal G}}$ that lie on top
of the triangles in $P_{a}^{ij} \cup P_{b}^{jk} \cup P_{c}^{ik}$.
Let $D_{\xi}$ be the set of edges in ${{\cal G}}_{\xi}$ that are
non-crossing with respect to $P=V_X \cup V_Y \cup V_Z$. We will
estimate $|D_P-{\cal E}_{{{\cal G}}}|$ by summing $|D_{\xi}|$ over all $\xi$.
Note that each $e\in D_P-{\cal E}_{{{\cal G}}}$ lies in exactly $l^{{t
\choose 2}-3}$ different $D_{\xi}$ due to the definition of ${\cal J}^C$.
Summing over all $\xi$ gives
$$l^{{t \choose 2}-3} |D_P-{\cal E}_{{{\cal G}}}| =\sum_{\xi:{[t]\choose 2} \rightarrow [l]}|D_{\xi}|\le \sum_{\xi\ good} |D_{\xi}|
+\sum_{\xi\ bad} |D_{\xi}|.$$ Note that for a given edge $\{i,j,k\}
\in {\cal J}_{\phi}$ the number of edges in ${{\cal G}}_{\phi}$ corresponding to
this edge is the number of edges in $V_i \cup V_j \cup V_k$ on top
of triangles formed by the three bipartite graphs, each of which is
$\epsilon$-regular of density $1/l$. By the Triangle Counting
Lemma, the total number of such triangles is at most
$$2|V_i||V_j||V_k|\left(\frac1l\right)^3<2\left(\frac{n}{t}\right)^3 \left(\frac1l\right)^3.$$
By Lemma~\ref{markov}, the number of bad $\xi$ is at most $\nu'
l^{{t\choose 2}}$. So we have
$$\sum_{\xi\ bad} |D_{\xi}|\le \nu' l^{{t\choose 2}}{t \choose 3}2
\left(\frac{n}{t}\right)^3 \left(\frac1l\right)^3<\nu'l^{{t\choose
2}-3}n^3.$$ It remains to estimate $\sum_{\xi\ good} |D_{\xi}|$.
Fix a good $\xi$ and let the optimal partition of ${\cal J}_{\xi}$ be $P_{\xi}=A \cup B \cup C$
(recall that we know the number of non-crossing edges with respect to this partition is less than $\nu''t^3$).
{\bf Claim 2.} The number of crossing edges of $P_{\xi}$ that are
non-crossing edges of $P_{\phi}$ is at most $100\nu''t^3$.
Suppose that Claim 2 was true. Then we would obtain
$$\sum_{\xi \ good} |D_{\xi}| \le l^{{t\choose 2}}\left[100\nu''t^3(\frac{n}{t})^3\frac{2}{l^3}+
\nu''t^3(\frac{n}{t})^3\frac{2}{l^3}\right]\le l^{{t\choose
2}-3}\left[202\nu''n^3\right].$$ Explanation: We consider the
contribution from the non-crossing edges of $P_{\phi}$ that are (i)
crossing edges of $P_{\xi}$ and (ii) non-crossing edges of
$P_{\xi}$. We do not need to consider the contribution from the
crossing edges of $P_{\phi}$ since by definition, these do not give
rise to edges of $D_P$.
Altogether, using (\ref{nu''def}) we obtain
$$|D_P-{\cal E}_{{{\cal G}}}| \le (202\nu''+\nu')n^3<\frac{\eta}{2} n^3$$
and the proof is complete. We now prove Claim 2.
{\bf Proof of Claim 2.} Suppose for contradiction that the number
of crossing edges of $P_{\xi}$ that are non-crossing edges of
$P_{\phi}$ is more than $100\nu'' t^3$. Each of these edges intersects at most $3{t \choose 2}$ other edges of ${\cal J}_{\xi}$,
so by the greedy algorithm we can find a
collection of at least $50\nu''t$ of these edges that form a matching $M$. Pick one such edge $e=\{k,k', k''\}\in M$ and assume that $k$ and $k'$ lie in the same part $U$ of $P_{\phi}$. Let $d$ be the number of ways to choose a set of two triples $\{f, f'\}$ with $f=\{i,j,k\}, f'=\{i,j,k'\}$, $i,j \not\in U\cup \{k''\}$, such that $i$ and $j$ lie in distinct parts of $P_{\phi}$.
Since $|{\cal J}_{\phi}|>(1-2\nu')\frac{t^3}{27}$, $|D_{P_{\phi}}|\le \nu'' t^3$ and $\nu', \nu''$ are sufficiently small
$$d\ge (\min\{|X|, |Y|, |Z|\} -1)^2\ge \frac{t^2}{10}.$$ As $\{e,f,f'\}\cong F_5$
there
are at least $d$ potential copies of $F_5$ that we can
form using $e$ and two crossing triples $f,f'$ of $P_{\phi}$. Suppose that $f=\{i,j,k\}, f'=\{i,j,k'\}$
are both in ${\cal J}_{\phi}$ for one such choice of $\{f,f'\}$. Consider the following eight bipartite graphs:
$$G^{ij}=P_{\phi(\{i,j\})}^{ij}, \quad G^{jk}=P_{\phi(\{j,k\})}^{jk}, \quad G^{ik}=
P_{\phi(\{i,k\})}^{ik}, \quad G^{jk'}=P_{\phi(\{j,k'\})}^{jk'}, \quad
G^{ik'}=P_{\phi(\{i,k'\})}^{ik'},$$
$$G^{kk'}=P_{\xi(\{k, k'\})}^{kk'}, \quad G^{k'k''}=P_{\xi(\{k', k''\})}^{k'k''}, \quad G^{kk''}=P_{\xi(\{k, k''\})}^{kk''}.$$
Set $G=\bigcup G^{uv}$ where the union is over the eight bipartite graphs defined above.
Since $\{e,f,f'\} \subset {\cal J}_{\phi} \cup {\cal J}_{\xi}$, the 3-graph $J=\{e, f,f'\}$ associated with $G$ and ${{\cal G}}$
is a cluster 3-graph.
By (\ref{delta}) and (\ref{epsilonl}), we may apply the Embedding Lemma and obtain the contradiction $F_5 \subset {{\cal G}}$.
We conclude that $f''\not\in {\cal J}_{\phi}$ for some $f''\in \{f, f'\}$.
To each $e \in M$ we have associated at least $d$ triples $f''\not\in {\cal J}_{\phi}$.
Since $M$ is a matching and $|e \cap f''| = 1$, each such $f''$ is counted at most three times.
Summing over all $e\in M$, we obtain at least $\frac{|M|d}{3}\ge \frac{5}{3}\nu'' t^3$ triples $f''$ that are crossing with
respect to $P_{\phi}$ but are not edges of ${\cal J}_{\phi}$. This contradicts (\ref{nu''}) and completes the proof. \qed
\section{Proof of Theorem \ref{mainf}}\label{proofmain}
In this section we complete the proof of Theorem \ref{mainf}. We
begin with some preliminaries.
\subsection{Inequalities}
We shall use Chernoff's inequality as follows:
\begin{theorem}\label{chernoff}
Let $X_1,\ldots,X_m$ be independent $\{0,1\}$ random variables with
$P(X_i=1)=p$ for each $i$. Let $X=\sum_i X_i$. Then the following
inequality holds for
$a>0$:\\
$$P(X < {\mathbb E} X - a) < \exp(-a^2/(2pm)).$$
\end{theorem}
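A quick Monte Carlo illustration of this bound (ours, not part of the
proof):
\begin{verbatim}
# Monte Carlo illustration (ours) of the bound
# P(X < EX - a) < exp(-a^2/(2pm)) for X a sum of m Bernoulli(p) variables.
import math, random

rng = random.Random(7)
m, p, a, trials = 2000, 0.5, 90, 1000
hits = sum(1 for _ in range(trials)
           if sum(rng.random() < p for _ in range(m)) < m * p - a)
print("empirical:", hits / trials, " Chernoff bound:",
      math.exp(-a * a / (2 * p * m)))
\end{verbatim}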
We will use the following easy statement.
\begin{lemma}\label{matching}
Every graph $G$ with $n$ vertices contains a matching of size at
least $\frac{|G|}{2n}$.
\end{lemma}
\begin{proof}
Assume that there is a maximal matching of size $r$. The $2r$
vertices of the matching can cover at most $2rn$ edges, and by the
maximality of the matching there is no other edge in $G$.
\end{proof}
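Any maximal matching achieves this bound, so a single greedy pass over
the edge list suffices; an illustrative sketch (ours):
\begin{verbatim}
# Illustration (ours): any greedy/maximal matching in a graph with |G|
# edges on n vertices has size at least |G|/(2n), as in the lemma.
import random

def greedy_matching(edges):
    used, M = set(), []
    for (u, v) in edges:
        if u not in used and v not in used:
            M.append((u, v))
            used.update((u, v))
    return M

rng = random.Random(3)
n = 50
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if rng.random() < 0.2]
M = greedy_matching(edges)
assert len(M) >= len(edges) / (2 * n)
print(len(edges), "edges ->", len(M), "matched; bound", len(edges) / (2 * n))
\end{verbatim}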
Recall that $T(n)$ is the number of $3$-partite $3$-graphs with vertex set $[n]$ and $s(n)=\lfloor\frac{n+2}{3}\rfloor\cdot \lfloor\frac{n+1}{3}\rfloor\cdot
\lfloor\frac{n}{3}\rfloor.$ For a 3-partition $A, B, C$ of a 3-graph, and $u \in A, v \in B$, write $L_C(u,v)$ or simply $L(u,v)$ for the set of $w \in C$ such that $uvw$ is an edge.
As usual, the multinomial coefficient ${n \choose a,b,c}=\frac{n!}{a!b!c!}$.
\begin{lemma}\label{tnincreasing}As $n \rightarrow \infty$ we have
\begin{equation} \label{T(n)}
\left(\frac{1}{6}-o(1)\right) \ \binom{n}{\lfloor\frac{n+2}{3}\rfloor,
\lfloor\frac{n+1}{3}\rfloor, \lfloor\frac{n}{3}\rfloor} 2^{s(n)}\ < T(n) \ < \
3^n 2^{s(n)}.\end{equation} In addition, \begin{equation}\label{tnn-2} T(n-2)<\left(n^{2} 2^{-\frac{2n^2}{9}+n}\right) T(n).\end{equation}
\end{lemma}
\begin{proof}
For the upper bound in (\ref{T(n)}), observe that $3^n$ counts the number of $3$-partitions
of the vertices, and the exponent is the maximum number of
crossing edges that a $3$-partite 3-graph can have.
For the lower
bound we count the number of (unordered) $3$-partitions where this
equality can be achieved. Each such $3$-partition gives rise to $2^{s(n)}$ $3$-partite $3$-graphs.
The number of such 3-partitions of $[n]$ is at least
$$\frac{1}{6} \ \binom{n}{\lfloor\frac{n+2}{3}\rfloor,
\lfloor\frac{n+1}{3}\rfloor, \lfloor\frac{n}{3}\rfloor}.$$
We argue next that most of the 3-partite 3-graphs obtained in this way are different. More precisely, we show below that for any given 3-partition $P$ as above, most 3-partite 3-graphs with 3-partition $P$ have a unique 3-partition (which must be $P$).
Given a $3$-partition $U_1,U_2,U_3$ of $[n]$, if the crossing edges are added randomly, then Chernoff's inequality gives that almost all 3-graphs generated
satisfy the following two conditions:
(i) for all $ u\in U_i, v\in U_j$, where $\{i,j,\ell\}=\{1,2,3\}$ we have $|L_{U_\ell}(u,v)| > n/10$
(ii) for $\{i,j,\ell\}=\{1,2,3\}$ and for every $ A_i \subset U_i, A_j\subset U_j$ with $|A_i|, |A_j|> n/10$ and $v\in U_\ell$, the number of crossing edges
intersecting each of $A_i,A_j$ and containing $v$ is at least $|A_i||A_j|/10$.
If $\mathcal{H}$ has $3$-partition $U_1,U_2,U_3$ of $[n]$, and it satisfies conditions (i) and (ii), then the $3$-partition is unique. Indeed, take $u,v$ lying in an edge; then $u$, $v$ and $L(u,v)$ are in different parts, where
$|L(u,v)|> n/10$, so for $w\in L(u,v)$, $L(u,w)$ is in the same part as $v$ and $L(v,w)$ is in the same part as $u$. Now by (ii) the rest of the vertices must lie in a unique part.
To prove \eqref{tnn-2} first note that if $a+b+c=n$, then ${n \choose a,b,c}$ is maximized for $a=\lfloor (n+2)/3\rfloor, b=\lfloor (n+1)/3\rfloor, c=\lfloor n/3\rfloor$. This implies that
$$3^n=\sum_{a+b+c=n} {n \choose a,b,c} \le {n+2 \choose 2} \binom{n}{\lfloor\frac{n+2}{3}\rfloor,
\lfloor\frac{n+1}{3}\rfloor, \lfloor\frac{n}{3}\rfloor}<(0.6)n^2
\binom{n}{\lfloor\frac{n+2}{3}\rfloor,
\lfloor\frac{n+1}{3}\rfloor, \lfloor\frac{n}{3}\rfloor}
.$$
Together with \eqref{T(n)} we obtain
$$
\frac{T(n-2)}{T(n)} < \frac{3^{n-2} 2^{s(n-2)}}{(\frac{1}{6}-o(1)) \binom{n}{\lfloor\frac{n+2}{3}\rfloor,
\lfloor\frac{n+1}{3}\rfloor, \lfloor\frac{n}{3}\rfloor} 2^{s(n)}} < n^2 2^{s(n-2)-s(n)}.$$
It is easy to see that $s(n)-s(n-2)\ge 2n^2/9-n$, and the result follows.
\end{proof}
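Both elementary facts used in this proof can be verified numerically
for moderate $n$; a sketch (ours):
\begin{verbatim}
# Numerical check (ours) of two facts used above: s(n)-s(n-2) >= 2n^2/9 - n,
# and the multinomial coefficient is maximised at near-equal part sizes.
from math import comb

def s(n):
    return (n // 3) * ((n + 1) // 3) * ((n + 2) // 3)

def multinomial(n, a, b, c):
    return comb(n, a) * comb(n - a, b)

for n in range(10, 60):
    assert s(n) - s(n - 2) >= 2 * n * n / 9 - n
    best = max(((a, b, n - a - b) for a in range(n + 1)
                for b in range(n + 1 - a)),
               key=lambda t: multinomial(n, *t))
    assert sorted(best) == sorted([(n + 2) // 3, (n + 1) // 3, n // 3])
print("s(n) gap and multinomial maximiser verified for 10 <= n < 60")
\end{verbatim}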
\subsection{Lower Density}
\begin{definition}\label{defdensity}
A vertex partition $U_1,U_2,U_3$ of a 3-graph $\mathcal{F}$ is {\it $\mu$-lower dense}
if each of the following conditions
is satisfied:\\
(i) For every $i$ if $A_i\subset U_i$ with $|A_i|\ge \mu n$ then
$$|\{E\in\mathcal{F}: \ |E\cap A_i|=1, \text{ for } 1\le i\le 3\}|\ >\ |A_1| \cdot |A_2| \cdot |A_3| \cdot 2^{-3}.$$
(ii) Let $\{i,j,\ell\}=\{1,2,3\}$, $A_i\subset U_i$ with $|A_i|\ge
\mu n$, $G\subset U_j\times U_\ell$ with
$|G|\ge \mu^2 n^2.$
Then
$$|\{E\in\mathcal{F}: \ |E\cap A_i|=1,\ E-A_i\in G\}|\ >\ |A_i|\cdot |G| \cdot 2^{-3}.$$
(iii) Let $\{i,j,\ell\}=\{1,2,3\}$, $A_i\subset U_i$ and $A_j\subset
U_j$ with $|A_i|,|A_j|\ge \mu n$, and
$G$ be a matching on
$U_\ell$ with $|G|\ge \mu n.$
Set
$$\mathcal{F}_{A_i, A_j, G}=\{ \{C, D\} \in \mathcal{F}^2: C-U_\ell=D-U_\ell,\
|C\cap A_i|=|C\cap A_j|=1,\{(C\cap U_\ell),( D\cap U_\ell)\}\in G\}.$$
Then $$|\mathcal{F}_{A_i, A_j, G}| \ge \frac{|A_i|\cdot |A_j|\cdot |G|}{
2^{7}}.$$ (iv) For every $i$ we have $||U_i|-n/3|<\mu n$.
\end{definition}
For $\mu>0$ let $Forb(n,F_5,\eta,\mu)\subset Forb(n,F_5,\eta)$ be the
family of $\mu$-lower dense hypergraphs.
\begin{lemma}\label{density}
For every $\eta$, if $\mu^3\ge 10^3H(6\eta)$, then for $n$ large enough
$$|Forb(n,F_5,\eta)-Forb(n,F_5,\eta,\mu)|\ < \ 2^{n^3(1/27-\mu^3/40)}.$$
\end{lemma}
\begin{proof}
We wish to count the number of $\mathcal{H} \in Forb(n, F_5, \eta)-Forb(n,
F_5, \eta, \mu)$. The number of ways to choose a $3$-partition of
$\mathcal{H}$ is at most $3^n$. Given a particular $3$-partition $P=(U_1,
U_2, U_3)$, the number of ways the at most $\eta n^3$ bad edges
could be placed is at most
$$\sum_{i\le \eta n^3} {{n \choose 3} \choose i}< 2^{H(6\eta){n \choose 3}}.$$
If $||U_i|-n/3|>\mu n$ for some $i$, then the number of possible
crossing edges is at most
$$n^3(1/27-\mu^2/4+\mu^3/4)<n^3(1/27-\mu^2/5).$$ We conclude that the number of
$\mathcal{H} \in Forb(n, F_5, \eta)-Forb(n, F_5, \eta, \mu)$ for which there exists a partition that
fails property (iv) is at most
$$ f(n, \eta) 2^{n^3(1/27-\mu^2/5)},$$
where
$$f(n, \eta)=3^n \cdot 2^{H(6\eta){n \choose 3}}.$$
Since $\mathcal{H} \not\in Forb(n, F_5, \eta, \mu)$ it fails to satisfy one of the four conditions in Definition \ref{defdensity}.
For a fixed partition $P$ and choice of bad edges, we may view $\mathcal{H}$ as a probability space where we choose
each crossing edge with respect to $P$ independently with probability $1/2$.
The total number of ways to choose the crossing edges is at most $2^{n^3/27}$ (an upper bound on the size of the
probability space) so we obtain that $|Forb(n,F_5,\eta)-Forb(n,F_5,\eta,\mu)|$ is upper bounded by
$$f(n, \eta) \cdot 2^{n^3/27} \cdot Prob(\mathcal{H} \hbox{ fails (i) or (ii) or (iii)})+f(n, \eta) 2^{n^3(1/27-\mu^2/5)}.$$
We will consider each of these probabilities separately and then use the union bound. First however, note that
the number of
choices for $A_i \subset U_i$ as in Definition \ref{defdensity} is at most $2^n$ and the number of ways $G$
could be chosen is at most $2^{n^2}$.
(i) Since $|A_1||A_2||A_3| \ge \mu^3 n^3$, Chernoff's inequality
gives
$$Prob(\mathcal{H} \hbox{ fails (i)}) \le 2^{3n} \cdot
\exp(-\mu^3 n^3/16).$$
(ii) Since $|A_i||G| \ge \mu^3 n^3$, Chernoff's inequality gives
$$Prob(\mathcal{H} \hbox{ fails (ii)}) \le 2^{n} \cdot 2^{n^2}\cdot \exp(-\mu^3 n^3/16).$$
(iii) Since $|A_i||A_j||G| \ge \mu^3 n^3$ and both edges $C$ and $D$ must be present, we apply
Chernoff's inequality with $m=|A_i||A_j||G|/2$ and $p=1/4$. The number of matchings $G$ is at most $(n^2)^{n/2}=2^{n\log_2 n}$, so
$$Prob(\mathcal{H} \hbox{ fails (iii)}) \le 2^{2n} \cdot 2^{n\log_2 n}\cdot \exp(-\mu^3 n^3/32).$$
The lemma now follows since $10^3H(6\eta)\le \mu^3,$ and $n$ is sufficiently large.
\end{proof}
\subsection{There is no bad vertex}
Let $\mathcal{H} \in Forb(n,F_5,\eta,\mu)$, assume that $n$ is large enough, and
let $U_1,U_2,U_3$ be an optimal
partition of $\mathcal{H}$, with $x\in U_1$.
For a vertex $y$ let $L_{i,j}(y)$ denote the set of edges of $\mathcal{H}$
containing $y$, and additionally intersecting $U_i$ and $U_j$. In
particular, $L_{i,i}(y)$ is the set of edges of $\mathcal{H}$ which contain
$y$, and their other vertices are in $U_i$.
The aim of this subsection is to prove the following lemma, which shows that the number of bad edges
containing a vertex is small.
\begin{lemma}\label{lowdegree}
Each of the following holds for $x\in U_1$.\\
(i) $|L_{1,1}(x)|\ <\ 2\mu n^2.$\\
(ii) $|L_{1,2}(x)|\ < \ 2\mu n^2.$\\
(iii) $|L_{2,2}(x)|\ < \ 2\mu n^2.$\\
(iv) $|L_{1,3}(x)|\ < \ 2\mu n^2.$\\
(v) $|L_{3,3}(x)|\ < \ 2\mu n^2.$
\end{lemma}
\begin{proof}
(i) If $|L_{1,1}(x)|>2\mu n^2$ then by Lemma~\ref{matching} $
\{E-x:\ E\in L_{1,1}(x)\}$ contains a matching $G$ of size at
least $\mu n$. Then using Definition~\ref{defdensity} (iii) (with
$G, A_i=U_2, A_j=U_3$) for $\mathcal{H}$, we find $y,z\in U_1, a\in U_2,
b\in U_3$ such that $xyz,yab,zab\in \mathcal{H}$,
yielding an $F_5\subset \mathcal{H}$, a contradiction.
(ii) Suppose for contradiction that $|L_{1,2}(x)|\ge 2\mu n^2$. By the optimality of the partition,
$|L_{1,2}(x)|\le
|L_{2,3}(x)|$; otherwise $x$ could be moved to $U_3$ to decrease the
number of bad edges.
We shall use property (ii) in Definition~\ref{defdensity}. We use it
with $G=\{E-x:\ E\in L_{1,2}(x)\}$ and
$$A_3=\{z\in U_3: \ \exists \hbox{ crossing edges } E_1,E_2\in \mathcal{H}
\text{ with } \{x,z\}\subset E_1\cap E_2\}.$$
Note that $|A_3|\ge
\mu n$ as $|L_{2,3}(x)|\ge 2\mu n^2$. Since $\mathcal{H}$ is $\mu$-lower
dense, we find $abz \in \mathcal{H}$ with $xab \in L_{1,2}(x)$ and $z \in
A_3$. By definition of $A_3$, there exists $b' \in U_2-\{b\}$
such that $xb'z \in L_{2,3}(x)$. This gives us $abx, abz, xb'z\in \mathcal{H}$,
forming an $F_5$.
(iii) Suppose for contradiction that $|L_{2,2}(x)|\ge 2\mu n^2$.
By Lemma~\ref{matching} $\{E-x:\ E\in L_{2,2}(x)\}$ contains a
matching $G$ of size at least $\mu n$. Then using
Definition~\ref{defdensity} (iii)
(with $G, A_i=U_1-x, A_j=U_3$)
we find $b,b'\in U_2,\ a\in U_1,\ c\in U_3$ such that $abc, ab'c, xbb'\in \mathcal{H}$,
forming an $F_5\subset \mathcal{H}$, a contradiction.
The proof of (iv) is identical to that of (ii), and the proof of (v) to that of (iii).
\end{proof}
\subsection{Getting rid of bad edges: a progressive induction}
Here we have to do something similar to the previous section;
however, as we get rid of only a few edges, the computation needed
is more delicate. We shall use progressive induction on the number of
vertices. The general idea is that we remove some vertices of a bad
edge, and count the number of ways it could
have been joined to the rest of the hypergraph.
We shall prove \eqref{induction} via induction on $n$. Fix an $n_0$
such that $1/n_0$ is much smaller than any of our constants, and all
of our prior lemmas and theorems are valid for every $n\ge n_0$. Let $C>10$ be
sufficiently large that \eqref{induction} is true for every $n\le
n_0$.
Let $Forb'(n, F_5, \eta, \mu)$ be the set of hypergraphs $\mathcal{H} \in
Forb(n,F_5,\eta,\mu)$ having an optimal partition with a bad edge.
Our final step is to give an upper bound on $|Forb'(n, F_5, \eta,
\mu)|$. There are two types of bad edges: those lying completely
inside one class, and those intersecting exactly two classes.
Let the bad edge be $xyz$, and the optimal partition be $U_1,U_2,U_3$. Without loss of generality assume that
$x,y\in U_1$.
In an $\mathcal{H} \in Forb'(n,F_5,\eta,\mu)$, the vertices $x,y,z$ could be chosen in at
most $n^3$ ways, the optimal partition of $\mathcal{H}$ in at most $3^n$ ways, and
the hypergraph $\mathcal{H}-\{x,y\}$ in at most $|Forb(n-2,F_5)|$ ways. By Lemma \ref{lowdegree} each of $|L_{1,1}(x)|,\ |L_{1,1}(y)|,\
|L_{1,2}(x)|,\ |L_{1,2}(y)|,\ |L_{1,3}(x)|,\ |L_{1,3}(y)|,\
|L_{2,2}(x)|,$ $ |L_{2,2}(y)|,\ |L_{3,3}(x)|,\ |L_{3,3}(y)|$ is at most $2\mu
n^2$, therefore the number of ways the bad edges could be joined to
$x,y$ is at most
$$\left(\sum_{i\le 2\mu n^2}\binom{n^2/2}{i}\right)^{10}\le 2^{10 H(4\mu)n^2}.$$
The key point is that for any $(u,v)\in (U_2-z)\times (U_3-z)$,
we cannot have both $xuv,yuv\in \mathcal{H}$; otherwise they would form with $xyz$ a
copy of $F_5$. Together with Definition \ref{defdensity} part (iv),
we conclude that the number of ways to choose the crossing edges containing $x$ or $y$ is at most
$$3^{|U_2||U_3|}2^{2n}\le 3^{\frac{n^2}{9}+\mu n^2}.$$
Note that the factor $2^{2n}$ estimates the number of ways of having edges containing the pairs $uz$ or $vz$, as for these pairs we do not have any restriction.
Putting this together,
\begin{equation}
|Forb'(n,F_5,\eta,\mu)|\ \le\ n^3 3^n |Forb(n-2,F_5)|\cdot 2^{10
H(4\mu)n^2} 3^{\frac{n^2}{9}+\mu n^2}.\end{equation}
By the induction hypothesis, this is at most
$$ n^3 3^n (1+2^{C(n-2)-\frac{2(n-2)^2}{45}})T(n-2)
2^{10H(4\mu)n^2}3^{\frac{n^2}{9}+\mu n^2}.$$
Using (\ref{tnn-2}) this is upper bounded by
$$ n^5 3^n\left(1+2^{C(n-2)-\frac{2(n-2)^2}{45}}\right)
2^{(90H(4\mu)+ \log_2 3+9\mu-2+\frac9n)\frac{n^2}{9}}\cdot T(n).$$
As mentioned before, the crucial point in the expression above is that $\log_2 3-2<0$.
More precisely, since $n>n_0$, $\log_2 3<1.59$ and
$90H(4\mu)+9\mu<0.001$, we have
$$\left(90H(4\mu)+ \log_2 3+9\mu-2+\frac9n\right)\frac{n^2}{9}<-\frac{2n^2}{45}.$$
Consequently,
$$ |Forb'(n,F_5,\eta,\mu)|\ \le\ n^5 3^n\left(1+2^{C(n-2)-\frac{2(n-2)^2}{45}}\right) \cdot 2^{-\frac{2n^2}{45}}T(n)< \frac{1}{10} 2^{Cn-\frac{2n^2}{45}}T(n). $$
Now we can complete the proof of \eqref{induction} by upper bounding $|Forb(n,F_5)|$ as follows:
$$ |Forb(n,F_5)-Forb(n, F_5, \eta)| + |Forb(n,F_5,\eta)-Forb(n,F_5,\eta,\mu)| + |Forb'(n,F_5,\eta,\mu)|+T(n)$$
$$ \ < \ 2^{(1-\nu)\frac{n^3}{27}} + 2^{n^3(\frac{1}{27}-\frac{\mu^3}{40})} + \frac{1}{10}
2^{Cn-\frac{2n^2}{45}}T(n) + T(n)$$ $$ <
(1+2^{Cn-\frac{2n^2}{45}})T(n),$$
where the last inequality holds due to $T(n)>2^{s(n)}>2^{\frac{n^3}{27}-O(n^2)}$.
This completes the proof of the theorem. \qed
\section{Introduction}
Regularization of supersymmetric field theory has become increasingly
important for both recent theoretical and phenomenological
developments, including, for instance,
gauge/gravity duals such as AdS/CFT and supersymmetry breaking.
One may expect that
lattice formulation, among others, would provide a promising
regularization scheme applicable to strong-coupling,
and thus constructive and nonperturbative,
analysis from first principles.
It is, however, far from straightforward to incorporate
supersymmetry on the lattice due to the discrete nature of
spacetime;
superalgebra, which prescribes supersymmetry,
contains the momentum operator, and the momentum operator should be
the generator of infinitesimal spacetime translation,
which is broken on the lattice.
To address this difficulty, various approaches have been
developed so far. (For a review
see \cite{review} and references therein.)
In this article, we present a possible formulation~\cite{DKS} of lattice
supersymmetry with a ``deformed'' notion of superalgebra
in the framework of the link approach~\cite{DKKN, ADFKS}.
This deformation can be naturally interpreted as a generalization
of Lie algebra to Hopf algebra. What we need to formulate is then
a field theory with this Hopf algebraic supersymmetry. We show
that such a formulation would be given by applying a general
formalism called braided quantum field theory (BQFT)~\cite{Oeckl}.
For this
purpose, we introduce a simply generalized statistics of fields
which is compatible with the structure of our Hopf algebra.
Supersymmetry on the lattice can now be recognized as various
sets of Ward--Takahashi identities derived by this BQFT formalism
\cite{Sasai-Sasakura}.
We will illustrate these aspects in the following,
mainly concentrating on two-dimensional non-gauge examples.
\section{Superalgebra in the link approach}
In the link approach~\cite{DKKN},
superalgebra on a two-dimensional lattice
was introduced in the form
\begin{equation}
\{Q, Q_\mu\} = i\ensuremath{\partial}_{+\mu},\qquad
\{\ensuremath{\tilde{Q}}, Q_\mu\} = -i\epsilon_{\mu\nu}\ensuremath{\partial}_{-\nu},
\label{t-algebra}
\end{equation}
with the other commutators just vanishing.
Notice that the supercharges $Q_A=Q,\ Q_\mu$
and $\ensuremath{\tilde{Q}}$
are expressed in
the Dirac--K\"ahler\ twisted basis, which essentially corresponds
to $\MC{N}=(2,2)$ supercharges in two dimensions in the normal
basis.%
\footnote{With a similar argument, we need to take
the Dirac--K\"ahler\ twisted $\MC{N}=4$ supersymmetry in
four dimensions.}
We can see that fermions in the link approach
should be geometrically distributed on the lattice
just like the Dirac--K\"ahler\
or staggered fermions, where the d.o.f.\ of possible
doublers on the lattice is essentially used as that of
extended supersymmetry through the Dirac--K\"ahler\ twisting. This is why
the twisted basis was chosen in the superalgebra above.
Another point is that the algebra \Ref{t-algebra}
contains the forward and backward finite difference
operators $\ensuremath{\partial}_{\pm\mu}$ which simply replace the momentum
operator in the continuum. These difference operators
don't obey the Leibniz rule, but obey the modified Leibniz rule
\begin{equation}
\ensuremath{\partial}_{\pm\mu}(\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2)(x)
=
\ensuremath{\partial}_{\pm\mu}\ensuremath{\varphi}_1(x)\ensuremath{\varphi}_2(x)
+
\ensuremath{\varphi}_1(x\pm a\ensuremath{\hat{\mu}})\ensuremath{\partial}_{\pm\mu}\ensuremath{\varphi}_2(x),
\label{mod-Leib-P}
\end{equation}
where $\ensuremath{\hat{\mu}}$ denotes the unit vector along the $\mu$ direction.
One might expect that, other less simple operators,
instead of the simple forward/backward difference operators,
could obey the usual Leibniz rule even on the lattice.
This is not possible, however, due to the no-go theorem
proving non-existence of such a local operator \cite{Kato-Sakamoto-So},
which implies the modified Leibniz rule is unavoidable on the lattice
when one concentrate on local field theories.
In other words, it implies that the supercharges
can't obey the normal Leibniz rule either to make the algebra
\Ref{t-algebra} hold.
In the link approach,
the following modified Leibniz rule for $Q_A$ was assumed
\begin{equation}
Q_A(\ensuremath{\varphi}_1\cdot\ensuremath{\varphi}_2)(x)
=
Q_A\ensuremath{\varphi}_1(x)\ensuremath{\varphi}_2(x)
+(-1)^\stat{\ensuremath{\varphi}_1}\ensuremath{\varphi}_1(x+a_A)Q_A\ensuremath{\varphi}_2(x),
\label{mod-Leib-Q}
\end{equation}
where $\stat{\ensuremath{\varphi}}$ is 0 (or 1) when
$\ensuremath{\varphi}$ is bosonic (or fermionic, respectively).
This already shows that the algebra \Ref{t-algebra}
doesn't form a Lie superalgebra in the usual sense,
and ``deforms'' the notion of usual superalgebra.
Then a natural question is whether we could treat
the algebra \Ref{t-algebra}
with the modified Leibniz rules \Ref{mod-Leib-P} and \Ref{mod-Leib-Q}
in a mathematically rigorous manner.
We will see shortly that the answer is affirmative;
the algebra in the link approach can be identified as a Hopf algebra,
which assures mathematical consistency especially
for the modified Leibniz rule \Ref{mod-Leib-Q}.
Another question immediately
follows: even if the algebra itself makes sense, it is still
unclear whether it corresponds to a symmetry of a local quantum field
theory as usual Lie algebra does. For this, too, we will propose
an affirmative answer.
First,
we can manage to take care of locality with a mildly generalized
statistics of fields. This statistics is in fact expressed
mathematically as (trivial) braiding.
Since quantum field theory for fields with such a braiding structure
is known to be formulated generally as BQFT~\cite{Oeckl},
we could apply it to our case.
This approach now allows us to associate our Hopf algebraic symmetry
with various sets of Ward--Takahashi identities~\cite{Sasai-Sasakura},
showing clear relations of the Hopf algebra to a symmetry
of a quantum field theory.
\section{Superalgebra on the lattice as a Hopf algebra}
Here we are going to show how the superalgebra which was originally
introduced with an extra shift structure in the link approach~\cite{DKKN}
can as a whole be rigorously identified as a Hopf algebra \cite{DKS}.
Before going into the detail,
let us briefly summarize the Hopf algebra axioms.
For a full mathematical treatment on Hopf algebra, see, for example,
\cite{Majid}.
Hopf algebra $H$ is an object which satisfies the following
four axioms.
\begin{enumerate}
\item $H$ is an algebra, namely a vector space which has
an associative product (multiplication)
$\ensuremath{\cdot}: H\ensuremath{\otimes} H\ensuremath{\rightarrow} H$, where the associativity reads
$\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\cdot}\ensuremath{\otimes}\ensuremath{\operatorname{id}})=\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\cdot})$,
and unit element
$\vid$.\footnote{Here $\ensuremath{\operatorname{id}}$ is the identity map and $\ensuremath{\circ}$
denotes composition of maps.}
\item $H$ is a coalgebra, namely a vector space which has
a coassociative coproduct (comultiplication)
$\ensuremath{\Delta}: H\ensuremath{\rightarrow} H\ensuremath{\otimes} H$, where the coassociativity reads
\begin{equation}
(\ensuremath{\Delta}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}=(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\Delta})\ensuremath{\circ}\ensuremath{\Delta},
\label{coassociativity}
\end{equation}
and counit $\ensuremath{\epsilon}: H\ensuremath{\rightarrow}\ensuremath{\mathbb{C}}$ which satisfies the condition
\begin{equation}
(\ensuremath{\epsilon}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}=(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\epsilon})\ensuremath{\circ}\ensuremath{\Delta}=\ensuremath{\operatorname{id}}.
\label{counit-condition}
\end{equation}
\item The coproduct and counit are both algebra maps, namely,
\begin{equation}
\begin{cases}
\ensuremath{\Delta}\ensuremath{\circ}\ensuremath{\cdot} = (\ensuremath{\cdot}\ensuremath{\otimes}\ensuremath{\cdot})\ensuremath{\circ}\ensuremath{\Delta},
\\
\ensuremath{\Delta}(\vid) = \vid\ensuremath{\otimes}\vid,
\end{cases}
\qquad\text{and}\qquad
\begin{cases}
\ensuremath{\epsilon}\ensuremath{\circ}\ensuremath{\cdot} = \ensuremath{\epsilon}\ensuremath{\otimes}\ensuremath{\epsilon},\\
\ensuremath{\epsilon}(\vid)=1.
\end{cases}
\label{algebra-maps}
\end{equation}
\item $H$ has an antipode $\ensuremath{\operatorname{S}}: H\ensuremath{\rightarrow} H$, which satisfies
the defining condition
\begin{equation}
\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\operatorname{S}}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}
=\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\operatorname{S}})\ensuremath{\circ}\ensuremath{\Delta}
=\vid\ensuremath{\epsilon}.
\label{antipode-condition}
\end{equation}
\end{enumerate}
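For orientation, it may help to recall the standard example of a
universal enveloping algebra, in which every Lie algebra generator $X$
is \emph{primitive} while a finite transformation $g$ is
\emph{group-like}:
\begin{equation}
\ensuremath{\Delta}(X) = X\ensuremath{\otimes}\vid + \vid\ensuremath{\otimes} X,\quad
\ensuremath{\epsilon}(X)=0,\quad \ensuremath{\operatorname{S}}(X)=-X;
\qquad
\ensuremath{\Delta}(g) = g\ensuremath{\otimes} g,\quad
\ensuremath{\epsilon}(g)=1,\quad \ensuremath{\operatorname{S}}(g)=g^{-1}.
\end{equation}
As we will see below, a primitive coproduct encodes precisely the usual
Leibniz rule, and the lattice coproducts \Ref{coproduct-Q} and
\Ref{coproduct-others} are deformations of this familiar structure.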
It is easy to see that
the superalgebra~\Ref{t-algebra} in the link approach
forms an algebra. The product of two
generators, say $Q_A$ and $Q_B$, is defined by the
successive application of $Q_B$ and $Q_A$ as in
\(
(Q_A\ensuremath{\cdot} Q_B)\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi}
:= (Q_A\ensuremath{\operatorname{\rhd}})\ensuremath{\circ}(Q_B\ensuremath{\operatorname{\rhd}})\ensuremath{\varphi},
\)%
\footnote{The ``action'' of a generator $a$ on a field $\ensuremath{\varphi}$
is denoted as $a\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi}$.}
whereas the unit operator is trivially defined as
$\vid\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi}=\ensuremath{\varphi}$.
These structures, together with the (anti-)commutation
relations \Ref{t-algebra} imposed as ``equivalence'' relations, form
a universal enveloping algebra of a sort.
To be specific, let us list explicit field representations
for this algebra, taking the example of the
$\MC{N}=(2,2)$ Wess--Zumino model in two dimensions.
The field contents are
scalar bosons $\phi,\ \sigma$, fermions
$\psi,\ \psi_\mu,\ \ensuremath{\tilde{\psi}}$ and auxiliary fields
$\ensuremath{\tilde{\phi}},\ \ensuremath{\tilde{\sigma}}$,
for which the supertransformations are as follows:
\begin{equation}
\begin{aligned}
Q\phi &= 0, &\qquad Q_\mu\phi &= \psi_\mu, &\qquad \ensuremath{\tilde{Q}}\phi &= 0, \\
Q\psi_\nu &= i\ensuremath{\partial}_{+\nu}\phi, &\qquad
Q_\mu\psi_\nu &= -\epsilon_{\mu\nu}\ensuremath{\tilde{\phi}}, &\qquad
\ensuremath{\tilde{Q}}\psi_\nu &= -i\epsilon_{\nu\mu}\ensuremath{\partial}_{-\mu}\phi, \\
Q\ensuremath{\tilde{\phi}} &= -i\epsilon_{\mu\nu}\ensuremath{\partial}_{+\mu}\psi_\nu, &\qquad
Q_\mu\ensuremath{\tilde{\phi}} &= 0, &\qquad
\ensuremath{\tilde{Q}}\ensuremath{\tilde{\phi}} &= i\ensuremath{\partial}_{-\mu}\psi_\mu, \\
Q\sigma &= -\psi, &\qquad Q_\mu\sigma &= 0, &\qquad
\ensuremath{\tilde{Q}}\sigma &= -\ensuremath{\tilde{\psi}}, \\
Q\psi &= 0, &\qquad Q_\mu\psi &= -i\ensuremath{\partial}_{+\mu}\sigma, &\qquad
\ensuremath{\tilde{Q}}\psi &= -\ensuremath{\tilde{\sigma}}, \\
Q\ensuremath{\tilde{\psi}} &= \ensuremath{\tilde{\sigma}}, &\qquad
Q_\mu\ensuremath{\tilde{\psi}} &= i\epsilon_{\mu\nu}\ensuremath{\partial}_{-\nu}\sigma, &\qquad
\ensuremath{\tilde{Q}}\ensuremath{\tilde{\psi}} &= 0, \\
Q\ensuremath{\tilde{\sigma}} &= 0, &\qquad
Q_\mu\ensuremath{\tilde{\sigma}} &=
i\epsilon_{\mu\nu}\ensuremath{\partial}_{-\nu}\psi +i\ensuremath{\partial}_{+\mu}\ensuremath{\tilde{\psi}},
&\qquad
\ensuremath{\tilde{Q}}\ensuremath{\tilde{\sigma}} &= 0.
\end{aligned}
\label{supertransf}
\end{equation}
What is more important is the coproduct structure. It
amounts to specifying the action of an operator, say $Q_A$,
on a product of fields
$\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2 =: \ensuremath{m}(\ensuremath{\varphi}_1\ensuremath{\otimes}\ensuremath{\varphi}_2)$ as in
\begin{equation}
Q_A\ensuremath{\operatorname{\rhd}}(\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2)
= \ensuremath{m}\Bigl(
\ensuremath{\Delta}(Q_A)\ensuremath{\operatorname{\rhd}}(\ensuremath{\varphi}_1\ensuremath{\otimes}\ensuremath{\varphi}_2)
\Bigr).
\label{act-on-prod}
\end{equation}
Thus determining the coproduct structure
is nothing but specifying the Leibniz rule.
For instance,
the modified Leibniz rule \Ref{mod-Leib-Q} is essentially equivalent
to the coproduct formula
\begin{equation}
\ensuremath{\Delta}(Q_A)
= Q_A\ensuremath{\otimes}\vid + \ensuremath{(-1)^{\mathcal{F}}}\ensuremath{\cdot} T_{a_A}\ensuremath{\otimes} Q_A,
\label{coproduct-Q}
\end{equation}
where $\ensuremath{(-1)^{\mathcal{F}}}$ just gives factor $+1$ (or $-1$)
when applied on a bosonic (or fermionic, resp.)\ field:
$\ensuremath{(-1)^{\mathcal{F}}}\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi} = \pm\ensuremath{\varphi}$,
and $T_{a_A}$ is the shift operator:
$\bigl(T_{a_A}\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi}\bigr)(x) := \ensuremath{\varphi}(x+a_A)$.
Note in passing that these operators satisfy
the trivial (anti-)commutation relations
\begin{equation}
[Q_A, T_b] = [P_\mu, T_b] = [T_b, T_c] = 0,\qquad
\{Q_A,\ensuremath{(-1)^{\mathcal{F}}}\}
= [P_A,\ensuremath{(-1)^{\mathcal{F}}}]
= [T_b,\ensuremath{(-1)^{\mathcal{F}}}]
= 0.
\end{equation}
We can determine the coproduct for
$\ensuremath{\partial}_{\pm\mu},\ T_{b}$ and $\ensuremath{(-1)^{\mathcal{F}}}$ by the identifications
similar to \Ref{act-on-prod}, which result in the following
formulae:
\begin{equation}
\ensuremath{\Delta}(\ensuremath{\partial}_{\pm\mu})
= \ensuremath{\partial}_{\pm\mu}\ensuremath{\otimes}\vid + T_{\pm a\ensuremath{\hat{\mu}}}\ensuremath{\otimes}\ensuremath{\partial}_{\pm\mu},\qquad
\ensuremath{\Delta}(T_b) = T_b\ensuremath{\otimes} T_b,\qquad
\ensuremath{\Delta}(\ensuremath{(-1)^{\mathcal{F}}}) = \ensuremath{(-1)^{\mathcal{F}}}\ensuremath{\otimes}\ensuremath{(-1)^{\mathcal{F}}}.
\label{coproduct-others}
\end{equation}
One can confirm by straightforward calculation
that these prescriptions indeed obey the coassociativity
condition~\Ref{coassociativity}.
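For instance, writing $G_A := \ensuremath{(-1)^{\mathcal{F}}}\ensuremath{\cdot} T_{a_A}$
for the group-like factor in \Ref{coproduct-Q}, the check for $Q_A$
takes one line:
\begin{equation}
(\ensuremath{\Delta}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}(Q_A)
= Q_A\ensuremath{\otimes}\vid\ensuremath{\otimes}\vid
+ G_A\ensuremath{\otimes} Q_A\ensuremath{\otimes}\vid
+ G_A\ensuremath{\otimes} G_A\ensuremath{\otimes} Q_A
= (\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\Delta})\ensuremath{\circ}\ensuremath{\Delta}(Q_A),
\end{equation}
where $\ensuremath{\Delta}(G_A)=G_A\ensuremath{\otimes} G_A$ follows from
\Ref{coproduct-others} together with the algebra-map property
\Ref{algebra-maps}.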
Notice that the coassociativity condition
assures the uniqueness of the action of an operator
on a product of three or more fields.
For instance, since by the associativity
\(
\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2\ensuremath{\cdot}\ensuremath{\varphi}_3
=
\bigl(\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2\bigr)\ensuremath{\cdot}\ensuremath{\varphi}_3
=
\ensuremath{\varphi}_1\ensuremath{\cdot}\bigl(\ensuremath{\varphi}_2\ensuremath{\cdot}\ensuremath{\varphi}_3\bigr),
\)
we need
\(
Q_A\ensuremath{\operatorname{\rhd}}\bigl(\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2\ensuremath{\cdot}\ensuremath{\varphi}_3\bigr)
=
Q_A\ensuremath{\operatorname{\rhd}}\Bigl(\bigl(\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2\bigr)\ensuremath{\cdot}\ensuremath{\varphi}_3\Bigr)
=
Q_A\ensuremath{\operatorname{\rhd}}\Bigl(\ensuremath{\varphi}_1\ensuremath{\cdot}\bigl(\ensuremath{\varphi}_2\ensuremath{\cdot}\ensuremath{\varphi}_3\bigr)\Bigr).
\)
This is equivalent to
\(
(\ensuremath{\Delta}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}(Q_A)
=
(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\Delta})\ensuremath{\circ}\ensuremath{\Delta}(Q_A),
\)
which is the coassociativity condition for the operator $Q_A$.
Similar arguments of course hold for other operators.
The coproduct structure thus determines how operators act on
products of fields.
Note, however, that any field $\ensuremath{\varphi}$ can be considered as
a product
\(
\ensuremath{\varphi} = \ensuremath{\mathbf{1}}\ensuremath{\cdot}\ensuremath{\varphi} = \ensuremath{\varphi}\ensuremath{\cdot}\ensuremath{\mathbf{1}}
\)
with the constant field $\ensuremath{\mathbf{1}}$. Accordingly,
when the operator, say, $Q_A$
acts on $\ensuremath{\varphi}$, it must satisfy the consistency condition
\(
Q_A\ensuremath{\operatorname{\rhd}}\ensuremath{\varphi}
= Q_A\ensuremath{\operatorname{\rhd}}\bigl(\ensuremath{\mathbf{1}}\ensuremath{\cdot}\ensuremath{\varphi}\bigr)
= Q_A\ensuremath{\operatorname{\rhd}}\bigl(\ensuremath{\varphi}\ensuremath{\cdot}\ensuremath{\mathbf{1}}\bigr).
\)
In order to state this more generally, let us define
the counit map by
\begin{equation}
Q_A\ensuremath{\operatorname{\rhd}}\ensuremath{\mathbf{1}}
\equiv
\ensuremath{\epsilon}(Q_A)\ensuremath{\mathbf{1}},
\end{equation}
so that the counit gives the trivial representation.
The consistency above is now written as
\(
\ensuremath{\operatorname{id}}
= (\ensuremath{\epsilon}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta} = (\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\epsilon})\ensuremath{\circ}\ensuremath{\Delta},
\)
which is the condition \Ref{counit-condition} listed above.
The explicit formulae \Ref{coproduct-Q} and \Ref{coproduct-others}
now allow us to specify the counit of operators which satisfies
this condition as follows:
\begin{equation}
\ensuremath{\epsilon}(Q_A) = 0,\quad
\ensuremath{\epsilon}(P_\mu) = 0,\quad
\ensuremath{\epsilon}(T_b) = 1,\quad
\ensuremath{\epsilon}\bigl(\ensuremath{(-1)^{\mathcal{F}}}\bigr) = 1.
\label{counit}
\end{equation}
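As an illustration, the counit condition \Ref{counit-condition} holds
for $Q_A$ because
\begin{equation}
(\ensuremath{\epsilon}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}(Q_A)
= \ensuremath{\epsilon}(Q_A)\,\vid
+ \ensuremath{\epsilon}\bigl(\ensuremath{(-1)^{\mathcal{F}}}\bigr)\,\ensuremath{\epsilon}(T_{a_A})\,Q_A
= Q_A,
\end{equation}
and similarly for
$(\ensuremath{\operatorname{id}}\ensuremath{\otimes}\ensuremath{\epsilon})\ensuremath{\circ}\ensuremath{\Delta}(Q_A)$.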
Coproduct and counit for a product of operators
can be calculated through the algebra-map conditions
\Ref{algebra-maps}. We emphasize that this property
is important since it also ensures that
the explicit formulae
\Ref{coproduct-Q}, \Ref{coproduct-others} and
\Ref{counit} are indeed compatible with
the algebraic relations \Ref{t-algebra}.
We introduce one more object, the antipode.
It essentially gives the ``inverse'' of operators and is
uniquely determined by the relation \Ref{antipode-condition}.
From the explicit formulae \Ref{coproduct-Q},
\Ref{coproduct-others} and \Ref{counit},
we find the following formulae
\begin{equation}
\ensuremath{\operatorname{S}}(Q_A)
=
-\ensuremath{(-1)^{\mathcal{F}}}\ensuremath{\cdot} Q_A,
\qquad
\ensuremath{\operatorname{S}}(P_\mu)
= -P_\mu,\qquad
\ensuremath{\operatorname{S}}(T_b) = T_b^{-1},\qquad
\ensuremath{\operatorname{S}}\bigl(\ensuremath{(-1)^{\mathcal{F}}}\bigr) = \ensuremath{(-1)^{\mathcal{F}}}.
\label{antipode}
\end{equation}
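For the group-like operators the defining condition
\Ref{antipode-condition} can be verified immediately, e.g.
\begin{equation}
\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\operatorname{S}}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}(T_b)
= T_b^{-1}\ensuremath{\cdot} T_b = \vid = \vid\,\ensuremath{\epsilon}(T_b),
\qquad
\ensuremath{\cdot}\ensuremath{\circ}(\ensuremath{\operatorname{S}}\ensuremath{\otimes}\ensuremath{\operatorname{id}})\ensuremath{\circ}\ensuremath{\Delta}\bigl(\ensuremath{(-1)^{\mathcal{F}}}\bigr)
= \bigl(\ensuremath{(-1)^{\mathcal{F}}}\bigr)^2 = \vid,
\end{equation}
while the corresponding checks for $Q_A$ and $P_\mu$ make use of the
(anti-)commutation relations with $T_b$ and $\ensuremath{(-1)^{\mathcal{F}}}$
listed above.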
We can show by the condition \Ref{antipode-condition}
that the antipode is anti-algebraic, namely,
\(
\ensuremath{\operatorname{S}}\ensuremath{\circ}\ensuremath{\cdot}
=
\ensuremath{\cdot}\ensuremath{\circ}\ensuremath{\operatorname{\tau}}\ensuremath{\circ}
(\ensuremath{\operatorname{S}}\ensuremath{\otimes}\ensuremath{\operatorname{S}})
\)
and
\(
\ensuremath{\operatorname{S}}(\vid) = \vid,
\)
where $\ensuremath{\operatorname{\tau}}$ is the transposition
$\ensuremath{\operatorname{\tau}}(a\ensuremath{\otimes} b) := b\ensuremath{\otimes} a$.
This is again consistent with the relation \Ref{t-algebra},
as seen with the explicit formulae
\Ref{coproduct-Q}, \Ref{coproduct-others} and \Ref{counit}.
We can also derive that the antipode is
anti-coalgebraic, namely,
\(
(\ensuremath{\operatorname{S}}\ensuremath{\otimes}\ensuremath{\operatorname{S}})\ensuremath{\circ}\ensuremath{\Delta}
=
\ensuremath{\operatorname{\tau}}\ensuremath{\circ}\ensuremath{\Delta}\ensuremath{\circ}\ensuremath{\operatorname{S}}
\)
and
\(
\ensuremath{\epsilon}\ensuremath{\circ}\ensuremath{\operatorname{S}}
=
\ensuremath{\epsilon},
\)
which are also found to be compatible with the explicit formulae.
\section{Statistics on the lattice as a braiding}
Our next task is to consider field-product representations of the Hopf
algebraic supersymmetry.
We first emphasize here that a Hopf algebra in general has
a noncommutative representation. In the current application,
a noncommutative representation would naturally
lead to a noncommutative field theory, which would then be
nonlocal. In fact, we can avoid this noncommutativity
or nonlocality by systematically taking product representations which are
almost commutative, or, in other words, commutative
up to a mildly generalized statistics.%
\footnote{Our Hopf algebra has a simple structure,
(quasi)triangularity, which allows such an almost
commutative representation.}
We illustrate more concretely
how this is possible with the previous example of the
$\MC{N}=(2,2)$ Wess--Zumino model in two dimensions.
For scalars $\phi,\ \sigma$, supertransformations
with respect to $Q_\mu$ are given as
$Q_\mu\phi=\psi_\mu,\ Q_\mu\sigma=0$ (see \Ref{supertransf}).
Let us assume that the scalars are commutative:
$\phi(x)\ensuremath{\cdot}\sigma(x) = \sigma(x)\ensuremath{\cdot}\phi(x)$.
The point is that, once we make this assumption, we can deduce
from the supertransformations \Ref{supertransf} the statistics
for the other fields in a manner totally consistent with
the Hopf algebra structures. To see this, we calculate
$Q_\mu\bigl(\phi(x)\ensuremath{\cdot}\sigma(x)\bigr)
=Q_\mu\bigl(\sigma(x)\ensuremath{\cdot}\phi(x)\bigr)$,
so that, with the use of the coproduct formula \Ref{coproduct-Q},
we have
\(
\psi_\mu(x)\ensuremath{\cdot}\sigma(x)
=
\sigma(x+a_\mu)\ensuremath{\cdot}\psi_\mu(x).
\)
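Spelled out, the two evaluations follow from the coproduct
\Ref{coproduct-Q} with $Q_\mu\phi=\psi_\mu$ and $Q_\mu\sigma=0$:
\begin{equation}
Q_\mu\ensuremath{\operatorname{\rhd}}\bigl(\phi(x)\ensuremath{\cdot}\sigma(x)\bigr)
= \psi_\mu(x)\ensuremath{\cdot}\sigma(x),
\qquad
Q_\mu\ensuremath{\operatorname{\rhd}}\bigl(\sigma(x)\ensuremath{\cdot}\phi(x)\bigr)
= \sigma(x+a_\mu)\ensuremath{\cdot}\psi_\mu(x),
\end{equation}
where both scalars are bosonic, so that $\ensuremath{(-1)^{\mathcal{F}}}$
contributes a factor $+1$; equating the two right-hand sides gives the
exchange relation above.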
This shows that the fermion $\psi_\mu$ commutes with
the boson $\sigma$ up to a shift of argument. Similar
calculations show that $\psi_\mu$ is
(anti-)commutative with any other field up to the same
amount of shift of argument. We can in fact generalize this
statement as follows:
\begin{equation}
\ensuremath{\Psi}\Bigl(
\ensuremath{\varphi}_{A_0\cdots A_p}(x)\ensuremath{\otimes}\ensuremath{\varphi}'_{B_0\cdots B_q}(y)
\Bigr)
=
(-1)^{pq}
\ensuremath{\varphi}'_{B_0\cdots B_q}\left(y+\sum_{i=1}^p a_{A_i}\right)
\ensuremath{\otimes} \ensuremath{\varphi}_{A_0\cdots A_p}\left(x-\sum_{i=1}^q a_{B_i}\right),
\label{braid}
\end{equation}
where
$\ensuremath{\Psi}$ represents the exchange of the order of fields
in a tensor product, called (trivial) braid, and
\(
\ensuremath{\varphi}_{A_0\cdots A_p}:= Q_{A_p}\cdots Q_{A_1}\ensuremath{\varphi}_{A_0},
\)
with $\ensuremath{\varphi}_{A_0}$ denoting scalars $\phi$ or $\sigma$.
With the mildly generalized statistics \Ref{braid},
the ordering ambiguity claimed in \cite{Dutch} no longer appears.%
\footnote{Another difficulty raised there
in the case of gauge theory needs further investigation.}
Notice also that this statistics property may be understood
in terms of a grading structure for each field
and symmetry operator,
determined by the indices $A_i$ and
$B_i$ in the formula \Ref{braid}.
Then, in particular, the difference operators $\ensuremath{\partial}_{\pm\mu}$
must also carry the grading structure, which is difficult to express
explicitly. We will come back to this point in the conclusion.
\section{Supersymmetric lattice field theory as BQFT}
Quantum field theory for fields with generalized statistics,
or braiding, can be generally formulated as braided quantum
field theory (BQFT) at least perturbatively~\cite{Oeckl}.
We can thus apply this formalism to our approach
to construct a perturbative lattice field theory.
Here we just sketch the outline of the formulation.
The theory is quantized through path integral formalism
\begin{equation}
Z = \int e^{-S},\qquad
\langle\ensuremath{\varphi}_1\cdots\ensuremath{\varphi}_n\rangle
=\frac{1}{Z}\int\ensuremath{\varphi}_1\cdots\ensuremath{\varphi}_n e^{-S},\qquad
\int\frac{\delta}{\delta\ensuremath{\varphi}(x)} = 0,
\end{equation}
for a classical action $S$. The last equation
formally defines the path integral, for which
the functional derivative is assumed to obey
the deformed Leibniz rule
\begin{equation}
\frac{\delta}{\delta\ensuremath{\varphi}(x)} (\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2)
= \frac{\delta}{\delta\ensuremath{\varphi}(x)}\ensuremath{\varphi}_1\ensuremath{\cdot}\ensuremath{\varphi}_2
+ (-1)^{\stat{\ensuremath{\varphi}}\stat{\ensuremath{\varphi}_1}}T_\ensuremath{\varphi}^{-1}\ensuremath{\varphi}_1\ensuremath{\cdot}
\frac{\delta}{\delta\ensuremath{\varphi}(x)}\ensuremath{\varphi}_2.
\end{equation}
This formal expression is enough to derive the perturbative
Wick theorem with the appropriate statistics, which allows one
to compute arbitrary correlation functions in terms of propagators
determined by the specific form of the classical action.
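Schematically, suppressing the braiding and shift factors, the last
equation generates Schwinger--Dyson identities in the familiar way; for
a single field insertion,
\begin{equation}
0 = \int\frac{\delta}{\delta\ensuremath{\varphi}(x)}
\Bigl(\ensuremath{\varphi}(y)\,e^{-S}\Bigr)
\;\Longrightarrow\;
\Bigl\langle\frac{\delta S}{\delta\ensuremath{\varphi}(x)}\,
\ensuremath{\varphi}(y)\Bigr\rangle = \delta_{xy},
\end{equation}
so that for a quadratic action
$S=\frac{1}{2}\sum_{x,y}\ensuremath{\varphi}(x)K(x,y)\ensuremath{\varphi}(y)$
the propagator is $K^{-1}$, and higher correlation functions follow by
Wick contractions dressed with the statistics factors \Ref{braid}.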
Now the classical Hopf algebraic supersymmetry is expressed
by $Q_A\ensuremath{\operatorname{\rhd}} S = 0$. At the quantum level, this leads
to various sets of Ward--Takahashi identities of the form
\cite{Sasai-Sasakura}
\begin{equation}
Q_A\ensuremath{\operatorname{\rhd}}\langle\ensuremath{\varphi}_1\cdots\ensuremath{\varphi}_n\rangle
=
\ensuremath{\epsilon}(Q_A)\langle\ensuremath{\varphi}_1\cdots\ensuremath{\varphi}_n\rangle
= 0.
\end{equation}
\section{Conclusion}
In this article, we presented a formulation of lattice
supersymmetry with the machinery of Hopf algebra and
BQFT, based on the previously proposed formulation,
the link approach.
We showed explicitly that superalgebra on a lattice can be
identified as a Hopf algebra, where the modified or
deformed Leibniz rules invented in the original link
approach are now incorporated as the coproduct structure
in the Hopf algebra. Fields, as representations of
the Hopf algebraic symmetry, would in general be noncommutative,
and thus the corresponding field theory would be nonlocal.
This noncommutativity, however, could be reduced to
commutativity up to a lattice-deformed statistics
in a manner consistent with the Hopf algebraic superalgebra.
The ordering-ambiguity difficulty raised
against the original link approach
is now resolved thanks to this deformed statistics.
We then applied the formalism of BQFT to construct
a quantum field theory for such generalized statistics,
which allows us to derive Ward--Takahashi identities
associated with the Hopf algebraic supersymmetry
at least perturbatively.
In this formulation, fields and symmetry generators
could be interpreted to have grading structures corresponding to
the deformed statistics of fields.
In particular, the ``momentum'' operators,
i.e. the difference operators on the lattice, should have a nontrivial grading.
In order to compute arbitrary correlation functions,
especially when including loop corrections,
we need an explicit representation of the graded
difference operators. Such a representation might be
unnecessary for the computation of physical observables.
Another issue is that this construction is at the moment limited
to a formal and perturbative level, and it is not
yet clear whether it can lead to a nonperturbative
formulation as a lattice field theory.
A gauge theory extension is missing as well.
These issues are left for future work.
\section{Introduction}
\label{sec:introduction}
FePd alloys with L1$_0$ structure deposited in thin layers have
attracted much attention because of their very high perpendicular
anisotropy, which is a key property for magneto-optical recording and
for high density magnetic storage. Recently, alloys with a
perpendicular anisotropy have been used in spin-valves, where they
serve as the polarizer and as the free layer that should be
reversed\cite{seki06}. It has been shown that in such
devices\cite{seki08} or in magnetic tunnel junctions,
the reversal of the free layer occurs through the nucleation of a
reversed domain followed by the propagation of a domain wall.
Near saturation, the band domain structure in FePd layers
transforms into a lattice of magnetic bubbles\cite{hubert98}, which
remains stable at high fields. In some bubbles the Bloch-like walls have
segments of different polarities, separated by vertical Bloch
lines (VBL). In the present work we analyze the role of VBL in the
shape of these magnetic bubbles.
VBL were much studied in the 1980s in garnets, both experimentally and
numerically. Typical parameters for these garnets are $K =
10^3~\textrm{J.m}^{-3}$, $M_s = 1.4\times 10^4~\textrm{A.m}^{-1}$ and
$A = 1.3\times 10^{-12}~\textrm{J.m}^{-1} $, so that the domain wall
width is $\delta = \pi\sqrt{A/K} \approx 0.1~\mu\textrm{m}$. This
large value, compared to the domain wall width in FePd of around 8~nm,
makes possible the optical observation of domain walls and
VBL\cite{thiaville90}. In FePd, a higher resolution is necessary to
probe the sample, which can be reached by Lorentz transmission electron
microscopy (LTEM). Extensive analytical and numerical studies have also
been performed on VBL in
garnets\cite{slonczewski74,hubert74,nakatani88,miltat89}. Given the
high value of the quality factor $Q = 2K/(\mu_0 M_s^2) \approx 8$, a
common assumption in the models is $Q \gg 1$, which notably permits
the use of a local approximation of the demagnetizing field and thus
simplifies the calculations. This assumption is \emph{a priori} not
valid in the case of FePd, which exhibits smaller values of $Q$ of the
order of 1.6.
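For reference, the wall widths and quality factors quoted here follow
directly from these formulae; a short Python sketch (using the garnet
values above together with the FePd parameters given in
Sec.~\ref{sec:simulation-VBL}) reproduces them:
\begin{verbatim}
# Sketch: domain wall width and quality factor for garnet vs. FePd,
# with the parameter values quoted in the text.
from math import pi, sqrt

MU0 = 4e-7 * pi                      # vacuum permeability (T.m/A)

def wall_width(A, K):                # delta = pi * sqrt(A / K)
    return pi * sqrt(A / K)

def quality_factor(K, Ms):           # Q = 2K / (mu0 * Ms^2)
    return 2.0 * K / (MU0 * Ms**2)

for name, A, K, Ms in (("garnet", 1.3e-12, 1.0e3, 1.4e4),
                       ("FePd",   7.0e-12, 1.0e6, 1.0e6)):
    print("%s: delta = %.1f nm, Q = %.1f"
          % (name, 1e9 * wall_width(A, K), quality_factor(K, Ms)))
# garnet: delta = 113.3 nm (~0.1 um), Q = 8.1
# FePd:   delta = 8.3 nm,             Q = 1.6
\end{verbatim}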
In the present work we performed high resolution imaging of domain
walls in magnetic bubbles in FePd thin layers, using Lorentz
microscopy, to highlight their magnetic configuration. In particular
we describe the influence of VBL on the shape of the bubbles. We also
show the results of multiscale simulations that provide an explanation
for these observed shapes.
\section{Observation of magnetic bubbles in FePd thin film}
\label{sec:observation-magnetic-bubbles-FePd}
Lorentz microscopy is now a well-established method that enables
magnetic imaging with a resolution better than ten nanometers. The
simplest mode of LTEM is the observation of the overlapping of
electrons experiencing different Lorentz forces in magnetic
domains. The contrasts obtained by simply defocusing the lens used
for imaging are called Fresnel contrasts \cite{Chapman1984}. In a
classical in-plane magnetization configuration, Fresnel contrasts
appear at the domain wall positions due to the overlapping of
electrons coming from two opposite domains. In the particular case of
FePd, where magnetization is mainly out-of-plane, the contrasts can be
obtained by tilting the sample \cite{Aitchison2001}. This enables the
magnetization inside the domain to act on the electron beam and to
produce traditional Fresnel contrasts located on the domain
walls. Otherwise contrasts can be produced by the domain walls
themselves if the layer is thick enough and if the amount of in-plane
magnetization in the wall is large enough\cite{Masseboeuf2009}
(\textit{i.e.} to reach the LTEM sensitivity of about 10~nm.T). This
was the case for our samples, so we have performed Fresnel
observations of Bloch walls without tilting the FePd layers. The
microscope used in these observations was a JEOL 3010 fitted with a
Gatan imaging filter for contrast enhancement\cite{Dooley1997}. The
images displayed in this letter have also been filtered by a Fourier
approach to enhance the contrasts localized on the domain walls. The
magnetizing field was applied using the objective lens, calibrated with
a Hall probe. The sample was prepared by Molecular Beam Epitaxy on an MgO
[001] substrate. The magnetic stacking is decomposed into two layers: a
``soft'' layer of 17~nm FePd$_2$ having a vanishing anisotropy is
deposited before a 37~nm-L1$_0$ layer of FePd. Details can be found
in Ref.~\onlinecite{Masseboeuf2008}. The sample was prepared for TEM
observation with a classical approach: mechanical polishing and ion
milling.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\linewidth]{LTEM1.eps}
\caption{Magnetization process on FePd thin film. The two rows
present two different areas in the film. Both of them present a
magnetic bubble state just before saturation. Left images are raw
data while the other images are enhanced by Fourier
filtering. Right images are simple schemes highlighting the contrasts observed in the last step of the magnetization process. Arrows point out the direction of the magnetic induction in the bubbles. Images are 500 $\times$ 500~nm.}
\label{LTEM1}
\end{figure*}
Fig.~\ref{LTEM1} shows two different areas of the foil during the
magnetization process. We observe pairs of black and white contrasts
corresponding to the Bloch walls\cite{Masseboeuf2009}. These pictures
have been obtained for increasing applied fields. We should notice
that above 500~mT the quality of the images decreases due to the action
of the objective lens on the image formation. Nevertheless it is
possible to follow the shape of the domains during the magnetization
process (enhanced here by Fourier filtering). We observe in both cases
that a magnetic domain collapses to a bubble state. Attention can thus
be paid to the chirality of the Bloch wall. The chirality (sense of
the magnetization inside the Bloch wall) is directly linked to the
Fresnel contrast: the wall chirality of a black/white contrast and the
chirality of a white/black contrast are opposite. Knowing this, the
observation of the two magnetic bubbles presented in the right images
of Fig.~\ref{LTEM1} gives some information on the magnetization inside
the domain walls of the bubbles. The first bubble presents a
continuous domain wall, swirling all around the bubble, whereas the
other one exhibits two different parts with the same magnetization
orientation. In the latter configuration, the magnetization inside the
domain wall experiences two rotations of 180$^{\circ}$ localized
at the top and the bottom of the bubble. These switching areas are
known as vertical Bloch lines (VBL). One can notice the main
difference in the two bubble shapes: the first one is almost round
while the second bubble seems to be slightly elongated along the
vertical direction.
To confirm the role of VBL on the bubble shape, we have thus simulated
the inner structure of domain walls containing VBL.
\section{Simulation of domain walls with vertical Bloch lines}
\label{sec:simulation-VBL}
The numerical simulation of magnetic bubbles is not a tractable
problem with standard codes. Indeed it requires handling
large systems whose size is related to the size of the bubbles, but
with regions where the magnetization varies rapidly in space, such as
domain walls and all their substructures. Considering all regions with
the same level of refinement is clearly not well adapted to such a
multiscale problem and leads to a high computational effort. The same
level of accuracy can be reached with a coarser mesh in uniformly
magnetized regions.
In this work we used a multiscale code (Mi\_$\mu$Magnet) based on an adaptive mesh
refinement technique, as well as on a mixed atomistic-micromagnetic
approach, to achieve both precision and computational
efficiency\cite{jourdan08}. Given the large size of the systems we
envisage here, the code was only used in its micromagnetic mode. It
has been recently shown that micromagnetic calculations can be applied
to singularities appearing in VBL, called Bloch Points
(BP)\cite{thiaville03}. In all calculations the mesh step is kept
lower than half the exchange length.
Parameters are chosen in agreement with experimental
measurements\cite{gehanno97-2}: the saturation magnetization,
anisotropy constant and exchange stiffness are $M_s =
10^6~\textrm{A.m}^{-1}$, $K=10^6~\textrm{J.m}^{-3}$, and $A = 7\times
10^{-12}~\textrm{J.m}^{-1}$. With such parameters, the exchange length
is $l_{ex} = \sqrt{2A/(\mu_0M_s^2)}=3.3~\textrm{nm}$.
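As a quick numerical check (same conventions as the sketch in
Sec.~\ref{sec:introduction}), the exchange length and the resulting
bound on the mesh step are:
\begin{verbatim}
# Sketch: exchange length for the FePd parameters above.
from math import pi, sqrt

MU0 = 4e-7 * pi
A, Ms = 7e-12, 1e6
l_ex = sqrt(2.0 * A / (MU0 * Ms**2))    # exchange length (m)
print("l_ex = %.1f nm, mesh step < %.1f nm"
      % (1e9 * l_ex, 0.5e9 * l_ex))     # -> 3.3 nm, < 1.7 nm
\end{verbatim}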
Two types of computations have been carried out. First we investigate
the properties of a straight domain wall containing a VBL. Secondly we
study the role of VBL on the shape of the magnetic bubbles in FePd
layers.
\subsection{Vertical Bloch lines in straight domain walls}
\label{sec:VBL-straight-DW}
% Drawing macros for the schematics: \mup draws a dot-in-circle symbol
% (conventionally, magnetization pointing out of the plane), \mdn a
% cross-in-circle symbol (magnetization into the plane), \cpl a plus
% sign and \cms a minus sign (positive and negative charges).
\newcommand{\mup}[4]{%
\filldraw[black, very thick] (#1,#2) circle (#3);
\draw[black, very thick] (#1,#2) circle (#4);
}
\newcommand{\mdn}[3]{%
\draw[black, very thick] (#1,#2) circle (#3);
\draw[black, very thick] (#1,#2) -- +(45:#3);
\draw[black, very thick] (#1,#2) -- +(135:#3);
\draw[black, very thick] (#1,#2) -- +(225:#3);
\draw[black, very thick] (#1,#2) -- +(315:#3);
}
\newcommand{\cpl}[3]{%
\draw[black, very thick] (#1,#2) -- +(#3,0);
\draw[black, very thick] (#1,#2) -- +(-#3,0);
\draw[black, very thick] (#1,#2) -- +(0,#3);
\draw[black, very thick] (#1,#2) -- +(0,-#3);
}
\newcommand{\cms}[3]{%
\draw[black, very thick] (#1,#2) -- +(#3,0);
\draw[black, very thick] (#1,#2) -- +(-#3,0);
}
The system used here contains a domain wall with a
single VBL (Fig.~\ref{fig-simulation-VBL-straight-DW}). The lateral
size of the system is $110~\textrm{nm}\times 110~\textrm{nm}$ and the
thickness of the layer is varied between 11.3 and 37.6~nm. No periodic
boundary conditions are used, because this would involve a second VBL
along $y$ and a second domain wall along $x$. A view of such a system
is given in Fig.~\ref{fig-vbl-BP-all}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{pg_0001.eps}
\caption{Schematic representation of the system used to study the
structure of VBL in domain walls.}
\label{fig-simulation-VBL-straight-DW}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{VBL_40_BP_075_all.eps}
\caption{(Color online) Cross-section of the whole system obtained
from a multiscale simulation (Bloch wall, with a vertical Bloch
line containing a Bloch point). The orientation of the
magnetization for the in-plane component is given by the arrows
and the color wheel, and by the grayscale for the out-of-plane
contribution (along $z$). The norm of the arrows is proportional
to the in-plane magnetization. The lateral size of the system is
$110~\textrm{nm}\times 110~\textrm{nm}$.}
\label{fig-vbl-BP-all}
\end{figure}
For all the values of the thickness $h$ that we envisage, we consider
configurations with and without a BP (Fig.~\ref{fig-vbl-BP}
and~\ref{fig-vbl-noBP}). The configuration without a BP can be
stabilized only for a thickness lower than 15~nm and becomes
energetically favorable below a critical thickness of around 13~nm
(Fig.~\ref{fig-stability-BP}). For thicknesses larger than 15~nm, well
defined N\'eel caps are present due to the dipolar field created by
the domains and a BP nucleates on the surface where the magnetization
rotates by nearly 360$^\circ$ (Fig.~\ref{fig-vbl-noBP}, at $z =
0$). It must be noted that the critical thickness is around
$4~l_{ex}$, which is significantly lower than the value $7.3~l_{ex}$
found by a variational method\cite{hubert76}. Indeed this method,
based on a local approximation of the dipolar field, is well justified
if $Q \gg 1$ but does not hold in our case.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{VBL_40_BP_145.eps}
\includegraphics[width=\linewidth]{VBL_40_BP_075.eps}
\includegraphics[width=\linewidth]{VBL_40_BP_005.eps}
\caption{(Color online) Cross-section along the planes $z=h$, $z=h/2$ and $z=0$
(from top to bottom) with a VBL containing a BP.}
\label{fig-vbl-BP}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{VBL_40_NoBP_145.eps}
\includegraphics[width=\linewidth]{VBL_40_NoBP_075.eps}
\includegraphics[width=\linewidth]{VBL_40_NoBP_005.eps}
\caption{(Color online) Cross-section along the planes $z=h$, $z=h/2$ and $z=0$
(from top to bottom) with a VBL containing no BP.}
\label{fig-vbl-noBP}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{stability_BP.eps}
\caption{(Color online) Relative energy of a system with and without a BP in the
VBL. The reference energy is given by the system containing a
domain wall without a VBL. The decrease of the energy when a BP is
present is mainly due to the dipolar term.}
\label{fig-stability-BP}
\end{figure}
An interesting feature of the domain wall containing a VBL without a
BP is the so-called buckling of the magnetization near the line. This
buckling was already described in garnets, where it was ascribed to
the diminution of the magnetic charges created by the variation of the
magnetization in the direction orthogonal to the wall\cite{miltat89}
(which we denote $x$ here). These charges are called $\pi$ charges or
dipolar charges, in analogy with $\pi$ orbitals, because positive
charges are associated with negative charges.
Analytical models, based on the assumption $Q \gg 1$, and
two-dimensional simulations predict a much smaller value of the
buckling\cite{miltat89}. Given our material parameters, it would be less than 2~nm,
whereas it is around 10~nm for all the thicknesses we have
considered. Three-dimensional simulations for $Q = 7.7$ and thick
garnet layers ($h \approx 50~l_{ex}$) also give a tiny buckling. In
this case, a tilt of the wall is observed in the $x-z$
plane that provides a compensation for the charges associated with the
variation of the magnetization along $y$ (called $\sigma$ or monopolar
charges)\cite{thiaville91}.
Such a deformation is not present in our simulations. As shown in
Fig.~\ref{fig-charges-ini} and~\ref{fig-charges-fin}, the compensation
of the $\sigma$ charges is achieved by the buckling itself. It can be
noted that this buckling is due to the dipolar term, although a small
decrease of the exchange energy is also observed in the presence of
buckling. Indeed, we have represented in Fig.~\ref{fig-charges-ini} the
magnetic charges $-\partial m_x/\partial x$ ($\pi$ charges),
$-\partial m_y/\partial y$ ($\sigma$ charges) and the total charges
after a transformation $\phi \rightarrow -\phi$ on the configuration
of Fig.~\ref{fig-vbl-noBP}. The angle $\phi$ refers to the orientation of
the magnetization in the plane of the layer. The configuration after
the minimization of the energy is shown in
Fig.~\ref{fig-charges-fin}. The deformation of the domain wall has
reversed, whereas the exchange energy was invariant under the
transformation. This indicates that the deformation must be ascribed
to the compensation of $\pi$ and $\sigma$ charges, which cannot really
be distinguished, given the moderate value of $Q$. Incidentally, the
name ``$\sigma$ charges'' is not really adapted to our case given that
positive charges are associated with negative charges along $y$.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{rho_x_ini_crop.eps}
\vspace{0.5cm}
\includegraphics[width=\linewidth]{rho_y_ini_crop.eps}
\vspace{0.5cm}
\includegraphics[width=\linewidth]{rho_ini_crop.eps}
\includegraphics[width=0.8\linewidth]{pg_0002.eps}
\caption{Magnetic charges in the plane $z=h/2$ corresponding to the
configuration of Fig.~\ref{fig-vbl-noBP} when the transformation
$\phi\rightarrow -\phi$ is performed on the magnetization and the
configuration is left unrelaxed. From top to bottom: charges
associated with the variation of $m_x$, $m_y$ and total
charge. The charges due to the variation along $z$ are the same
before and after the transformation and are not
represented. Positive and negative charges are represented
respectively by light and dark gray tones. On the schematic the
letters $x$ and $y$ refer to charges due to the variation of $m_x$
and $m_y$.}
\label{fig-charges-ini}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{rho_x_fin_crop.eps}
\vspace{0.5cm}
\includegraphics[width=\linewidth]{rho_y_fin_crop.eps}
\vspace{0.5cm}
\includegraphics[width=\linewidth]{rho_fin_crop.eps}
\includegraphics[width=0.8\linewidth]{pg_0003.eps}
\caption{Magnetic charges in the plane $z=h/2$ after the relaxation
of the configuration in Fig.~\ref{fig-charges-ini}.}
\label{fig-charges-fin}
\end{figure}
Unfortunately, no conclusions can be drawn from experimental data on
the straight wall. As seen in Fig.~\ref{LTEM1}, at low fields the
domain walls in this sample are not straight enough.
On the contrary, it is possible to simulate entire bubbles and thus to
reproduce the geometry of domain walls near saturation.
\subsection{Magnetic bubbles}
\label{sec:magnetic-bubbles}
It seems reasonable to think that the deformation observed in straight
domain walls can be responsible for the distorted shape of the magnetic
bubbles.
However, the curvature of the magnetic bubbles is such that the domain
wall cannot be considered as a straight object. The presence of two
VBL in a bubble, which bear opposite $\sigma$ charges and thus attract
each other, may also affect the distortion. Therefore it is necessary
to perform the simulation of entire magnetic bubbles.
The system considered in these simulations contains a magnetic bubble
centered in a square of length 218~nm. Three thicknesses are
envisaged: 15~nm, 20.7~nm and 37.6~nm. Periodic boundary conditions
are used along $x$ and $y$ to simulate an array of bubbles. The
distance between the bubble centers is thus 218~nm and is close to
the experimental value of about 250~nm. The use of an adaptive mesh
refinement technique makes it possible to decrease the number of variables by a
factor of around 8.
Stability of bubbles is achieved for applied fields between two
critical values: if the field is too high, the bubble collapses, and
if the field is too low, the bubble transforms into a stripe domain
pattern\cite{thiele70}. For a thickness of 37.6~nm, we find that the
collapse field is between 0.6 and 0.7~T, close to the experimental
value of 0.8~T.
For thicknesses of 20.7~nm and 37.6~nm, it is not possible to
stabilize the configuration with two VBL without a BP. As
observed for straight domain walls, two BP nucleate because
of the dipolar field. The bubbles with VBL containing BP are
found to be almost circular (Fig.~\ref{fig-bubble-with-BP}). The small
distortion may be ascribed either to the interaction between the two
VBL which possess opposite charges, or to a local stiffness due to the
presence of the BP.
For a thickness of 15~nm, the configuration containing VBL with BP is
not stable and the two BP migrate towards the two opposite surfaces of
the system. The two regions that exhibit high spatial variations of
the magnetization (360$^\circ$ rotation for straight domain walls) are
thus located on opposite sides of the system
(Fig.~\ref{fig-bubble-without-BP}). This disappearance of the two BP
is associated with a deformation of the domain wall, in agreement with
the one found on straight domain walls in the previous section and
with experimental results. Likewise the charges are minimized and the
exchange energy decreases.
It is worth noting that the magnetization in the two lines is oriented
in the same direction. This is called the winding
configuration\cite{hubert98}. Lines with opposite orientations of the
magnetization constitute the unwinding configuration, and have found
to be unstable: the two lines annihilate and the bubble is
circular. Indeed, in order to minimize charges in both VBL the bubble
would have a ``heart''-like shape, which is not favorable. The
orientation in the two lines is close to the orientation in the rest
of the domain wall at $z=h/2$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.47\linewidth]{bulle_103_VBL_BP.eps}
\hspace{\fill}
\includegraphics[width=0.48\linewidth]{bulle_206_VBL_BP_zoom.eps}
\vspace{0.2cm}
\includegraphics[width=0.48\linewidth]{bulle_103_VBL_BP_zoom.eps}
\hspace{\fill}
\includegraphics[width=0.48\linewidth]{bulle_000_VBL_BP_zoom.eps}
\caption{(Color online) Cross-sections of a system containing two
VBL with a BP. From top left to bottom right: whole system at
$z=h/2$ (lateral size $218~\textrm{nm}\times 218~\textrm{nm}$),
zoom at $z=h$, $z=h/2$ and $z=0$ (lateral size
$90~\textrm{nm}\times 90~\textrm{nm}$). The system is 20.7~nm
thick and a field of 0.3~T is applied. The largest cell lateral
size is 27.3~nm, while the smallest is 1.7~nm.}
\label{fig-bubble-with-BP}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\linewidth]{bulle_075_VBL_all.eps}
\hspace{\fill}
\includegraphics[width=0.48\linewidth]{bulle_145_VBL_zoom.eps}
\vspace{0.2cm}
\includegraphics[width=0.48\linewidth]{bulle_075_VBL_zoom.eps}
\hspace{\fill}
\includegraphics[width=0.48\linewidth]{bulle_005_VBL_zoom.eps}
\caption{(Color online) Cross-sections of a system containing two
VBL with no BP. From top left to bottom right: whole system at
$z=h/2$ (lateral size $218~\textrm{nm}\times 218~\textrm{nm}$),
zoom at $z=h$, $z=h/2$ and $z=0$ (lateral size
$90~\textrm{nm}\times 90~\textrm{nm}$). The system is 15~nm thick
and a field of 0.25~T is applied.}
\label{fig-bubble-without-BP}
\end{figure}
A further step can be made towards the comparison between simulated
and experimental configurations by simulating Fresnel contrasts that
would be obtained from the multiscale calculations. They are given in
Fig.~\ref{LTEM2}. Besides the result corresponding to
Fig.~\ref{fig-bubble-without-BP}, we report the results for a
bubble with a BP. It can be seen that the position of the contrasts
and the shape of the bubble agree fairly well.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{LTEM2.eps}
\caption{Comparison of simulated and experimental Fresnel contrasts for the two types of bubbles observed. The defocus used is 100~$\mu$m.}
\label{LTEM2}
\end{figure}
Despite the good agreement on the shape of bubbles, the transition
from the BP-free to the BP configuration does not occur at the same
thickness experimentally and in the simulations. Indeed, the
configurations without BP are not stable in our simulations for a
thickness of 37.6~nm (and even 20.7~nm), whereas according to the
deformation of the bubbles observed in the samples, VBL contain a BP
at this thickness. One reason for this discrepancy may be the presence
of the soft layer on which the L1$_0$ layer is deposited. The exchange
and demagnetizing contributions to the energy are modified due to the
different closure of the magnetic flux. The thickness of the bottom
N\'eel cap increases\cite{Masseboeuf2008}, which induces an asymmetry in
the system and could favor the configuration without BP.
\section{Conclusion}
\label{sec:conclusion}
Using Lorentz transmission electron microscopy on FePd samples and
multiscale simulations, we have shown that it is possible to determine
the magnetic structure of domain walls as thin as 8~nm. The presence
of vertical Bloch lines in some bubbles has been demonstrated by
microscopy. Bubbles containing two vertical Bloch lines exhibit a
distortion of the classical circular shape. The simulation of entire
bubbles has been possible thanks to the multiscale approach and has
revealed that the deformation observed experimentally is a signature
of the absence of Bloch points inside the vertical Bloch lines. For
straight domain walls in FePd, we predict a larger buckling than
previously reported for other materials.
\section{Introduction}
In spite of many papers devoted to the physical properties (physics
and geometry, see e.g. Sulentic et al. 2000) of the broad line
region (BLR) in active galactic nuclei (AGN), the true nature of the
BLR is not well known. The broad emission lines, their shapes and
intensities can give us much information about the BLR geometry and
physics. In the first instance, the change in the line profiles and
intensities could be used for investigating the BLR nature. It is
often assumed that variations of line profiles on long time scales
are caused by dynamic evolution of the BLR gas, and on short time
scales by reverberation effects (Sergeev et al. 2001). In a number
of papers it was shown that individual segments in the line profiles
change independently on both long and short time scales (e.g.,
Wanders and Peterson 1996; Kollatschny and Dietrich 1997; Newman et
al. 1997; Sergeev et al. 1999). Moreover, the broad line shapes may
reveal some information about the kinematics and structure of the
BLR (see Popovi\'c et al. 2004).
One of the most famous and best studied Seyfert galaxies is NGC 4151
(see e.g. Ulrich 2000; Sergeev et al. 2001; Lyuty 2005, Shapovalova
et al. 2008 - Paper I, and reference therein). This galaxy, and its
nucleus, has been studied extensively at all wavelengths. The
reverberation investigation indicates a small BLR size in the center
of NGC 4151 (see e.g. Peterson and Cota 1988: $6\pm4$ l.d.; Clavel
et al. 1990: $4\pm3$ l.d.; Maoz et al. 1991: $9\pm2$ l.d.; Bentz et
al. 2006: $6.6_{+1.1}^{-0.8}$ l.d.).
Spectra of NGC 4151 show a P Cygni Balmer and He I absorption with
an outflow velocity from -500 ${\rm km \ s^{-1}}$ to -2000 $\ {\rm
km \ s^{-1}}$, changing with the nuclear flux (Anderson and Kraft
1969; Anderson 1974; Sergeev et al. 2001; Hutchings et al. 2002).
This material is moving outward along the line of sight, and may be
located anywhere outside $\sim$15 light-days (Ulrich and Horne
1996). An outflow is also seen in higher velocity emission-line
clouds near the nucleus (Hutchings et al. 1999), such as multiple
shifted absorption lines in C IV and other UV resonance lines
(Weymann et al. 1997; Crenshaw et al. 2000), while warm absorbers
are detected in X-ray data (e.g. Schurch and Warwick 2002).
Some authors assumed that a variable absorption is responsible, at
least partially, for the observed continuum variability of AGN
(Collin-Souffrin et al. 1996; Boller et al. 1997; Brandt et al. 1999;
Abrassart and Czerny 2000; Risaliti, Elvius, Nicastro 2002). Czerny
et al. (2003) considered that most of the variations are intrinsic to
the source, though a variable absorption cannot be quite excluded.
The nucleus of NGC 4151 also emits in the radio range. The radio
image reveals a 0.2 pc two-sided base to the well-known arc-second
radio jet (Ulvestad et al. 2005). The apparent speeds of the jet
components relative to the radio AGN are less than 0.050c and less
than 0.028c at nuclear distances of 0.16 and 6.8 pc, respectively.
These are the lowest speed limits yet found in a Seyfert galaxy and
indicate non-relativistic jet motions, possibly due to thermal
plasma, on a scale only an order of magnitude larger than the BLR
(Ulvestad et al. 2005).
The observed evolution of the line profiles of the Balmer lines of
NGC 4151 was studied by Sergeev et al. (2001) in 1988--1998 and was
well modeled within the framework of the two-component model,
where two variable components with fixed line profiles
(double-peaked and single-peaked) were used.
Although the AGN of NGC 4151 has been much observed and discussed,
there are still several questions concerning the BLR kinematics
(disk, jets or more complex BLR) and dimensions of the innermost
region. On the other hand, as was mentioned above, multi-wavelength
observations suggest both the presence of an accretion disk (with a
high inclination) and an outflow emission (absorption).
Consequently, further investigations of the NGC 4151 nucleus are
needed in order to constrain the kinematics, dimensions and
geometry of its BLR.
This work is subsequent to Paper I with the aim of studying the
variations of both the integrated profiles of the broad emission
lines and segments along the line profiles, during the (11-year)
period of monitoring of NGC 4151.
The paper is organized as follows: in \S2 observations and data
reduction are presented. In \S3 we study the averaged spectral line
profiles (over years, months and Periods I--III, see Paper I) of
H$\alpha$ and H$\beta$, the line asymmetries and their FWHM
variations, light curves of different line segments and the
line--segment to line--segment flux and continuum--line--segment
relations. In \S4 we analyze the Balmer decrements. The
results are discussed in \S6 and in \S7 we outline our conclusions.
\section{Observations and data reduction}
Optical spectra of NGC 4151 were taken with the 6-m and 1-m
telescopes of SAO, Russia (1996--2006), with the 2.1-m
telescope of the Guillermo Haro Astrophysical Observatory (GHAO) at
Cananea, Sonora, M\'exico (1998--2006), and with the 2.1-m telescope
of the Observatorio Astron\'omico Nacional at San Pedro Martir
(OAN-SMP), Baja California, M\'exico (2005--2006). They were
obtained with a long--slit spectrograph equipped with CCDs. The
typical wavelength range was 4000 -- 7500 \AA , the spectral
resolution was R=5--15 \AA , and the S/N ratio was $>$ 50 in the
continuum near H$\alpha$ and H$\beta$. In total 180 blue and 137
red spectra were taken during 220 nights.
Spectrophotometric standard stars were observed every night. The
spectrophotometric data reduction was carried out either with the
software developed at the SAO RAS by Vlasyuk (1993), or with IRAF
for the spectra observed in M\'exico. The image reduction process
included bias subtraction, flat-field corrections, cosmic ray
removal, 2D wavelength linearization, sky spectrum subtraction,
addition of the spectra for every night, and relative flux
calibration based on standard star observations. Spectra were scaled
to the constant flux F$([\ion{O}{iii}]\lambda\,5007)$. More details
about observations and data reduction are given in Paper I and will
not be repeated.
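To illustrate the scaling step (a hedged sketch, not the actual
pipeline of Vlasyuk (1993) or IRAF; the array names and the integration
band are our own assumptions), each spectrum is rescaled so that its
integrated [\ion{O}{iii}]\,$\lambda$5007 flux matches the adopted
constant value:
\begin{verbatim}
# Sketch: rescale a spectrum to a constant F([O III] 5007) flux.
# 'wave' (Angstrom) and 'flux' are hypothetical input arrays; the
# continuum subtraction under the line is omitted for brevity.
import numpy as np

def scale_to_oiii(wave, flux, f_oiii_ref, band=(4995.0, 5020.0)):
    m = (wave >= band[0]) & (wave <= band[1])
    f_meas = np.trapz(flux[m], wave[m])   # measured band flux
    return flux * (f_oiii_ref / f_meas)
\end{verbatim}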
The observed fluxes of the emission lines were corrected for the
position angle (PA), seeing and aperture effects (see Paper I). The
mean error (uncertainty) in our \emph{integral} flux determinations
for H$\alpha$ and H$\beta$ and for the continuum is $<$3\%. In order
to study the broad components of emission lines showing the main BLR
characteristics, we removed from the spectra the narrow components
of these lines and the forbidden lines. To this purpose, we
constructed spectral templates using the blue and red spectra in the
minimum activity state (May 12, 2005). Both the broad and narrow
components of H$\beta$ and H$\alpha$ were fitted with Gaussians
(see Fig. \ref{fig1}, available electronically only).
The template spectrum contains the following lines: for H$\beta$ the
narrow component of H$\beta$ and [\ion{O}{iii}]\,$\lambda\lambda$
4959, 5007; for H$\alpha$ the narrow component of H$\alpha$,
[\ion{N}{ii}]\,$\lambda\lambda$\,6548, 6584,
[\ion{O}{i}]\,$\lambda\lambda$\,6300, 6364,
[\ion{S}{ii}]\,$\lambda\lambda$\,6717, 6731. Then, we scaled the
blue and red spectra according to our scaling scheme (see Appendix
in Shapovalova et al. 2004), using the template spectrum as a
reference. The template spectrum and any observed spectrum are thus
matched in wavelength, reduced to the same resolution, and then the
template spectrum is subtracted from the observed one. More details
can be found in Paper I and Shapovalova et al. (2004).
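The decomposition itself can be sketched as a multi-Gaussian
least-squares fit; the following is a minimal illustration under our
own assumptions (the synthetic data and initial guesses are
hypothetical), not the exact software used:
\begin{verbatim}
# Sketch: fit H-beta as a narrow plus a broad Gaussian component.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def two_gauss(x, a1, c1, s1, a2, c2, s2):
    return gauss(x, a1, c1, s1) + gauss(x, a2, c2, s2)

# synthetic continuum-subtracted spectrum around H-beta (hypothetical)
rng = np.random.default_rng(0)
wave = np.linspace(4750.0, 4970.0, 600)
flux = (two_gauss(wave, 1.0, 4861.3, 3.0, 0.35, 4858.0, 38.0)
        + 0.01 * rng.normal(size=wave.size))

p0 = [1.0, 4861.0, 3.0, 0.3, 4861.0, 40.0]   # narrow, then broad
popt, _ = curve_fit(two_gauss, wave, flux, p0=p0)
narrow = gauss(wave, *popt[:3])   # goes into the narrow-line template
broad = flux - narrow             # broad component after subtraction
\end{verbatim}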
\onlfig{1}{\begin{figure*}
\centering
\includegraphics[width=7cm,angle=-90]{f01a.ps}
\includegraphics[width=7cm,angle=-90]{f01b.ps}
\caption{The decomposition of H$\beta$ (upper panel) and H$\alpha$
(bottom panel) with Gaussians (below) in order to construct narrow
line templates. The dashed Gaussians correspond to the narrow
components taken for the narrow line templates, while the solid ones
correspond to the broad components (down). The observed spectra and
rms (up) are denoted with solid line. } \label{fig1}
\end{figure*}}
\onlfig{2}{\begin{figure*}
\centering
\includegraphics[width=8cm]{f02a.ps}
\includegraphics[width=8cm]{f02b.ps}
\caption{The month-averaged profiles of the H$\alpha$ and H$\beta$
broad emission lines in the period 1996--2006. The abscissae shows
the radial velocities relative to the narrow component of H$\alpha$
or H$\beta$. The ordinate shows the flux in units of
$10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$.} \label{fig2}
\end{figure*}}
\addtocounter{figure}{-1}
\onlfig{2}{\begin{figure*}
\centering
\includegraphics[width=8cm]{f02c.ps}
\includegraphics[width=8cm]{f02d.ps}
\caption{Continued.} \label{fig2}
\end{figure*}}
\begin{figure*}
\centering
\includegraphics[width=8cm]{f03a.ps}
\includegraphics[width=8cm]{f03b.ps}
\caption{The year-averaged profiles (solid line) and their rms
(dashed line) of the H$\alpha$ and H$\beta$ broad emission lines in
1996-2006. The abscissae (OX) shows the radial velocities relative
to the narrow component of the H$\alpha$ or H$\beta$ line. The
ordinate (OY) shows the flux in units of
$10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$.}\label{fig3}
\end{figure*}
\section{Line profile variations}
To investigate the broad line profile variations, we use the most
intense broad lines in the observed spectral range, i.e. H$\alpha$
and H$\beta$, only from spectra with the spectral resolution of
$\sim8\,\AA$. In Paper I we defined 3 characteristic time periods
(I: 1996--1999, II: 2000--2001, III: 2002--2006) during which the
line profiles of these lines were similar. Average values and rms
profiles of both lines were obtained for these periods.
Here we recall some of the most important results of Paper I. In the
first period (I, 1996--1999, JD=2450094.5--2451515.6), when the
lines were the most intense, a highly variable blue component was
observed, which showed two peaks or shoulders at $\sim-4000 \ {\rm
km \ s^{-1}}$ and $\sim-2000 \ {\rm km \ s^{-1}}$ in the rms
H$\alpha$ profiles and, to a lesser degree, in H$\beta$\footnote{Here
and after in the text the radial velocities are given with respect
to the corresponding narrow components of H$\alpha$ or H$\beta$,
i.e. it is accepted that $V_{\rm r}=0$ for the narrow components of
H$\alpha$ and H$\beta$}. In the second period (II, 2000--2001,
JD=2451552.6--2452238.0) the broad lines were much fainter; the
feature at $\sim-4000 \ {\rm km \ s^{-1}}$ disappeared from the blue
part of the rms profiles of both lines; only the shoulder at
$\sim-2000 \ {\rm km \ s^{-1}}$ was present. A faint shoulder at
$\sim3500 \ {\rm km \ s^{-1}}$ was present in the red part of rms
line profiles (see Fig. 6 in Paper I). In the third period (III,
2002--2006, JD=2452299.4--2453846.4) a red feature (bump, shoulder)
at $\sim2500 \ \ {\rm km \ s^{-1}}$ was clearly seen in the red part
of both the mean and the rms line profiles (see Fig. 7 in Paper I).
In this paper we study the variations of the broad line profiles in
more details.
\subsection{Month- and year-averaged profiles of the broad H$\alpha$ and H$\beta$
lines}
A rapid inspection of spectra shows that the broad line profiles
vary negligibly within a one-month interval. On the other hand, in
this time-interval, a slight variation in the broad line flux is
noticed (usually around $\sim5-10$\%, except in some cases up to
$30$\%). Therefore we constructed the month-averaged line profiles
(see Fig. \ref{fig2}, available electronically only) of H$\alpha$
and H$\beta$.
Moreover, the broad H$\alpha$ and H$\beta$ line profiles were not
changing within a one-year period (or even during several years),
while the broad line fluxes sometimes varied by factors $\sim 2-2.5$
even during one year. The smallest flux variations (factors
$\sim1.1-1.3$) were observed in 1996--1998 (during the line flux
maximum). The largest line flux variations were observed in
2000--2001 and 2005 (factors $\sim 1.7-2.5$), during the minimum of
activity.
As it was mentioned in Paper I, specific line profiles are observed
during the three periods, but a more detailed inspection of the line
profiles shows that slight changes can also be seen between the
year-averaged profiles (see Fig.~\ref{fig3}). Note here that in the
central part of the H$\alpha$ profiles one can often see
considerable residuals (e.g., in 2001 in the form of peaks) due to a
bad subtraction of the bright narrow components of H$\alpha$ and
[\ion{N}{ii}]\,$\lambda\lambda$\,6548, 6584, (at $V_{\rm r} \sim-680
\ {\rm km \ s^{-1}} $ and $\sim960\ {\rm km \ s^{-1}}$). Therefore,
we cannot conclude on the presence of some absorption in the central
part of H$\alpha$. But, in the H$\beta$ profiles (in the central
part, at $V_{\rm r} \sim-430\ {\rm km \ s^{-1}} $; $\sim-370\ {\rm
km \ s^{-1}}$) an absorption, especially strong from June 1999 to
the end of 2000 (see Fig. \ref{fig2}), was detected. Note here that
Hutchings et al. (2002) also found absorbtion at $V_{\rm r} (-2000 -
0) \ {\rm km \ s^{-1}} $ in the H$\beta$ line.
Now let us point some noticeable features in month-averaged line
profiles:
\begin{figure}
\centering
\includegraphics[width=8cm]{f04.ps}
\caption{Some examples of month-averaged profiles of the H$\alpha$
and H$\beta$ broad emission lines from 2000 to 2006.
The abscissa shows the radial velocities relative to the narrow
component of H$\alpha$ or H$\beta$.
The vertical dashed lines correspond to radial velocities of
$-2600\ {\rm km \ s^{-1}}$, $0\ {\rm km \ s^{-1}}$ and $3000\ {\rm km \ s^{-1}}$.
The profiles are shifted vertically by
a constant value.}\label{fig4}
\end{figure}
\onlfig{5}{
\begin{figure*}
\centering
\includegraphics[width=8.5cm]{f05a.ps}
\includegraphics[width=8.5cm]{f05b.ps}
\caption{The H$\beta$ broad emission lines of NGC 4151 in
two successive months with strong bumps (red and blue
lines). Their residual is also shown (black line, below).} \label{fig5}
\end{figure*} }
1) In 1996--2001 a blue peak (bump) in H$\beta$ and a shoulder in
H$\alpha$ were clearly seen at $\sim-2000\ {\rm km \ s^{-1}}$ (Fig.
\ref{fig2}). However, in 2002--2004, the blue wing of both lines
became steeper than the red one and did not contain any
noticeable features.
2) In 2005 (May--June), when the nucleus of NGC 4151 was at its
activity minimum, the line profiles had a double-peaked structure with
two distinct peaks (bumps) at radial velocities of $(-2586; +2027)\
{\rm km \ s^{-1}}$ in H$\beta$ and $(-1306; +2339) \ {\rm km \
s^{-1}}$ in H$\alpha$ (see Fig. \ref{fig2}; 2005\_05 and 2005\_06).
In principle, a two-peak structure is also seen in the H$\beta$
profiles of 1999--2001, with the blue peak at $V_{\rm r}\sim-1500 \ {\rm km
\ s^{-1}}$ and the red peak at $V_{\rm r}\sim500 \ {\rm km \ s^{-1}}$.
In this case, however, the blue peak may be caused by a broad
absorption line at a radial velocity of $\sim-400 \ {\rm km \
s^{-1}}$.
3) In 2006 the line profiles changed dramatically: the blue wing
became flatter than in the previous period, while the red wing was very
steep, without any feature at $V_{\rm r}>2300\ {\rm km \ s^{-1}}$.
4) In 2002 a distinct peak (bump) appeared in the red wing of the
H$\alpha$ and H$\beta$ lines at the radial velocity $V_{\rm
r}\sim3000\ {\rm km \ s^{-1}} $. The radial velocity of the red peak
decreased: in 2002--2003 it corresponded to $\sim3100\ {\rm km \
s^{-1}}$ and in 2006 to $\sim2100\ {\rm km \ s^{-1}}$. This effect
is clearly seen in Fig. \ref{fig4}, especially in the H$\alpha$ line
profile. Table \ref{tab1} gives the radial velocities of the
red peak measured in the H$\alpha$ and H$\beta$ profiles in which
the peak was clearly seen. The radial velocities of the H$\alpha$ and
H$\beta$ red peak, measured in the same periods, are similar (the
differences are within the measurement error bars). The mean
radial velocity of the red peak decreased by $\sim1000\ {\rm km \
s^{-1}}$ from 2002 to 2006 (Table \ref{tab1}). It is not clear whether the
red peak is shifting along the line profile or whether it disappears and
reappears as a new red peak at another velocity.
\begin{table*}[t]
\begin{center}
\caption[]{The peak shifts in the red wing of the H$\alpha$ and
H$\beta$ lines. Columns: 1 - year and month for H$\alpha$ (e.g.,
2002\_1\_3 = 2002, January and March); 2 - $V_{\rm r}$(H$\alpha$),
H$\alpha$ peak velocity in the red wing ($ {\rm km \ s^{-1}}$); 3 -
year and month for H$\beta$; 4 - $V_{\rm r}$(H$\beta$), H$\beta$
peak velocity in the red wing ($ {\rm km \ s^{-1}}$). The line after
each year (or interval of years) gives the mean $V_{\rm
r}$(H$\alpha$) and $V_{\rm r}$(H$\beta$) and their standard
deviations. The last line gives the maximal shift of the red peak
($\Delta V_{\rm r}$) from 2002 to 2006.}\label{tab1}
\begin{tabular}{lccc}
\hline
\hline
H$\alpha$ & $V_{\rm r}$(H$\alpha$) & H$\beta$ & $V_{\rm r}$(H$\beta$)\\
year\_month & $ {\rm km \ s^{-1}}$ &year\_month & $ {\rm km \ s^{-1}}$ \\
\hline
& & 2001b\_11 & 3257 \\
2002\_1\_3 & 3068 & 2002\_1\_3 & 3257 \\
2002\_4\_5 & 3114 & 2002\_4\_6 & 3196 \\
2002r\_06 & 3068 & & \\
2002r\_12 & 2977 & 2002b\_12 & 3073 \\
\hline
mean 2002 & 3057$\pm$57 & mean 2001\_11 + 2002 & 3197$\pm$94 \\
\hline
2003r\_01 & 2886 & 2003b\_01 & 3073 \\
2003r\_05 & 2613 & 2003b\_03 & 3012 \\
2003r\_06 & 2613 & 2003\_5\_6 & 2889 \\
\hline
mean 2003 & 2704$\pm$158 & mean 2003 & 2991$\pm$94 \\
\hline
2004r\_12 & 2339 & 2004b\_12 & 2273 \\
2005r\_02 & 2294 & 2005\_1\_2 & 2027 \\
2005r\_04 & 2294 & 2005\_5\_6 & 2027 \\
2005r\_06 & 2339 & 2005\_11\_12 & 2338 \\
2006\_1\_2 & 2430 & 2006b\_01 & 2335 \\
& & 2006b\_02 & 2335 \\
2006r\_03 & 2202 & 2006b\_03 & 2150 \\
2006r\_04 & 2294 & 2006b\_04 & 2027 \\
\hline
mean 2004-2006 & 2313$\pm$69 & mean 2004-2006 & 2189$\pm$147 \\
\hline
\multicolumn{4}{l}{maximal shift $\Delta V_{\rm r}$(H$\alpha$+H$\beta$)=(1072$\pm$226)$ \ {\rm km \ s^{-1}}$} \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=9cm]{f06.ps}
\caption{ The absorption seen in the years 1996, 1997, 1998 and 2000
(from top to bottom). The H$\beta$ line after subtraction of the
narrow lines is shown with solid lines. The vertical line marks the
radial velocity of $-400\ {\rm km \ s^{-1}}$. }\label{fig6}
\end{figure}
\subsection{The absorption and emission features in H$\alpha$ and H$\beta$ line profiles}
As mentioned above (\S 3.1), the H$\alpha$ and H$\beta$ line
profiles show absorption and emission features. Distinguishing
between these features is very difficult. The question is whether the
"bumps" are an intrinsic property of the broad H$\alpha$ and H$\beta$
lines or simply a strengthening of the absorption features.
Furthermore, the origin of these features is an open question, i.e.
is there an intrinsic mechanism in the BLR which creates these
"bumps"? We should mention here that in the central part of
both lines, the residuals of the narrow components can affect these relatively
weak absorption/emission features.
To study this, we obtained the residuals between H$\beta$ broad-line
profiles with prominent (noticeable) bumps from two successive months.
An example of these residuals is presented in
Fig.~\ref{fig5} (available electronically only). The noticeable
residual bumps (without an absorption-like feature) at $V_{\rm r}$
from $-2000 \ {\rm km \ s^{-1}}$ to $-1000 \ {\rm km \ s^{-1}}$ in
1999--2001, and at $V_{\rm r}$ from $3500 \ {\rm km \ s^{-1}}$ to $2500
\ {\rm km \ s^{-1}}$ in 2002--2006, are clearly seen. It seems that
the absorption changes slowly and remains constant over the following
months; therefore it disappears in the profile
residuals (see Fig. \ref{fig5}). Consequently, it seems that the
emission bumps observed in the H$\beta$ and H$\alpha$ line profiles
(seen in the residuals in Fig. \ref{fig5}) are mostly an intrinsic
property of the broad emission-line profile.
Strong absorption features are also present in the 1996--2001
H$\beta$ spectra. In this period we observed a dip and broad
absorption at a radial velocity that changed from $\sim-1000\ {\rm km \ s^{-1}}$ in
1996--1998 to $\sim-400\ {\rm km \ s^{-1}}$ in 1999--2000 (see Fig. \ref{fig2},
available electronically only). This velocity corresponds to the
minimum of the absorption band, but its blue edge extends to higher
velocities ($\sim-1800\ {\rm km \ s^{-1}}$ in 1998 and $\sim-1170\ {\rm km \ s^{-1}}$ in 1999--2000).
Fig. \ref{fig6} shows some observed individual spectra and their
broad components, where the blue absorption is well resolved. The
blueshifted absorption probably arises from outflowing material.
It is interesting to note that the higher radial velocity ($\sim-1000\ {\rm km \ s^{-1}}$,
observed in 1996--1998) appears at a higher continuum
flux level, while the smaller velocity ($\sim-400\ {\rm km \ s^{-1}}$) was
detected when the continuum flux decreased 3--6 times, i.e. we
confirm the results reported by Hutchings et al. (2002), who found
the same trend of the outflow velocity increasing with the
continuum flux.
\onlfig{7}{
\begin{figure*}
\centering
\includegraphics[width=15.5cm]{f07.ps}
\caption{An example of the FWHM and asymmetry measurements. The
observed spectrum is shown with a dashed line and the smoothed
spectrum with a solid line. }\label{fig7}
\end{figure*}
}
\begin{figure}
\centering
\includegraphics[width=10cm]{f08.eps}
\caption{Variations of the FWHM (upper panel), asymmetry (middle
panel) in H$\alpha$ (denoted with crosses) and H$\beta$ (denoted
with plus) broad lines, and of the continuum flux at $\lambda 5100\,
\AA$ (bottom panel) in 1996--2006. The abscissa shows the Julian
date (bottom) and the corresponding year (top). The continuum flux is
given in $10^{-14}\rm erg \ cm^{-2} \ s^{-1} \AA^{-1}$.
}\label{fig8}
\end{figure}
\subsection{Asymmetry of the broad H$\alpha$ and H$\beta$ line profiles}
We measured the full width at half maximum (FWHM) of the broad lines
from their month-averaged profiles and determined the asymmetry (A)
as a ratio of the red and blue parts of FWHM, i.e. A=W$_{\rm
red}$/W$_{\rm blue}$, { where W$_{\rm red}$ and W$_{\rm blue}$
are the red and blue half-widths at maximal intensity (see Fig.
\ref{fig7}, available electronically only) with respect to the
position of the narrow component of H$\alpha$ and H$\beta$. As
mentioned above, there are residuals in the centers of the H$\alpha$
and H$\beta$ lines left over from the subtraction of the narrow
components, which can affect the FWHM and A measurements. Therefore,
before measuring the FWHM and asymmetry, we first smoothed the line
profiles in order to avoid artificial peaks from the residuals
(see Fig. \ref{fig7}). Two independent measurements of the FWHM and A
were performed.} We determined the averaged continuum and its
dispersion for each month and the averaged Julian date from the
spectra which were used to construct the month-averaged profiles.
The measurements of the FWHM, asymmetry and continuum during the
whole period of monitoring (1996--2006) are presented in Table
\ref{tab2} (available electronically only) and in Fig. \ref{fig8}.
{ In Table \ref{tab2} we give averaged values of the FWHM and A
from two independent measurements and their dispersions.}
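A minimal sketch of such a measurement is given below (Python/NumPy;
the boxcar smoothing width and the linear interpolation to the
half-maximum crossings are illustrative assumptions, not the exact
procedure used for Table \ref{tab2}; the profile is assumed to fall
below half maximum within the velocity grid):
\begin{verbatim}
import numpy as np

def fwhm_and_asymmetry(v, flux, smooth_bins=5):
    # v: radial velocity (km/s) relative to the narrow component;
    # flux: continuum-subtracted broad-line profile
    kernel = np.ones(smooth_bins) / smooth_bins
    f = np.convolve(flux, kernel, mode="same")  # suppress residual peaks
    half = f.max() / 2.0
    peak = int(np.argmax(f))
    i = peak                    # walk to the blue half-maximum crossing
    while f[i] > half:
        i -= 1
    v_blue = np.interp(half, [f[i], f[i + 1]], [v[i], v[i + 1]])
    j = peak                    # walk to the red half-maximum crossing
    while f[j] > half:
        j += 1
    v_red = np.interp(half, [f[j], f[j - 1]], [v[j], v[j - 1]])
    w_blue, w_red = -v_blue, v_red   # half-widths w.r.t. V_r = 0
    return v_red - v_blue, w_red / w_blue   # FWHM and A
\end{verbatim}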
\onllongtab{2}{
\begin{longtable}{cccccccc}
\caption[]{\label{tab2} The FWHM and asymmetry of the H$\alpha$ and
H$\beta$ broad emission lines in the period of 1996--2006. Columns:
1 - year and month (e.g., 1996\_01 = 1996\_January); 2 - Julian date
(JD); 3 - the FWHM of H$\beta$; 4 - the asymmetry of H$\beta$; 5 -
the FWHM of H$\alpha$ ; 6 - the asymmetry of H$\alpha$; 7 - F(5100),
the continuum flux at $\lambda 5100\, \AA$ in units of
$10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$ and $\sigma$(cnt),
the estimated continuum flux error in the same units.}\\
\hline \hline
year\_month & JD & FWHM H$\beta\pm\sigma$& A H$\beta\pm\sigma$& FWHM H$\alpha\pm\sigma$ &A H$\alpha\pm\sigma$& \multicolumn{2}{c}{F(5100)$\pm \sigma$}\\
& 2400000+ & $ {\rm km \ s^{-1}}$ & & ${\rm km \ s^{-1}}$ & & \multicolumn{2}{c}{$10^{-14} \ \rm erg \ cm^{-2} \ s^{-1} \AA^{-1}$} \\
\hline
1&2&3&4&5&6&7\\
\hline
\endfirsthead
\caption{Continued.}\\
\hline
year\_month & JD & FWHM H$\beta\pm\sigma$& A H$\beta\pm\sigma$& FWHM H$\alpha\pm\sigma$ &A H$\alpha\pm\sigma$& \multicolumn{2}{c}{F(5100)$\pm \sigma$}\\
& 2400000+ & $ {\rm km \ s^{-1}}$ & & ${\rm km \ s^{-1}}$ & & \multicolumn{2}{c}{$10^{-14} \ \rm erg \ cm^{-2} \ s^{-1} \AA^{-1}$} \\
\hline
1&2&3&4&5&6&7\\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
1996b\_01& 50096.0 & 6860 $\pm$ 566 & 0.904 $\pm$ 0.043 & 5332 $\pm$ 324 & 0.799 $\pm$ 0.030 & 9.031 $\pm$ 0.322\\
1996b\_03 & 50163.4 & 6182 $\pm$ 303& 0.877 $\pm$ 0.069& 5036 $\pm$ 291 & 0.782 $\pm$ 0.021 & 11.219 $\pm$ 0.617\\
1996b\_04 & 50200.8 & 6183 $\pm$ 479 & 0.877 $\pm$ 0.072 & -- -- &-- -- & 9.226 $\pm$ 0.527\\
1996b\_06 & 50249.3 & 6028 $\pm$ 434 & 0.797 $\pm$ 0.060 & -- -- & -- -- & 12.641 --\\
1996b\_07 & 50277.8 & 6367 $\pm$ 479 & 0.783 $\pm$ 0.048 & 5058 $\pm$ 259 & 0.762 $\pm$ 0.011 & 11.822 $\pm$ 0.473\\
1996b\_11 & 50402.6 & 5659 $\pm$ 348 & 0.896 $\pm$ 0.035 & -- -- & -- -- & 10.649 --\\
1997b\_03 & 50510.9 & 5568 $\pm$ 305 & 0.884 $\pm$ 0.048 & 4808 $\pm$ 420 & 0.853 $\pm$ 0.019 & 8.350 $\pm$ 0.716\\
1997b\_04 & 50547.9 & 5567 $\pm$ 218 & 0.946 $\pm$ 0.046 & 4512 $\pm$ 323 & 0.854 $\pm$ 0.005 & 6.897 $\pm$ 0.375\\
\hline
1996-1997&& 6051 $\pm$ 449&0.871 $\pm$ 0.054 & 4959 $\pm$ 307&0.810 $\pm$ 0.042& 9.979$\pm$1.950\\
\hline
1998b\_01 & 50838.0 & 5844 $\pm$ 173 &1.159 $\pm$ 0.005 & 4831 $\pm$ 323 & 0.947 $\pm$ 0.050 & 6.280 $\pm$ 0.277\\
1998b\_02 & 50867.4 & 6059 $\pm$ 130 & 1.189 $\pm$ 0.021 & -- -- & -- -- & 6.219 --\\
1998r\_04& 50934.5 & -- -- & -- -- & 5058 $\pm$ 259 & 0.981 $\pm$ 0.051 & 6.387 --\\
1998b\_05& 50940.4 & 6489 $\pm$ 42 & 1.131 $\pm$ 0.016 & 5150 $\pm$ 388 & 0.933 $\pm$ 0.071 & 6.049 $\pm$ 0.385\\
1998b\_06& 50988.3 & 6458 $\pm$ 86 & 1.059 $\pm$ 0.029 & 4922 $\pm$ 323 & 0.931 $\pm$ 0.073 & 7.974 $\pm$ 0.776\\
1998b\_07 & 51025.3 & 6214 $\pm$ 88 & 1.082 $\pm$ 0.000 & 4762 $\pm$ 549 & 0.935 $\pm$ 0.067 & 6.205 --\\
1998\_11-12 & 51148.6 & 6275 $\pm$ 1 & 1.062 $\pm$ 0.028 & 4694 $\pm$ 387 & 0.909 $\pm$ 0.053 & 4.975 $\pm$ 1.011 \\
\hline
1998&& 6223 $\pm$ 245&1.114 $\pm$ 0.054 & 4903 $\pm$ 176&0.939 $\pm$ 0.024&6.298$\pm$0.880\\
\hline
1999b\_01& 51202.6 & 5967 $\pm$ 174 & 1.063 $\pm$ 0.001 & 4831 $\pm$ 194 &0.863 $\pm$ 0.079 & 6.245 $\pm$ 0.243 \\
1999b\_02 & 51223.1 & 5721 $\pm$ 174 & 0.957 $\pm$ 0.031 & -- -- & -- -- & 5.873 $\pm$ 0.258 \\
1999b\_03 & 51260.5 & 5475 $\pm$ 261 & 1.045 $\pm$ 0.064 & 4398 $\pm$ 291 &0.910 $\pm$ 0.045 & 5.793 $\pm$ 0.296\\
1999b\_04 & 51281.5 & 5567 $\pm$ 218 & 1.011 $\pm$ 0.078 & 4010 $\pm$ 388 & 0.959 $\pm$ 0.001 & 5.397 $\pm$ 0.146 \\
1999b\_06 & 51346.4 & 5352 $\pm$ 522 & 1.173 $\pm$ 0.058 & -- -- & -- -- & 2.753 --\\
1999b\_12 & 51515.6 & 5106 $\pm$ 609 & 1.023 $\pm$ 0.032 & 4352 $\pm$ 226 & 0.836 $\pm$ 0.045 & 2.782 $\pm$ 0.016 \\
\hline
1999&& 5531 $\pm$ 298&1.045 $\pm$ 0.072 & 4903 $\pm$ 337&0.892 $\pm$ 0.054&4.807$\pm$1.603\\
\hline
2000b\_01 & 51553.6 & -- -- & -- -- & 4010 $\pm$ 388 &0.744 $\pm$ 0.041 & 4.155 --\\
2000\_1\_2 & 51571.6 & 4890 $\pm$ 566 & 0.960 $\pm$ 0.124 & -- -- & -- -- & 3.868 $\pm$ 0.406 \\
2000b\_02 & 51589.5 & 4767 $\pm$ 566 & 1.036 $\pm$ 0.090 & 4056 $\pm$ 194 & 0.801 $\pm$ 0.055 & 3.581 $\pm$ 0.029 \\
2000b\_06 & 51711.3 & -- -- & -- -- & 6244 $\pm$ 775 & 1.014$\pm$ 0.162 & 1.434 $\pm$ 0.115 \\
2000\_6\_7 & 51728.6 & 6336 $\pm$ 262 & 1.423 $\pm$ 0.060 & -- -- & -- -- & 1.974 $\pm$ 0.764 \\
2000b\_11 & 51874.1 & 3968 $\pm$ 739 & 1.014 $\pm$ 0.020 & -- -- & -- -- & 1.778 $\pm$ 0.177 \\
2000b\_12 & 51883.6 & 3383 $\pm$ 783 &1.353 $\pm$ 0.096 & 2985 $\pm$ 871 & 0.805 $\pm$ 0.013 & 1.427 $\pm$ 0.09\\
2001b\_02 & 51947.9 & 6367 $\pm$ 914 & 1.523 $\pm$ 0.015 & 4603 $\pm$ 646 & 0.907 $\pm$ 0.106 & 1.538 $\pm$ 0.320 \\
2001b\_11 & 52238.0 & 6152 $\pm$ 261 & 1.127 $\pm$ 0.027 & -- -- & -- -- & 3.331 $\pm$ 0.007 \\
\hline
2000-2001&& 5123 $\pm$ 1199&1.205 $\pm$ 0.224 & 4380 $\pm$ 1195&0.854 $\pm$ 0.107&2.565$\pm$1.193\\
\hline
2002b\_01 & 52299.4 & 6183 $\pm$ 217 & 1.284 $\pm$ 0.006 & 4785 $\pm$ 903 & 0.957 $\pm$ 0.134 & 3.680 --\\
2002r\_03 & 52345.8 & -- -- & -- -- & 4603 $\pm$ 775 & 1.173 $\pm$ 0.124 & 3.348 $\pm$ 0.862 \\
2002\_3\_4 & 52357.3 & 6060 $\pm$ 479 & 1.706 $\pm$ 0.239 & -- -- & -- -- & 3.645 $\pm$ 0.420 \\
2002r\_04 & 52368.7 & -- -- & -- -- & 5150 $\pm$ 838 & 1.196 $\pm$ 0.076 & 3.942 --\\
2002b\_05 & 52398.7 & 6336 $\pm$ 173 & 1.369 $\pm$ 0.050 & 5605 $\pm$ 581 & 1.203 $\pm$ 0.005 & 3.283 $\pm$ 0.021 \\
2002b\_06 & 52439.0 & 7012 $\pm$ 521 & 1.285 $\pm$ 0.088 & 5582 $\pm$ 226 &1.414 $\pm$ 0.115 & 3.903 $\pm$ 0.292 \\
2002b\_12 & 52621.0 & 6121 $\pm$ 305 & 1.489 $\pm$ 0.051 & 5970 $\pm$ 387 & 1.374 $\pm$ 0.129 & 3.467 $\pm$ 0.157 \\
\hline
2002&& 6490 $\pm$ 465&1.427 $\pm$ 0.177 & 5283 $\pm$ 528&1.220 $\pm$ 0.163&3.629$\pm$0.243\\
\hline
2003b\_01 & 52665.9 & 5905 $\pm$ 348 & 1.710 $\pm$ 0.112 & 5651 $\pm$ 129 & 1.373 $\pm$ 0.117 & 5.072 --\\
2003b\_03 & 52723.8 & 6336 $\pm$ 434 & 1.344 $\pm$ 0.066 & -- -- & -- -- & 4.541 --\\
2003b\_05 & 52777.1 & 5906 $\pm$ 87 & 1.559 $\pm$ 0.009 & 5287 $\pm$ 324 &1.260 $\pm$ 0.027 & 5.283 $\pm$ 0.205 \\
2003b\_06 & 52813.7 & 5660$ \pm$ 349 & 1.630 $\pm$ 0.050 & 5013 $\pm$ 388 &1.163 $\pm$ 0.038 & 4.184 --\\
2003b\_11 & 52966.8 & 5505 $\pm$ 217 & 1.358 $\pm$ 0.082 & -- -- & -- -- & 3.478 $\pm$ 0.193 \\
2003b\_12 & 52995.6 & 5414 $\pm$ 523 & 1.352 $\pm$ 0.083 & -- -- & -- -- & 3.515 --\\
\hline
2003&& 5788 $\pm$ 336&1.492 $\pm$ 0.162 & 5317 $\pm$ 320&1.265 $\pm$ 0.105&4.346$\pm$0.763\\
\hline
2004b\_01 & 53019.5 & 5351 $\pm$ 261 & 1.456 $\pm$ 0.124 & -- -- & -- -- & 3.851 --\\
2004b\_12 & 53349.0 & 6889 $\pm$ 173 & 1.240 $\pm$ 0.057 & 5879 $\pm$ 389 &1.422 $\pm$ 0.076 & 2.932 $\pm$ 0.105 \\
\hline
2004&& 6120 $\pm$ 1088&1.348 $\pm$ 0.153 & 5879 $\pm$ 389&1.422 $\pm$ 0.076&3.392$\pm$0.650\\
\hline
2005\_1\_2 & 53402.8 & 7013 $\pm$ 783 & 1.332 $\pm$ 0.076 & -- -- & -- -- & 2.218 $\pm$ 0.308 \\
2005b\_02 & 53417.5 & -- -- & -- -- & 6586 $\pm$ 356 & 1.359 $\pm$ 0.072 & 2.040 --\\
2005b\_05 & 53505.3 & 8305 $\pm$ 262 & 1.093 $\pm$ 0.043 & 6882 $\pm$ 582 & 1.265 $\pm$ 0.105 & 2.140 $\pm$ 0.037\\
2005b\_06 & 53535.0 & 8427 $\pm$ 348 & 0.957 $\pm$ 0.041 &7635 $\pm$ 357 & 1.100 $\pm$ 0.058 & 1.841 $\pm$ 0.142 \\
2005b\_11 & 53704.0 & 6059 $\pm$ 130 & 1.402 $\pm$ 0.031 & -- -- & -- -- & 3.567 --\\
2005b\_12 & 53711.5 & 5967 $\pm$ 88 & 1.395 $\pm$ 0.007 & -- -- & -- -- & 2.911 $\pm$ 0.177\\
\hline
2005&& 7154 $\pm$ 1180&1.236 $\pm$ 0.200 & 7034 $\pm$ 540&1.241 $\pm$ 0.131&2.953$\pm$0.656\\
\hline
2006b\_01 & 53762.4 & 6797 $\pm$ 218 & 1.188 $\pm$ 0.021 &6015 $\pm$ 710 & 1.449 $\pm$ 0.202 & 1.939 $\pm$ 0.179 \\
2006b\_02 & 53788.0 & 6582 $\pm$ 174 & 1.161 $\pm$ 0.027 & 5925 $\pm$ 518 & 1.403 $\pm$ 0.145 & 1.926 $\pm$ 0.016\\
2006b\_03 & 53816.9 & 6704 $\pm$ 86 & 1.117 $\pm$ 0.002 & 5446 $\pm$ 613 & 1.546 $\pm$ 0.222 & 2.894 $\pm$ 0.293 \\
2006b\_04 & 53845.1 & 5691 $\pm$ 305 & 1.373 $\pm$ 0.044 &5401 $\pm$ 356 & 1.480 $\pm$ 0.069 & 3.390 $\pm$ 0.103 \\
\hline
2006&& 6444 $\pm$ 509&1.210 $\pm$ 0.113 & 5697 $\pm$ 318&1.470 $\pm$ 0.060&2.537$\pm$0.727\\
\hline
\hline
\hline
mean 96-06&& 6021 $\pm$ 851&1.187 $\pm$ 0.237 & 5164 $\pm$ 873&1.072 $\pm$ 0.245&\\
\hline
\hline
Paper I&& 6110 $\pm$ 440&1.056 $\pm$ 0.018 & 4650 $\pm$ 420&1.00 $\pm$ 0.023&\\
\hline
\end{longtable}
}
\onlfig{9}{
\centering
\begin{figure*}[]
\includegraphics[width=14cm]{f09.eps}
\caption{The flux error measurements (in percent of the segment line flux) against flux for the -5, -1, 0, 1, 5 segments of the
H$\alpha$ line. The flux is in units of $10^{-13}\,\rm erg \ cm^{-2} \ s^{-1}$.} \label{fig9}
\end{figure*}
}
\begin{figure*}
\centering
\includegraphics[width=9.5cm]{f10a.eps}
\includegraphics[width=9.5cm]{f10b.eps}
\caption{Light curves of the different H$\alpha$ (upper panel) and
H$\beta$ (bottom panel) line segments.
The segments in the blue wing (numbers from -5 to -1 from Table \ref{tab3}) are
marked with crosses ($\times$) and in the red wing (numbers from +5 to +1
from Table \ref{tab3}) with plus signs ($+$). The abscissa gives the Julian date
(bottom) and the corresponding year (top). The ordinate gives the flux in units
of $\rm erg\,cm^{-2}$\,s$^{-1}$.
The continuum flux variation, scaled by factors of
100 and 10 for H$\alpha$ and H$\beta$, respectively, to be
comparable with the variation of the central part of the line, is
shown with the solid line. }\label{fig10}
\end{figure*}
\begin{figure*}[]
\includegraphics[width=6cm]{f11a.ps}
\includegraphics[width=6cm]{f11b.ps}
\includegraphics[width=6cm]{f11c.ps}
\includegraphics[width=8cm]{f11e.ps}
\includegraphics[width=8cm]{f11d.ps}
\caption{The segment to segment response, where the first period
(Period I, 1996--1999) is denoted with open circles, the second
period (Period II, 2000--2001) with full ones, and the third period
(Period III, 2002--2006) with open triangles (Paper I). The flux is
given in $\rm erg \ cm^{-2} \ s^{-1}$.} \label{fig11}
\end{figure*}
\begin{table*}
\begin{center}
\caption[]{The beginning and ending radial velocities, $V_{\rm beg}$
and $V_{\rm end}$, in km/s for different segments in the line
profiles.} \label{tab3}
\begin{tabular}{llllllllllll}
\hline
\hline
\backslashbox[0pt][l]{Vr}{segment} & -5 & -4 & -3 & -2 & -1 & 0(C) & +1 & +2 & +3 & +4 & +5 \\
\hline
$V_{\rm beg}$ & -5500 & -4500 & -3500 & -2500 & -1500 & -500 & 500 & 1500 & 2500 & 3500 & 4500 \\
$V_{\rm end}$ & -4500 & -3500 & -2500 & -1500 & -500 & 500 & 1500 & 2500 & 3500 & 4500 & 5500 \\
\hline
\end{tabular}
\end{center}
\end{table*}
The FWHM of H$\beta$ was almost always larger than that of H$\alpha$
(see upper panel of Fig. \ref{fig8}). The asymmetry of both lines
(middle panel of Fig. \ref{fig8}) increased gradually from
1996 to 2006 and slightly anticorrelates with the variations of
the continuum (bottom panel in Fig. \ref{fig8}). The largest values
of the FWHM and an outstanding red asymmetry of both lines (A$>$1.2)
were observed in 2002--2006. We calculated average values of the FWHM
and A for each year, as well as for the whole monitoring period
(they are given in Table \ref{tab2}). The FWHMs and asymmetries obtained
in this work from measurements of the month-averaged profiles are
similar to the results given in Paper I, and the differences between them
are within the error bars. As one can see from Table \ref{tab2}, the
FWHM of both lines varied considerably from year to year
($\Delta$FWHM$\sim 500-1500 \ {\rm km \ s^{-1}}$). The lines were
narrowest in 2000--2001 (FWHM $\sim4000-5000\ {\rm km \ s^{-1}}$)
and broadest in 2005 ($\sim7000\ {\rm km \ s^{-1}}$) (see Table
\ref{tab2}). At the same time, the H$\beta$ FWHM was always broader
than that of H$\alpha$, by $\sim1000 \ {\rm km \ s^{-1}}$ on average. The
asymmetry varied in different ways: in 1996--1997 a blue
asymmetry was observed in H$\beta$ (A$\sim$0.85) when the continuum
flux was at maximum; in 1998--2000 H$\beta$ was almost symmetric; and
from 2001 to 2006 a red asymmetry appeared (A$>$1.2). In
1996--2001 a blue asymmetry (A$\sim$0.8) was observed in
H$\alpha$, and in 2002--2006 a red asymmetry (A$>$1.2).
{ Also, we searched for correlations of the FWHM and A with
the continuum flux and found that the FWHM practically does not
correlate with the continuum (r$\sim0.0;-0.1$). In the case of the
asymmetry A, there is an indication of anticorrelation, but it
should be taken with caution, since there is a large scatter of
points in the A vs. continuum flux plane, especially at low
continuum fluxes $ F_{\rm c}\ < \ 5.3 \times 10^{-14} \rm erg \
cm^{-2}s^{-1}\AA^{-1}$, where the measured asymmetry reaches its
highest values.} Note here that the photoionization model predicts that
the Balmer lines should be broader in lower continuum states and
narrower in higher continuum states (see Korista \& Goad 2004),
since, owing to the larger response in the line cores, one expects the
Balmer lines to become narrower in higher continuum states. As one can
see from Table \ref{tab2}, there is no trend for the FWHM to be
significantly narrower in the high continuum state.
\subsection{Light curves of different line segments}
In Paper I we obtained light curves for the integrated flux in the lines
and in the continuum. To study the BLR in more detail, we start in
this paper from the fact that different portions of the broad line profile
can respond to the continuum variations in different ways.
Therefore, we divided the line profiles into 11 identical profile
segments, each of width $\sim$1000 $\ {\rm km \ s^{-1}}$ (see Table
\ref{tab3}).
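For illustration, the segment fluxes used below can be computed as in
the following sketch (Python/NumPy; the trapezoidal integration and
the continuum-subtracted profile on a regular velocity grid are our
illustrative assumptions):
\begin{verbatim}
import numpy as np

# Segment boundaries from Table 3: eleven bins, 1000 km/s wide
edges = np.arange(-5500.0, 5500.1, 1000.0)   # -5500 ... +5500 km/s
labels = list(range(-5, 6))                  # -5 ... 0(C) ... +5

def segment_fluxes(v, flux):
    # Integrate the profile over each velocity segment;
    # returns {segment_label: integrated flux}
    out = {}
    for lab, lo, hi in zip(labels, edges[:-1], edges[1:]):
        m = (v >= lo) & (v < hi)
        out[lab] = np.trapz(flux[m], v[m])
    return out
\end{verbatim}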
The observational uncertainties were determined for each segment of
the H$\beta$ and H$\alpha$ light curves. They
include the uncertainties due to the position angle correction, the
seeing correction procedure and the aperture correction. The methods to
evaluate these uncertainties (error bars) are given in Paper I. The
effect of the subtraction of the template spectrum (or the narrow
components) has been studied by comparing the fluxes of pairs of
spectra obtained within time intervals of 0 to 2 days. In Table
\ref{tab4} (available electronically only) we present the
year-averaged uncertainties (in percent) for each segment of
H$\alpha$ and H$\beta$ and the mean values for all segments { and the
corresponding mean-year flux}. To determine the error bars we used
44 pairs of H$\alpha$ and 68 pairs of H$\beta$ spectra. As one can see from
Table \ref{tab4}, for the far wings (segments $\pm$5) the error bars
are greater ($\sim$10\%) in H$\beta$ than in H$\alpha$ ($\sim$6\%).
But when comparing the error bars in the far red and blue wings, we
find that they are similar. Also, higher error bars can be
seen in the central part of H$\alpha$, due to the narrow-line
subtraction. Fig. \ref{fig9} (available electronically only) shows
the distributions of the error bars as a function of the line flux
for segments Ha$\pm$5, Ha0 and Ha$\pm$1. It can be seen that in
the case of Ha0 and Ha+1 there is a slight anticorrelation with flux,
and two points, corresponding to very small fluxes
(F$<4\cdot$10$^{-13}\,\rm erg \ cm^{-2} \ s^{-1}$ in 2005), have the
highest error bars of (40--70)\%.
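This pair-based error estimate can be sketched as follows
(illustrative Python; it assumes that the pairs of spectra taken 0--2
days apart have already been selected, an interval over which the
intrinsic variations are taken to be negligible):
\begin{verbatim}
import numpy as np

def pair_errors(flux_a, flux_b):
    # flux_a, flux_b: segment fluxes of the first and second
    # spectrum of each pair (arrays of equal length)
    flux_a = np.asarray(flux_a, dtype=float)
    flux_b = np.asarray(flux_b, dtype=float)
    rel = 100.0 * np.abs(flux_a - flux_b) / (0.5 * (flux_a + flux_b))
    return rel.mean(), rel.std()   # e and sigma, as quoted in Table 4
\end{verbatim}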
We constructed light curves for each segment of the
H$\alpha$ and H$\beta$ lines. Fig. \ref{fig10} presents light curves
of profile segments in approximately identical velocity intervals in
the blue and red line wings (segments from 1 to 5, where a larger
number corresponds to a higher velocity, see Table \ref{tab3}) and for
the central part (0 in Table \ref{tab3}, or H$\alpha$\_c, H$\beta$\_c
in Fig. \ref{fig10}, corresponding to the interval $\sim\pm 500\ {\rm
km \ s^{-1}}$). To compare the segment variations with the continuum,
we overplot (as a solid line) the scaled continuum flux
variation on the central part (see Fig. \ref{fig10}).
\onltab{4}{
\begin{table*}
\caption[]{\label{tab4}The errors of measurements (e$\pm \sigma$)
for { all line segments (see Table \ref{tab3}) of H$\alpha$ and
H$\beta$ given in percents. Also, for each segment the mean-year
flux is given in units $10^{-13} \ \rm erg \ cm^{-2} s^{-1} $.}}
\begin{tabular}{ccccccccccccc}
\hline\hline
Year & \multicolumn{2}{c}{Ha(-5)} & \multicolumn{2}{c}{Ha(+5)} & \multicolumn{2}{c}{Ha(-4)} & \multicolumn{2}{c}{Ha(+4)} &
\multicolumn{2}{c}{Ha(-3)} & \multicolumn{2}{c}{Ha(+3)}\\
& Flux & (e$\pm \sigma$) & Flux &(e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$) & Flux & (e$\pm \sigma$)\\
\hline
1996 & 6.327 & 9.6$\pm$ 9.7 & 7.052 & 13.3$\pm$10.5 & 12.456 & 9.9$\pm$5.9 & 12.494 & 11.7$\pm$7.1 & 22.736 & 9.1$\pm$6.8 & 17.722 & 10.9$\pm$7.1\\
1997 & 3.904 & 11.1$\pm$ 6.1 & 4.950 & 9.7$\pm$ 0.8 & 8.100 & 6.7$\pm$8.8 & 10.048 & 9.7$\pm$3.8 & 18.741 & 6.7$\pm$9.2 & 16.634 & 9.1$\pm$5.3\\
1998 & 2.497 & 11.7$\pm$ 4.2 & 4.516 & 7.2$\pm$ 6.6 & 6.381 & 5.4$\pm$2.4 & 8.904 & 7.4$\pm$4.8 & 16.570 & 3.2$\pm$2.9 & 16.518 & 4.6$\pm$4.7\\
1999 & 2.030 & 14.3$\pm$18.3 & 5.133 & 6.2$\pm$ 3.8 & 5.649 & 2.1$\pm$3.0 & 8.776 & 3.3$\pm$1.1 & 16.851 & 2.2$\pm$2.0 & 16.470 & 2.1$\pm$1.0\\
2000 & 1.647 & 5.3$\pm$ 4.3 & 3.460 & 4.3$\pm$ 3.5 & 3.561 & 4.4$\pm$4.4 & 5.570 & 4.2$\pm$3.5 & 9.204 & 4.2$\pm$3.8 & 7.770 & 3.8$\pm$3.6\\
2001 & 2.251 & 5.5$\pm$ 5.3 & 2.906 & 7.3$\pm$ 6.2 & 3.671 & 4.7$\pm$6.0 & 5.220 & 5.9$\pm$6.1 & 7.249 & 7.3$\pm$6.0 & 7.760 & 8.1$\pm$7.0\\
2002 & 3.686 & 3.5$\pm$ 0.4 & 3.016 & 3.0$\pm$ 1.3 & 5.041 & 2.7$\pm$2.4 & 6.443 & 4.9$\pm$5.3 & 7.792 & 3.6$\pm$1.4 & 10.742 & 4.1$\pm$5.1\\
2003 & 3.004 & 5.7$\pm$ 5.0 & 4.186 & 2.9$\pm$ 1.7 & 4.447 & 4.9$\pm$2.4 & 7.560 & 3.4$\pm$1.3 & 9.104 & 4.0$\pm$2.3 & 14.034 & 2.9$\pm$2.1\\
2004 & 3.261 & 3.2$\pm$ 0.8 & 2.722 & 3.3$\pm$ 1.8 & 4.340 & 6.1$\pm$2.3 & 5.299 & 1.9$\pm$1.3 & 6.619 & 5.7$\pm$2.4 & 9.922 & 3.9$\pm$2.1\\
2005 & 2.908 & 4.3$\pm$ 1.3 & 2.350 & 7.1$\pm$ 6.7 & 4.122 & 4.4$\pm$3.4 & 4.687 & 5.9$\pm$6.2 & 5.756 & 4.5$\pm$3.1 & 8.575 & 5.5$\pm$4.3\\
2006 & 2.906 & 1.7$\pm$ 0.4 & 1.938 & 1.7$\pm$ 1.7 & 3.979 & 3.0$\pm$2.7 & 4.115 & 3.0$\pm$1.9 & 5.784 & 2.7$\pm$2.8 & 10.459 & 1.9$\pm$1.3\\
\hline
mean && 6.9$\pm$4.1 & & 6.0$\pm$3.4 & & 4.9$\pm$2.2 & &5.6$\pm$3.0 & &4.8$\pm$2.1 & &5.2$\pm$3.0\\
\hline
&&&&&&\\
\hline
Year & \multicolumn{2}{c}{Hb(-5)} & \multicolumn{2}{c}{Hb(+5)} & \multicolumn{2}{c}{Hb(-4)} & \multicolumn{2}{c}{Hb(+4)} &
\multicolumn{2}{c}{Hb(-3)} & \multicolumn{2}{c}{Hb(+3)}\\
& Flux & (e$\pm \sigma$) & Flux &(e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$) & Flux & (e$\pm \sigma$)\\
\hline
1996 & 2.511 & 3.2$\pm$0.17 & 2.535 & 6.5$\pm$ 5.3 & 4.965 & 7.0$\pm$ 5 & 4.470 & 7.1$\pm$1.5 & 8.003 & 7.6$\pm$6.1 & 5.838 & 6.6$\pm$2.5\\
1997 & 1.303 & 17.8$\pm$ 2.1 & 1.507 & 11.6$\pm$ 2.5 & 3.134 & 4.8$\pm$ 5.9 & 3.813 & 4.7$\pm$2.0 & 6.391 & 3.5$\pm$2.0 & 5.381 & 2.4$\pm$0.6\\
1998 & 0.923 & 13.0$\pm$ 9.7 & 1.735 & 5.4$\pm$ 2.4 & 2.393 & 4.7$\pm$ 3.9 & 3.318 & 4.1$\pm$2.5 & 5.268 & 3.0$\pm$1.6 & 5.648 & 2.1$\pm$1.9\\
1999 & 1.215 & 19.3$\pm$14.0 & 1.594 & 12.1$\pm$10.2 & 1.674 & 11.1$\pm$10.4 & 2.786 & 6.1$\pm$3.2 & 4.403 & 3.1$\pm$1.9 & 4.322 & 5.5$\pm$3.8\\
2000 & 0.255 & 6.5$\pm$ 7.1 & 0.822 & 14.9$\pm$ 5.7 & 0.626 & 7.5$\pm$ 3.2 & 1.593 & 3.0$\pm$1.3 & 1.654 & 3.2$\pm$1.9 & 1.977 & 2.2$\pm$1.6\\
2001 & 0.642 & 10.5$\pm$ 8.9 & 1.068 & 15.2$\pm$11.7 & 1.064 & 12.8$\pm$11.4 & 1.620 & 9.2$\pm$9.6 & 1.923 & 3.2$\pm$4.2 & 2.210 & 7.8$\pm$8.1\\
2002 & 1.253 & 10.3$\pm$ 6.0 & 1.011 & 19.2$\pm$10.5 & 1.684 & 8.3$\pm$ 6.1 & 2.217 & 7.4$\pm$4.8 & 2.165 & 5.7$\pm$5.6 & 3.235 & 5.7$\pm$5.3\\
2003 & 1.000 & 9.7$\pm$ 6.1 & 1.568 & 7.8$\pm$ 2.6 & 1.515 & 4.6$\pm$ 2.1 & 2.715 & 5.2$\pm$3.4 & 2.627 & 4.1$\pm$2.5 & 4.347 & 4.5$\pm$3.3\\
2004 & 1.161 & 9.5$\pm$ 6.5 & 1.427 & 12.4$\pm$ 7.0 & 1.568 & 8.1$\pm$ 6.2 & 1.861 & 5.1$\pm$3.7 & 2.062 & 6.7$\pm$5.7 & 2.841 & 3.3$\pm$1.9\\
2005 & 1.062 & 3.8$\pm$ 3.8 & 0.969 & 10.5$\pm$13.0 & 1.458 & 3.5$\pm$ 2.3 & 1.535 & 3.4$\pm$2.3 & 1.818 & 3.5$\pm$3.2 & 2.609 & 8.2$\pm$2.8\\
2006 & 1.012 & 6.6$\pm$ 5.5 & 0.886 & 9.2$\pm$11.5 & 1.536 & 4.2$\pm$ 3.2 & 1.323 & 5.2$\pm$4.1 & 2.024 & 3.1$\pm$3.1 & 2.841 & 2.6$\pm$2.1\\
\hline
mean & &10.0$\pm$ 5.2& & 11.3$\pm$ 4.1& & 7.0$\pm$3.0& &5.5$\pm$1.8& & 5.2$\pm$3.0& & 4.6$\pm$2.3\\
\hline
&&&&&&\\
\hline
Year & \multicolumn{2}{c}{Ha(-2)} & \multicolumn{2}{c}{Ha(+2)} & \multicolumn{2}{c}{Ha(-1)} & \multicolumn{2}{c}{Ha(+1)} &
\multicolumn{2}{c}{Ha(0)}\\
& Flux & (e$\pm \sigma$) & Flux &(e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$) \\
\hline
1996 & 36.664 & 9.4$\pm$6.8 & 29.766 & 11.0$\pm$7.0 & 45.469 & 9.1$\pm$6.8 & 42.018 & 11.6$\pm$7.6 & 48.583 & 11.1$\pm$7.6 & & \\
1997 & 33.714 & 7.0$\pm$8.2 & 28.349 & 8.1$\pm$7.1 & 43.930 & 6.8$\pm$7.1 & 41.554 & 7.9$\pm$9.2 & 48.475 & 7.4$\pm$8.0 & & \\
1998 & 27.717 & 3.5$\pm$3.2 & 26.397 & 4.6$\pm$4.0 & 34.577 & 3.5$\pm$3.8 & 36.438 & 6.4$\pm$5.5 & 39.903 & 7.3$\pm$4.9 & & \\
1999 & 29.420 & 1.7$\pm$2.1 & 27.907 & 1.4$\pm$0.8 & 37.880 & 1.5$\pm$0.1 & 42.658 & 3.5$\pm$2.6 & 48.223 & 2.5$\pm$2.2 & & \\
2000 & 15.153 & 3.9$\pm$4.2 & 12.928 & 4.4$\pm$3.7 & 19.099 & 4.6$\pm$3.9 & 20.783 & 6.6$\pm$5.7 & 22.577 & 7.5$\pm$6.8 & & \\
2001 & 11.216 & 8.1$\pm$6.3 & 9.774 & 8.0$\pm$6.2 & 13.240 & 10.4$\pm$8.4 & 11.934 & 10.1$\pm$8.4 & 13.292 & 16.2$\pm$13.0& & \\
2002 & 12.285 & 4.1$\pm$1.1 & 12.612 & 6.0$\pm$2.1 & 18.827 & 4.7$\pm$3.8 & 16.231 & 14.9$\pm$8.3 & 19.611 & 12.4$\pm$4.7 & & \\
2003 & 16.800 & 2.5$\pm$2.2 & 16.071 & 4.6$\pm$4.5 & 24.724 & 3.6$\pm$2.5 & 21.440 & 13.5$\pm$9.0 & 25.761 & 9.8$\pm$4.9 & & \\
2004 & 9.552 & 4.8$\pm$1.5 & 12.490 & 9.1$\pm$3.2 & 14.567 & 1.4$\pm$0.4 & 13.621 & 18.0$\pm$4.9 & 14.724 & 9.7$\pm$2.5 & & \\
2005 & 7.416 & 4.3$\pm$3.2 & 10.705 & 9.8$\pm$5.3 & 9.008 & 8.3$\pm$6.2 & 9.852 & 32.0$\pm$21.3 & 8.791 & 26.0$\pm$25.4& & \\
2006 & 8.489 & 2.6$\pm$2.9 & 13.613 & 2.6$\pm$2.0 & 12.402 & 4.2$\pm$1.6 & 12.362 & 9.3$\pm$4.9 & 12.101 & 12.8$\pm$5.6 & & \\
\hline
mean& &4.7$\pm$2.4 & & 6.3$\pm$3.1 & & 5.3$\pm$3.0& & 12.2$\pm$7.8 & & 11.2$\pm$6.1 &&\\
\hline
&&&&&\\
\hline
Year & \multicolumn{2}{c}{Hb(-2)} & \multicolumn{2}{c}{Hb(+2)} & \multicolumn{2}{c}{Hb(-1)} & \multicolumn{2}{c}{Hb(+1)} &
\multicolumn{2}{c}{Hb(0)}\\
& Flux & (e$\pm \sigma$) & Flux &(e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$)& Flux & (e$\pm \sigma$) \\
\hline
1996 & 10.383 & 7.0$\pm$4.5 & 8.673 & 6.2$\pm$1.5 & 11.245 & 6.3$\pm$4.5 & 11.531 & 6.7$\pm$2.2 & 12.884 & 7.8$\pm$3.9 & & \\
1997 & 9.883& 3.7$\pm$2.3 & 8.188 & 3.3$\pm$3.1 & 11.509 & 4.1$\pm$2.5 & 10.986 & 4.0$\pm$3.1 & 13.094 & 4.9$\pm$1.1 & & \\
1998 & 7.688& 2.7$\pm$2.5 & 7.905 & 2.1$\pm$1.4 & 8.515 & 2.7$\pm$2.9 & 9.689 & 1.7$\pm$1.4 & 10.530 & 1.8$\pm$2.1 & & \\
1999 & 6.392& 4.1$\pm$1.9 & 6.609 & 5.1$\pm$2.9 & 6.829 & 6.3$\pm$3.9 & 8.664 & 4.8$\pm$3.2 & 9.542 & 6.6$\pm$6.8 & & \\
2000 & 2.499& 2.6$\pm$2.0 & 2.698 & 1.7$\pm$1.0 & 3.105 & 3.9$\pm$2.5 & 3.844 & 3.0$\pm$1.7 & 4.193 & 5.5$\pm$4.6 & & \\
2001 & 2.613& 4.4$\pm$2.5 & 2.716 & 5.6$\pm$5.6 & 3.220 & 4.5$\pm$4.0 & 3.367 & 2.5$\pm$1.3 & 3.772 & 8.0$\pm$8.5 & & \\
2002 & 2.848& 5.0$\pm$4.2 & 3.575 & 4.9$\pm$3.8 & 4.058 & 5.3$\pm$4.2 & 4.222 & 5.3$\pm$4.2 & 4.873 & 6.8$\pm$4.6 & & \\
2003 & 4.171& 4.2$\pm$2.1 & 4.950 & 3.0$\pm$1.5 & 6.619 & 2.9$\pm$2.9 & 6.203 & 3.4$\pm$3.0 & 7.277 & 4.2$\pm$3.2 & & \\
2004 & 2.280& 5.4$\pm$2.5 & 3.476 & 3.3$\pm$2.7 & 3.402 & 4.6$\pm$4.6 & 3.801 & 3.5$\pm$3.1 & 4.153 & 3.9$\pm$5.3 & & \\
2005 & 2.111& 4.0$\pm$3.4 & 3.379 & 5.0$\pm$2.4 & 2.790 & 4.4$\pm$2.5 & 3.395 & 7.7$\pm$2.7 & 3.193 & 9.1$\pm$4.1 & & \\
2006 & 2.604& 2.5$\pm$2.5 & 3.930 & 1.2$\pm$0.9 & 3.007 & 2.0$\pm$1.7 & 4.105 & 2.3$\pm$1.8 & 3.686 & 3.5$\pm$4.0 & & \\
\hline
mean & & 4.1$\pm$1.3 & & 3.8$\pm$1.7 & & 4.3$\pm$1.4 & & 4.1$\pm$1.9 & & 5.6$\pm$2.2 &&\\
\hline
\end{tabular}
\end{table*}
}
In 1996--1997 the blue segments 2 and 3 were slightly brighter than
the red ones, while segments 1, 4 and 5 were similar to the red
ones. In 1998--2001 the blue segments 4 and 5 (3500--5500 $\ {\rm km
\ s^{-1}}$, the regions closest to the black hole), and in 2002--2006
also the blue segment 3, of both lines were considerably fainter than the red
ones. In 1998--2004 the blue segments 1 and 2 (500--3500
${\rm km \ s^{-1}}$) were close to the corresponding red ones or slightly fainter
(see Fig. \ref{fig10}).
\subsection{The line-segment to line-segment flux
and continuum-line-segment relations}
The line-segment to line-segment flux and continuum to line-segment
relations for the H$\alpha$ and H$\beta$ lines are practically the
same; therefore, here we present only the results for the H$\alpha$ line.
First, we look for relations between the H$\alpha$ segments
which are symmetric with respect to the center (i.e. segments -1
and 1; -2 and 2; ... -5 and 5). In Fig. \ref{fig11} we present the
response of symmetric segments of H$\alpha$ to each other for the
three periods given in Paper I. { As can be seen from Fig.
\ref{fig11}, the symmetric segments are well correlated, with
some notable exceptions: a) a weaker response of the red to the blue
wing in Periods II and III, and b) an apparent bifurcation in the
Ha2/Ha-2 and Ha3/Ha-3 plots. This appears to be associated with
Period III, related to the appearance of the $\sim$3000 ${\rm km \ s^{-1}}$ "red
bump"}. This also supports the idea that the lines are probably formed in a
multi-component BLR and that the geometry of the BLR changed
during the monitoring period.
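In practice this comparison reduces to correlating the light curves of
the paired segments, e.g. (a sketch; the Pearson coefficient is taken
from NumPy, and the input arrays stand for the segment light curves of
Fig. \ref{fig10}):
\begin{verbatim}
import numpy as np

def symmetric_segment_correlation(f_blue, f_red):
    # f_blue, f_red: fluxes of two segments symmetric about the
    # line centre (e.g. Ha-2 and Ha+2), sampled at the same epochs
    return np.corrcoef(f_blue, f_red)[0, 1]   # Pearson r
\end{verbatim}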
\begin{figure*}[]
\includegraphics[width=6cm]{f12a.ps}
\includegraphics[width=6cm]{f12b.ps}
\includegraphics[width=6cm]{f12c.ps}
\includegraphics[width=6cm]{f12d.ps}
\includegraphics[width=6cm]{f12e.ps}
\includegraphics[width=6cm]{f12f.ps}
\includegraphics[width=6cm]{f12g.ps}
\includegraphics[width=6cm]{f12h.ps}
\includegraphics[width=6cm]{f12i.ps}
\includegraphics[width=6cm]{f12j.ps}
\includegraphics[width=6cm]{f12k.ps}
\caption{The flux of different segments of the line as a function of
the continuum flux. The line flux is given in $ \rm erg \
cm^{-2}s^{-1}$ and the continuum flux in $ \rm erg \
cm^{-2}s^{-1}\AA^{-1}$.} \label{fig12}
\end{figure*}
Additionally, in Fig. \ref{fig12} we present the response of the
different H$\alpha$ segments to the continuum flux. As can be
seen, the responses are different: in the far wings, the response to
the continuum is almost linear for the red wing (segments 4 and 5)
and for a fraction of the blue wing (segment -4), but for the far
blue wing ($-5500$ to $-4500 \ {\rm km \ s^{-1}}$) there is
practically no response for $F_{\rm c}<7 \times 10^{-14} \rm erg \
cm^{-2}s^{-1}\AA^{-1}$. For higher continuum fluxes, the far blue
wing has a higher flux, but there again seems to be no linear
relation between the line wing and the continuum flux. On the other
hand, the central segments (from $-3500$ to $3500 \ {\rm km \
s^{-1}}$) respond to the continuum similarly to the H$\beta$
and H$\alpha$ total line fluxes (see Paper I): a linear response
at low continuum flux ($F_{\rm c}<7 \times 10^{-14} \rm erg \
cm^{-2}s^{-1}\AA^{-1}$) and no linear response at high
continuum flux ($F_{\rm c}>7 \times 10^{-14} \rm erg \
cm^{-2}s^{-1}\AA^{-1}$). The linear response indicates that this part
of the line (the red wings, and the central segments at low continuum flux)
comes from the part of the BLR ionized by the AGN source, while
the blue wing and part of the central part of the line could partly
originate from a substructure outside this BLR (and probably not
photoionized). Note here that photoionization of a
mix of thin (fully ionized) and thick (ionization-bounded) clouds
can explain the observed non-linear response of emission lines to
the continuum in variable objects, as seen in the central parts of broad
lines (see Shields et al. 1995); i.e., in the case of an optically
thick BLR, detailed photoionization models show that the response
of the Balmer lines declines as the continuum flux increases (see
Goad et al. 2004, Korista \& Goad 2004). { In our case we found
that the fluxes in the wings (except the far blue wing) have an almost linear
response to the continuum flux, while the central parts show a
non-linear response at higher continuum flux. Also, the responses
to the continuum flux of the far blue wing (-5) and the far red wing
(+5) are very different. This may indicate different physical
conditions in sub-regions or across the BLR.}
\onlfig{13}{
\centering
\begin{figure*}[]
\includegraphics[width=6cm]{f13a.ps}
\includegraphics[width=6cm]{f13b.ps}
\includegraphics[width=6cm]{f13c.ps}
\includegraphics[width=6cm]{f13d.ps}
\includegraphics[width=6cm]{f13e.ps}
\includegraphics[width=6cm]{f13f.ps}
\includegraphics[width=6cm]{f13g.ps}
\includegraphics[width=6cm]{f13h.ps}
\includegraphics[width=6cm]{f13i.ps}
\includegraphics[width=6cm]{f13j.ps}
\caption{The flux of different H$\alpha$ segments as a function of
the flux of the central H$\alpha$ segment. The line flux is given in
$ \rm erg \ cm^{-2}s^{-1}$.} \label{fig13}
\end{figure*}
}
In Fig. \ref{fig13} (available electronically only) the flux of the
different H$\alpha$ segments as a
function of the flux of the central segment (H$\alpha$0) is
presented. It can be seen that the relations between the
different H$\alpha$ segments and H$\alpha$0 differ: for segments near the
center (H$\alpha$-1, H$\alpha$-2, H$\alpha$1 and H$\alpha$2) the
relation is almost linear, indicating that the core of the line
originates in the same substructures (Fig. \ref{fig13}); segments in
the H$\alpha$ wings (the near blue wing H$\alpha$-3 and the red wing
H$\alpha$3, H$\alpha$4 and H$\alpha$5) also show a linear response
to the central segment (but with a larger scatter of the points than
in the previous case), which also indicates that a portion of the emission in the
center and in these segments comes from the same emission
region. On the contrary, the far blue wing (H$\alpha$-4 and
H$\alpha$-5) responds weakly to the line center, especially
H$\alpha$-5, which shows practically independent variations with
respect to the central segment.
\section{The Balmer decrement}
\subsection{Integral Balmer decrement}
From the month-averaged profiles of the broad H$\alpha$ and H$\beta$
lines we determined their integrated fluxes in the radial velocity range
between $-5500$ and $+5500\ {\rm km \ s^{-1}}$. We call the integrated
flux ratio $F$(H$\alpha$)/$F$(H$\beta$) the "integral Balmer decrement"
(BD).
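For illustration, the integral BD follows directly from the
month-averaged profiles (a minimal sketch, assuming both lines are
continuum subtracted and resampled to a common velocity grid):
\begin{verbatim}
import numpy as np

def integral_balmer_decrement(v, f_halpha, f_hbeta, vmax=5500.0):
    # Integrate both broad-line profiles over |v| <= vmax (km/s)
    # and return BD = F(Halpha) / F(Hbeta)
    m = np.abs(v) <= vmax
    return np.trapz(f_halpha[m], v[m]) / np.trapz(f_hbeta[m], v[m])
\end{verbatim}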
Fig. \ref{fig14} shows the behavior of the integrated BD (upper
panel) and of the continuum flux at 5100\,\AA\, (bottom
panel). In 1999--2006 an anticorrelation between the changes of the
integrated BD and continuum flux was observed. It was especially
noticeable in 1999--2001.
Table \ref{tab5} presents year-averaged values
of the BD and continuum flux determined from month-averaged profiles
in each year.
{ We found that in 1996--1998 the continuum flux was rather high
and varied within the limits
$F_{\rm c}\sim(6-12) \times 10^{-14} \rm \ erg \ cm^{-2}s^{-1}\AA^{-1}$,
while the BD remained practically constant. In Paper I
we already noted the absence of a correlation between the continuum
flux and the integrated flux of the broad lines for the
above-mentioned flux values. Also, we found (see Fig. \ref{fig14}) that the
Balmer decrement is systematically higher in 1999--2001.}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{f14.ps}
\caption{Variations of the integrated Balmer decrement
BD=F(H$\alpha$)/F(H$\beta$) (upper panel)
and of the continuum flux at $\lambda 5100\, \AA$ (bottom panel)
in 1996--2006.
The abscissa gives the Julian date (bottom) and the corresponding year (top).
The continuum flux is in units
$10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$.
The vertical lines correspond to years 1999 and 2002.}\label{fig14}
\end{figure}
\onlfig{15}{
\begin{figure*}[]
\includegraphics[width=16cm]{f15.ps}
\caption{The variations of the Balmer decrement for different parts
of the broad emission lines. The flux is given in $\rm erg \ cm^{-2}
\ s^{-1} \AA^{-1}$.} \label{fig15}
\end{figure*}
}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{f16.eps}
\caption{ Variations of the BD of different segments of the line
profiles and of the continuum flux (bottom panel) in 1996--2006. The
BD of segments in the blue wing (numbers from -5 to -1 from Table
\ref{tab3}) are denoted with plus ($+$), in red wing (numbers from
+5 to +1 from Table \ref{tab3}) with crosses ($\times$), and of
central part (number 0) with open circles. The vertical lines
correspond to the years 1999 and 2002. The abscissa shows the Julian
date (bottom) and the corresponding year (top). The ordinate shows
the BD for different segments of the line profile. The continuum
flux is in units
$10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$.}\label{fig16}
\end{figure}
Table \ref{tab5} also gives averaged values of the BD and continuum
flux for the periods 1996--1998, 1999, 2000--2001 and 2002--2006. It
is evident that in 1996--1998 the BD did not vary
within the error bars, in spite of strong variations of the continuum.
Radical changes (an increase of the BD) started in 1999.
The BD reached its maximum in 2000--2001. Then, in 2002--2006, the average
values of the BD coincided with those of 1996--1998. Thus, it can be
concluded with confidence that from 1999 to 2001 we observed a
clear increase of the BD.
\subsection{Balmer decrement of different profile segments}
From the month-averaged profiles of H$\alpha$ and H$\beta$ we determined
the BD for the segments defined in Table \ref{tab3}. Fig. \ref{fig15}
(available electronically only) shows the BD variation in a 2D plane
(year--$V_r$), and Fig. \ref{fig16} gives the changes of the BD of
different segments of the profiles during the whole monitoring
period.
It can be seen from Figs. \ref{fig15} and \ref{fig16} that in
2000--2001 (JD=51200--52300) the values of the BD in the different
segments were on average noticeably larger than in the other
years, and the blue wing always had, on average, a noticeably larger
BD than the red one. In 1996--1998 the BDs of the blue and red wings
practically coincided, and in 2002--2006 they either coincided or
the BD of the blue wing was slightly larger.
During the whole monitoring period the values of the BD in the
central segments (segments 0 and $\pm$1, corresponding
to the velocity range from $-1500\ {\rm km \ s^{-1}}$ to $+1500\
{\rm km \ s^{-1}}$) were considerably larger (by $\sim1.5$) on
average than at the periphery (segments $\pm$2 to $\pm$5,
covering $1500$ to $5500\ {\rm km \ s^{-1}}$ in both the blue
and red wings).
\begin{figure*}
\centering
\includegraphics[width=5.7cm]{f17a.ps}
\includegraphics[width=5.7cm]{f17b.ps}
\includegraphics[width=5.7cm]{f17c.ps}
\includegraphics[width=5.7cm]{f17d.ps}
\includegraphics[width=5.7cm]{f17e.ps}
\includegraphics[width=5.7cm]{f17f.ps}
\includegraphics[width=5.7cm]{f17g.ps}
\includegraphics[width=5.7cm]{f17h.ps}
\includegraphics[width=5.7cm]{f17i.ps}
\includegraphics[width=5.7cm]{f17j.ps}
\includegraphics[width=5.7cm]{f17k.ps}
\includegraphics[width=5.7cm]{f17l.ps}
\includegraphics[width=5.7cm]{f17m.ps}
\includegraphics[width=5.7cm]{f17n.ps}
\includegraphics[width=5.7cm]{f17o.ps}
\includegraphics[width=5.7cm]{f17p.ps}
\caption{Variations of the Balmer decrement
BD=F(H$\alpha$)/F(H$\beta$) as a function of the radial velocity for
{ month-averaged spectra} in each year of the monitoring period. The
abscissa gives the radial velocities relative to the narrow
components. The { month-averaged} Julian date and the corresponding
year are given at the top of each plot. }\label{fig17}
\end{figure*}
\subsubsection{Balmer decrement variations as a function of
the radial velocity}
As an illustration of the variation of the Balmer decrement with the
radial velocity during the monitoring period, we show in Fig.
\ref{fig17} the BD as a function of the radial velocity. Since the
BD vs. velocity behavior remains the same during one year, we give a few
examples just for { month-averaged spectra} in each year of the
monitoring period.
On the other hand, the behavior of the Balmer decrement as a
function of the radial velocity differs in different years. As a
rule, in 1996--1998 the maximum of the BD was at $\sim-1000\ {\rm km
\ s^{-1}} $, while the BD was slowly decreasing in the velocity
range from 0 to 1000 ${\rm km \ s^{-1}}$, and sharply decreasing in
the region $>\pm1000\ {\rm km \ s^{-1}} $, usually more strongly at
negative velocities. Recall here the results obtained
from the photoionization model of Korista \& Goad (2004), who
found velocity-dependent variations in the Balmer decrement. They
found that the Balmer decrement is steeper in the line core than in
the line wings, as we obtained in some cases (see Fig. \ref{fig17}), but
it is interesting that in all periods this peak is offset from the
central part to the blue side, and also that there are several cases where
the Balmer decrement is steeper in the wings. On the other hand, in
the case where the velocity field is dominated by the central massive
object, one expects a symmetric BD in the blue and red parts of the
velocity field (Korista \& Goad 2004). In our case we obtained
different asymmetric shapes of the BD vs. velocity field. { The BD
seems to show a systematic change in behavior starting around 2002,
i.e. corresponding to Period III, when the "red bump" appears,
showing two maxima in the BD vs. velocity field. This may indicate
that the velocity field in the BLR is not dominated by the central massive
black hole, i.e. it favors some kind of streaming of the
emitting material, such as an outflow or inflow.}
In 1999--2001 the maximum values of
the BD, $\sim(6-8)$, were observed in the velocity region $\pm 1000\
{\rm km \ s^{-1}}$, while a steeper decrease of the BD was more
often observed at positive velocities. The behavior of
the BD with radial velocity in 2002--2006 differed strongly
from that in 1996--2001. In these years two peaks (bumps) were
observed in the BD distribution: 1) at radial velocities from $-2000$
to $-1000\ {\rm km \ s^{-1}}$, with larger values of the BD; 2) at $+3000\ {\rm km \
s^{-1}}$, with somewhat smaller (by $\sim0.5-1.0$) values of the
BD.
\begin{figure*}
\centering
\includegraphics[width=6.cm]{f18a.ps}
\includegraphics[width=6.cm]{f18b.ps}
\includegraphics[width=6.cm]{f18c.ps}
\caption{Left: The variation of the ratio of the helium He II
$\lambda$4686 and He I $\lambda$5876 lines as a function of the
continuum flux. Middle: The BD as a function of the helium line
ratio. Right: The variation of the ratio of the helium He II
$\lambda$4686 and H$\beta$ lines as a function of the continuum
flux. The continuum flux is in units of $10^{-14} \rm erg \, cm^{-2}
\ s^{-1} \AA^{-1}$.}\label{fig18}
\end{figure*}
\subsection{Balmer decrement and helium line ratio}
In order to probe the physical conditions in the BLR, we studied the
flux ratio of the He II $\lambda$4686 and He I $\lambda$5876 broad lines.
From all the available spectra, we selected only 21 spectra in which
the broad helium lines could be precisely measured and in which the
two helium lines were observed on the same night. Note that at
minimum activity the broad component of the He II
$\lambda$4686 line could not be detected at all.
We use the helium lines He II $\lambda$4686 and He I $\lambda$5876
since these two lines come from two different ionization stages and,
thus, are very sensitive to changes of the electron temperature and
density of the emitting region (Griem 1997).
In Fig. \ref{fig18} we give He II/He I vs. continuum flux (first panel), BD vs.
He II/He I (second panel) and He II/H$\beta$ vs. continuum flux (third
panel). As can be seen from Fig. \ref{fig18} (first panel), there is a good
correlation between the continuum flux and the helium line ratio
(correlation coefficient r=0.81), which indicates that these
lines probably come from the region photoionized by the
continuum source. On the other hand, the Balmer decrement decreases
as the ratio of the two helium lines increases (the anti-correlation
is somewhat weaker here, with a correlation coefficient r=0.75),
indicating that as the ionization gets stronger, the BD decreases
(Fig. \ref{fig18}, second panel).
The photoionization model (Korista \& Goad 2004) shows that the
Balmer lines should show less flux variation with continuum state
than He I, which in turn varies less than He II. In this case one
expects He II/H$\beta$ to change much more than He II/He I.
The observed changes in the He II/H$\beta$ flux ratio (see Fig.
\ref{fig18}, third panel) are probably not due to differences in the
way these two lines respond to the level of the ionizing continuum
flux (see Fig. 10 in Shapovalova et al. 2008), but rather
to changes in the shape of the ionizing SED. In our case, this ratio
drops by a factor of about 4 from the high to the low continuum states, while the
He II/He I line ratio changes by a factor of about 3.
{ There is some connection between the helium and Balmer lines
(Fig. \ref{fig18}, left), but the physical properties of the
emitting regions of these lines are probably different. One
explanation of this correlation may be that a part of the Balmer
lines comes from the same region as the He I and He II lines. Since
the He II/He I ratio is sensitive to changes in temperature
and electron density, a higher He II/He I ratio implies higher
ionization and a higher population probability of the upper hydrogen
levels; consequently, the H$\alpha$/H$\beta$ ratio is
smaller than in the case of a lower He II/He I ratio.}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption[]{Year-averaged and period-averaged variations of the
integral Balmer decrement (BD) and continuum fluxes. Columns: 1-year
or year-intervals; 2- F(H$\alpha$)/F(H$\beta$), H$\alpha$ and
H$\beta$ flux ratio or integrated Balmer Decrement (BD); 3 -
$\sigma$(F(H$\alpha$)/F(H$\beta$)), the estimated
F(H$\alpha$)/F(H$\beta$) error; 4 - F(5100), continuum flux at
$\lambda 5100\, \AA$; 5 - $\sigma$, estimated continuum flux error.
The last four lines give the mean BD, mean continuum flux and their
estimated errors in year-periods.} \label{tab5} \centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lcccc}
\hline
\hline
year & \multicolumn{2}{l}{F(H$\alpha$)/F(H$\beta$)$\pm \sigma$} &
\multicolumn{2}{l}{F(5100)$\pm \sigma$\footnote{in units of $10^{-14} \rm erg \ cm^{-2}s^{-1}\AA^{-1}$ }} \\
\hline
1996 & 3.65 & 0.51 & 11.18 & 1.55 \\
1997 & 3.39 & 0.36 & 7.62 & 1.03 \\
1998 & 3.58 & 0.35 & 6.08 & 1.19 \\
1999 & 4.49 & 0.74 & 5.12 & 1.62 \\
2000 & 5.01 & 0.62 & 2.75 & 1.33 \\
2001 & 5.43 & 1.07 & 1.87 & 0.47 \\
2002 & 3.67 & 0.44 & 3.59 & 0.29 \\
2003 & 3.16 & 0.27 & 4.84 & 0.58 \\
2004 & 3.70 & 0.29 & 2.44 & 0.57 \\
2005 & 3.96 & 0.54 & 1.99 & 0.18 \\
2006 & 3.58 & 0.36 & 2.60 & 0.80 \\
\hline
mean (1996-1998): & 3.54 & 0.13 & 8.29 & 2.62 \\
\hline
mean 1999(Jan.-Apr.):& 4.21 & 0.59 & 5.89 & 0.56 \\
1999 (Dec.): & 5.33 & & 2.78 & 0.12 \\
\hline
mean (2000-2001): & 5.22 & 0.30 & 2.31 & 0.62 \\
\hline
mean (2002-2006): & 3.61 & 0.29 & 3.09 & 1.14 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\section{Summary of the results}
In this paper we investigated different aspects of the broad H$\alpha$
and H$\beta$ line variations (shapes, ratios, widths and
asymmetries); in this section we summarize the results obtained.
\subsection{Summary of the line profile variations}
i) The line profiles of H$\alpha$ and H$\beta$ changed during
the monitoring period, showing a blue (1996--1999) and a red (2002--2006)
asymmetry. We observed bumps (shelves, or additional emission) in
the blue region in 1996--1997 at $V_r\sim-4000$ and $-2000 \ \rm km
\ s^{-1}$, in 1998--99 at $V_r\sim-2000 \ \rm km \ s^{-1}$, and in
2000--2001 at $-2000$ and $-500 \ \rm km \ s^{-1}$. However, in
2002--2004 these details disappeared from the blue wing of both lines,
which became steeper than the red one and did not contain any
noticeable features. In 2002 a distinct bump appeared in the red
wing of both lines at $+3100\ \rm km \ s^{-1}$. The radial velocity
of the bump in the red wing changed from $+3100\ \rm km \ s^{-1}$
in 2002 to $\sim2100 \ \rm km\ s^{-1}$ in 2006. Possibly, the
appearance of the blue and red bumps is related to a
jet component.
ii) In 2005 (May--June), when the nucleus of NGC 4151 was at its
activity minimum, the line profiles had a double-peaked
structure with two distinct peaks (bumps) at radial velocities of
$-2586; +2027 \ {\rm km \ s^{-1}}$ in H$\beta$ and $-1306; +2339 \
{\rm km \ s^{-1}}$ in H$\alpha$.
iii) In 1996-2001 we observed a broad deep absorption line in
H$\beta$ at a radial velocity that took values
from $V_r\sim-1000$ to $V_r\sim-400 \ \rm km \ s^{-1}$.
iv) The FWHM is not correlated with the continuum flux, while the
asymmetry tends to anti-correlate with it (correlation coefficient
r$\sim$-0.5). The FWHM and asymmetry of H$\beta$ are larger than those of
H$\alpha$. { This may be explained by the fact that H$\beta$
originates closer to the SMBH and thus has larger widths.}
v) We divided the line profiles into 11 identical segments, each of
width 1000 $\rm km \ s^{-1}$, and investigated the correlations
of the segment fluxes with the continuum flux and with each other.
We found that the far red wings (from 3500 to 5500 $\rm km \
s^{-1}$) and the central segments (within $\pm3500 \ \rm km \ s^{-1}$),
for continuum fluxes $F_{\rm c}<7 \times 10^{-14} \rm erg \ cm^{-2}
\ s^{-1} \AA^{-1}$, respond to the continuum almost linearly;
these segments probably come from the part of the BLR
ionized by the AGN source.
vi) The central ($\pm3500 \ \rm km \ s^{-1}$) segments for $F_{\rm
c}>7 \times 10^{-14} \rm erg \ cm^{-2} \ s^{-1} \AA^{-1}$ do not show any
linear relation with the continuum flux. Probably, in periods of high
activity these segments of the line partly originate in
substructures which are not photoionized by the AGN continuum.
vii) The far blue wing (from $-5500$ to $-4500 \ \rm km \ s^{-1}$) seems to
originate in a separate region: it responds neither to the continuum flux
variations nor to the variations of the other line segments, except
in the case of high line flux, where it responds to the far red wing
(see Fig. \ref{fig12}). This may indicate that the far blue wing is emitted
by a distinct region which is not photoionized and whose
emission is highly blueshifted (e.g. an outflow with velocities
$>$3500 $\ {\rm km \ s^{-1}}$).
viii) The far red wing is very sensitive to the continuum flux
variations and thus probably comes from the part of the BLR closest to
the center, i.e. this part of the line seems to be purely
photoionized by the AGN source.
From all the facts mentioned above, one can conclude that the broad
lines of NGC 4151 are produced at least in three kinematically (and
physically) distinct regions.
\subsection{Summary of the BD variations}
The Balmer decrement (BD=F(H$\alpha$)/F(H$\beta$)) varied from 2 to
8 during the monitoring period. It is interesting that BDs are quite
different along line profiles as well. In 1996--1998 there was no
significant correlation between the BD and the continuum flux (the
continuum flux was $F_{\rm c}\sim (6-12) \times 10^{-14} \rm erg \
cm^{-2} s^{-1} A^{-1}$). In 1999--2001 maximal variations of the BD were
observed, especially in the blue part of the lines. The location of
the maximal BD along the line profiles also differed strongly: in
1996--2001 the BD peaked at $V_r\sim \pm1000\ \rm km \ s^{-1}$, while
in 2002--2006 the BD had two peaks -- one at velocities from $-2000$
to $-1000 \ \rm km \ s^{-1}$ and one at $V_r\sim3000\ \rm km \ s^{-1}$
with somewhat smaller (by $\sim$0.5--1) values of the BD. In the
latter case it is possible that the second bump (the fainter one) is
caused by the interaction between the receding sub-parsec jet and its
environment.
The different values of the BD observed during the monitoring period
(as well as different values of the BD along the profiles) also
indicate a multicomponent origin of the broad lines. Such different
ratios may be caused by absorption, but also by different physical
conditions in different parts of the BLR.
\section{Discussion}
It is interesting to compare our results with those found in the UV
and X-ray bands. Crenshaw \& Kraemer (2007) found a width of 1170 km
s$^{-1}$ (FWHM) for the UV emission lines, around 5--6 times smaller
than our results for the Balmer lines (FWHM$\sim$5000--7000 km
s$^{-1}$). This can be interpreted as the
existence of an intermediate component between the broad and narrow
(emission) line regions (see e.g. Popovi\'c et al. 2009). In the
same paper, they found evidence that the UV emission lines arise
from the same gas responsible for most of the UV and X-ray
absorption. The absorption can be seen in an outflow at a distance of
0.1 pc from the central nucleus. It is also interesting that
Crenshaw \& Kraemer (2007) favor a magnetocentrifugal acceleration
(e.g., in an accretion disk wind) over mechanisms that rely on radiation
or thermal expansion as the principal driving mechanism for the mass
outflow. Obscuration should play an important role, but we found
that while absorption can clearly be detected in the H$\beta$
line, the H$\alpha$ line displays only a small amount of absorption (see
Figs. \ref{fig2} and \ref{fig3}). Absorption in the H$\beta$ line
was previously reported by Hutchings et al. (2002); therefore,
obscuration of the optical continuum should be present in NGC 4151.
The location of the obscuring material was estimated by Puccetti et
al. (2007), who analyzed the X-ray variability of the nucleus. They
found that the obscuring matter is located within
3.4$\times$10$^4$ Schwarzschild radii of the central X-ray source,
and suggested that absorption variability plays a crucial role in
the observed flux variability of NGC 4151.
Let us discuss some possible scenarios for the BLR. As we mentioned
above, the outflow is probably induced by magnetocentrifugal
acceleration and starts very close to the black hole. If we adopt a
black hole mass around 4$\times 10^7\ M_\odot$ (obtained from
stellar dynamical measurements, see Onken et al. 2008, which is in
agreement with Bentz et al. 2006), the acceleration (and line
emission) starts at $\sim$10$^{-4}$ pc from the black hole and
the outflow emits out to $\sim 0.01$ pc, taking into account
the absorption velocities (e.g. Hutchings et al. 2002) of around
$-1000$ km s$^{-1}$.
As mentioned above, the broad H$\alpha$ and H$\beta$ lines
show different widths and asymmetries during the monitoring period,
indicating a complex structure of the BLR. Also, we found that the
line profiles of H$\alpha$ and H$\beta$ could be different at the
same epoch. Therefore one should consider a multi-component origin
of these lines.
To propose a BLR model, one should take into account that: a) there
is an absorption component in the blue part of the lines, indicating
some kind of outflow that may start in the BLR (see \S 3.2); also,
Crenshaw \& Kraemer (2007) reported a mass outflow from the nucleus
of NGC 4151, although they confirmed the observed outflow only in the
Intermediate Line Region (ILR); b) the flux in the far red wing
correlates well with the continuum flux, indicating that it
originates in an emission region very close to the continuum source,
i.e. to the central black hole; c) the flux of the far blue wing
does not correlate with the continuum, which may indicate some
kind of shock-wave contribution to the Balmer lines.
All these facts indicate that an outflow probably exists in the
BLR. In this case complex line profiles (with different features)
can be expected due to changes in the outflowing structure, as is
often seen in the narrow lines observed in jet-induced shocks
(see e.g. Whittle \& Wilson 2004). Of course, we cannot exclude
contributions from several different regions to the composite
line profile; e.g. there can also be a contribution from the ILR,
which may have an outflow geometry. In a forthcoming paper we will
try to compare the observational results with those predicted by
various models of the kinematics and structure of the BLR (a
bi-conical outflow, an accelerating outflow (wind), a Keplerian
disk, jets, etc.).
It is interesting to note that the line profiles changed during the
monitoring period, especially after 2002, when the `red bump'
appeared. After that, the asymmetry of both lines (see Table 2) and
the BD showed different behavior than in the period 1996--2002.
Moreover, in the third period, the line profiles of H$\alpha$ and
H$\beta$ changed substantially in shape, from double-peaked profiles
(see Figs. 1 and 2) to quite asymmetrical profiles (as observed in 2006).
The integrated Balmer decrement was maximum in 1999--2001 (see Fig.
\ref{fig14}). The BD changes shape from 2002, showing two peaks in
the BD vs. velocity field profile. Most probably, this is connected
with strong inhomogeneities in distribution of the absorbing
material during different periods of monitoring. As the line flux
in H$\alpha$ and H$\beta$ at small fluxes ($<7 \times 10^{-14} \rm
erg \ cm^{-2} s^{-1} A^{-1}$) correlates well with that of the
continuum (Paper 1), we infer that the change of the integrated
Balmer decrement in 1999--2001 is also caused, at least partially,
by changes in the continuum flux. Indeed, when the ionization
parameter decreases for a constant-density plasma, an increase of
the F(H$\alpha$)/F(H$\beta$) intensity ratio is expected (see for
instance Wills et al. 1985). This is due to the decrease of the
excitation state of the ionized gas: the temperature of the ionized
zone being smaller, the population of the upper levels with respect
to the lower ones decreases. In 1996--1998 the BD did not correlate
with the continuum, i.e. in this case the main cause of BD variations
is not related to the active nucleus, and probably shock-initiated
emission is dominant. We found that the FWHM of the lines does not
correlate with the continuum. This confirms our assumption that the
broad lines can be formed in several (three) different subsystems, or
that the emission is affected by an outflow that produces
shock-initiated emission.
\section{Conclusions}
This work follows Paper I (Shapovalova et al. 2008) and
is dedicated to a detailed analysis of the broad H$\alpha$ and
H$\beta$ line profile variations during the 11-year period of
monitoring (1996--2006). From this study (Section 3) it follows that
the BLR in NGC 4151 is complex, and that broad emission lines are
the sums of components formed in different subsystems:
1) the first component is photoionized by the AGN continuum (the far
red line wings, $V_r\sim +4000$ and $+5000 \ \rm km \ s^{-1}$, and the
central segments, at $V_r=\pm3500 \ \rm km \ s^{-1}$, for continuum
fluxes $F_{\rm c}<7 \times 10^{-14} \rm erg \ cm^{-2} s^{-1} A^{-1}$).
This region is the closest to the SMBH.
2) the second component is independent of changes of the AGN continuum
(far blue line wings, $V_r\sim-5000 \ \rm km \ s^{-1}$; $-4000 \ \rm km \
s^{-1}$). It is possibly generated by shocks initiated by an
outflow.
3) the third component, where the central parts of the lines
($V_r<4000 \ \rm km \ s^{-1}$) are formed; at high fluxes $F_{\rm c} > 7 \times
10^{-14} \rm erg \ cm^{-2} s^{-1} A^{-1}$ this component is also
independent of the AGN continuum (possibly an outflow and jet).
Finally, we can conclude that the BLR of NGC 4151 may not be purely
photoionized, i.e. besides photoionization, there could be some
internal shocks contributing to the broad lines (see also the
discussion in Paper I). There are at least three different
subregions, with different velocity fields and probably different
physical conditions, that produce complex variability in the broad
lines of NGC 4151 and changes in the line profile (very often
temporary bumps). Our investigations indicate
that reverberation mapping might not be valid as a tool to determine
the BLR size in this AGN, i.e. that this AGN is not well suited to
the method. Consequently, the results for the BLR size of NGC 4151
(given in the introduction) should be taken with caution.
\section*{Acknowledgments}
The authors would like to thank Suzy Collin for her suggestions on how
to improve this paper. We also thank the anonymous referee for very
useful comments. This work was supported by INTAS (grant N96-0328),
RFBR (grants N97-02-17625, N00-02-16272, N03-02-17123,
06-02-16843 and 09-02-01136), the State program `Astronomy' (Russia),
CONACYT research grants 39560-F and 54480 (M\'exico) and the Ministry
of Science and Technological Development of Republic of Serbia
through the project Astrophysical Spectroscopy of Extragalactic
Objects (146002).
\section{Introduction}
There has been an extensive debate about the correct expression for
the momentum density of electromagnetic waves in linear media. The
Minkowski's expression $\mathbf{D}\times \mathbf{B}$ and the
Abraham's $\mathbf{E}\times \mathbf{H}/c^2$ are the most famous
ones, both proposed in the beginning of the twentieth century
\cite{pfeifer07}. In these expressions, $\mathbf{E}$ is the electric
field, $\mathbf{B}$ is the magnetic field, $\mathbf{D}\equiv
\varepsilon_0\mathbf{E}+\mathbf{P}$ is the electric displacement
field and $\mathbf{H}$ is defined as $\mathbf{H}\equiv
\mathbf{B}/\mu_0 -\mathbf{M}$, where $\mathbf{P}$ and $\mathbf{M}$
are the electric and magnetic dipole densities of the medium.
$\varepsilon_0$ is the permittivity of free space, $\mu_0$ is the
permeability of free space and $c={1}/{\sqrt{\varepsilon_0\mu_0}}$
is the speed of light in vacuum. These expressions make
qualitatively different predictions for the momentum of light in a
medium. The Minkowski formulation predicts that a photon with
momentum ${\hbar\omega}/{c}$ in vacuum increases its momentum to
$n{\hbar\omega}/{c}$, where $n$ is the refraction index of the
medium, on entering a dielectric medium. The Abraham expression,
however, states that the photon momentum decreases to
${\hbar\omega}/{nc}$ on entering the medium.
The debate of which form is the correct one persisted for many
decades, with experiments and theoretical discussions, from time to
time, seeming to favor either one of the two formulations. Arguments
that sometimes are naively used in favor of the Minkowski
formulation are the experiments of Jones \emph{et al.}
\cite{jones54,jones78} that measured the radiation pressure on
mirrors immersed in dielectric media, the experiments of Ashkin and
Dziedzic \cite{ashkin73} that measured the radiation pressure on the
free surface of a liquid dielectric, the experiments of Gibson
\emph{et al.} \cite{gibson80} that measured the radiation pressure
on the charges of a semiconductor via the photon drag effect and the
experiments of Campbell \emph{et al.} \cite{campbell05} that
measured the recoil momentum of atoms in a gas after absorbing one
photon. All these experiments are consistent with the consideration
that a photon in a medium has momentum $n{\hbar\omega}/{c}$.
Arguments that sometimes are naively used in favor of the Abraham
formulation are the symmetry of its energy-momentum tensor,
compatible with conservation of angular momentum, the agreement of
its electromagnetic momentum density with the predictions of
Einstein box theories \cite{frisch65,brevik81,loudon04} in a direct
way, the experiments of Walker \emph{et al.}
\cite{walker75b,walker77} that measured the torque on a dielectric
disk suspended in a torsion fiber subjected to external magnetic and
electric fields and the experiments of She \emph{et al.}
\cite{she08} that measured a push force on the end face of a silica
filament when a light pulse exits it. These experiments are
consistent with the Abraham form for the momentum density of an
electromagnetic wave in a dielectric medium.
Although the debate is still supported by some researchers, Penfield
and Haus showed, more than forty years ago, that neither of the
forms is completely correct on its own \cite{penfield}. The
electromagnetic momentum of electromagnetic waves in linear media is
always accompanied by a material momentum, and when the material
counterpart is taken into account, both the Minkowski and Abraham
forms for the electromagnetic momentum density yield the same
experimental predictions. They are simply two different ways, among
many others, to divide the total momentum density. A revision of
this discussion and the eventual conclusion can be found in Ref.
\cite{pfeifer07}. In fact, the experimental results of Jones
\emph{et al.} were reproduced by Gordon \cite{gordon73} and Loudon
\cite{loudon02} using the Abraham form for the electromagnetic
momentum density and calculating the material momentum by means of
the Lorentz force. Gordon also reproduced the results of Ashkin and
Dziedzic in the same way \cite{gordon73}. Loudon \emph{et al.}
\cite{loudon05} showed that the experiments of Gibson \emph{et al.}
can also be explained with both formulations. And Leonhardt
\cite{leonhardt06} showed that the experiments of Campbell \emph{et
al.} can also be explained by the use of the Abraham form for the
momentum density and a redefinition of the mechanical momentum. On
the other hand, Israel \cite{israel77} derived the experimental
results of Walker \emph{et al.} using the Minkowski formulation with
a suitable combination of electromagnetic and material
energy-momentum tensors and the conclusions of the experiments of
She \emph{et al.} were recently questioned
\cite{mansuripur09c,mansuripur09a,brevik09}.
What happens is that for each experimental situation one formulation
can predict the behavior of the system in a simpler way, but the
Minkowski, Abraham and other proposed formulations are always
equivalent. In a recent article, Pfeifer \emph{et al.} show the
conditions in which we can neglect the material counterpart of the
Minkowski energy-momentum tensor \cite{pfeifer09}, justifying why it
is possible to predict the behavior of the experiments of Jones
\emph{et al.} and the modeling of optical tweezers only with the
electromagnetic tensor in the Minkowski formulation. To summarize, I
quote Ref. \cite{pfeifer07}: ``(...) no electromagnetic wave
energy-momentum tensor is complete on its own. When the appropriate
accompanying energy-momentum tensor for the material medium is also
considered, experimental predictions of all various proposed tensors
will always be the same, and the preferred form is therefore
effectively a matter of personal choice.''
But we can ask if there is, among all possible ways to divide the
total momentum density of an electromagnetic wave in a medium into
electromagnetic and material parts, a natural one. I believe there
is. $\mathbf{E}$ and $\mathbf{B}$ are the fields that appear in the
Lorentz force law and actually interact with electric charges. They
are the fields that can transfer momentum to matter. So, in my point
of view, they must be the fields that may carry electromagnetic
momentum. $\mathbf{D}$ and $\mathbf{H}$ should be seen as averaged
quantities of material and electromagnetic properties, used to
simplify the calculations. In this sense, it seems natural that the
electromagnetic part of the momentum density of an electromagnetic
wave in a medium has the form
$\mathbf{p}_{\mathrm{e.m.}}=\varepsilon_0\mathbf{E}\times\mathbf{B}$,
which does not have an explicit dependence on the properties of the
medium. The material part of the momentum should be calculated as
the momentum acquired by the medium by the action of the Lorentz
force on the charges of the medium. An electromagnetic
energy-momentum tensor that has these characteristics was previously
proposed by Obukhov and Hehl \cite{obukhov03}. Here I show the
validity of this division of the momentum density in a series of
examples. The form $\varepsilon_0\mathbf{E}\times\mathbf{B}$ for the
electromagnetic momentum density is equivalent to the Abraham one in
non-magnetic media. As in all cited experiments the media under
consideration were non-magnetic, there is no difference between the
treatment with this momentum density and the Abraham one. Only in
magnetic media the differences will appear.
Gordon \cite{gordon73}, Loudon \cite{loudon02,loudon03} and
Mansuripur \cite{mansuripur04} calculated the material momentum
density of electromagnetic waves in linear non-dispersive
dielectrics using directly the Lorentz force law. Scalora \emph{et
al.} \cite{scalora06} used numerical simulations to calculate, also
by the Lorentz force law, the momentum transfer to more general
dispersive media. Recently, Hinds and Barnett \cite{hinds09} used
the Lorentz force to calculate the momentum transfer to a single
atom. Here, following this method in an analytical treatment, I show
that there may exist permanent transfers of momentum to the media, due
to the passage of electromagnetic pulses, that were not considered
before. I also generalize the previous treatments considering
magnetic media. Mansuripur has treated magnetic materials in another
work \cite{mansuripur07b}, but the force equation, results and
conclusions of this work are different from his. I believe that my
treatment is more adequate. If this method of using the Lorentz
force is adopted to calculate the material momentum of
electromagnetic waves in linear media, we conclude that the
electromagnetic part of the momentum density must have the form
$\varepsilon_0\mathbf{E}\times\mathbf{B}$ in order that we have
momentum conservation in the various circumstances that are
discussed in this paper.
In Sec. \ref{sec:mom}, I calculate the material momentum density of
an electromagnetic pulse in a homogeneous linear dielectric and
magnetic medium. In Sec. \ref{sec:int diel}, I calculate the
momentum transfer to the medium near the interface on the partial
reflection and transmission of an electromagnetic pulse by the
interface between two linear media and show the momentum
conservation in the process. In Sec. \ref{sec:jones}, I show the
compatibility of the present treatment with the experiments of Jones
\emph{et al.} \cite{jones54,jones78} for the momentum transfer from
an electromagnetic wave in a dielectric medium to a mirror upon
reflection, show the momentum conservation in this process and
generalize the treatment of Mansuripur \cite{mansuripur07a} for the
radiation pressure on mirrors immersed in linear media for arbitrary
kinds of mirrors and magnetic media. In Sec. \ref{sec:antirref}, I
use my method to calculate the momentum transfer to an
antireflection layer between two linear media on the passage of an
electromagnetic pulse and show the momentum conservation in the
process. In Sec. \ref{sec:einstein}, I show the compatibility of the
proposed division of the momentum density with the Einstein's theory
of relativity by the use of a \emph{gedanken} experiment of the kind
``Einstein box theories''. Finally, in Sec. \ref{conc}, I present my
concluding remarks.
\section{Material momentum of electromagnetic waves in linear
dielectric and magnetic media}\label{sec:mom}
The momentum transfer to a linear non-absorptive and non-dispersive
dielectric medium due to the presence of an electromagnetic wave can
be calculated directly using the Lorentz force
\cite{gordon73,loudon02,loudon03,mansuripur04,scalora06}. The force
acting on electric dipoles can be written as
\begin{equation}\label{lor force dipoles}
\mathbf{F}_{\mathrm{dip.}}=(\mathbf{p}\cdot \nabla)\mathbf{E}+\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}\times
\mathbf{B}\;,
\end{equation} where $\mathbf{p}$ is the dipole moment. In a linear,
isotropic and non-dispersive dielectric, the electric dipole moment
density can be written as
$\mathbf{P}=\chi_\mathrm{e}\varepsilon_0\mathbf{E}$, where
$\chi_\mathrm{e}$ is the electric susceptibility of the medium. It
is important to stress that the consideration of a non-dispersive
linear medium must be seen as an approximation, since every material
medium is inevitably accompanied by dispersion. But if the
electromagnetic wave has a narrow frequency spectrum and the
dispersion is small in this region of frequencies, this treatment
will give the material contribution of the total momentum of the
wave with a good precision. Using Eq. (\ref{lor force dipoles}), the
Maxwell equations and some vectorial identities, the force density
on this medium can be written as \cite{gordon73}
\begin{equation}\label{force density dipoles 2}
\mathbf{f}_{\mathrm{diel.}}=\chi_\mathrm{e}\varepsilon_0\left[\nabla\left(\frac{1}{2}E^2\right)+\frac{\partial}{\partial
t}\left(\mathbf{E}\times \mathbf{B}\right)\right]\;.
\end{equation}
In magnetized media, there is also a bound current
$\nabla\times\mathbf{M}$ that is affected by the Lorentz force. So
the force density in a linear non-dispersive dielectric and magnetic
medium can be written as
\begin{equation}\label{force density}
\mathbf{f}=\underbrace{\chi_\mathrm{e}\varepsilon_0\nabla\left(\frac{1}{2}E^2\right)}_{\mathbf{f}_1}+\underbrace{\chi_\mathrm{e}\varepsilon_0\frac{\partial}{\partial
t}\left(\mathbf{E}\times \mathbf{B}\right)}_{\mathbf{f}_2}+\underbrace{(\nabla\times\mathbf{M})\times\mathbf{B}}_{\mathbf{f}_3}\;.
\end{equation} I will calculate the material momentum due to the
action of the forces $\mathbf{f}_1$, $\mathbf{f}_2$ and
$\mathbf{f}_3$ separately. In a linear, isotropic and non-dispersive
magnetic medium, we have
$\mathbf{M}=\chi_\mathrm{m}\mathbf{H}={\chi_\mathrm{m}}/[{(1+\chi_\mathrm{m})\mu_0]}\mathbf{B}$,
where $\chi_\mathrm{m}$ is the magnetic susceptibility of the
medium.
In his treatment of the material part of the momentum of
electromagnetic waves in magnetic materials \cite{mansuripur07b},
Mansuripur uses a specific model for a magnetic medium, obtaining a
different equation for the bound currents in the medium. The form
$\nabla\times\mathbf{M}$ is more general and agrees with his form in
a homogeneous medium. As the interfaces between different linear
media will be treated here, the general form for the bound currents
must be used. He also takes the vector product of the bound currents
with $\mu_0\mathbf{H}$ instead of $\mathbf{B}$ to find the force
density. I do not think this is adequate. For these reasons, I
believe that the present treatment to find the material momentum of
electromagnetic waves in magnetic media is more adequate than that
of Mansuripur.
Consider an electromagnetic plane wave propagating in
$\mathbf{\hat{z}}$ direction described by the following electric
field: \begin{equation}\label{pulso}
\mathbf{E}_\mathrm{i}(z,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \mathrm{d}\omega \tilde{E}(\omega)\mathrm{e}^{{i\left(\frac{n\omega}{c}z-\omega
t\right)}}\mathbf{\hat{x}}\;,
\end{equation} with $\tilde{E}(-\omega)=\tilde{E}^*(\omega)$ and
$\mathbf{B}_\mathrm{i}=({n}/{c})|\mathbf{E}_\mathrm{i}|\mathbf{\hat{y}}$,
$n=\sqrt{(1+\chi_\mathrm{e})(1+\chi_\mathrm{m})}$ being the
refraction index of the medium. The consideration of a plane wave
pulse must also be seen as an approximation. We can consider a beam
with a small angular spread, in which all wavevectors that compose
it are very close to the $z$ axis, such that their $z$ components are
equal to their norms to a good approximation. For the wave of Eq.
(\ref{pulso}), the force densities $\mathbf{f}_1$ and $\mathbf{f}_3$
from Eq. (\ref{force density}) can be written as
\begin{eqnarray}\label{f1 f3}
\mathbf{f}_1&=&-\frac{\chi_\mathrm{e}\varepsilon_0}{2}\frac{\partial}{\partial
t}(\mathbf{E}\times \mathbf{B})\;,\\
\mathbf{f}_3&=&\frac{\chi_\mathrm{m}(1+\chi_\mathrm{e})\varepsilon_0}{2}\frac{\partial}{\partial
t}(\mathbf{E}\times \mathbf{B})\;.
\end{eqnarray} Substituting these expressions for $\mathbf{f}_1$ and
$\mathbf{f}_3$ into Eq. (\ref{force density}), the material momentum
density of an electromagnetic
wave in a homogeneous linear non-dispersive and non-absorptive
medium can be written as \begin{equation}\label{pmat}
\mathbf{p}_{\mathrm{mat}}(t)=\int_{-\infty}^{t}\mathrm{d}t'\mathbf{f}(t')=\frac{(\chi_\mathrm{e}+\chi_\mathrm{m}+\chi_\mathrm{e}\chi_\mathrm{m})}{2}\varepsilon_0\mathbf{E}(t)\times\mathbf{B}(t).
\end{equation} It is important to stress that this material momentum
density propagates with the pulse and disappears after its passage
through the medium. In the remaining part of this work, it will be
considered that the total momentum density of an electromagnetic
wave in a homogeneous linear medium is given by the sum of the
material momentum density above and the electromagnetic momentum
density $\varepsilon_0\mathbf{E}\times\mathbf{B}$. By also
calculating the permanent transfers of momentum to the media by the
action of the Lorentz force in some situations, the momentum
conservation in these processes and the consistency of the proposed
division of the momentum density will be shown.
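As a consistency check on Eq. (\ref{pmat}): the three force densities
contribute $-\chi_\mathrm{e}/2$, $+\chi_\mathrm{e}$ and
$+\chi_\mathrm{m}(1+\chi_\mathrm{e})/2$, respectively, as coefficients
of $\varepsilon_0\,\partial_t(\mathbf{E}\times\mathbf{B})$; a minimal
symbolic sketch of this bookkeeping (Python/sympy) is:
\begin{verbatim}
import sympy as sp

chi_e, chi_m = sp.symbols('chi_e chi_m', positive=True)

# coefficients of eps_0 * d/dt (E x B) coming from f_1, f_2 and f_3
coeff = -chi_e/2 + chi_e + chi_m*(1 + chi_e)/2

# difference with the coefficient of Eq. (pmat); expect 0
print(sp.simplify(coeff - (chi_e + chi_m + chi_e*chi_m)/2))
\end{verbatim}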
\section{Reflection and transmission of an electromagnetic pulse by
the interface between two linear media}\label{sec:int diel}
Consider the situation illustrated in Fig. \ref{fig-pulsos}.
Initially we have a pulse of electromagnetic radiation in medium 1,
with electric susceptibility $\chi_{\mathrm{e1}}$, magnetic
susceptibility $\chi_{\mathrm{m1}}$ and refraction index
$n_1=\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m1}})}$,
propagating in $\mathbf{\hat{z}}$ direction towards the interface
with medium 2, with electric and magnetic susceptibilities
$\chi_{\mathrm{e2}}$ and $\chi_{\mathrm{m2}}$ and refraction index
$n_2=\sqrt{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m2}})}$. The
interface is in the plane $z=0$ and the incidence is normal. Later,
we will have a transmitted pulse in medium 2 and a reflected pulse
in medium 1. It will be shown that the proposed division of the
momentum density of the pulse in electromagnetic and material parts
leads to the momentum conservation in the process. Representing the
electric field of the incident pulse as in Eq. (\ref{pulso}), the
electromagnetic part of its momentum can be written as
\begin{equation}\label{p0}
\mathbf{P}_0=\int \mathrm{d}^3r \varepsilon_0\mathbf{E}_\mathrm{i}\times
\mathbf{B}_\mathrm{i}=\varepsilon_0\int \mathrm{d}x\int \mathrm{d}y \int_{-\infty}^{+\infty}
\mathrm{d}\omega|\tilde{E}(\omega)|^2\mathbf{\hat{z}}\;.
\end{equation} The integrals in $x$ and $y$ are, in principle,
infinite. But a plane wave is always an approximation, so the
amplitude $\tilde{E}(\omega)$ must decay for large $x$ and $y$. I
will not worry about that, only assume that the integral is finite.
Integrating also the material momentum density of Eq. (\ref{pmat}),
we find that the total momentum of the incident
($\mathbf{P}_\mathrm{i}$), reflected ($\mathbf{P}_\mathrm{r}$) and
transmitted ($\mathbf{P}_\mathrm{t}$) pulses are
\begin{eqnarray}\label{pi pr pt}\nonumber
&&\mathbf{P}_\mathrm{i}=\left(1+\frac{\chi_{\mathrm{e1}}+\chi_{\mathrm{m1}}+\chi_{\mathrm{e1}}\chi_{\mathrm{m1}}}{2}\right)\mathbf{P}_0\;,\;\mathbf{P}_\mathrm{r}=-|r|^2\mathbf{P}_\mathrm{i}\;,\\
&&\mathbf{P}_\mathrm{t}=|t|^2\left(1+\frac{\chi_{\mathrm{e2}}+\chi_{\mathrm{m2}}+\chi_{\mathrm{e2}}\chi_{\mathrm{m2}}}{2}\right)\mathbf{P}_0\;,
\end{eqnarray} where $r$ and $t$ are the reflection and transmission
coefficients, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fig1.eps}\\
\caption{(a) A pulse with total momentum $\mathbf{P}_\mathrm{i}$ propagates in medium 1 with electric and magnetic susceptibilities
$\chi_{\mathrm{e1}}$ and $\chi_{\mathrm{m1}}$
towards the interface with medium 2 with electric and magnetic susceptibilities
$\chi_{\mathrm{e2}}$ and $\chi_{\mathrm{m2}}$. (b) The resultant reflected and
transmitted pulses with total momentum $\mathbf{P}_\mathrm{r}$ and $\mathbf{P}_\mathrm{t}$.}\label{fig-pulsos}\end{center}
\end{figure}
There is also a momentum transfer to medium 1 during reflection that
does not propagate with the electromagnetic pulse. We can observe in
Eq. (\ref{force density}) that it is the total (incident plus
reflected) field that generates the force density $\mathbf{f}_1$. As
$E^2=E_\mathrm{i}^2+E_\mathrm{r}^2+2\mathbf{E}_\mathrm{i}\cdot
\mathbf{E}_\mathrm{r}$ in medium 1, we must consider the term
$\mathbf{f}_1'=\chi_{\mathrm{e1}}\varepsilon_0\nabla\left(\mathbf{E}_\mathrm{i}\cdot\mathbf{E}_\mathrm{r}\right)$.
The momentum transfer due to this term is \begin{eqnarray}\label{p
linha 1} \nonumber
\mathbf{P}_1'&=&\int_{-\infty}^{+\infty}\mathrm{d}t\int
\mathrm{d}x\int \mathrm{d}y \int_{-\infty}^0 \mathrm{d}z
\chi_{\mathrm{e1}}\varepsilon_0\frac{\partial}{\partial
z}\left(\mathbf{E}_\mathrm{i}\cdot\mathbf{E}_\mathrm{r}\right)\mathbf{\hat{z}}\\
&=&r\chi_{\mathrm{e1}}\mathbf{P}_0\;,
\end{eqnarray} with $\mathbf{P}_0$ given by Eq. (\ref{p0}).
As $\mathbf{E}_\mathrm{i}\times \mathbf{B}_\mathrm{r}
+\mathbf{E}_\mathrm{r}\times \mathbf{B}_\mathrm{i}=0$, the force
density $\mathbf{f}_2$ in Eq. (\ref{force density}) does not
contribute to a permanent momentum transfer to the medium. The
permanent momentum transfer from Eq. (\ref{p linha 1}) was not
considered in the previous treatments of reflection of
electromagnetic pulses by interfaces between two dielectrics
\cite{loudon02,loudon03,mansuripur04}, so these works are compatible
with momentum conservation only when the incidence medium is vacuum
and $\mathbf{P}_1'=0$.
The force density $\mathbf{f}_3$ in Eq. (\ref{force density}) also
contributes to a permanent transfer of momentum to medium 1. We can
see that for the pulse of Eq. (\ref{pulso}) this force density can
be written as \begin{equation}\label{f3}
\mathbf{f}_3=
-\frac{\chi_\mathrm{m}}{\mu_0(1+\chi_\mathrm{m})}\nabla\left(\frac{B^2}{2}\right)\;.
\end{equation} Repeating the calculation of Eq. (\ref{p linha 1}),
remembering that $\mathbf{B}_\mathrm{r}=-r\mathbf{B}_\mathrm{i}$, we
can see that this force density transfers a momentum $\mathbf{P}'_3$
to the medium 1 given by \begin{equation}\label{p linha 3}
\mathbf{P}_3'=r\chi_{\mathrm{m1}}(1+\chi_{\mathrm{e1}})\mathbf{P}_0\;.
\end{equation}
There is still another contribution to the momentum transfer to the
interface, due to the discontinuity of $\mathbf{M}$ at the interface.
In a region of extent $\delta z$, much smaller than the wavelength of
light, around $z=0$, $\mathbf{f}_3$ from Eq. (\ref{force density}) can be
written as \begin{equation}\nonumber
\mathbf{f}_3|_{z=0} = \frac{1}{\delta z}
\left[\frac{\chi_{\mathrm{m2}}}{1+\chi_{\mathrm{m2}}}B_2-\frac{\chi_{\mathrm{m1}}}{1+\chi_{\mathrm{m1}}}B_1\right]\frac{(-\mathbf{\hat{x}})}{\mu_0}\times\mathbf{B}\;,
\end{equation} where $\mathbf{B}_1=\mathbf{B}_\mathrm{i}(1-r)$ is
the magnetic field just before the interface, in medium 1, and
$\mathbf{B}_2=[({1+\chi_{\mathrm{m2}}})/({1+\chi_{\mathrm{m1}}})]\mathbf{B}_1$
is the magnetic field just after the interface, in medium 2.
Integrating in this region of extent $\delta z$ and in $x$, $y$ and
$t$, we obtain the following momentum transfer to this interface:
\begin{eqnarray}\label{p linha 4}
\mathbf{P}'_4&=&\int_{-\infty}^{+\infty}\mathrm{d}t\int
\mathrm{d}x\int \mathrm{d}y \int_{-\delta z/2}^{+\delta
z/2}\mathrm{d}z \;\mathbf{f}_3\\\nonumber
&=&\frac{(\chi_{\mathrm{m1}}-\chi_{\mathrm{m2}})(1+\chi_{\mathrm{e1}})(1-r)^2}{2}\left[1+\frac{1+\chi_{\mathrm{m2}}}{1+\chi_{\mathrm{m1}}}\right]\mathbf{P}_0.
\end{eqnarray}
Using Eqs. (\ref{pi pr pt}), (\ref{p linha 1}), (\ref{p linha 3})
and (\ref{p linha 4}) and substituting the values of $r$ and $t$
\cite{jackson}, \begin{eqnarray}\nonumber
r&=&\frac{\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}-\sqrt{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}}{\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}+\sqrt{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}}\;,\\\nonumber
t&=&\frac{2\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}}{\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}+\sqrt{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}}\;,
\end{eqnarray} it can be shown that \begin{equation}
\mathbf{P}_\mathrm{r}+\mathbf{P}_\mathrm{t}+\mathbf{P}'_1+\mathbf{P}'_3+\mathbf{P}'_4=\mathbf{P}_\mathrm{i}\;.
\end{equation}
Obtaining a correct momentum balance equation shows the
consistency of the proposed division of the momentum of the wave
into material and electromagnetic parts.
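This balance can also be verified symbolically; the sketch below
(Python/sympy, with the four susceptibilities as free positive symbols
and momenta in units of $\mathbf{P}_0$) implements Eqs. (\ref{pi pr
pt}), (\ref{p linha 1}), (\ref{p linha 3}) and (\ref{p linha 4}):
\begin{verbatim}
import sympy as sp

e1, m1, e2, m2 = sp.symbols('chi_e1 chi_m1 chi_e2 chi_m2',
                            positive=True)

a = sp.sqrt((1 + e1)*(1 + m2))
b = sp.sqrt((1 + e2)*(1 + m1))
r = (a - b)/(a + b)               # reflection coefficient
t = 2*a/(a + b)                   # transmission coefficient

P_i = 1 + (e1 + m1 + e1*m1)/2
P_r = -r**2*P_i
P_t = t**2*(1 + (e2 + m2 + e2*m2)/2)
P1 = r*e1                          # Eq. (p linha 1)
P3 = r*m1*(1 + e1)                 # Eq. (p linha 3)
P4 = ((m1 - m2)*(1 + e1)*(1 - r)**2/2
      *(1 + (1 + m2)/(1 + m1)))    # Eq. (p linha 4)

balance = P_r + P_t + P1 + P3 + P4 - P_i
print(sp.simplify(balance))        # should reduce to 0
# numeric spot check as a fallback; expect ~0
print(balance.subs({e1: 0.7, m1: 0.2, e2: 1.5, m2: 0.4}).evalf())
\end{verbatim}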
\section{Radiation pressure on submerged mirrors}\label{sec:jones}
In the second example, consider that we put a very good conductor in
place of medium 2 in Fig. \ref{fig-pulsos} that reflects an
electromagnetic pulse described by Eq. (\ref{pulso}). The momentum
transfer to this mirror can be calculated using the Lorentz force
that acts on the induced currents in the mirror. The magnetic field
inside the mirror ($z>0$) can be written as \cite{jackson}
\begin{equation}
\mathbf{B}_{\mathrm{mir}}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \mathrm{d}\omega \frac{2n_1}{(1+\chi_{\mathrm{m1}})c}\tilde{E}(\omega)
\mathrm{e}^{(-\kappa+ik)z-i\omega
t}\mathbf{\hat{y}}\;,
\end{equation} where $k$ and $\kappa$ are real functions of
$\omega$. This form for the magnetic field guarantees the continuity
of $\mathbf{H}$ at the interface and decreases exponentially with
$z$. Disregarding the small electric field inside
the mirror that would generate absorption of electromagnetic energy,
the current density in the mirror can be written as
$\mathbf{J}_{\mathrm{mir}}=({1}/{\mu_0})\nabla \times \mathbf{B}$.
So the momentum transfer to the mirror due to the reflection of the
electromagnetic pulse is \begin{eqnarray}
&& \mathbf{P}_{\mathrm{mir}}=\int \mathrm{d}x \int \mathrm{d}y \int_0^{+\infty} \mathrm{d}z \int _{-\infty}^{+\infty}
\mathrm{d}t\,
\mathbf{J}_{\mathrm{mir}} \times \mathbf{B}_{\mathrm{mir}}\\\nonumber &&=\int \mathrm{d}x \int \mathrm{d}y \int_{-\infty}^{+\infty} \mathrm{d}\omega \frac{2\varepsilon_0(1+\chi_{\mathrm{e1}})}{(1+\chi_{\mathrm{m1}})}|\tilde{E}(\omega)|^2\frac{(\kappa-ik)}{\kappa}\mathbf{\hat{z}}\;.
\end{eqnarray} We have $|\tilde{E}(-\omega)|=|\tilde{E}(\omega)|$,
$\kappa(-\omega)=\kappa(\omega)$ and $k(-\omega)=-k(\omega)$, such
that the integral in $\omega$ of the term that multiplies ($-ik$) is
zero. So the momentum transfer from the pulse to the mirror upon
reflection is \begin{equation}\label{mom esp}
\mathbf{P}_{\mathrm{mir}}=2\left(\frac{1+\chi_{\mathrm{e1}}}{1+\chi_{\mathrm{m1}}}\right)\mathbf{P}_0\;,
\end{equation} with $\mathbf{P}_0$ given by Eq. (\ref{p0}).
The same momentum transfer is obtained with the condition
$\mathbf{P}_{\mathrm{mir}}=\mathbf{P}_{\mathrm{i}}-\mathbf{P}_\mathrm{r}-\mathbf{P}'_1-\mathbf{P}'_3-\mathbf{P}'_4$
using Eqs. (\ref{pi pr pt}), (\ref{p linha 1}), (\ref{p linha 3})
and (\ref{p linha 4}) with $r=-1$ and $\chi_{\mathrm{m2}}=0$. In
this way, we arrive at the same expression (\ref{mom esp}) for
$\mathbf{P}_{\mathrm{mir}}$, showing again the consistency of the
treatment.
For non-dispersive linear media, the total energy density of the
wave can be written as
$u_{\mathrm{tot}}=(1+\chi_{\mathrm{e1}})\varepsilon_0|\mathbf{E}|^2$
\cite{jackson}. So the energy of the incident pulse of Eq.
(\ref{pulso}) is \begin{equation}\label{en pulso i}
U_\mathrm{i}=\int \mathrm{d}^3r
(1+\chi_{\mathrm{e1}})\varepsilon_0E_\mathrm{i}^2=\frac{(1+\chi_{\mathrm{e1}})c}{n_1}\,|\mathbf{P}_0|.
\end{equation}
The ratio between the modulus of the momentum transfer to the mirror
(\ref{mom esp}) and the incident energy (\ref{en pulso i}) is
${2n_1}/[{(1+\chi_{\mathrm{m1}})c}]$, in accordance with the
experiments of Jones \emph{et al.} \cite{jones54,jones78}. In these
experiments, the magnetic susceptibilities of the media were too
small to affect the results, so the ratio is usually stated as
$2n_1/c$. These experiments are frequently used to support the
Minkowski formulation, but we can see that the present treatment
also predicts the measured results. As this treatment is for
non-dispersive media, it does not say whether the term ${c}/{n_1}$
in this expression is the group velocity or the phase velocity of
the wave in the medium. The experiments show that it is the phase
velocity \cite{jones78}.
In a recent paper \cite{mansuripur07a}, Mansuripur treated the
problem of radiation pressure on mirrors immersed in linear
dielectric and non-magnetic media using a model of a medium with
imaginary refraction index to describe the mirrors. He considered
the case where the complex reflection coefficient of the mirror is
$\mathrm{{e}}^{i\phi}$ and calculated the Lorentz force on the
electric currents of the mirror. In the present treatment of the
problem of the reflection of an electromagnetic pulse by a
non-magnetic mirror with this complex reflection coefficient
immersed in a linear dielectric and magnetic medium, the momentum
transfer to the mirror can be calculated using Eqs. (\ref{pi pr
pt}), (\ref{p linha 1}), (\ref{p linha 3}) and (\ref{p linha 4}) as
$\mathbf{P}_{\mathrm{mir}}=\mathbf{P}_{\mathrm{i}}-\mathbf{P}_\mathrm{r}-\mathbf{P}'_1-\mathbf{P}'_3-\mathbf{P}'_4$
with $r=\mathrm{{e}}^{i\phi}$ and $\chi_{\mathrm{m2}}=0$. Retaining
terms up to the first power in $\chi_{\mathrm{m1}}$, we obtain
\begin{equation}\nonumber
\mathbf{P}_{\mathrm{mir}}\simeq2\left[1+(\chi_{\mathrm{e1}}-\chi_{\mathrm{m1}}-\chi_{\mathrm{e1}}\chi_{\mathrm{m1}})\sin^2\left(\frac{\phi}{2}\right)\right]\mathbf{P}_0\;,
\end{equation} which is compatible with the result reported by
Mansuripur when $\chi_{\mathrm{m1}}=0$ and can be experimentally
tested. As I rely only on the properties of the linear medium and no
particular model to describe the mirror is adopted, this treatment
is more general than that of Mansuripur.
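This expansion admits a symbolic check as well. In the sketch below
(Python/sympy), $r=\mathrm{e}^{i\phi}$ is assumed to enter the
real-field expressions through $|r|^2=1$, $\mathrm{Re}\,r=\cos\phi$ in
Eqs. (\ref{p linha 1}) and (\ref{p linha 3}), and
$|1-r|^2=2-2\cos\phi$ in Eq. (\ref{p linha 4}):
\begin{verbatim}
import sympy as sp

e1, m1, phi = sp.symbols('chi_e1 chi_m1 phi', positive=True)
c = sp.cos(phi)

P_i = 1 + (e1 + m1 + e1*m1)/2
P_r = -P_i                               # |r|^2 = 1
P1 = c*e1                                # Re(r) = cos(phi)
P3 = c*m1*(1 + e1)
P4 = m1*(1 + e1)*(2 - 2*c)/2*(1 + 1/(1 + m1))   # |1-r|^2

P_mir = P_i - P_r - P1 - P3 - P4
target = 2*(1 + (e1 - m1 - e1*m1)*sp.sin(phi/2)**2)

# the difference should vanish through first order in chi_m1
diff = sp.series(P_mir - target, m1, 0, 2).removeO()
print(sp.simplify(diff))                 # expect 0
\end{verbatim}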
\section{Momentum transfer to an antireflection
layer}\label{sec:antirref}
In the next example, suppose that we have an antireflection layer
between media 1 and 2 consisting of a material with electric
susceptibility $\chi_\mathrm{e}'$ obeying
$(1+\chi_\mathrm{e}')=\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{e2}})}$,
with magnetic susceptibility $\chi_\mathrm{m}'$ obeying
$(1+\chi_\mathrm{m}')=\sqrt{(1+\chi_{\mathrm{m1}})(1+\chi_{\mathrm{m2}})}$
and thickness ${\lambda'}/{4}$, $\lambda'$ being the wavelength of
the central frequency of the pulse in this medium. If we have an
almost monochromatic incident pulse in medium 1 propagating towards
the interface, it will be almost completely transmitted to medium 2.
The initial and final situations are illustrated in Fig. \ref{fig:
ant ref}. If the electric field of the incident pulse
$\mathbf{E}_\mathrm{i}$ is described by Eq. (\ref{pulso}), the
electric field of the transmitted one will be \cite{born}
\begin{eqnarray}\nonumber
\mathbf{E}_2(z,t)&=&\left[\frac{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}\right]^{1/4}\times\\&&\times\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}
\mathrm{d}\omega
\tilde{E}(\omega)\mathrm{e}^{i\left(\frac{n_2\omega}{c}z-\omega
t\right)}\mathbf{\hat{x}}\;.
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fig2.eps}\\
\caption{(a) A pulse with total momentum $\mathbf{P}_\mathrm{i}$ propagates in medium 1 with electric and magnetic susceptibilities
$\chi_{\mathrm{e1}}$ and $\chi_{\mathrm{m1}}$
towards medium 2 with electric and magnetic susceptibilities
$\chi_{\mathrm{e2}}$ and $\chi_{\mathrm{m2}}$. There is an antireflection coating layer between media 1 and 2
consisting of a material with electric and magnetic
susceptibilities
$\chi'_{e}$ and $\chi'_{m}$ such that $(1+\chi_\mathrm{e}')=\sqrt{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{e2}})}$ and
$(1+\chi_\mathrm{m}')=\sqrt{(1+\chi_{\mathrm{m1}})(1+\chi_{\mathrm{m2}})}$. The
thickness of the layer is ${\lambda'}/{4}$, $\lambda'$ being the
wavelength of the central frequency of the pulse in this medium.
Unlike in the figure, the pulse is assumed to be much longer than
the layer. (b) The pulse was totally transmitted to medium 2 and has
total momentum $\mathbf{P}_2$.}\label{fig: ant ref}\end{center}
\end{figure}
The total momentum of the incident pulse $\mathbf{P}_\mathrm{i}$ and
of the transmitted pulse $\mathbf{P}_2$ can be written as
\begin{eqnarray}\label{p1 p2}
\mathbf{P}_\mathrm{i}&=&\left(1+\frac{\chi_{\mathrm{e1}}+\chi_{\mathrm{m1}}+\chi_{\mathrm{e1}}\chi_{\mathrm{m1}}}{2}\right)\mathbf{P}_0\;\;,\\\nonumber\label{p1 p2 b}
\mathbf{P}_2&=&\left[\frac{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}\right]^{1/2}\times\\&&\times
\left(1+\frac{\chi_{\mathrm{e2}}+\chi_{\mathrm{m2}}+\chi_{\mathrm{e2}}\chi_{\mathrm{m2}}}{2}\right)\mathbf{P}_0\;,
\end{eqnarray} with $\mathbf{P}_0$ given by Eq. (\ref{p0}). Since
the momenta of the initial and final pulses are different, there
must be a momentum transfer to the antireflection layer in order
that we have momentum conservation. Now I will show that the use of
the force density of Eq. (\ref{force density}) acting in the
antireflection layer guarantees momentum conservation in the
process. Let us call $\mathbf{E}'$ and $\mathbf{B}'$ the electric
and magnetic fields in the region of the layer. The boundary
conditions impose \begin{eqnarray}\nonumber
&&\mathbf{E}'|_{z=0}=\mathbf{E}_{1}|_{z=0}\;,\;
\mathbf{E}'|_{z=\lambda'/4}=\mathbf{E}_{2}|_{z=\lambda'/4}\;,\;\\
&&\frac{\mathbf{B}'|_{z=0}}{1+\chi_\mathrm{m}'}=\frac{\mathbf{B}_{1}|_{z=0}}{1+\chi_{\mathrm{m1}}}\;,\;
\frac{\mathbf{B}'|_{z=\lambda'/4}}{1+\chi_\mathrm{m}'}=\frac{\mathbf{B}_{2}|_{z=\lambda'/4}}{1+\chi_{\mathrm{m2}}}\;.
\end{eqnarray} Writing $\mathbf{f}_3$ from Eq. (\ref{force density})
as in Eq. (\ref{f3}) and integrating the force density $\mathbf{f}$
on time and in the volume of the layer, we obtain the momentum
transfer to the layer: \begin{eqnarray}\label{pa}
\mathbf{P}''_\mathrm{a}&=&\int \mathrm{d}x \int \mathrm{d}y\int^{\lambda'/4}_0 \mathrm{d}z\int_{-\infty}^{+\infty} \mathrm{d}t\;
\mathbf{f}\\\nonumber
&=&\frac{\chi_\mathrm{e}'+\chi_\mathrm{m}'+\chi_\mathrm{e}'\chi_\mathrm{m}'}{2}\left[\sqrt{\frac{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}}-1\right]\mathbf{P}_0.
\end{eqnarray}
We must also consider the momentum transfers to the interfaces
between the layer and media 1 and 2, due to the discontinuities of
the magnetization $\mathbf{M}$. Calling $\mathbf{P}''_\mathrm{b}$
the momentum transfer in $z=0$ and $\mathbf{P}''_\mathrm{c}$ the
momentum transfer in $z=\lambda'/4$ and repeating the treatment of
Sec. \ref{sec:int diel}, we obtain \begin{eqnarray}\label{pb pc}
\mathbf{P}''_\mathrm{b} &=& \frac{(\chi_{\mathrm{m1}}-\chi'_{m})(2+\chi_{\mathrm{m1}}+\chi'_{m})(1+\chi_{\mathrm{e1}})}{2(1+\chi_{\mathrm{m1}})}\mathbf{P}_0,
\\\nonumber \label{pb pc b}
\mathbf{P}''_\mathrm{c} &=&
\frac{(\chi'_{m}-\chi_{\mathrm{m2}})(2+\chi'_{m}+\chi_{\mathrm{m2}})(1+\chi_{\mathrm{e2}})}{2(1+\chi_{\mathrm{m2}})}\times\\&&\times\sqrt{\frac{(1+\chi_{\mathrm{e1}})(1+\chi_{\mathrm{m2}})}{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m1}})}}\mathbf{P}_0\;.
\end{eqnarray}
The total momentum transfer to the antireflection layer due to the
passage of the pulse is
$\mathbf{P}''=\mathbf{P}''_\mathrm{a}+\mathbf{P}''_\mathrm{b}+\mathbf{P}''_\mathrm{c}$.
Using Eqs. (\ref{p1 p2}), (\ref{p1 p2 b}), (\ref{pa}), (\ref{pb pc})
and (\ref{pb pc b}), we can see that \begin{equation}
\mathbf{P}_2+\mathbf{P}''=\mathbf{P}_\mathrm{i}
\end{equation} and momentum is conserved in the process. Again, we
see the consistency of the proposed division of the momentum of the
wave.
\section{Einstein box theories}\label{sec:einstein}
As a last example, we can see whether the present treatment agrees
with Einstein's theory of relativity by testing a \emph{gedanken}
experiment of the kind ``Einstein box theories,'' in which a single
photon in free space with energy $\hbar \omega$ and momentum
$\mathbf{P}_0=({\hbar \omega}/{c})\mathbf{\hat{z}}$ is transmitted
through a transparent slab with electric and magnetic
susceptibilities $\chi_{\mathrm{e2}}$ and $\chi_{\mathrm{m2}}$,
refraction index
$n_2=\sqrt{(1+\chi_{\mathrm{e2}})(1+\chi_{\mathrm{m2}})}$, length
$L$, mass $M$ and antireflection layers in both sides. Due to
propagation in the medium, the photon suffers a spacial delay
$(n_2-1)L$ in relation to propagation in vacuum. According to the
theory of relativity, the velocity of the center of mass-energy of
the system must not change due to the passage of the photon through
the slab, so the slab must suffer a displacement $\Delta z$ such
that \cite{frisch65,brevik81,loudon04} \begin{equation}\label{centro
de energia}
Mc^2\Delta z=\hbar\omega(n_2-1)L\;.
\end{equation} The use of the Abraham momentum density as the
electromagnetic part of the total momentum density gives directly
the correct displacement of the slab, so this \emph{gedanken}
experiment is frequently used to support the Abraham formulation.
Now I will show that the present treatment also gives the correct
displacement in a direct way. The mechanical momentum of the slab
$\mathbf{P}_{\mathrm{slab}}$ during the passage of the photon has two
contributions. The first is the momentum transferred to the first
antireflection layer. The second is the material part of the
momentum of the photon in the slab. Using Eqs. (\ref{pmat}),
(\ref{p1 p2 b}), (\ref{pa}),(\ref{pb pc}) and
(\ref{pb pc b}) with $\chi_{\mathrm{m1}}=\chi_{\mathrm{e1}}=0$, we
find that the total momentum of the slab is \begin{equation}
\mathbf{P}_{\mathrm{slab}}=\frac{n_2-1-\chi_{\mathrm{m2}}}{n_2}\frac{\hbar
\omega}{c}\mathbf{\hat{z}}\;. \end{equation} After the passage of
the pulse, the momentum of the slab comes back to zero. Part of this
momentum is in the form of hidden momentum
\cite{penfield,schockley67,vaidman90}, a relativistic effect that is
not associated with the movement of the slab. A magnetic dipole
$\mathbf{m}$ in a uniform electric field $\mathbf{E}_0$ has a
hidden momentum $\mathbf{m}\times \mathbf{E}_0$. So the density of
hidden momentum of an electromagnetic wave in a linear medium is
given by $\mathbf{M}\times \mathbf{E}$. Integrating this density in
volume, the total hidden momentum of the pulse in the slab is given
by $\mathbf{P}_{\mathrm{hid}}=-{\chi_{\mathrm{m2}}\hbar
\omega}/({n_2}{c})\mathbf{\hat{z}}$. To find the velocity of the
slab, we must subtract the hidden momentum from its total momentum
and divide by its mass. As the pulse takes a time ${n_2L}/{c}$ to
pass through the slab, the displacement of the slab is
\begin{equation}\nonumber
\Delta
z=\frac{|\mathbf{P}_{\mathrm{slab}}-\mathbf{P}_{\mathrm{hid}}|}{M}\frac{n_2L}{c}=\frac{\hbar\omega(n_2-1)L}{Mc^2}\;,
\end{equation} in agreement with Eq. (\ref{centro de energia}). Once
more we see the consistency of the proposed division of the
momentum of the wave into electromagnetic and material parts.
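The expression for $\mathbf{P}_{\mathrm{slab}}$ used above can itself
be reassembled from the layer and bulk formulas of the previous
sections; a sketch of this check (Python/sympy, with vacuum outside
the slab, i.e. $\chi_{\mathrm{e1}}=\chi_{\mathrm{m1}}=0$, and momenta
in units of $\hbar\omega/c$) is:
\begin{verbatim}
import sympy as sp

e2, m2 = sp.symbols('chi_e2 chi_m2', positive=True)
n2 = sp.sqrt((1 + e2)*(1 + m2))

ep = sp.sqrt(1 + e2) - 1            # first antireflection layer
mp = sp.sqrt(1 + m2) - 1
s = sp.sqrt((1 + m2)/(1 + e2))      # transmission factor

Pa = (ep + mp + ep*mp)/2*(s - 1)                      # Eq. (pa)
Pb = -mp*(2 + mp)/2                                   # Eq. (pb pc)
Pc = ((mp - m2)*(2 + mp + m2)*(1 + e2)
      /(2*(1 + m2))*s)                                # Eq. (pb pc b)
P_mat = (e2 + m2 + e2*m2)/2*s       # material momentum in the slab

P_slab = Pa + Pb + Pc + P_mat
diff = P_slab - (n2 - 1 - m2)/n2
print(sp.simplify(diff))            # should reduce to 0
print(diff.subs({e2: 1.4, m2: 0.6}).evalf())   # fallback; expect ~0
\end{verbatim}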
\section{Conclusions}\label{conc}
In summary, it was shown that the momentum density of
electromagnetic waves in linear non-absorptive and non-dispersive
dielectric and magnetic media can be naturally and consistently
divided into an electromagnetic part
$\varepsilon_0\mathbf{E}\times\mathbf{B}$, which has the same form
independently of the medium, and a material part that can be
obtained directly using the Lorentz force. This division was shown
to be consistent with momentum conservation in various circumstances
and with the ``Einstein box theories''. I believe that it may be
possible to extend this division to all kinds of media.
I also calculated permanent transfers of momentum to the media due
to the passage of electromagnetic pulses that were missing in
previous treatments \cite{loudon02,loudon03,mansuripur04}, showed
the compatibility of the division with the experiments of Jones
\emph{et al.} \cite{jones54,jones78} and generalized the treatment
of radiation pressure on submerged mirrors \cite{mansuripur07a},
which can be subjected to experimental verification.
\acknowledgments
The author acknowledges Carlos H. Monken and J\'ulia E. Parreira for
useful discussions and C\'elio Zuppo for revising the manuscript.
This work is supported by the Brazilian agency CNPq.
\section{Introduction}
Understanding the structure of gauge theory scattering amplitudes is
important from both the fundamental field-theoretic perspective, and
the pragmatic one of collider phenomenology. Infrared singularities, in
particular, open a window into the all-order structure of perturbation
theory and the relation between the weak and strong coupling limits;
at the same time, they provide the key to resummation of large logarithms
in a variety of phenomenological applications.
The study of infrared singularities in QCD amplitudes, which has a
three-decade-long history~\cite
Mueller:1979ih,Collins:1980ih,Sen:1981sd,Polyakov:1980ca
Dotsenko:1979wb,Brandt:1981kf,StermanWeb
Brandt:1982gz,Sen:1982bt,Gatheral:1983cz,Bassetto:1984ik
Frenkel:1984pz,Collins:1981uk,Korchemsky:1985xj,Sterman:1986aj
Ivanov:1985np,Korchemsky:1987wg,Korchemsky:1988hd
Korchemsky:1988si,Collins:1989gx
Sterman:1995fz,Catani:1996jh,Contopanagos:1996nh
KOS,KOSjet,BSZ,Laenen:2004pm,Dokshitzer:2005ig
Magnea:2008ga,Korchemskaya:1994qp,Botts:1989kf,Kidonakis:1997gm
Magnea:1990zb,Magnea:2000ss,Sterman:2002qn,Catani:1998bh
Aybat:2006wq,Aybat:2006mz,Dixon:2008gr,Eynck:2003fn},
recently received a major
boost~\cite{Becher:2009cu,Gardi:2009qi,Becher:2009qa}.
The factorization properties of soft and collinear modes, also referred
to as Sudakov factorization, were combined with the symmetry of
soft-gluon interactions under rescaling of hard parton momenta, and
were shown to constrain the structure of singularities of any massless
gauge theory amplitude, to any loop order, and for a general number
of colors $N_c$.
A remarkably simple structure emerges as the simplest solution to
these constraints. All non-collinear soft singularities are generated
by an anomalous dimension matrix in color
space~\cite{Brandt:1982gz,Sen:1982bt,Korchemsky:1985xj,Ivanov:1985np,
Botts:1989kf,KOS,KOSjet,BSZ,Dokshitzer:2005ig}. In the simplest solution,
this matrix takes the form of a sum over color dipoles, corresponding to
pairwise interactions among hard partons. This interaction is governed by
a single function of the strong coupling, the cusp anomalous dimension,
$\gamma_K(\alpha_s)$. The simplicity of this result is remarkable,
especially given the complexity of multi-leg amplitude computations
beyond tree level. The color dipole structure of soft singularities appears
naturally at the one-loop order~\cite{KOS,KOSjet,Catani:1996jh,Catani:1998bh},
where the interaction is genuinely of the form of a single gluon exchange
between any two hard partons. The validity of this structure at two loops
was not obvious {\it a priori}; it was discovered through the explicit
computation of the anomalous dimension
matrix~\cite{Aybat:2006wq,Aybat:2006mz}.
This remarkable simplicity is peculiar to the case of massless gauge
theories: recent work~\cite{Kidonakis:2009ev,Mitov:2009sv,Becher:2009kw,
Beneke:2009rj,Czakon:2009zw,Ferroglia:2009ep,Ferroglia:2009ii,Kidonakis:2009zc}
has shown that the two-loop matrix, when at least two colored legs are
massive, is not proportional to the one-loop matrix, except in particular
kinematic regions. In general, in the massive case, there are new
contributions that correlate the color and momentum degrees of freedom of
at least three partons, starting at two loops. These contributions vanish
as ${\cal O}(m^4/s^2)$ in the small mass
limit~\cite{Ferroglia:2009ep,Ferroglia:2009ii}.
Given that all existing massless results are consistent with the
sum-over-dipoles formula, it is tempting to conjecture that it gives the full
answer~\cite{Bern:2008pv,Becher:2009cu,Gardi:2009qi,Becher:2009qa,LaThuile}.
As emphasized in Ref.~\cite{Gardi:2009qi}, however, constraints based on
Sudakov factorization and momentum rescaling
alone are not sufficient to determine uniquely the
form of the soft anomalous dimension. A logical possibility exists that
further contributions will show up at the multi-loop level, which
directly correlate the kinematic and color degrees of freedom of more
than two hard partons. It is very interesting to establish whether these
corrections exist, and, if they do not, to gain a complete understanding
of the underlying reason. Beyond the significance of the soft singularities
themselves, a complete understanding of their structure may shed light
on the structure of the finite parts of scattering amplitudes.
Ref.~\cite{Gardi:2009qi} showed that precisely two classes of
contributions may appear as corrections to the sum-over-dipoles
formula. The first class stems from the fact that the sum-over-dipoles
formula provides a solution to the factorization-based constraints only
if the cusp anomalous dimension, $\gamma_K^{(i)} (\alpha_s)$,
associated with a hard parton in representation $i$ of the gauge group,
obeys $\gamma_K^{(i)}(\alpha_s) = C_i \, \widehat{\gamma}_K
(\alpha_s)$, where $\widehat{\gamma}_K$ is universal and $C_i$ is the
quadratic Casimir of representation $i$, $C_i = C_A$ or $C_F$ for gluons
and quarks, respectively. This property is referred to as `Casimir scaling'
henceforth. Casimir scaling holds through three
loops~\cite{Moch:2004pa,Kotikov:2004er}; an interesting open question
is whether it holds at four loops and beyond~\cite{AldayMaldacena}.
At four loops, the quartic Casimir first appears in the color factors
of diagrams for the cusp anomalous dimension.
(In QCD, with gluons in the adjoint representation $A$, fermions in the
fundamental representation $F$, and a Wilson line in representation $R$,
the relevant quartic Casimirs are $d_A^{abcd}d_R^{abcd}$
and $d_F^{abcd}d_R^{abcd}$, where $d_X^{abcd}$ are totally symmetric
tensors in the adjoint indices $a,b,c,d$.)
However, Ref.~\cite{Becher:2009qa} provided some
arguments, based on factorization and collinear limits of
multi-leg amplitudes, suggesting that Casimir scaling might actually hold at
four loops. In the strong coupling limit, it is known
to break down for $\NeqFour$ super-Yang-Mills theory in the large-$N_c$
limit~\cite{Armoni}, at least when $\gamma_K$ is computed
for Wilson lines in a special class of representations of the gauge group.
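For orientation, with gauge group $SU(N_c)$ the quadratic Casimirs
take the standard values
\[
C_F \,=\, \frac{N_c^2 - 1}{2 N_c}\,, \qquad\quad C_A \,=\, N_c\,,
\]
so that Casimir scaling is the statement
$\gamma_K^{(q)}/C_F = \gamma_K^{(g)}/C_A = \widehat{\gamma}_K(\alpha_s)$
for quark and gluon Wilson lines.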
The second class of corrections, the one on which we focus here, can
occur even if the cusp anomalous dimension obeys Casimir scaling.
In this case, the sum-over-dipoles formula solves a set of inhomogeneous linear
differential equations, which follow from the constraints of Sudakov
factorization and momentum rescalings. However, we can contemplate
adding solutions to the homogeneous differential equations,
which are provided by arbitrary functions of conformally (and rescaling)
invariant cross ratios built from the momenta of four hard
partons~\cite{Gardi:2009qi}. Thus any additional terms must correlate
directly the momenta, and colors, of four legs.
Due to the non-Abelian exponentiation
theorem~\cite{StermanWeb,Gatheral:1983cz,Frenkel:1984pz} such
contributions must originate in webs that connect four hard partons,
which first appear at three loops. From this perspective then, the absence
of new correlations at two loops~\cite{Aybat:2006wq,Aybat:2006mz},
or in three-loop diagrams involving matter fields~\cite{Dixon:2009gx},
is not surprising, and it does not provide substantial new evidence in favor
of the minimal, sum-over-dipoles solution. The first genuine test is
from the matter-independent terms at three loops. At this order,
purely gluonic webs may connect four hard partons, possibly inducing
new types of soft singularities that correlate the color and
kinematic variables of the four partons.
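For concreteness, a representative rescaling-invariant cross ratio of
this kind, built from the momenta $p_i$ of four hard partons, is
\[
\rho_{ijkl} \,=\, \frac{(p_i\cdot p_j)\,(p_k\cdot p_l)}
{(p_i\cdot p_k)\,(p_j\cdot p_l)}\,,
\]
which is invariant under independent rescalings
$p_i \to \kappa_i\, p_i$ of each hard momentum; the precise variables
used in the analysis below may differ.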
The most recent step in addressing this issue was taken in
Ref.~\cite{Becher:2009qa}, in which an additional strong constraint on the
singularity structure of the amplitude was established, based on the
properties of amplitudes as two partons become collinear. Recall that the
primary object under consideration is the fixed-angle scattering
amplitude, in which
all ratios of kinematic invariants are taken to be of order
unity. This fixed-angle limit is violated upon considering the special
kinematic situation where two of the hard partons become collinear.
An additional class of singularities, characterized by the vanishing invariant
mass of the two partons, arises in this limit. The splitting amplitude is
defined to capture this class of singularities. It relates an $n$-parton
amplitude with two collinear partons to an $(n-1)$-parton amplitude, where
one of the legs carries the total momentum and color charge of the
two collinear partons. The basic, universal property of the splitting
amplitude is that it depends only on the momentum and color degrees of
freedom of the collinear partons, and not on the rest of the process.
Splitting amplitudes have been explicitly computed, or extracted
from known scattering amplitudes, at
one~\cite{Bern:1994zx,Bern:1998sc,Bern:1999ry,Kosower:1999rx}
and two~\cite{Bern:2004cz,Badger:2004uk} loops.
A derivation of splitting-amplitude universality to all loop orders,
based on unitarity, has been given in the large-$N_c$
limit~\cite{Kosower:1999xi}. The light-cone-gauge method for
computing two-loop splitting amplitudes~\cite{Bern:2004cz},
in which only the two collinear legs and one off-shell parton appear,
strongly suggests that the same all-orders universality extends
to arbitrary color configurations, not just planar ones.
Based on splitting-amplitude universality, Ref.~\cite{Becher:2009qa}
established additional constraints on the singularity structure of amplitudes.
Using these constraints in conjunction with the Sudakov factorization
constraints discussed above, that paper excluded any possible
three-loop corrections depending linearly on logarithms of cross
ratios. The final conclusion was, however, that more general functions
of conformal cross ratios that vanish in all collinear limits could not
be ruled out.
In the present paper we re-examine the structure of soft singularities at
three loops. We put together all available constraints, starting with the
Sudakov factorization constraints and Bose symmetry, and including
the properties of the splitting amplitude and the expected degree of
transcendentality of the functions involved\footnote{Transcendentality
here refers to the assignment of an additive integer $\tau$ for each type
of factor in a given term arising in an amplitude or an anomalous dimension:
$\tau = 0$ for rational functions, $\tau = 1$ for factors of $\pi$ or single
logarithms, $\ln x$; $\tau = n$ for factors of $\zeta(n)$, $\ln^n x$
or ${\rm Li}_n(x)$, {\it etc.}~\cite{Kotikov:2002ab}. We will provide more
examples in \sect{sec:maxtran}.}.
We make some plausible assumptions on the kinematic dependence,
and consider all possible
products of logarithms, and eventually also polylogarithms. We find that
potential contributions beyond the sum-over-dipoles formula are still possible
at three loops, but their functional form is severely constrained.
The paper is organized as follows. We begin with three short
sections in which we review the main relevant results of
Refs.~\cite{Gardi:2009qi,Becher:2009qa}. In \sect{sec:factorization}
we briefly summarize the Sudakov factorization of the amplitude
and the constraints imposed on the soft anomalous
dimension matrix by rescaling invariance of Wilson lines.
In \sect{sec:amplitude_ansatz} we present the sum-over-dipoles
formula, the simplest possible solution to
these constraints. In \sect{sec:SA} we review the splitting amplitude
constraint. The main part of our study is \sect{sec:corrections}, in
which we put together all available constraints and analyze the possible
color and kinematic structures that may appear beyond the
sum-over-dipoles formula. Most of the discussion is general, and
applies to any loop order, but specific analysis is devoted to potential
three-loop corrections. At the end of the section
we make a few comments concerning four-loop corrections.
Our discussion throughout \sect{sec:corrections} focuses on amplitudes
involving four colored partons, plus any number of color-singlet particles.
The generalization to the multi-parton case is presented in \sect{sec:n-leg}.
Our conclusions are summarized in \sect{sec:conc}, while an
appendix discusses the special case of four-parton scattering at
three loops.
\section{Sudakov factorization and its consequences~\label{sec:factorization}}
We summarize here the infrared and collinear factorization properties
of fixed-angle scattering amplitudes ${\cal M} \left(p_i/\mu,
\alpha_s (\mu^2), \e \right)$ involving $n$ massless partons,
plus any number of color-singlet particles, evaluated
in dimensional regularization with $D = 4 - 2 \e$.
We refer the reader to Ref.~\cite{Gardi:2009qi} for technical
details and operator definitions of the various functions involved.
Multi-parton fixed-angle amplitudes can be expressed in terms of their
color components ${\cal M}_L$ in a chosen basis in the vector
space of available color structures for the scattering process
at hand. All infrared and collinear singularities of ${\cal M}_L$ can be
factorized~\cite{Sen:1982bt,Collins:1989gx,Sterman:1995fz,
Sterman:2002qn,Aybat:2006mz,Dixon:2008gr,Gardi:2009qi} into jet
functions $J_i$, one for each external leg $i$, multiplied by a
(reduced) soft matrix $\overline{\cal S}_{LM}$,
\begin{eqnarray}
\label{facamp_bar}
{\cal M}_{L} \left(p_i/\mu, \alpha_s (\mu^2),
\epsilon \right) & = &
\overline{\cal S}_{L M} \left(\rho_{ij} , \alpha_s (\mu^2), \epsilon
\right) \, H_{M} \left( \frac{2 p_i \cdot p_j}{\mu^2},
\frac{(2 p_i \cdot n_i)^2}{n_i^2 \mu^2}, \alpha_s (\mu^2),
\epsilon \right)
\nonumber \\ &&\hspace*{100pt} \times
\prod_{i = 1}^n
J_i\left( \frac{(2 p_i \cdot n_i)^2}{n_i^2 \mu^2},
\alpha_s (\mu^2), \epsilon \right)
\,\,,
\end{eqnarray}
leaving behind a vector of hard functions $H_M$, which are finite
as $\e \to 0$. A sum over $M$ is implied. The hard
momenta\footnote{In our convention momentum conservation reads
$q+\sum_{i=1}^n p_i=0$, where $q$ is the recoil momentum carried
by colorless particles.} $p_i$ defining the amplitude ${\cal M}$ are
assumed to be light-like, $p_i^2 = 0$, while the $n_i$ are
auxiliary vectors used to define the jets in a gauge-invariant way,
and they are not light-like, $n_i^2 \neq 0$.
The reduced soft matrix $\overline{\cal S}_{LM}$ can be computed
from the expectation value of a product of eikonal lines,
or Wilson lines, oriented along the hard parton momenta,
dividing the result by $n$ eikonal jet functions
${\cal J}_i$, which remove collinear divergences and leave only singularities
from soft, wide-angle virtual gluons. It is convenient to
express the color structure of the soft matrix $\overline{\cal S}$ in a
basis-independent way, in terms of operators ${\bf T}_i^a$,
$a=1,2,\ldots,N_c^2 - 1$, representing the generators of ${\rm SU}(N_c)$
acting on the color of parton $i$ ($i=1,2,\ldots,n$)~\cite{Catani:1996jh}.
The partonic (quark or gluon) jet function solves two evolution
equations simultaneously, one in the factorization scale $\mu$ and
another in the kinematic variable $(2p_i\cdot n_i)^2/n_i^2$
(see {\it e.g.} Ref.~\cite{Gardi:2009qi}). The latter equation
generalizes the evolution of the renormalization-group invariant
form factor~\cite{Magnea:1990zb}. The resulting solution to
these equations can be written as~\cite{LaThuile}
\begin{eqnarray}
&& \hspace{-2mm}
J_i\left(\frac{(2p_i\cdot n_i)^2}{n_i^2}, \alpha_s(\mu^2), \epsilon
\right) = \, H_{J_i} \left(1,
\alpha_s \left({\textstyle\frac{(2 p_i\cdot n_i)^2}{n_i^2}} \right),
\epsilon\right) \exp \Bigg\{
- \frac12 \int_{0}^{\mu^2} \frac{d\lambda^2}{\lambda^2}
\gamma_{J_i} \left( \alpha_s(\lambda^2,\epsilon) \right)
\nonumber \\
&& \hspace{-2mm} + \, \frac{{\bf T}_i \cdot {\bf T}_i }{2}
\int_0^{(2 p_i\cdot n_i)^2/n_i^2} \frac{d \lambda^2}{\lambda^2}
\Bigg[ \frac14 \widehat{\gamma}_K
\left( \alpha_s( \lambda^2, \epsilon) \right)
\ln \left( \frac{n_i^2 \lambda^2}{(2 p_i\cdot n_i)^2} \right)
+ \frac12 \widehat{\delta}_{{\overline{\cal S}}} \left(\alpha_s
(\lambda^2, \epsilon) \right) \Bigg] \Bigg\} \, ,
\label{J_explicit}
\end{eqnarray}
where $H_{J_i}$ is a finite coefficient function, and all
singularities are generated by the exponent.
The solution depends on just three anomalous dimensions,
which are functions of the $D$-dimensional coupling alone:
$\gamma_{J_i}$ is the anomalous
dimension of the quark or gluon field defining the jet
(corresponding to the quantity $\gamma^i$ defined
in Refs.~\cite{Becher:2009qa,Becher:2009cu}),
while
$\widehat{\gamma}_K=2\alpha_s/\pi+\cdots$ and
$\widehat{\delta}_{{\overline{\cal S}}}=\alpha_s/\pi+\cdots$ are,
respectively, the cusp anomalous dimension and an additional eikonal
anomalous dimension defined in Sec.~4.1 of Ref.~\cite{Gardi:2009qi}.
In eq.~(\ref{J_explicit}) we have already assumed that the latter
two quantities admit Casimir scaling, and we have factored out the
quadratic Casimir operator $C_i \equiv {\bf T}_i \cdot {\bf T}_i$.
Our main interest here is the reduced soft matrix
$\overline{{\cal S}}$, which takes
into account non-collinear soft radiation. It is defined entirely in
terms of vacuum correlators of operators composed of semi-infinite
Wilson lines (see {\it e.g.} Ref.~\cite{Gardi:2009qi}),
and depends on the kinematic variables
\begin{equation}
\rho_{ij} \, \equiv \, \frac{\left(- \beta_i \cdot \beta_j \right)^2}
{\displaystyle \frac{2(\beta_i \cdot n_i)^2}{n_i^2}
\frac{2(\beta_j \cdot n_j)^2}{n_j^2}} \, = \,
\frac{
\, \left| \beta_i \cdot \beta_j \right|^2 \,
{\rm e}^{-2 {\rm i} \pi \lambda_{ij}} }
{\displaystyle \frac{2(\beta_i \cdot n_i)^2}{n_i^2}
\frac{2(\beta_j \cdot n_j)^2}{n_j^2}} \, ,
\label{rhoij}
\end{equation}
which are invariant with respect to rescaling of all the Wilson line
velocities $\beta_i$. The $\beta_i$ are related to the external momenta
by $p_i^\mu=(Q/\sqrt{2})\beta_i^\mu$, where $Q$ is a hard scale whose
precise value will not be important here. The phases $\lambda_{ij}$ are
defined by $\beta_i \cdot \beta_j = - | \beta_i \cdot \beta_j |
{\rm e}^{-{\rm i} \pi \lambda_{ij}}$, where $\lambda_{ij} = 1$ if
$i$ and $j$ are both initial-state partons, or both final-state partons,
and $\lambda_{ij}=0$ otherwise. Note that the sign of the phase in
${\rm e}^{-{\rm i} \pi \lambda_{ij}}$ is determined by the
$+{\rm i}\varepsilon$ prescription for the Feynman propagator.
The reduced soft matrix obeys the renormalization group equation
\begin{equation}
\mu \frac{d}{d \mu} \overline{{\cal S}}_{L M}
\left( \rho_{i j}, \alpha_s, \epsilon \right) = - \,\sum_{N}
\overline{{\cal S}}_{L N}\left( \rho_{i j}, \alpha_s, \epsilon \right)
\,\,
\Gamma^{\overline{{\cal S}}}_{N M} \left( \rho_{i j},
\alpha_s \right)
\,.
\label{rencalS}
\end{equation}
The soft anomalous dimension matrix $\Gamma^{{\overline{\cal S}}}_{N M} \left(
\rho_{i j}, \alpha_s \right)$, in turn, obeys the equation~\cite{Gardi:2009qi}
\begin{equation}
\label{constraints}
\sum_{j \neq i} \frac{\partial}{\partial \ln\rho_{ij}} \,
\Gamma^{{\overline{\cal S}}}_{N M} \left(
\rho_{i j}, \alpha_s \right) = \frac{1}{4} \, \gamma_K^{(i)}
\left( \alpha_s \right) \,\delta_{N M}\,, \qquad \qquad \forall\, i,\,N,M \,,
\end{equation}
found by considering a rescaling of the eikonal velocity $\beta_i$.
The simplest solution of this equation is the sum-over-dipoles
formula~\cite{Gardi:2009qi},
\begin{equation}
\Gamma^{\overline{\cal S}}_{\dip} \left( \rho_{i j}, \alpha_s \right) =
- \, \frac{1}{8} \, \widehat{\gamma}_K(\alpha_s)
\sum_{i = 1}^n \sum_{j\neq i} \, \ln \rho_{ij} \,
{\bf T}_i \cdot {\bf T}_j
+ \frac{1}{2} \, \widehat{\delta}_{{\overline{\cal S}}}(\alpha_s) \,
\sum_{i = 1}^n \, {\bf T}_i \cdot {\bf T}_i \,.
\label{GMeq5p6}
\end{equation}
In this expression the dependence on the scale $\mu$
appears exclusively through
the argument of the $D$-dimensional coupling in $\widehat{\gamma}_K$
and $\widehat{\delta}_{\overline{\cal S}}$. Therefore \eqn{rencalS} is easily
integrated to give the corresponding formula for the reduced soft matrix
$\overline{{\cal S}}$,
\begin{align}
\begin{split}
\label{barS_ansatz}
\overline{{\cal S}}_{\dip}
\left(\rho_{i j}, \alpha_s,\epsilon\right)
&= \, \exp\Bigg\{
-\frac12 \int_0^{\mu^2} \frac{d\lambda^2}{\lambda^2} \, \Bigg[
\, \frac12 \, \widehat{\delta}_{{\overline{\cal S}}}
( \alpha_s(\lambda^2,\epsilon) ) \,
\sum_{i = 1}^n {\bf T}_i \cdot {\bf T}_i \,
\\& \hskip1.5cm
- \frac18 \,
\widehat{\gamma}_K\left(\alpha_s(\lambda^2,\epsilon) \right) \,
\sum_{i = 1}^n \sum_{j \neq i} \,
\ln \rho_{ij} \, {\bf T}_i \cdot {\bf T}_j\,
\Bigg]\Bigg\} \,\, .
\end{split}
\end{align}
\Eqn{GMeq5p6} satisfies the constraints~(\ref{constraints}) if and only
if the cusp anomalous dimension admits Casimir scaling, namely
$\gamma_K^{(i)}(\alpha_s) = C_i \, \widehat{\gamma}_K(\alpha_s)$,
with $\widehat{\gamma}_K$ independent of the color representation
of parton $i$. In this paper we shall assume that this is the case,
postponing to future work the analysis of how higher-order Casimir
contributions to $\gamma_K$ would affect the soft anomalous
dimension matrix (the starting point for such an analysis is eq. (5.5) of
Ref.~\cite{Gardi:2009qi}).
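For completeness, we recall how the verification goes: each unordered pair
$(i,j)$ appears twice in the double sum of \eqn{GMeq5p6}, while the
$\widehat{\delta}_{{\overline{\cal S}}}$ term carries no kinematic
dependence and drops out; using color conservation,
$\sum_{j \neq i} {\bf T}_j = - \, {\bf T}_i$, one then finds
\begin{equation}
\sum_{j \neq i} \frac{\partial}{\partial \ln \rho_{ij}} \,
\Gamma^{\overline{\cal S}}_{\dip} \left( \rho_{ij}, \alpha_s \right)
\, = \, - \, \frac14 \, \widehat{\gamma}_K (\alpha_s) \,\,
{\bf T}_i \cdot \sum_{j \neq i} {\bf T}_j
\, = \, \frac14 \, C_i \, \widehat{\gamma}_K (\alpha_s) \, ,
\end{equation}
which reproduces the right-hand side of \eqn{constraints} precisely when
$\gamma_K^{(i)} = C_i \, \widehat{\gamma}_K$.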
Even under the assumption of Casimir scaling for $\gamma_K^{(i)}$,
\eqn{barS_ansatz} may not be the full result for $\overline{{\cal S}}$,
because $\Gamma^{{\overline{\cal S}}}$ may receive additional
corrections $\Delta^{{\overline{\cal S}}}$ going beyond the
sum-over-dipoles ansatz. In this case the full anomalous dimension can
be written as a sum,
\begin{equation}
\label{Gamma_barS}
\Gamma^{{\overline{\cal S}}}\left(\rho_{i j}, \alpha_s \right)
= \Gamma_{\dip}^{{\overline{\cal S}}}\left(\rho_{i j}, \alpha_s \right)
\,+\, \Delta^{{\overline{\cal S}}} \left(\rho_{i j}, \alpha_s \right)\,.
\end{equation}
Here $\Delta^{{\overline{\cal S}}}$ is a matrix in color space, which is
constrained to satisfy the homogeneous differential equation
\begin{equation}
\label{Delta_oureq_reformulated}
\sum_{j \neq i} \frac{\partial}{\partial \ln\rho_{ij}}
\Delta^{{\overline{\cal S}}} \left(
\rho_{i j}, \alpha_s \right) = 0 \, \qquad \forall i \,.
\end{equation}
This equation is solved by any function of
conformally invariant cross ratios of the form
\begin{equation}
\label{rhoijkl}
\rho_{ijkl} \equiv \frac{\beta_i \cdot \beta_j \ \beta_k
\cdot \beta_l}{\beta_i \cdot \beta_k \ \beta_j \cdot \beta_l} \, ,
\end{equation}
which are related to the kinematic variables $\rho_{ij}$ in \eqn{rhoij}, and
to the momenta $p_i$, by
\begin{equation}
\label{rhoijkl_mod}
\rho_{ijkl}
= \left(\frac{\rho_{i j} \, \rho_{k l}}{\rho_{i k} \, \rho_{j l}}
\right)^{1/2}
= \frac{p_i \cdot p_j \ p_k \cdot p_l}
{p_i \cdot p_k \ p_j \cdot p_l} \,
=\left|\frac{p_i \cdot p_j \ p_k \cdot p_l}
{p_i \cdot p_k \ p_j \cdot p_l} \right|
{\rm e}^{-{\rm i}\pi(\lambda_{ij} + \lambda_{kl}
- \lambda_{ik} - \lambda_{jl})}.
\end{equation}
Each leg that appears in $\rho_{ijkl}$ does so once in the
numerator and once in the denominator, thus cancelling in
the combination of derivatives in \eqn{Delta_oureq_reformulated}.
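Explicitly, \eqn{rhoijkl_mod} gives $\ln \rho_{ijkl} = \frac12 \left(
\ln \rho_{ij} + \ln \rho_{kl} - \ln \rho_{ik} - \ln \rho_{jl} \right)$,
so the two derivatives associated with any given leg enter with opposite
signs; for leg $i$, for instance,
\begin{equation}
\sum_{m \neq i} \frac{\partial}{\partial \ln \rho_{im}} \,
\ln \rho_{ijkl} \, = \, \frac12 \, - \, \frac12 \, = \, 0 \, .
\end{equation}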
Hence we define
\begin{align}
\Delta^{{\overline{\cal S}}} \left(
\rho_{i j}, \alpha_s \right)
= \Delta \left( \rho_{i j k l}, \alpha_s \right) \,.
\end{align}
Any additional correction $\Delta$ must introduce new correlations
between at least four partons into the reduced soft function.
Such additional corrections are known not to appear at two
loops~\cite{Aybat:2006wq,Aybat:2006mz}, as expected from the
fact that two-loop webs can correlate at most three hard partons. By
the same token, they cannot show up in matter-dependent diagrams
at three loops, as verified explicitly in Ref.~\cite{Dixon:2009gx}.
On the other hand, they might be generated at three loops by purely
gluonic diagrams, such as the one shown in~\fig{4Elabeled}.
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=5.0cm]{4Elabeled.eps}
\caption{A purely gluonic diagram connecting the four hard partons
labeled $i,j,k,l$, which may contribute to the soft anomalous dimension
matrix at three loops. It correlates the colors of the four partons via
the operator
${\bf T}_i^{a} {\bf T}_j^{b} {\bf T}_k^{c} {\bf T}_l^{d}\ f^{ade} f^{cbe}$.
\label{4Elabeled}}
\end{center}
\end{figure}
The main purpose of the present paper
is to examine all available constraints on the soft anomalous
dimension matrix, and check whether they are sufficient to
rule out a non-vanishing $\Delta$ at three loops.
We will show that, despite the powerful constraints available, corrections
to the sum-over-dipoles formula may indeed appear at this order.
In the case of purely logarithmic functions of the cross ratios
$\rho_{i j k l}$, we find a unique solution to all the constraints.
Allowing also for the appearance of polylogarithms of a single variable,
there are at least two additional solutions.
\section{Minimal ansatz for the singularities of the
amplitude~\label{sec:amplitude_ansatz}}
The factorization formula~(\ref{facamp_bar}) has the attractive
property that each of the singular factors is defined in a gauge-invariant
way. It requires the introduction of the auxiliary vectors $n_i$,
which have been very useful~\cite{Gardi:2009qi} in revealing
the properties of the soft anomalous dimension. At the end of the
day, however, the singularities of the amplitude ${\cal M}$
cannot depend on these auxiliary vectors, but only on the kinematic
invariants built out of external parton momenta. Indeed, as discussed
below, the cancellation of the dependence of the singular terms on the vectors
$n_i$ can be explicitly performed, and one can write the factorization
of the amplitude in a more compact form:
\begin{equation}
\label{introducing_Z}
{\cal M} \left(\frac{p_i}{\mu}, \alpha_s (\mu^2), \epsilon \right) =
Z \left(\frac{p_i}{\mu_f}, \alpha_s(\mu_f^2), \epsilon \right) \,\,
{\cal H} \left(\frac{p_i}{\mu}, \frac{\mu_f}{\mu}, \alpha_s(\mu^2),
\epsilon \right) \, ,
\end{equation}
as used in~Refs.~\cite{Becher:2009qa,Becher:2009cu}.
Here the (matrix-valued) $Z$ factor absorbs all the infrared
(soft and collinear) singularities, while the hard function ${\cal H}$
is finite as $\epsilon\rightarrow0$. We distinguish between two
scales, the renormalization scale $\mu$, which is present in the
renormalized amplitude ${\cal M}$ on the left-hand side of
\eqn{introducing_Z}, and $\mu_f$, a factorization scale that is
introduced through the $Z$ factor. The function ${\cal H}$ (a vector
in color space) plays the role of $H$ in the factorization
formula~(\ref{facamp_bar}), but differs from it by being independent
of the auxiliary vectors $n_i$.
Sudakov factorization implies that the $Z$ matrix is renormalized
multiplicatively. We can then define the anomalous dimension matrix
$\Gamma$, corresponding to $Z$, by
\begin{equation}
\label{Gamma_def}
\frac{d}{d \ln \mu_f} Z \left(\frac{p_i}{\mu_f}, \alpha_s(\mu_f^2),
\epsilon \right) \,
= \, - \,
Z \left(\frac{p_i}{\mu_f}, \alpha_s(\mu_f^2), \epsilon \right)
\, \Gamma \left(\frac{p_i}{\mu_f}, \alpha_s (\mu_f^2) \right) .
\end{equation}
Note that the matrix $\Gamma$ is finite, but it can depend implicitly on
$\epsilon$ when evaluated as a function of the $D$-dimensional running
coupling; it will then generate the infrared poles of $Z$, as usual, through
integration over the scale.
The sum-over-dipoles ansatz for $\Gamma^{\overline{\cal S}}$,
\eqn{GMeq5p6}, implies an analogous formula for $\Gamma$.
In order to see it, one may use the factorization formula~(\ref{facamp_bar}),
substitute in \eqns{J_explicit}{barS_ansatz}, use color conservation,
$\sum_{j\neq i} {\bf T}_j=- {\bf T}_i$, and apply the identity
\begin{equation}
\label{kinematic_variables_combined}
\displaystyle{\underbrace{
\ln \left(\frac{(2p_i\cdot n_i)^2}{n_i^2}\right)}_{J_i}\,+\,
\underbrace{\ln \left(\frac{(2p_j\cdot n_j)^2}{n_j^2}\right)}_{J_j}}\,
\, + \,\underbrace{\ln \left(\frac{\left(\left|\beta_i\cdot\beta_j\right|\,\,
{\rm e}^{-{\rm i} \pi\lambda_{ij}}\right)^2}
{\displaystyle{\frac{2(\beta_i\cdot n_i)^2}{n_i^2}
\frac{2(\beta_j\cdot n_j)^2}{n_j^2}}}\right)}_{\overline{\cal S}}
= 2 \ln(2 \left| p_i\cdot p_j\right|\,{\rm e}^{-{\rm i} \pi\lambda_{ij}}) \, .
\end{equation}
Note also that the poles associated with
$\widehat{\delta}_{{\overline{\cal S}}}(\alpha_s)$
cancel out between the soft and jet functions. In this way, one arrives
at the sum-over-dipoles ansatz for $\Gamma$,
\begin{align}
\label{Gamma_ansatz}
\begin{split}
\Gamma_{\dip} \left( \frac{p_i}{\lambda},
\alpha_s (\lambda^2) \right) = &- \frac14 \,
\widehat{\gamma}_K\left(\alpha_s(\lambda^2) \right)
\sum_{i =1}^n \sum_{j \neq i} \,
\ln\left(\frac{ \, 2 \, \left| p_i \cdot p_j\right|
\,{\rm e}^{-{\rm i} \pi\lambda_{ij}}}
{{\lambda^2}}\right)
{\bf T}_i \cdot {\bf T}_j\,
\\&+ \sum_{i=1}^n \,
\gamma_{J_i} \left(\alpha_s(\lambda^2) \right) \,.
\end{split}
\end{align}
The $Z$ matrix which solves \eqn{Gamma_def} can be written in
terms of the sum-over-dipoles ansatz~(\ref{Gamma_ansatz}) as an
exponential, in a form similar to \eqn{barS_ansatz}.
However, $\overline{\cal S}_{\dip}$ has only simple poles in $\e$ in
the exponent,
while the integration of $\Gamma_{\dip}$ over the scale $\lambda$ of the
$D$-dimensional running coupling will generate double (soft-collinear)
poles within $Z$, inherited from the jet functions in \eqn{J_explicit},
because of the explicit dependence of $\Gamma_{\dip}$ on the logarithm
of the scale $\lambda$.
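The mechanism can be made explicit at lowest order. With the tree-level
running of the $D$-dimensional coupling, $\alpha_s (\lambda^2, \epsilon)
= \alpha_s (\mu^2) \left( \lambda^2/\mu^2 \right)^{- \epsilon}$, the basic
scale integrals converge for $\epsilon < 0$ and give
\begin{equation}
\int_0^{\mu^2} \frac{d \lambda^2}{\lambda^2}
\left( \frac{\lambda^2}{\mu^2} \right)^{- \epsilon}
\, = \, - \, \frac{1}{\epsilon} \, ,
\qquad \qquad
\int_0^{\mu^2} \frac{d \lambda^2}{\lambda^2}
\left( \frac{\lambda^2}{\mu^2} \right)^{- \epsilon}
\ln \left( \frac{\lambda^2}{\mu^2} \right)
\, = \, - \, \frac{1}{\epsilon^2} \, .
\end{equation}
Terms in $\Gamma_{\dip}$ without explicit logarithms thus generate only
single poles, while the term proportional to
$\ln \left( 2 \left| p_i \cdot p_j \right|/\lambda^2 \right)$ produces,
in addition, the $1/\epsilon^2$ soft-collinear double pole.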
If a non-trivial correction $\Delta$ appears in the reduced soft
function~(\ref{Gamma_barS}), then the full anomalous dimension is
\begin{align}
\label{Gamma}
\Gamma \left(\frac{p_i}{\lambda}, \alpha_s(\lambda^2) \right)
= \Gamma_{\dip} \left(\frac{p_i}{\lambda}, \alpha_s(\lambda^2) \right)
\,+\, \Delta \left(\rho_{i j k l},\alpha_s(\lambda^2) \right)\,.
\end{align}
In terms of this function the solution of \eqn{Gamma_def} takes the form
\begin{equation}
\label{Z}
Z \left( \frac{p_i}{\mu_f}, \alpha_s(\mu_f^2), \epsilon \right) \, = \,
\, {\rm P} \exp\Bigg\{
-\frac12 \int_0^{\mu_f^2}
\frac{d \lambda^2}{\lambda^2} \, \Gamma \left(\frac{p_i}{\lambda},
\alpha_s \left( \lambda^2, \epsilon \right) \right)
\Bigg\} \,,
\end{equation}
where ${\rm P}$ stands for path-ordering: the order of the color
matrices after expanding the exponential coincides with the ordering in
the scale $\lambda$. We emphasize that path ordering is only
necessary in \eqn{Z} if $\Delta\neq 0$ and
$[\Delta,\Gamma_{\dip}]\neq 0$. Indeed, the ansatz~(\ref{Gamma_ansatz})
has the property that the scale-dependence associated with
non-trivial color operators appears through an overall factor,
$\widehat\gamma_K(\alpha_s(\lambda^2))$, so
that color matrices $\Gamma$ corresponding to different scales are
proportional to each other, and obviously commute. This is no longer
true for a generic $\Delta\neq 0$, starting at a certain loop order $l$.
In this case \eqn{Gamma} would generically be a sum of two non-commuting
matrices, each of them having its own dependence on the coupling and thus
on the scale $\lambda$. Considering two scales $\lambda_1$ and
$\lambda_2$, we would then have $[\Gamma(\lambda_1),
\Gamma(\lambda_2)] \neq 0$, and the order of the matrices
in the expansion of \eqn{Z} would be dictated by the ordering of the
scales. It should be noted, though, that the first loop order in $Z$ that
would be affected is order $l+1$, because $\Gamma$ starts at one loop,
so that
\begin{equation}
\label{noncommutativity}
\left[ \Gamma \left( \lambda_1 \right),
\Gamma \left( \lambda_2 \right)
\right]
\sim
\left[ \Gamma^{(1)} \left( \lambda_1 \right),
\Delta^{(l)} \left( \lambda_2 \right) \right]
= {\cal O} (\alpha_s^{l+1}) \, .
\end{equation}
The issue of ordering can thus be safely neglected at three loops, the first
order at which a non-vanishing $\Delta$ can arise.
\section{The splitting-amplitude constraint~\label{sec:SA}}
Let us now consider the limit where two of the hard partons in the
amplitude become collinear. Following Ref.~\cite{Becher:2009qa},
we shall see that this limit provides an additional constraint on the
structure of $\Delta$. The way we use this constraint in the next
section will go beyond what was done in Ref.~\cite{Becher:2009qa}; we will
find explicit solutions satisfying the constraint (as well as other
consistency conditions discussed in the next section).
The Sudakov factorization described by \eqn{facamp_bar}, and subsequently the
singularity structure encoded in $Z$ in \eqn{introducing_Z}, apply
to scattering amplitudes at fixed angles. All the invariants
$p_i\cdot p_j$ are taken to be of the same order, much larger than
the confinement scale $\Lambda^2$. The limit in which
two of the hard partons are taken collinear, {\it e.g.} $p_1\cdot p_2\to 0$,
is a singular limit, which we are now about to explore.
In this limit, $p_1 \to zP$ and $p_2 \to (1-z)P$, where the longitudinal
momentum fraction $z$ obeys $0<z<1$ (for time-like splitting). We will see,
following Ref.~\cite{Becher:2009qa}, that there is a
relation between the singularities that are associated with the
splitting --- the replacement of one parton by two collinear partons
--- and the singularities encoded in $Z$ in \eqn{introducing_Z}.
It is useful for our derivation to make a clear distinction between the two
scales $\mu_f$ and $\mu$ introduced in \eqn{introducing_Z}. Let us
first define the splitting amplitude, which relates the
dimensionally-regularized amplitude for the scattering of $n - 1$
partons to the one for $n$ partons, two of which are taken collinear.
We may write
\begin{equation}
\label{Sp_M}
{\cal M}_n \left(p_1, p_2, p_j; \mu, \epsilon \right)
\ \iscol{1}{2} \ \, {\bf Sp}\left( p_1, p_2; \mu, \epsilon \right) \,
{\cal M}_{n-1} \left( P, p_j; \mu, \epsilon \right) \,.
\end{equation}
Here the two hard partons that become collinear are denoted by
$p_1$ and $p_2$, and all the other momenta by $p_j$, with $j = 3, 4,
\ldots, n$. We have slightly modified our notation for simplicity:
the number of colored partons involved in the scattering is indicated
explicitly; the dependence of each factor on the running coupling
is understood; finally, the matrix elements have dimensionful arguments
(while in fact they depend on dimensionless ratios, as indicated in
the previous sections).
The splitting described by \eqn{Sp_M} preserves
the total momentum $p_1 + p_2 = P$ and the total color charge $
{\bf T}_1 + {\bf T}_2 = {\bf T}$. We assume \eqn{Sp_M} to be valid
in the collinear limit, up to corrections that must be finite as
$P^2 = 2 p_1 \cdot p_2 \to 0$.
The splitting
matrix ${\bf Sp}$ encodes all singular contributions to the
amplitude ${\cal M}_n$ arising from the limit $P^2 \to 0$, and, crucially,
it must depend only on the quantum numbers of the splitting partons. The
matrix element ${\cal M}_{n - 1}$, in contrast, is evaluated at $P^2 = 0$,
and therefore it obeys Sudakov factorization, \eqn{introducing_Z}, as applied
to an $(n-1)$-parton amplitude.
The operator ${\bf Sp}$ is designed to relate
color matrices defined in the $n$-parton color space to those defined in
the $(n-1)$-parton space: it multiplies on its left the former and on
its right the latter. Thus, the initial definition of ${\bf Sp}$ is not
diagonal. Upon substituting ${\bf T} = {\bf T}_1 + {\bf T}_2$, however,
one can use the $n$-parton color space only. In this space ${\bf Sp}$
is diagonal; all of its dependence on ${\bf T}_1$ and ${\bf T}_2$ can
be expressed in terms of the quadratic Casimirs, using $2 \, {\bf T}_1 \cdot
{\bf T}_2 = {\bf T}^2 - {\bf T}_1^2 - {\bf T}_2^2$.
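As a simple illustration, for a quark-antiquark pair produced by the
collinear splitting of a gluon one has ${\bf T}_1^2 = {\bf T}_2^2 = C_F$
and ${\bf T}^2 = C_A$, so that
\begin{equation}
2 \, {\bf T}_1 \cdot {\bf T}_2 \, = \, C_A - 2 \, C_F
\, = \, \frac{1}{N_c} \, ,
\end{equation}
while for a quark emitting a collinear gluon
$2 \, {\bf T}_1 \cdot {\bf T}_2 = C_F - C_F - C_A = - \, C_A$.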
Because the fixed-angle factorization theorem in \eqn{facamp_bar} breaks
down in the collinear limit, $p_1 \cdot p_2 \to 0$, we expect that some
of the singularities captured by the splitting matrix ${\bf Sp}$ will arise
from the hard functions ${\cal H}$. Specifically, if the $Z$ factor in
\eqn{introducing_Z} is defined in a minimal scheme, ${\cal H}$ will
contain all terms in ${\cal M}_n$ with logarithmic singularities in
$p_1 \cdot p_2$ associated with non-negative powers of $\epsilon$.
We then define ${\bf Sp}_{\cal H}$, in analogy with \eqn{Sp_M}, by
the collinear behavior of the hard functions,
\begin{equation}
\label{Sp_H}
{\cal H}_n \left(p_1, p_2, p_j; \mu, \mu_f, \epsilon \right)
\ \iscol{1}{2} \ \, {\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \,
{\cal H}_{n - 1} \left(P, p_j;\mu, \mu_f, \epsilon \right) \, ,
\end{equation}
where all factors are finite as $\epsilon \to 0$. As was the case for
\eqn{Sp_M}, \eqn{Sp_H} is valid up to corrections that remain finite in
the limit $P^2 \to 0$. Singularities in that limit are all contained in the
splitting matrix ${\bf Sp}_{\cal H}$, while the function ${\cal H}_{n - 1}$
is evaluated at $P^2 = 0$.
Next, recall the definition of the $Z$ factors in \eqn{introducing_Z}
for both the $n$- and $(n - 1)$-parton amplitudes. In the present notation,
they read
\begin{align}
\label{introducing_Z_n}
{\cal M}_n \left(p_1, p_2, p_j; \mu, \epsilon \right) &=
Z_n \left(p_1, p_2, p_j; \mu_f, \epsilon \right) \,\,
{\cal H}_n \left(p_1, p_2, p_j; \mu, \mu_f, \epsilon \right) \,,
\\
\label{introducing_Z_n-1}
{\cal M}_{n - 1} \left(P, p_j; \mu, \epsilon \right) &=
Z_{n - 1} \left(P, p_j; \mu_f, \epsilon \right) \,\,
{\cal H}_{n-1} \left(P, p_j; \mu, \mu_f, \epsilon \right) \,.
\end{align}
Substituting \eqn{introducing_Z_n-1} into \eqn{Sp_M} yields
\begin{equation}
{\cal M}_n \left(p_1, p_2, p_j; \mu, \epsilon \right)
\ \iscol{1}{2} \ \, {\bf Sp} (p_1, p_2; \mu, \epsilon) \,
Z_{n - 1} \left( P, p_j; \mu_f, \epsilon \right) \,\,
{\cal H}_{n - 1} \left(P, p_j; \mu, \mu_f, \epsilon \right) \,.
\end{equation}
On the other hand, substituting \eqn{Sp_H} into \eqn{introducing_Z_n}
we get
\begin{equation}
{\cal M}_n \left(p_1, p_2, p_j; \mu, \epsilon \right)
\, \iscol{1}{2} \ Z_n \left(p_1, p_2, p_j; \mu_f, \epsilon \right)
\,{\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \,
{\cal H}_{n-1} \left(P, p_j; \mu, \mu_f, \epsilon \right) \,.
\end{equation}
Comparing these two equations we immediately deduce the relation
between the full splitting matrix ${\bf Sp}$, which is infrared divergent,
and its infrared-finite counterpart ${\bf Sp}_{\cal H}$,
\begin{align}
\label{Sp_Z_relation}
{\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \, = \,
Z^{-1}_n \left(p_1, p_2, p_j; \mu_f, \epsilon \right)
\, {\bf Sp}(p_1, p_2; \mu, \epsilon) \,
Z_{n - 1} \left(P, p_j; \mu_f, \epsilon \right) \,,
\end{align}
where $Z_n$ is understood to be evaluated in the collinear limit.
This equation ({\it cf.}~eq.~(55) in Ref.~\cite{Becher:2009qa}) is a
non-trivial constraint on both $Z$ and the splitting amplitude
${\bf Sp}$, given that the left-hand side must be finite as $\epsilon
\to 0$, and that the splitting amplitude depends only on the momenta
and color variables of the splitting partons --- not on other hard partons
involved in the scattering process.
To formulate these constraints, we take a logarithmic derivative
of \eqn{Sp_Z_relation}, using the definition of $\Gamma_n$ and
$\Gamma_{n - 1}$ according to \eqn{Gamma_def}. Using the fact
that ${\bf Sp}(p_1, p_2; \mu, \epsilon)$ does not depend on $\mu_f$,
it is straightforward to show that
\begin{align}
\label{Gamma_Sp_der}
\begin{split}
\frac{d}{d \ln \mu_f}
\, {\bf Sp}_{\cal H} (p_1, p_2; \mu, \mu_f, \epsilon)\, = \,
\, &\Gamma_n\left(p_1, p_2, p_j; \mu_f\right)
\, {\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \\ &-
\, {\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \,
\Gamma_{n - 1} \left(P, p_j; \mu_f \right) \, ,
\end{split}
\end{align}
where, as above, $(n - 1)$-parton matrices are evaluated in collinear
kinematics ($P^2 = 0$), and corrections are finite in the collinear
limit. Note that all the functions entering (\ref{Gamma_Sp_der}) are
finite for $\epsilon\to 0$. Note also that we have adapted the $\Gamma$
matrices to our current notation with dimensionful arguments; as before,
the matrices involved acquire implicit $\epsilon$ dependence when
evaluated as functions of the $D$-dimensional coupling.
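The only additional ingredient needed in the derivation is the derivative
of the inverse $Z$ factor: differentiating $Z \, Z^{-1} = 1$ and using
\eqn{Gamma_def} gives
\begin{equation}
\frac{d}{d \ln \mu_f} \, Z^{-1}
\left( \frac{p_i}{\mu_f}, \alpha_s(\mu_f^2), \epsilon \right)
\, = \, \Gamma \left( \frac{p_i}{\mu_f}, \alpha_s(\mu_f^2) \right) \,
Z^{-1} \left( \frac{p_i}{\mu_f}, \alpha_s(\mu_f^2), \epsilon \right) \, ,
\end{equation}
which, applied to $Z_n^{-1}$ in \eqn{Sp_Z_relation}, produces the first
term on the right-hand side of \eqn{Gamma_Sp_der}.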
Upon using the identification ${\bf T} = {\bf T}_1 + {\bf T}_2$, the
matrix $\Gamma_{n - 1}$ can be promoted to operate on the $n$-parton color
space. Once one does this, one immediately recognizes that the
splitting matrix
${\bf Sp}$ commutes with the $\Gamma$ matrices, as an immediate
consequence of the fact that it can only depend on the color degrees of
freedom of the partons involved in the splitting, {\it i.e.} ${\bf T}_1$,
${\bf T}_2$ and ${\bf T} = {\bf T}_1 + {\bf T}_2$, and it is therefore
color diagonal. Therefore, we can rewrite \eqn{Gamma_Sp_der} as an evolution
equation for the splitting amplitude:
\begin{align}
\label{SpHdiffeq}
\frac{d}{d\ln\mu_f} \, {\bf Sp}_{\cal H} (p_1, p_2; \mu, \mu_f, \epsilon) \,
= \, \Gamma_{\bf Sp} (p_1, p_2; \mu_f) \, \,
{\bf Sp}_{\cal H}(p_1, p_2; \mu, \mu_f, \epsilon) \, ,
\end{align}
where we defined
\begin{align}
\label{Gamma_Sp}
\Gamma_{\bf Sp}(p_1, p_2; \mu_f)
\equiv \Gamma_n \left(p_1, p_2, p_j; \mu_f \right) -
\Gamma_{n - 1} \left(P, p_j; \mu_f \right) \, .
\end{align}
We may now solve \eqn{SpHdiffeq} for the $\mu_f$ dependence of
${\bf Sp}_{\cal H}$, with the result
\begin{align}
\label{solu}
{\bf Sp}_{\cal H} (p_1, p_2; \mu, \mu_f, \epsilon) \, = \,
{\bf Sp}_{\cal H}^{(0)} (p_1, p_2; \mu, \epsilon) \, \exp \left[
\frac12 \int_{\mu^2}^{\mu_f^2} \frac{d \lambda^2}{\lambda^2} \,
\Gamma_{\bf Sp} (p_1, p_2; \lambda)
\right] \, .
\end{align}
The initial condition for evolution
\begin{equation}
\label{inicon}
{\bf Sp}_{\cal H}^{(0)} (p_1, p_2; \mu, \epsilon) \, = \,
{\bf Sp}_{\cal H} (p_1, p_2; \mu, \mu_f = \mu, \epsilon)
\end{equation}
will, in general, still be singular as $p_1 \cdot p_2 \to 0$, although
it is finite as $\epsilon \to 0$. We may, in any case, use \eqn{solu}
by matching the $\mu$-dependence in \eqn{Sp_Z_relation}, which yields
an expression for the full splitting function ${\bf Sp}$. We find
\begin{align}
\label{solusp}
{\bf Sp} (p_1,p_2; \mu, \epsilon) \, = \,
{\bf Sp}_{\cal H}^{(0)} (p_1, p_2; \mu, \epsilon) \, \exp \left[
- \frac12 \int_{0}^{\mu^2} \frac{d \lambda^2}{\lambda^2} \,
\Gamma_{\bf Sp} (p_1, p_2; \lambda)
\right] \, .
\end{align}
While collinear singularities accompanied by non-negative powers
of $\epsilon$ are still present in the initial condition, all poles in
$\epsilon$ in the full splitting matrix arise from the integration
over the scale of the $D$-dimensional running coupling in the exponent
of \eqn{solusp}.
The restricted kinematic dependence of $\Gamma_{\bf Sp}$,
which generates the poles in the splitting function ${\bf Sp}$,
is sufficient to provide nontrivial constraints
on the matrix $\Delta$, as we will now see. Indeed,
substituting \eqn{Gamma} into \eqn{Gamma_Sp} we obtain
\begin{align}
\label{Gamma_Sp_explicit}
\Gamma_{\bf Sp}(p_1, p_2; \lambda) \,
= \, \Gamma_{{\bf Sp}, \, {\dip}} (p_1, p_2; \lambda)
+ \Delta_n \left(\rho_{i j k l}; \lambda \right)
- \Delta_{n - 1} \left(\rho_{i j k l}; \lambda \right) \, ,
\end{align}
where
\begin{align}
\label{Gamma_Sp_ansatz_explicit}
\begin{split}
\Gamma_{{\bf Sp}, \,{\dip}}(p_1, p_2; \lambda)
= & - \frac12 \, \widehat{\gamma}_K \left(\alpha_s (\lambda^2)
\right) \Bigg[
\ln \left(\frac{2\left| p_1 \cdot p_2\right|
\, {\rm e}^{-{\rm i} \pi\lambda_{12}}}
{{\lambda^2}}\right) \, {\bf T}_1 \cdot {\bf T}_2\, \\
& - {\bf T}_1 \cdot\left({\bf T}_1 + {\bf T}_2 \right) \ln z
- {\bf T}_2 \cdot\left({\bf T}_1 + {\bf T}_2 \right) \ln(1 - z) \Bigg]
\\ & + \, \gamma_{J_1} \left(\alpha_s(\lambda^2) \right)
+ \, \gamma_{J_2} \left(\alpha_s(\lambda^2) \right)
- \, \gamma_{J_P} \left(\alpha_s(\lambda^2) \right) \, .
\end{split}
\end{align}
\Eqn{Gamma_Sp_ansatz_explicit}
is the result of substituting the sum-over-dipoles
ansatz~(\ref{Gamma_ansatz}) for $\Gamma_n$ and $\Gamma_{n-1}$.
The terms in \eqn{Gamma_Sp_explicit} going beyond
\eqn{Gamma_Sp_ansatz_explicit} depend on conformally invariant cross
ratios in the $n$-parton and $(n-1)$-parton amplitudes, respectively. Their
difference should conspire to depend only on the kinematic variables
$p_1$ and $p_2$ and on the color variables ${\bf T}_1$ and
${\bf T}_2$. In this way \eqn{Gamma_Sp_explicit} provides a
non-trivial constraint on the structure of~$\Delta$, which we will
implement in \sect{CollimitSubsection}.
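To anticipate how this works at the level of the kinematic variables, note
that in the limit $p_1 \parallel p_2$, with $p_1 \to z P$ and
$p_2 \to (1 - z) P$, any cross ratio involving at most one of the two
collinear partons reduces smoothly to a cross ratio of the
$(n-1)$-parton kinematics, because the momentum fraction cancels between
numerator and denominator,
\begin{equation}
\rho_{1jkl} \, = \, \frac{p_1 \cdot p_j \ p_k \cdot p_l}
{p_1 \cdot p_k \ p_j \cdot p_l}
\ \longrightarrow \
\frac{P \cdot p_j \ p_k \cdot p_l}{P \cdot p_k \ p_j \cdot p_l} \, ,
\end{equation}
while cross ratios involving both collinear partons are singular in the
limit, since $\rho_{12kl} \propto p_1 \cdot p_2 \to 0$. Singular dependence
on $z$ or on $P^2 = 2 \, p_1 \cdot p_2$ in the difference
$\Delta_n - \Delta_{n-1}$ can therefore only arise from the latter class.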
\section{Constraining corrections to the sum-over-dipoles
formula~\label{sec:corrections}}
\subsection{Functions of conformally invariant cross ratios}
Our task here is to analyze potential contributions of the form
$\Delta\left(\rho_{i j k l}, \alpha_s \right)$ to the soft
singularities of any $n$-leg amplitude. Our starting point is the fact
that these contributions must be written as functions
of conformally invariant cross ratios of the form~(\ref{rhoijkl}).
Because we are dealing with renormalizable theories in four dimensions,
we do not expect $Z$ to contain power-law dependence on the kinematic
variables; instead the dependence should be ``slow'', {\it i.e.} logarithmic
in the arguments, through variables of the form
\begin{align}
\label{lnrho1234}
L_{ijkl}\ \equiv\ \ln\rho_{ijkl}
\ =\ \ln \left(\frac{p_i\cdot p_j \, \, p_k \cdot p_l}{p_i
\cdot p_k \, \, p_j \cdot p_l} \right) \,.
\end{align}
Eventually, at high enough order, dependence on $\rho_{ijkl}$
through polylogarithms and harmonic polylogarithms might arise.
We will not assume here that $\Delta$ is linear in the variables $L_{ijkl}$.
We will allow logarithms of different cross ratios to appear in a product,
raised to various powers, and this will be a key to finding solutions
consistent with the collinear limits. Subsequently, we will examine how
further solutions may arise if polylogarithmic dependence is allowed.
A further motivation to consider a general logarithmic dependence
through the variables in \eqn{lnrho1234} is provided by the collinear limits,
which can take certain cross ratios $\rho_{ijkl}$ to 0, 1, or $\infty$,
corresponding to physical limits where logarithmic divergences in $\Delta$
will be possible. Other values of the cross ratios, on the other hand,
should not cause (unphysical) singularities in $\Delta(\rho_{ijkl})$.
This fact limits the acceptable functional forms. For example, in specifying
a logarithmic functional dependence to be through \eqn{lnrho1234}, we
explicitly exclude the form $\ln (c + \rho_{ijkl})$ for general\footnote{In
\sect{sec:poly} we will briefly consider the possibility of including a
dependence of the form $\ln (1 - \rho)$.} constant $c$. Such a shift
in the argument of the logarithm would generate unphysical
singularities at $\rho_{ijkl} = - c$, and would also lead to
complicated symmetry properties under parton exchange, which would
make it difficult to accommodate Bose symmetry. We will thus focus our
initial analysis on kinematic dependence through the variables $L_{ijkl}$.
Although it seems less natural, polylogarithmic dependence on
$\rho_{ijkl}$ cannot be altogether ruled out, and will be considered in
the context of the three-loop analysis in \sect{sec:3loop}.
The fact that the variables~(\ref{lnrho1234}) involve the momenta of
four partons points to their origin in webs that connect (at least) four
of the hard partons in the process, exemplified by \fig{4Elabeled}.
The appearance of such terms in the exponent, as a correction to
the sum-over-dipoles formula, implies, through the non-Abelian
exponentiation theorem, that they cannot be reduced to sums
of independent webs connecting just two or three partons,
neither diagrammatically nor algebraically. Indeed, for amplitudes
composed of just three partons the sum-over-dipoles formula is
exact~\cite{Gardi:2009qi}. Similarly, because two-loop webs can connect
at most three different partons, conformally invariant cross ratios cannot
be formed. Consequently, at two loops there are no corrections to the
sum-over-dipoles formula, independently of the number of legs. Thus, the
first non-trivial corrections can appear at three loops, and if they appear,
they are directly related to webs that connect four partons. For the
remainder of this section, therefore, we will focus on corrections to the
sum-over-dipoles formula that arise from webs connecting precisely four
partons, although other partons or colorless particles can be present
in the full amplitude. Our conclusions are fully general at three loops,
as discussed in \sect{sec:n-leg}, because at that order no web can
connect more than four partons.
We begin by observing that, independently of the loop order at which
four-parton corrections appear, their color factor must involve at least
one color generator corresponding to each of the four partons
involved. For example, the simplest structure a term in $\Delta$ can
have in color space is
\begin{equation}
\label{color}
\Delta_4 (\rho_{ijkl}) \, = \, h^{abcd} \, {\bf T}_i^{a} \, {\bf T}_j^{b} \,
{\bf T}_k^{c} \, {\bf T}_l^{d} \, \Delta^{\kin}_4 (\rho_{ijkl}) \, ,
\end{equation}
where $h^{abcd}$ is some color tensor built out of structure constants
corresponding to the internal vertices in the web
that connects the four partons $(i,j,k,l)$ to each other. Note that
$h^{abcd}$ may receive contributions from several different webs at a
given order, and furthermore, for a given $h^{abcd}$, the kinematic
coefficient $\Delta^{\kin}_4 (\rho_{ijkl})$ can receive corrections from
higher-order webs. In what follows, we will not display the dependence
on the coupling of the kinematic factors, because it does not affect our
arguments. As we will see, symmetry arguments will, in general, force
us to consider sums of terms of the form~(\ref{color}), with different
color tensors $h^{abcd}$ associated with different kinematic factors.
More generally, at sufficiently high orders, there can be other
types of contributions in which each Wilson line in the soft function is
attached to more than one gluon, and hence to more than one index in
a color tensor. Such corrections will be sums of terms of the form
\begin{align}
\begin{split}
\label{color_}
\Delta_4 (\rho_{ijkl}) \, = \, &
{\Delta}^{\kin}_4 (\rho_{ijkl}) \,\, h^{a_1, \ldots, a_{m_1},
b_1, \ldots, b_{m_2},
c_1, \ldots, c_{m_3},
d_1, \ldots, d_{m_4} } \\ &
({\bf T}_i^{a_1}{\bf T}_i^{a_2}\ldots {\bf T}_i^{a_{m_1}}
{\bf T}_j^{b_1}{\bf T}_j^{b_2}\ldots {\bf T}_j^{b_{m_2}}
{\bf T}_k^{c_1}{\bf T}_k^{c_2}\ldots {\bf T}_k^{c_{m_3}}
{\bf T}_l^{d_1}{\bf T}_l^{d_2}\ldots {\bf T}_l^{d_{m_4}})_{+}\,,
\end{split}
\end{align}
where $()_{+}$ indicates symmetrization with respect to all the
indices corresponding to a given parton. Note that generators
carrying indices of different partons commute, while the antisymmetric
components have been excluded from \eqn{color_}, because they reduce,
via the commutation relation $[{\bf T}_i^a \,,\, {\bf T}_i^b ] = i
f^{abc} {\bf T}_i^c$, to shorter strings\footnote{One can
make a stronger statement for Wilson lines in the fundamental representation
of the gauge group. In that case the symmetric combination in
\eqn{color_} can also be further reduced, using the identity $\{{\bf t}_a ,
{\bf t}_b \} = \frac{1}{N_c} \delta_{a b} + d_{abc} {\bf t}_c$, so that
the generic correction in \eqn{color_} turns into a combination of terms
of the form~(\ref{color}). We are not aware of generalizations of this
possibility to arbitrary representations.}. In the following subsections, we
will focus on (combinations of) color structures of the form~(\ref{color}),
and we will not consider further the more general case of \eqn{color_},
which, in any case, can only arise at or beyond four loops.
\subsection{Bose symmetry}
The Wilson lines defining the reduced soft matrix are effectively
scalars, as the spin-dependent parts have been stripped off and absorbed
in the jet functions. Consequently, the matrices $\Gamma$ and $\Delta$
should admit Bose symmetry and be invariant under the
exchange of any pair of hard partons. Because $\Delta$
depends on color and kinematic variables, this symmetry
implies correlations between color and kinematics. In particular,
considering a term of the form~(\ref{color}), the symmetry properties
of $h^{abcd}$ under permutations of the indices $a$, $b$, $c$ and $d$
must be mirrored in the symmetry properties of the kinematic factor
${\Delta}^{\kin}_4 (\rho_{ijkl})$ under permutations of the
corresponding momenta $p_i$, $p_j$, $p_k$ and $p_l$. The
requirement of Bose symmetry will lead us to express $\Delta$ as a
sum of terms, each having color and kinematic factors with a definite
symmetry under some (or all) permutations.
Because we are considering corrections arising from four-parton webs,
we need to analyze the symmetry properties under particle exchanges
of the ratios $\rho_{ijkl}$ that can be constructed with four partons.
There are 24 different cross ratios of this type, corresponding to the
number of elements of the permutation group acting on four objects,
$S_4$. However, a $Z_2 \times Z_2$ subgroup of $S_4$ leaves
each $\rho_{ijkl}$ (and hence each $L_{ijkl}$) invariant. Indeed,
one readily verifies that
\begin{equation}
\label{invrho}
\rho_{ijkl} = \rho_{jilk} = \rho_{klij} = \rho_{lkji} \,.
\end{equation}
The subgroup $Z_2 \times Z_2$ is an invariant subgroup of $S_4$.
Thus, we may use it to fix one of the indices, say $i$, in $\rho_{ijkl}$.
This leaves six cross ratios, transforming under the permutation group of
three objects, $S_3 \simeq S_4/(Z_2\times Z_2)$.
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=0,width=6cm]{symm_cicr3.eps} \hspace*{20pt}
\includegraphics[angle=0,width=8cm]{symmetries_of_rho1234_2.eps}
\caption{Symmetry properties of conformally invariant cross ratios.
Each of the two triangles in the left-hand figure connects three cross
ratios. The cross ratios associated with the two triangles are related
by inversion, and one moves between the two triangles with odd
permutations of momentum labels. The right-hand figure shows the
resulting antisymmetry of the logarithms of the three conformally
invariant cross ratios under permutations. The three variables
$L_{1234}$, $L_{1423}$ and $L_{1342}$ transform into one
another under permutations, up to an overall minus sign. For example,
under the permutation $1\lr2$, we have
$L_{1234} \to - L_{1423}$,
$L_{1423} \to - L_{1234}$, and
$L_{1342} \to - L_{1342}$.
\label{symm}}
\end{center}
\end{figure}
The permutation properties of the remaining six cross ratios are displayed
graphically in \fig{symm}, where we made the identifications $\{i,j,k,l\}
\to \{1,2,3,4\}$ for simplicity. The analysis can be further simplified by
noting that odd permutations in $S_4$ merely invert $\rho_{ijkl}$, so
that, for example,
\begin{equation}
\label{odd_symmetry_of_rho}
\rho_{ijkl} \, = \, \frac{1}{\rho_{ikjl}} \qquad \longrightarrow \qquad
L_{ijkl} \, = \, - \, L_{ikjl} \, .
\end{equation}
This inversion corresponds to moving across to the
diametrically opposite point in the left-hand plot in \fig{symm}.
We conclude that there are only three different cross ratios
(corresponding to the cyclic permutations of $\{j,k,l\}$ associated with
$S_3/Z_2 \simeq Z_3$), namely $\rho_{ijkl}$, $\rho_{iljk}$ and
$\rho_{iklj}$. They correspond to triangles in \fig{symm}.
Finally, the logarithms of the three cross ratios are linearly
dependent, summing to zero:
\begin{equation}
\label{three_rho_relation}
L_{ijkl} + L_{iljk} + L_{iklj} \,=\,0\,.
\end{equation}
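Since these statements are purely combinatorial, they are straightforward
to verify mechanically. The short SymPy sketch below (an illustrative
check only, under the simplifying assumption of real, positive dot
products, {\it i.e.} ignoring the phases $\lambda_{ij}$) confirms
Eqs.~(\ref{invrho}), (\ref{odd_symmetry_of_rho}) and
(\ref{three_rho_relation}), as well as the counting of distinct cross
ratios:
\begin{verbatim}
# Illustrative SymPy check (not part of the derivation) of the
# permutation properties of the cross ratios rho_{ijkl}.  The symbols
# sij stand for the dot products beta_i . beta_j, taken positive here,
# i.e. ignoring the phases lambda_{ij}.
import itertools
import sympy as sp

s = {}
for i, j in itertools.combinations(range(1, 5), 2):
    s[(i, j)] = s[(j, i)] = sp.symbols('s%d%d' % (i, j), positive=True)

def rho(i, j, k, l):
    return (s[(i, j)] * s[(k, l)]) / (s[(i, k)] * s[(j, l)])

# Z2 x Z2 invariance, eq. (invrho)
assert all(rho(1, 2, 3, 4) == rho(*p)
           for p in [(2, 1, 4, 3), (3, 4, 1, 2), (4, 3, 2, 1)])

# odd permutations invert the cross ratio, eq. (odd_symmetry_of_rho)
assert sp.simplify(rho(1, 2, 3, 4) * rho(1, 3, 2, 4)) == 1

# the 24 permutations yield 6 distinct cross ratios, i.e. 3 up to inversion
assert len({rho(*p) for p in itertools.permutations(range(1, 5))}) == 6

# the product of the three cross ratios equals 1, hence
# L_{1234} + L_{1423} + L_{1342} = 0, eq. (three_rho_relation)
assert sp.simplify(rho(1, 2, 3, 4) * rho(1, 4, 2, 3) * rho(1, 3, 4, 2)) == 1
print("all cross-ratio identities verified")
\end{verbatim}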
These symmetry properties lead us to consider for $\Delta^{\kin}_4$ in
\eqn{color} the general form
\begin{align}
\label{kin_4legs}
\Delta_4^{\kin}(\rho_{ijkl})
= ( L_{1234} )^{h_1} \, ( L_{1423} )^{h_2}
\, ( L_{1342} )^{h_3} \, ,
\end{align}
where we have adopted the labeling of hard partons by $\{1,2,3,4\}$
as in \fig{symm}. Here the $h_i$ are non-negative integers, and
\eqn{three_rho_relation} has not yet been taken into account.
Our general strategy will be to construct linear combinations of the
monomials in \eqn{kin_4legs} designed to match the symmetries of
the available color tensors, $h^{abcd}$ in \eqn{color}. Such combinations
can be constructed for general $h_i$. As we shall see, however,
transcendentality constraints restrict the integers $h_i$ to be small at
low loop orders. In the three-loop case, this will suffice to eliminate
all solutions to the constraints, except for a single function.
We begin by noting that the antisymmetry of $L_{1234}$ under
the permutation $1 \lr 4$ (or under $2 \lr 3$, see \fig{symm}) is mirrored
by the antisymmetry of the color factor $h^{abcd} \, {\bf T}_1^{a}
{\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}\,$ if the tensor
$h^{abcd} = f^{ade} f^{cbe}$, where $f^{ade}$ are the usual,
fully antisymmetric ${\rm SU}(N_c)$ structure constants. The same
is obviously true for any odd power of $L_{1234}$, while in the
case of even powers, an appropriate type of color tensor is
$h^{abcd}=d^{ade}d^{cbe}$, where $d^{ade}$ are the fully symmetric
${\rm SU} (N_c)$ tensors. Fig.~\ref{symm}, however, shows that under
other permutations, the different cross ratios transform into one another.
Therefore, if we are to write a function with a definite symmetry under all
permutations, it must be a function of all three variables. Specifically,
in order for a term of the form~(\ref{kin_4legs}) to have, by itself, a
definite symmetry under all permutations, the powers $h_1$, $h_2$ and
$h_3$ must all be equal. Alternatively, one can consider a linear combination
of several terms of the form~(\ref{kin_4legs}), yielding together a function
of the kinematic variables with definite symmetry. In this respect it is
useful to keep in mind that the sum of the three logarithms (all with a
single power) is identically zero, by \eqn{three_rho_relation}.
Let us now construct the different structures that realize Bose
symmetry, by considering linear combinations of terms of the form
of \eqn{color}, with $\Delta^{\kin}_4$ given by \eqn{kin_4legs}.
We consider first three examples, where the logarithms
$L_{ijkl}$ are raised to a single power $h$. As we will see, none of
these examples will satisfy all the constraints; they are useful, however,
for illustrating the available structures.
\begin{itemize}
\item[a)]{} We first consider simply setting $h_1 = h_2 = h_3$ in
\eqn{kin_4legs}, obtaining
\begin{equation}
\label{Delta_a}
\Delta_4 (\rho_{ijkl}) \, = \, h^{abcd} \, \, {\bf T}_1^{a} {\bf T}_2^{b}
{\bf T}_3^{c} {\bf T}_4^{d} \,\,
\Big[ L_{1234} \,L_{1423}\,L_{1342}\Big]^{h}\,.
\end{equation}
For odd $h$ the color tensor $h^{abcd}$ must be completely
antisymmetric in the four indices, while for even $h$ it must be completely
symmetric. We anticipate that odd $h$ is ruled out, because completely
antisymmetric four-index invariant tensors do not exist for simple Lie
groups~\cite{deAzcarraga:1997ya}. Furthermore, while symmetric
tensors do exist, \eqn{Delta_a} is ruled out at three loops, because
from \fig{4Elabeled} it is clear that only $h^{abcd} = f^{ade} f^{cbe}$
(or permutations thereof) can arise in Feynman diagrams at this order.
\item[b)]{}
Our second example is
\begin{align}
\label{Delta_b}
\begin{split}
\Delta_4 (\rho_{ijkl}) \, = \,
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}\,
\,\Big[&f^{ade}f^{cbe} L_{1234}^{h}
+f^{cae}f^{dbe} L_{1423}^{h}
+f^{bae}f^{cde} L_{1342}^{h}\Big] \,,
\end{split}
\end{align}
where $h$ must be odd. Alternatively, each $f^{ade}$ may be replaced
by the fully symmetric ${\rm SU}(N_c)$ tensor $d^{ade}$, and then
$h$ must be even. In \eqn{Delta_b} each term has a definite symmetry
only with respect to certain permutations, but the three terms transform
into one another in such a way that their sum admits full Bose symmetry.
We will see shortly that the structure in \eqn{Delta_b} does not satisfy
the collinear constraints.
\item[c)]{}
Finally, one may consider the case where two of the three logarithms
appear together in a product, raised to some power $h$,
\begin{align}
\label{Delta_c}
\begin{split}
\Delta_4 (\rho_{ijkl}) &=
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}\,
\,\,\Big[ d^{abe}d^{cde} ( L_{1234}\, L_{1423} )^{h}\\
&\hskip2.6cm
+d^{dae}d^{cbe} ( L_{1423}\, L_{1342} )^{h}
+d^{cae}d^{bde} ( L_{1342}\, L_{1234} )^{h}
\Big]\,.
\end{split}
\end{align}
Once again, we observe that these color tensors cannot arise in
three-loop webs. Furthermore, as we will see, \eqn{Delta_c},
at any loop order, fails to satisfy the collinear constraints.
\end{itemize}
We are led to consider more general structures, using
\eqns{color}{kin_4legs} with arbitrary integers $h_i$. As announced,
we will satisfy Bose symmetry by constructing polynomial kinematical
factors mimicking the symmetry of the available color tensors. One
may write for example
\begin{align}
\label{Delta_gen}
\begin{split}
&\Delta_4 (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d} \\
&\hskip1cm
\times \hskip2.5mm \left[ \, f^{ade}f^{cbe} \, L_{1234}^{h_1} \, \left(
L_{1423}^{h_2} \, L_{1342}^{h_3} \, - \, (-1)^{h_1 + h_2 + h_3} \,
L_{1342}^{h_2} \, L_{1423}^{h_3} \right) \right. \\
&\hskip1.5cm
+ f^{cae}f^{dbe} \, L_{1423}^{h_1} \, \left(
L_{1342}^{h_2} \, L_{1234}^{h_3} \, - \, (-1)^{h_1 + h_2 + h_3} \,
L_{1234}^{h_2} \, L_{1342}^{h_3} \right) \\
&\hskip1.5cm
+ \left. f^{bae}f^{cde} \, L_{1342}^{h_1} \, \left(
L_{1234}^{h_2} \, L_{1423}^{h_3} \, - \, (-1)^{h_1 + h_2 + h_3} \,
L_{1423}^{h_2} \, L_{1234}^{h_3} \right) \,
\right] \, ,
\end{split}
\end{align}
where $h_1$, $h_2$ and $h_3$ can be any non-negative integers.
The first line is invariant, for example, under the permutation $1\lr4$
(when applied to both kinematics and color), the second line is invariant
under $1\lr3$, and the third is invariant under $1\lr2$. The other exchange
symmetries are realized by the transformation of two lines
into one another. For example, under $1\lr4$ the second line
transforms into the third and vice versa. In \eqn{Delta_gen} the
color and kinematic factors in each line are separately antisymmetric
under the corresponding permutation. Note that \eqn{Delta_b}
corresponds to the special case where $h_1$ in \eqn{Delta_gen} is
odd, while $h_2 = h_3 = 0$.
One can also construct an alternative Bose symmetrization
using the symmetric combination,
\begin{align}
\label{Delta_gen_symm}
\begin{split}
&\Delta_4 (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d} \\
&\hskip1cm
\times \hskip2.5mm \left[ \, d^{ade}d^{cbe} \, L_{1234}^{h_1} \, \left(
L_{1423}^{h_2} \, L_{1342}^{h_3} \, + \, (-1)^{h_1 + h_2 + h_3} \,
L_{1342}^{h_2} \, L_{1423}^{h_3} \right) \right. \\
&\hskip1.5cm
+ d^{cae}d^{dbe} \, L_{1423}^{h_1} \, \left(
L_{1342}^{h_2} \, L_{1234}^{h_3} \, + \, (-1)^{h_1 + h_2 + h_3} \,
L_{1234}^{h_2} \, L_{1342}^{h_3} \right) \\
&\hskip1.5cm
+ \left. d^{bae}d^{cde} \, L_{1342}^{h_1} \, \left(
L_{1234}^{h_2} \, L_{1423}^{h_3} \, + \, (-1)^{h_1 + h_2 + h_3} \,
L_{1423}^{h_2} \, L_{1234}^{h_3} \right) \,
\right] \,.
\end{split}
\end{align}
Note that \eqn{Delta_c} is reproduced by setting $h_1 = 0$ and
$h_2 = h_3 = h$ in \eqn{Delta_gen_symm}.
Eqs.~(\ref{Delta_gen}) and (\ref{Delta_gen_symm}) both yield non-trivial
functions for both even and odd powers $h_i$, with the following exceptions:
For even $h_1$, \eqn{Delta_gen} becomes identically zero if $h_2=h_3$;
similarly, for odd $h_1$ \eqn{Delta_gen_symm} becomes identically zero
if $h_2 = h_3$. It is interesting to note that \eqn{Delta_a} with odd $h$
cannot be obtained as a special case of \eqn{Delta_gen}. Indeed, by choosing
$h_1 = h_2 = h_3$ one obtains the correct kinematic dependence, but then
the color structure factors out and vanishes by the Jacobi identity,
\begin{equation}
\label{Jacobi}
h^{abcd}=f^{ade}f^{cbe}
+f^{cae}f^{dbe}
+f^{bae}f^{cde} = 0 \, .
\end{equation}
In contrast, for even $h$, \eqn{Delta_a} can be obtained as a special
case of \eqn{Delta_gen_symm}, setting
\begin{equation}
\label{f_dd}
h^{abcd}=d^{ade}d^{cbe}
+d^{cae}d^{dbe}
+d^{bae}d^{cde}\,,
\end{equation}
which is totally symmetric, as required. This is expected from the general
properties of symmetric and antisymmetric invariant tensors for simple Lie
algebras~\cite{deAzcarraga:1997ya}.
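Both group-theoretical statements are easy to check numerically. The NumPy
sketch below (again purely illustrative) constructs the ${\rm SU}(3)$
structure constants $f^{abc}$ and symmetric tensors $d^{abc}$ from the
Gell-Mann matrices, and verifies that the combination in \eqn{Jacobi}
vanishes, while the combination in \eqn{f_dd} is totally symmetric:
\begin{verbatim}
# Illustrative NumPy check (not part of the derivation): build the
# SU(3) structure constants f^{abc} and symmetric tensors d^{abc}
# from the Gell-Mann matrices, then verify eqs. (Jacobi) and (f_dd).
import itertools
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3)
lam[7][2, 2] = -2 / np.sqrt(3)
T = lam / 2   # generators normalized to Tr(T^a T^b) = delta^{ab}/2

f = np.zeros((8, 8, 8)); d = np.zeros((8, 8, 8))
for a, b, c in itertools.product(range(8), repeat=3):
    f[a, b, c] = (-2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])).real
    d[a, b, c] = (2.0 * np.trace((T[a] @ T[b] + T[b] @ T[a]) @ T[c])).real

# f^{ade} f^{cbe} + f^{cae} f^{dbe} + f^{bae} f^{cde} = 0  (Jacobi)
h_f = (np.einsum('ade,cbe->abcd', f, f)
       + np.einsum('cae,dbe->abcd', f, f)
       + np.einsum('bae,cde->abcd', f, f))
assert np.allclose(h_f, 0)

# the analogous d-tensor combination is totally symmetric  (f_dd)
h_d = (np.einsum('ade,cbe->abcd', d, d)
       + np.einsum('cae,dbe->abcd', d, d)
       + np.einsum('bae,cde->abcd', d, d))
assert all(np.allclose(h_d, np.transpose(h_d, p))
           for p in itertools.permutations(range(4)))
print("Jacobi identity and total symmetry of the d-combination verified")
\end{verbatim}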
At any fixed number of loops $l$, the total power of the logarithms in
\eqns{Delta_gen}{Delta_gen_symm}, $h_{\rm tot}
\equiv h_1 + h_2 + h_3$, will play an important role.
Indeed, $h_{\rm tot}$
is the degree of transcendentality of the function $\Delta_4$, as
defined in the Introduction, and it is
bounded from above by the maximal allowed transcendentality of the
anomalous dimension matrix at $l$ loops, as described in \sect{sec:maxtran}.
We expect then that at $l$ loops there will be a finite number of sets of
integers $h_i$ satisfying the available constraints. The most general
solution for the correction term $\Delta_4$ will then be given
by a linear combination of symmetric and antisymmetric polynomials such
as those given in \eqns{Delta_gen}{Delta_gen_symm},
with all allowed choices of $h_i$.
Such combinations include also contributions related to higher-order
Casimir operators. Indeed, summing
over permutations of $\{h_1,h_2,h_3\}$ in the symmetric version of
$\Delta_4$, \eqn{Delta_gen_symm}, one finds a completely symmetric
kinematic factor, multiplying a color tensor which is directly related to the
quartic Casimir operator (with a suitable choice of basis in the space of
symmetric tensors over the Lie algebra~\cite{deAzcarraga:1997ya}),
\begin{align}
\label{Delta_gen_symm_alt}
\begin{split}
&\Delta_4 (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\times \Big[ d^{ade}d^{cbe} + d^{cae}d^{dbe} + d^{bae}d^{cde}\Bigr]\\
&\hskip0.2cm \times
\Big[
L_{1234}^{h_1}\, L_{1423}^{h_2}\, L_{1342}^{h_3}
+ L_{1423}^{h_1}\, L_{1342}^{h_2}\, L_{1234}^{h_3}
+ L_{1342}^{h_1}\, L_{1234}^{h_2}\, L_{1423}^{h_3}\\
&\hskip0.4cm + (-1)^{h_1+h_2+h_3}\Big(
L_{1234}^{h_1}\, L_{1342}^{h_2}\, L_{1423}^{h_3}
+ L_{1423}^{h_1}\, L_{1234}^{h_2}\, L_{1342}^{h_3}
+ L_{1342}^{h_1}\, L_{1423}^{h_2}\, L_{1234}^{h_3}\Big)
\Big] \,.
\end{split}
\end{align}
For even $h_{\rm tot} \equiv h_1 + h_2 + h_3$ this function is always
non-trivial, while for odd $h_{\rm tot}$ it is only non-trivial if all three
powers $h_i$ are different. We note once again that, due to the Jacobi
identity~(\ref{Jacobi}), \eqn{Delta_gen_symm_alt} does not have an
analog involving the antisymmetric structure constants.
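The (non)vanishing pattern just described is elementary to check symbolically.
In the short sketch below (illustrative only; $L_1$, $L_2$, $L_3$ stand for the
three cross-ratio logarithms) we build the kinematic bracket of
\eqn{Delta_gen_symm_alt} for sample powers:
\begin{verbatim}
import sympy as sp

L1, L2, L3 = sp.symbols('L1 L2 L3')   # L_{1234}, L_{1423}, L_{1342}

def bracket(h1, h2, h3):
    # kinematic bracket of eq. (Delta_gen_symm_alt)
    cyc  = L1**h1*L2**h2*L3**h3 + L2**h1*L3**h2*L1**h3 + L3**h1*L1**h2*L2**h3
    anti = L1**h1*L3**h2*L2**h3 + L2**h1*L1**h2*L3**h3 + L3**h1*L2**h2*L1**h3
    return sp.expand(cyc + (-1)**(h1+h2+h3)*anti)

print(bracket(1, 1, 1))                     # 0: odd h_tot, equal powers
print(bracket(1, 2, 2))                     # 0: odd h_tot, two equal powers
print(bracket(1, 2, 4) != 0, bracket(2, 1, 1) != 0)   # True True
\end{verbatim}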
\subsection{Maximal transcendentality}
\label{sec:maxtran}
Our next observation is that, at a given loop order, the total power of
the logarithms, $h_{\rm tot}$, cannot be arbitrarily high. It is well
known (although not proven mathematically)
that the maximal transcendentality $\tau_{\rm max}$ of the coefficient
of the $1/\e^k$ pole in an $l$-loop amplitude (including $k=0$) is
$\tau_{\rm max} = 2 l - k$. If a function is purely logarithmic, this
value corresponds to $2l-k$ powers of logarithms. In general, the space of
possible transcendental functions is not fully characterized mathematically,
particularly for functions of multiple dimensionless arguments.
At the end of this subsection we give some examples of functions of definite
transcendental weight, which appear in scattering amplitudes for massless
particles, and which therefore might be considered candidates from
which to build solutions for $\Delta$.
Because $\Gamma$, $\Gamma_{\bf Sp}$ and $\Delta$ are associated with
the $1/\e$ single pole, their maximal
transcendentality is $\tau_{\rm max} = 2 l - 1$. For $\NeqFour$
super-Yang-Mills theory, in every known instance the terms arising in this
way are purely of this maximal transcendentality: there are no
terms of lower transcendentality. This property
is relevant also for non-supersymmetric massless gauge theories,
particularly at three loops. Indeed, in any massless gauge theory
the purely-gluonic web diagrams that we need to consider at three
loops are the same as those arising in $\NeqFour$ super-Yang-Mills theory.
We conclude that at three loops $\Delta$ should have transcendentality
$\tau = 5$~\cite{Dixon:2009gx}, while for $l > 3$ some relevant
webs may depend on the matter content of the theory, so that $\Delta$
is only constrained to have a transcendentality at most equal to $2 l - 1$.
It should be emphasized that some transcendentality could be
attributed to constant prefactors. For example, the sum-over-dipoles
formula~(\ref{Gamma_ansatz}) for $\Gamma_{\dip}$ attains
transcendentality $\tau = 2 l - 1$ as the sum of $\tau = 2 l - 2$ from
the (constant) cusp anomalous dimension $\gamma_K$ (associated with
a $1/\e^2$ double pole in the amplitude) and $\tau = 1$ from the single
logarithm.
Because the functions $\Delta^{\kin}_4$ are defined up to possible
numerical prefactors, which may carry transcendentality, terms of the
form~(\ref{kin_4legs}), (\ref{Delta_gen}) or (\ref{Delta_gen_symm})
must obey
\begin{equation}
\label{max_trans}
h_{\rm tot} = h_1 + h_2 + h_3 \leq 2 l - 1 \, .
\end{equation}
We note furthermore that constants of transcendentality $\tau = 1$,
{\it i.e.} single factors of $\pi$, do not arise in Feynman diagram
calculations, except for imaginary parts associated with unitarity phases.
We conclude that whenever the maximal transcendentality argument
applies to $\Gamma$, the special case in which our functions
$\Delta_4$ have $\tau = 2 l - 2$ is not allowed.
The sum of the powers of all the logarithms in the
product must then be no more than $h_{\rm tot} = 5$ at three loops,
or $h_{\rm tot} = 7$ at four loops, and so on. In the special cases
considered above, at three loops, the constraint is: $3 h \leq 5$, {\it i.e.}
$h \leq 1$ in \eqn{Delta_a}, $h \leq 5$ in \eqn{Delta_b}, and $2 h \leq 5$,
{\it i.e.} $h \leq 2$ in \eqn{Delta_c}. Clearly, at low orders,
transcendentality imposes strict limitations on the admissible
functional forms. We will take advantage of these limitations at three
loops in \sect{sec:3loop}.
We close this subsection by providing some examples of possible
transcendental functions that might enter $\Delta$, beyond the purely
logarithmic examples we have focused on so far.
For functions of multiple dimensionless arguments, the space of
possibilities is not precisely characterized. Even for kinematical
constants, the allowed structures are somewhat empirically based:
the cusp anomalous dimension, for example, can be expressed through three
loops~\cite{Moch:2004pa} in terms of linear combinations of the
Riemann zeta values $\zeta(n)$ (having transcendentality $n$),
multiplied by rational numbers; other transcendentals that
might be present --- such as $\ln2$, which does appear in
heavy-quark mass shifts --- are not.
The cusp anomalous dimension
governs the leading behavior of the twist-two anomalous dimensions
for infinite Mellin moment $N$. At finite $N$, these anomalous dimensions
can be expressed~\cite{Moch:2004pa} in terms of the harmonic
sums $S_{\vec{n}_\tau}(N)$~\cite{Vermaseren:1998uu},
where $\vec{n}_\tau$ is a $\tau$-dimensional vector of integers.
Harmonic sums are the Mellin transforms of harmonic
polylogarithms ${\rm H}_{\vec{m}_\tau}(x)$~\cite{Remiddi:1999ew},
which are generalizations of the ordinary polylogarithms
${\rm Li}_n(x)$. They are defined recursively by integration,
\begin{equation}
{\rm H}_{\vec{m}_\tau}(x)
\ =\ \int_0^x dx' \ f(a;x') \, {\rm H}_{\vec{m}_{\tau-1}}(x') \,,
\label{harmpolydef}
\end{equation}
where $a=-1$, 0 or 1, and
\begin{equation}
f(-1;x) = {1\over 1+x} \,, \quad
f(0;x) = {1\over x} \,, \quad
f(1;x) = {1\over 1-x} \,.
\label{fxdef}
\end{equation}
Note that the transcendentality increases by one unit for each integration.
All three values of $a$ are needed to describe the twist-two anomalous
dimensions. However, for the four-point scattering amplitude, which is a
function of the single dimensionless ratio $r$ defined in \eqn{r_def},
only $a=0,1$ seem to be required~\cite{Smirnov:2003vi}.
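For orientation, the recursion~(\ref{harmpolydef}) is straightforward to
evaluate numerically. The sketch below is illustrative only; it assumes the
standard convention ${\rm H}_{0,\dots,0}(x)=\ln^n x/n!$ for the all-zero index
(a special case not spelled out above), and reproduces
${\rm H}_1(x) = -\ln(1-x)$ and ${\rm H}_{0,1}(x) = {\rm Li}_2(x)$:
\begin{verbatim}
import mpmath as mp

def f(a, x):
    # integration kernels of eq. (fxdef)
    if a == -1: return 1/(1 + x)
    if a == 0:  return 1/x
    return 1/(1 - x)

def H(m, x):
    # recursive definition, eq. (harmpolydef); an empty index gives 1,
    # and the all-zero index uses H_{0,...,0}(x) = ln(x)^n / n!
    if len(m) == 0:
        return mp.mpf(1)
    if all(a == 0 for a in m):
        return mp.log(x)**len(m) / mp.factorial(len(m))
    return mp.quad(lambda u: f(m[0], u) * H(m[1:], u), [0, x])

x = mp.mpf('0.3')
print(H((1,), x), -mp.log(1 - x))      # H_1(x) = -ln(1-x), weight 1
print(H((0, 1), x), mp.polylog(2, x))  # H_{0,1}(x) = Li_2(x), weight 2
\end{verbatim}
Note how each integration in the recursion raises the weight by one unit,
in accordance with the counting of transcendentality used throughout.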
Scattering amplitudes depending on two dimensionless ratios can often be
expressed in terms of harmonic polylogarithms as well, but where the
parameter $a$ becomes a function of the second dimensionless
ratio~\cite{Gehrmann:1999as}. In Ref.~\cite{DelDuca:2009au},
a quantity appearing in a six-point scattering amplitude at two loops
was recently expressed in terms of the closely-related Goncharov
polylogarithms~\cite{Goncharov} in two variables, and at
weight (transcendentality) four. Other recent works focusing more on
general mathematical properties include
Refs.~\cite{Bogner:2007mn,Brown:2009rc}.
In general, the space of possible
functions becomes quite large already at weight five, and our examples
below are meant to be illustrative rather than exhaustive.
\subsection{Collinear limits}
\label{CollimitSubsection}
Equipped with the knowledge of how Bose symmetry and other
requirements may be satisfied, let us return to the splitting
amplitude constraint, namely the requirement that the difference
between the two $\Delta$ terms in \eqn{Gamma_Sp_explicit} must
conspire to depend only on the color and kinematic variables of the
two partons that become collinear.
We begin by analyzing the case of an amplitude with precisely four
colored partons, possibly accompanied by other colorless particles (we
postpone the generalization to an arbitrary number of partons
to \sect{sec:n-leg}). The collinear constraint simplifies for $n = 4$
because for three partons there are no contributions beyond
the sum-over-dipoles formula, so that $\Delta_{n - 1} = \Delta_3 =
0$~\cite{Gardi:2009qi}.\footnote{For $n=4$ we should add a colorless
particle carrying off momentum; otherwise the three-parton kinematics
are ill-defined (for real momenta), and the limit $p_1\cdot p_2 \to 0$
is not really a collinear limit but a forward or
backward scattering limit.}
In~\eqn{Gamma_Sp_explicit} we therefore
have to consider $\Delta_{4}$ on its own, and require that when,
say, $p_1$ and $p_2$ become collinear $\Delta_{4}$ does not
involve the kinematic or color variables of other hard particles in
the process. Because in this limit there remains no non-singular
Lorentz-invariant kinematic variable upon which $\Delta_{4}$ can depend,
$\Delta_{4}$ must essentially become trivial in this limit;
this does not imply, of course, that $\Delta_{4}$ vanishes
away from the limit. In the following we shall see how this can be realized.
To this end let us first carefully examine the limit under consideration.
We work with strictly massless hard partons, $p_i^2 = 0$ for all $i$.
In a fixed-angle scattering amplitude we usually consider $2 p_i \cdot
p_j = Q^2 \beta_i \cdot \beta_j$ where $Q^2$ is taken large,
keeping $\beta_i \cdot \beta_j = {\cal O}(1)$ for any $i$ and $j$.
Now we relax the fixed-angle limit for the pair of hard partons $p_1$ and
$p_2$. Defining $P \equiv p_1 + p_2$ as in \sect{sec:SA}, we
consider the limit $2 p_1 \cdot p_2/Q^2 = P^2/Q^2 \to 0$.
The other Lorentz invariants all remain large; in
particular for any $j \neq 1,2$ we still have $2 p_1 \cdot p_j =
Q^2 \beta_1 \cdot \beta_j$ and $2 p_2 \cdot p_j = Q^2 \beta_2
\cdot \beta_j$ where $\beta_1 \cdot \beta_j$ and $\beta_2 \cdot
\beta_j$ are of ${\cal O}(1)$. In order to control the way in which the
limit is approached, it is useful to define
\begin{align}
\label{p1_and_p2}
p_1 = z \, P + k \,, \qquad \quad
p_2 = (1 - z) P - k \,,
\end{align}
so that $z$ measures the longitudinal momentum fraction
carried by $p_1$ in $P$, namely
\begin{equation}
\label{z}
z = \frac{p_1^+}{P^+} = \frac{p_1^+}{p_1^+ + p_2^+} \, ,
\end{equation}
where we assume, for simplicity, that the ``$+$'' light-cone
direction\footnote{One can then further specify the frame by choosing
the ``$-$'' direction along the momentum of one of the other hard
partons, say $p_3$.} is defined by $p_1$, so that $p_1 = (p_1^+, 0^-,
0_{\perp})$. In~\eqn{z} both the numerator and denominator are of
order $Q$, so $z$ is of ${\cal O}(1)$ and remains fixed in the limit
$P^2/Q^2 \to 0$. In~\eqn{p1_and_p2} $k$ is a small residual momentum,
making it possible for $P$ to be off the light-cone while $p_1$ and $p_2$
remain strictly light-like. Using the mass-shell conditions $p_1^2 = p_2^2
= 0$ one easily finds
\begin{equation}
k^2 = - z (1 - z) P^2 \,,
\qquad \quad k \cdot P = \frac12 (1 - 2 z) P^2 \, ,
\end{equation}
so that the components of $k$ are
\begin{equation}
k = \left(0^+, - \frac{P^2}{2P^+}, - \sqrt{z(1 - z)\,P^2} \right) \, .
\end{equation}
Note that in the collinear limit $k^-/Q$ scales as $P^2/Q^2$, while
$k_\perp/Q$ scales as $\sqrt{P^2/Q^2}$.
We can now examine the behavior of the logarithms of the three cross
ratios entering $\Delta^{\kin}_4$ in \eqn{kin_4legs}, in the limit $P^2 \to 0$.
Clearly, $L_{1234}$ and $L_{1423}$, which contain the vanishing invariant
$p_1 \cdot p_2$ either in the numerator or in the denominator, will be
singular in this limit. Similarly, it is easy to see that $L_{1342}$
must vanish,
because $\rho_{1342} \to 1$. More precisely, the collinear behavior may be
expressed using the parametrization of \eqn{p1_and_p2}, with the result
\begin{align}
\label{p1p2_get_collinear_rho1234}
\begin{split}
L_{1234} &=
\ln \left(\frac{p_1\cdot p_2 \, p_3 \cdot p_4}
{p_1 \cdot p_3 \, p_2 \cdot p_4} \right) \\
&\simeq \,
\underbrace{\ln\left(\frac{P^2 \ p_3 \cdot p_4}
{2 z (1 - z) \, P \cdot p_3 \, P \cdot p_4}\right)}_{
{\cal O} \left(\ln(P^2/Q^2) \right)}
\, - \underbrace{\frac{k \cdot p_3}{z \, P \cdot p_3}}_{{\cal O}
\left(\sqrt{{P^2}/{Q^2}} \right)}
+ \underbrace{\frac{k \cdot p_4}{(1 - z) \, P \cdot p_4}}_{{\cal O}
\left( \sqrt{{P^2}/{Q^2}} \right)}
\, \to \, \infty \, ,
\end{split}
\\
\label{p1p2_get_collinear_rho1423}
\begin{split}
L_{1423} &=
\ln \left(\frac{p_1 \cdot p_4 \, p_2 \cdot p_3}
{p_1 \cdot p_2 \, p_4 \cdot p_3} \right) \, \\
&\simeq\,
\underbrace{\ln \left(\frac{2 z (1 - z)\ P \cdot p_4 \, P\cdot p_3}
{P^2 \, p_4 \cdot p_3} \right)}_{{\cal O}
\left( \ln(P^2/Q^2) \right)}\,
- \underbrace{\frac{k \cdot p_3}{(1 - z) \, P \cdot p_3}}_{{\cal O}
\left( \sqrt{{P^2}/{Q^2}} \right)}
+ \underbrace{\frac{k \cdot p_4}{z \, P \cdot p_4}}_{{\cal O}
\left( \sqrt{{P^2}/{Q^2}} \right)}
\, \to \, - \infty \, ,
\end{split}
\\
\label{p1p2_get_collinear_rho1342}
\begin{split}
L_{1342} &=
\ln \left(\frac{p_1 \cdot p_3 \, p_4 \cdot p_2}
{p_1 \cdot p_4 \, p_3 \cdot p_2}\right) \,
= \, \frac{1}{z (1 - z)} \,\left(
\frac{k \cdot p_3}{P \cdot p_3} -
\frac{k \cdot p_4}{P \cdot p_4} \right)
\, = \, {\cal O} \left(\sqrt{{P^2}/{Q^2}} \right) \, \to \, 0 \, ,
\end{split}
\end{align}
where we expanded in the small momentum $k$. As expected, two of the
cross-ratio logarithms diverge logarithmically with $P^2/Q^2$, with
opposite signs, while the third cross-ratio logarithm \emph{vanishes
linearly with} $\sqrt{P^2/Q^2}$. We emphasize that this vanishing is
independent of whether the momenta $p_i$ are incoming or outgoing, except,
of course, that the two collinear partons $p_1$ and $p_2$ must either be
both incoming or both outgoing. Indeed, according to \eqn{rhoijkl_mod},
$\rho_{1342}$ carries no phase when $p_1$ and $p_2$ are collinear:
\begin{equation}
\label{rho1342_phase}
\rho_{1342}
=\left|\frac{p_1 \cdot p_3 \ p_4 \cdot p_2}
{p_1 \cdot p_4 \ p_3 \cdot p_2} \right|
{\rm e}^{-{\rm i}\pi(\lambda_{13} + \lambda_{42}
- \lambda_{14} - \lambda_{32})}
\to 1\, ,
\end{equation}
since $\lambda_{13}=\lambda_{32}$ and $\lambda_{42}=\lambda_{14}$.
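These scalings are easy to confirm numerically. In the sketch below (an
illustration with arbitrarily chosen non-collinear directions, not part of the
derivation), $p_1 \to p_2$ as $\theta \to 0$: $L_{1234}$ diverges
logarithmically, $L_{1342}/\sqrt{P^2}$ tends to a constant, and the sum of the
three logarithms vanishes identically, in accordance with
\eqn{three_rho_relation}:
\begin{verbatim}
import numpy as np

def dot(p, q):   # Minkowski product, metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def ray(theta):  # massless momentum at angle theta in the x-z plane
    return np.array([1.0, np.sin(theta), 0.0, np.cos(theta)])

p2, p3, p4 = ray(0.0), ray(2.4), ray(-1.9)   # fixed hard directions
for theta in [1e-1, 1e-2, 1e-3, 1e-4]:
    p1 = ray(theta)                           # p1 -> p2 as theta -> 0
    L1234 = np.log(dot(p1,p2)*dot(p3,p4)/(dot(p1,p3)*dot(p2,p4)))
    L1423 = np.log(dot(p1,p4)*dot(p2,p3)/(dot(p1,p2)*dot(p4,p3)))
    L1342 = np.log(dot(p1,p3)*dot(p4,p2)/(dot(p1,p4)*dot(p3,p2)))
    P2 = 2*dot(p1, p2)                        # P^2 = (p1 + p2)^2
    # L1234 ~ ln(P^2), L1342/sqrt(P^2) -> const, logs sum to zero
    print(theta, L1234, L1342/np.sqrt(P2), L1234 + L1423 + L1342)
\end{verbatim}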
Let us now examine a generic term with a kinematic dependence
of the form~(\ref{kin_4legs}) in this limit. Substituting
eqs.~(\ref{p1p2_get_collinear_rho1234}) through
(\ref{p1p2_get_collinear_rho1342}) into \eqn{kin_4legs} we see
that, if $h_3$ (the power of $L_{1342}$) is greater than or equal
to $1$, then the result for $\Delta^{\kin}_4$ in the collinear limit
is zero. This vanishing is not affected by the powers of the other
logarithms, because they diverge only logarithmically as $P^2/Q^2
\to 0$, while $L_{1342}$ vanishes as a power law in the same limit.
In contrast, if $h_3 = 0$, and $h_1$ or $h_2$ is greater than
zero, then the kinematic function $\Delta^{\kin}_4$ in \eqn{kin_4legs}
diverges when $p_1$ and $p_2$ become collinear, due to the behavior
of $L_{1234}$ and $L_{1423}$ in
\eqns{p1p2_get_collinear_rho1234}{p1p2_get_collinear_rho1423}.
The first term in each of these equations introduces explicit
dependence on the non-collinear parton momenta $p_3$ and $p_4$
into $\Delta_n$ in \eqn{Gamma_Sp_explicit}, which would violate
collinear universality. We conclude that consistency with the limit
where $p_1$ and $p_2$ become collinear requires $h_3 \geq 1$.
Obviously we can consider, in a similar way, the limits where other
pairs of partons become collinear, leading to the conclusion that all
three logarithms, $L_{1234}$, $L_{1423}$ and $L_{1342}$, must appear
raised to the first or higher power. Collinear limits thus constrain
the powers of the logarithms by imposing
\begin{equation}
h_i \geq 1 \, , \qquad \forall \, i \, .
\label{splitting_ampl_constraint}
\end{equation}
This result puts a lower bound on the
transcendentality of~$\Delta^{\rm kin}_4$, namely
\begin{equation}
\label{min_trans}
h_{\rm tot} = h_1 + h_2 + h_3 \geq 3 \, .
\end{equation}
\subsection{Three-loop analysis\label{sec:3loop}}
We have seen that corrections to the sum-over-dipoles formula involving
four-parton correlations are severely constrained. We can now examine
specific structures that may arise at a given loop order $l$, beginning
with the first nontrivial possibility, $l = 3$. Because we consider webs that
are attached to four hard partons, at three loops they can only attach
once to each eikonal line, as in \fig{4Elabeled}, giving the color factor
in \eqn{color}, where $h^{abcd}$ must be constructed out of the structure
constants $f^{ade}$. The only possibility is terms of the form $f^{ade}
f^{bce}$ --- the same form we obtained in the previous section starting
from the symmetry properties of the kinematic factors depending on
$L_{ijkl}$. In contrast, the symmetric tensor $d^{ade}$ cannot arise in
three-loop webs.
Taking into account the splitting amplitude
constraint~(\ref{splitting_ampl_constraint}) on the one hand,
and the maximal transcendentality constraint~(\ref{max_trans})
on the other, there are just a few possibilities for the various powers
$h_i$. These are summarized in Table~\ref{table:_hi}.
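The enumeration behind the table is mechanical; a short scan (illustrative
Python, with the $h_2 \leftrightarrow h_3$ redundancy removed exactly as in
the table caption) reproduces its seven rows:
\begin{verbatim}
from itertools import product

l = 3   # loop order
rows = [(h1, h2, h3)
        for h1, h2, h3 in product(range(1, 2*l), repeat=3)  # h_i >= 1
        if h1 + h2 + h3 <= 2*l - 1                # transcendentality bound
        and h2 <= h3]                             # drop h2 <-> h3 copies
print(rows)   # the seven rows of Table 1
\end{verbatim}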
The lowest allowed transcendentality for $\Delta_4^{\rm kin}$
is $\tau = 3$, corresponding to $h_1 = h_2 = h_3 = 1$. This brings
us to \eqn{Delta_a}, in which we would have to construct
a completely antisymmetric tensor $h^{abcd}$ out of the structure
constants $f^{ade}$. Such a tensor, however, does not exist.
Indeed, starting with the general expression~(\ref{Delta_gen}), which
is written in terms of the structure constants, and substituting $h_1 =
h_2 = h_3 = 1$, we immediately see that the color structure factorizes,
and vanishes by the Jacobi identity~(\ref{Jacobi}). The possibility $h_1 =
h_2 = h_3 = 1$ is thus excluded by Bose symmetry.
Next, we may consider transcendentality $\tau = 4$. Ultimately, we
exclude functions with this degree of transcendentality at three loops,
because we are dealing with purely gluonic webs, which are the same
as in ${\cal N} = 4$ super Yang-Mills theory. We expect then that the
anomalous dimension matrix will have a uniform degree of transcendentality
$\tau = 5$, and there are no constants with $\tau = 1$ that might
multiply functions with $h_{\rm tot} = 4$ to achieve the desired result, as
discussed in \sect{sec:maxtran}. However, it is instructive to note that
symmetry alone does not rule out this possibility. Indeed, having
excluded \eqn{Delta_gen_symm}, involving the symmetric tensor
$d^{ade}$, we may consider \eqn{Delta_gen} with $h_1 + h_2 +
h_3 = 4$. Bose symmetry and the splitting amplitude constraint in
\eqn{splitting_ampl_constraint} leave just two potential structures,
one with $h_1 = 2$ and $h_2 = h_3 = 1$, and a second one with
$h_1 = h_2 = 1$ and $h_3 = 2$ ($h_1 = h_3 = 1$ and $h_2 = 2$
yields the latter structure again). The former vanishes identically, while
the latter could provide a viable candidate,
\begin{align}
\label{Delta_112}
\begin{split}
&\Delta_4^{(112)} (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d} \\
&\hskip1cm
\times \Big[f^{ade}f^{cbe}
L_{1234}\,\Big( L_{1423}\, L_{1342}^{2}
- \, L_{1342} \, L_{1423}^{2}\Big)
\, \\
&\hskip1.5cm
+ f^{cae}f^{dbe}
L_{1423}\,\Big( L_{1342}\, L_{1234}^2
- \, L_{1234} \, L_{1342}^{2}\Big) \\
&\hskip1.5cm
+ f^{bae}f^{cde}
L_{1342}\, \Big(L_{1234}\, L_{1423}^{2}
- \, L_{1423}\, L_{1234}^{2}\Big)
\Big] \,.
\end{split}
\end{align}
We rule out \eqn{Delta_112} based only on its degree of transcendentality.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|l|l l l|}
\hline
$h_1$&$h_2$&$h_3$& $h_{\rm tot}$ & comment&&\\
\hline
1 & 1 & 1 & 3 & vanishes identically by Jacobi
identity~(\ref{Jacobi})&&\\
2 & 1 & 1 & 4 & kinematic factor vanishes identically&&\\
1 & 1 & 2 & 4 & allowed by symmetry, excluded by
transcendentality&& \\
1 & 2 & 2 & 5 & viable possibility, \eqn{Delta_case122}&
\multirow{4}{*}{\hspace*{-130pt} {\fontsize{54}{15}\selectfont $\}$}} &
\multirow{4}{*}{\hspace*{-110pt} all coincide using \eqn{Jacobi} }\\
3 & 1 & 1 & 5 & viable possibility, \eqn{Delta_case311}&&\\
2 & 1 & 2 & 5 & viable possibility, \eqn{Delta_case212}&&\\
1 & 1 & 3 & 5 & viable possibility, \eqn{Delta_case113}&&\\
\hline
\end{tabular}
\end{center}
\caption{Different possible assignments of the powers $h_i$ in
\eqn{Delta_gen} at three loops. We only consider $h_i\geq1$ because
of the splitting amplitude
constraint~(\ref{splitting_ampl_constraint}) and
$h_{\rm tot}\leq 5$ because of the bound on transcendentality,
\eqn{max_trans}. We also omit the combinations that can be obtained
by interchanging the values of $h_2$ and $h_3$;
this interchange yields the same function,
up to a possible overall minus sign.\label{table:_hi}}
\end{table}
We consider next the highest attainable transcendentality at three loops,
$\tau = 5$. \Eqn{Delta_gen} yields four different structures,
summarized in Table~\ref{table:_hi}. The first structure we consider
has $h_1 = 1$ and $h_2 = h_3 = 2$. It is given by
\begin{align}
\label{Delta_case122}
\begin{split}
&\Delta_4^{(122)} (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\Big[ f^{ade} f^{cbe}
L_{1234} \, (L_{1423}\,L_{1342})^{2}\\&
+
f^{cae} f^{dbe}
L_{1423}\, (L_{1234}\, L_{1342})^{2}
+
f^{bae} f^{cde}
L_{1342}\, (L_{1423}\, L_{1234})^{2}
\Big] \, .
\end{split}
\end{align}
The second structure has $h_1 = 3$ and $h_2 = h_3 = 1$, yielding
\begin{align}
\label{Delta_case311}
\begin{split}
&\Delta_4^{(311)}(\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\Big[f^{ade} f^{cbe}
(L_{1234})^{3}\, L_{1423}\, L_{1342}\\&
+
f^{cae} f^{dbe}
(L_{1423})^{3}\,L_{1234}\, L_{1342}
+
f^{bae} f^{cde}
(L_{1342})^{3}\, L_{1423}\, L_{1234}
\Big] \, .
\end{split}
\end{align}
We now observe that the two functions~(\ref{Delta_case122})
and (\ref{Delta_case311}) are, in fact, one and the same. To show
this, we form their difference, and use relation~(\ref{three_rho_relation})
to substitute $L_{1234} = - L_{1423} \, - \, L_{1342}$. We obtain
\begin{align}
\label{122_and_311_combination}
\begin{split}
\Delta_4^{(122)} - \Delta_4^{(311)} \, =&
\ {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\,\, L_{1234} \, L_{1423} \, L_{1342}
\\& \hskip0.1cm \times \bigg[f^{ade}f^{cbe}
\Big( L_{1423}\,L_{1342} \,-\, L_{1234}^{2}\Big)
\, + \,
f^{cae}f^{dbe}
\Big( L_{1234}\, L_{1342} \,-\, L_{1423}^{2}\Big)
\\& \hskip0.5cm +
f^{bae}f^{cde}
\Big(L_{1423}\, L_{1234} \,-\, L_{1342}^{2} \Big)
\bigg]
\\ =&
\, - \, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\,\, L_{1234} \, L_{1423} \, L_{1342}\
\left[f^{ade}f^{cbe}+ f^{cae}f^{dbe}+f^{bae}f^{cde} \right]
\\ & \hskip0.1cm \times
\left( L_{1342}^2 \, + \, L_{1342} \, L_{1423} \,
+ \, L_{1423}^{2} \right) \, = \, 0 \, ,
\end{split}
\end{align}
vanishing by the Jacobi identity~(\ref{Jacobi}).
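The same manipulation can be automated. In the sketch below (illustrative;
$c_1$, $c_2$, $c_3$ stand for the three $ff$ color factors, subject to
$c_1+c_2+c_3=0$ by \eqn{Jacobi}) the difference indeed simplifies to zero once
$L_{1234} = -L_{1423}-L_{1342}$ is imposed:
\begin{verbatim}
import sympy as sp

L1, L2, L3, c1, c2 = sp.symbols('L1 L2 L3 c1 c2')
c3 = -c1 - c2    # Jacobi identity: the three color factors sum to zero

D122 = c1*L1*(L2*L3)**2 + c2*L2*(L1*L3)**2 + c3*L3*(L1*L2)**2
D311 = c1*L1**3*L2*L3   + c2*L2**3*L1*L3   + c3*L3**3*L1*L2

# impose L_{1234} = -L_{1423} - L_{1342}
print(sp.simplify((D122 - D311).subs(L1, -L2 - L3)))   # 0
\end{verbatim}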
The last two structures in Table~\ref{table:_hi} are given by
\begin{align}
\label{Delta_case212}
\begin{split}
\Delta_4^{(212)} (\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\Big[ &f^{ade} f^{cbe}
L_{1234}^2 \, \Big(L_{1423}\,L_{1342}^{2}\,+
\,L_{1423}^{2}\,L_{1342}\Big) \\& +
f^{cae} f^{dbe}
L_{1423}^2\, \Big(L_{1234}\, L_{1342}^{2}\, +\,
L_{1234}^{2}\, L_{1342} \Big) \\& +
f^{bae} f^{cde}
L_{1342}^{2}\, \Big(L_{1423}\, L_{1234}^{2} \,+\,
L_{1423}^{2} \, L_{1234}\Big)
\Big] \, ,
\end{split}
\end{align}
and
\begin{align}
\label{Delta_case113}
\begin{split}
\Delta_4^{(113)}(\rho_{ijkl}) \, = \,
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\Big[&f^{ade} f^{cbe}
L_{1234}\, \Big(L_{1423}\, L_{1342}^{3}\,+\,
L_{1423}^{3}\, L_{1342}\Big) \\& +
f^{cae} f^{dbe}
L_{1423}\,\Big(L_{1234}\, L_{1342}^{3}\,+\,
L_{1234}^{3}\, L_{1342}\Big) \\& +
f^{bae} f^{cde}
L_{1342}\, \Big(L_{1423}\, L_{1234}^{3}\,+\,
L_{1423}^{3}\, L_{1234}\Big) \Big] \, .
\end{split}
\end{align}
One easily verifies that they are both proportional to $\Delta_4^{(122)} =
\Delta_4^{(311)}$. Consider first \eqn{Delta_case212}. In each line we
can factor out the logarithms and use \eqn{three_rho_relation}
to obtain a monomial. For example, the first line may be written as:
\begin{align}
\begin{split}
L_{1234}^2\,\Big( L_{1423}\,L_{1342}^{2}\,+\,L_{1423}^{2}\,L_{1342}\Big)
&=\,L_{1234}^2\,L_{1423}\,L_{1342}\, (L_{1423}+L_{1342}) \\
&=\, - L_{1234}^3 \,L_{1423}\,L_{1342}\, ,
\end{split}
\end{align}
where we recognize that this function coincides with \eqn{Delta_case311} above.
Consider next \eqn{Delta_case113}, where, for example, the first line yields
\begin{align}
\begin{split}
L_{1234}\,\Big(L_{1423}\, L_{1342}^{3}\,+\,L_{1423}^{3}\, L_{1342}\Big) &=
L_{1234}\,L_{1423}\, L_{1342} \Big(L_{1342}^{2}\,+\,L_{1423}^{2}\Big)
\\&=L_{1234}\,L_{1423}\, L_{1342}
\Big(\left(L_{1342}\,+\,L_{1423}\right)^{2}-2L_{1342}\,L_{1423}\Big)\,
\\&=L_{1234}\,L_{1423}\, L_{1342} \Big(L_{1234}^{2}-2L_{1342}\,L_{1423}\Big)\,,
\end{split}
\end{align}
which is a linear combination of \eqns{Delta_case122}{Delta_case311},
rather than a new structure.
We conclude that there is precisely one function,
$\Delta_4^{(122)} = \Delta_4^{(311)}$, that can be
constructed out of arbitrary powers of logarithms and is consistent
with all available constraints at three loops.
We emphasize that this function is built with color and kinematic
factors that one expects to find in the actual web diagram
computations, and it is quite possible that it indeed appears.
Because this structure saturates the transcendentality bound, its
coefficient is necessarily a rational number.
Note that color conservation has not been imposed here, but
it is implicitly assumed that for a four-parton amplitude
\begin{equation}
{\bf T}_1^{a} \,+\, {\bf T}_2^{a}
\,+\, {\bf T}_3^{a} \,+\, {\bf T}_4^{a} = 0 \, .
\end{equation}
Importantly, upon using this relation, the structure~(\ref{Delta_case122})
(or, equivalently, (\ref{Delta_case311})) remains non-trivial.
\subsection{Three-loop functions involving polylogarithms\label{sec:poly}}
Additional functions can be constructed upon removing the requirement
that the kinematic dependence be of the form~(\ref{kin_4legs}),
where only powers of logarithms are allowed. Three key
features of the function $\ln\rho$ were essential in the examples above:
it vanishes like a power at $\rho = 1$, it has a definite symmetry under
$\rho \to 1/\rho$, and it only diverges logarithmically as
$\rho \to 0$ and $\rho \to \infty$. These properties can be mimicked by
a larger class of functions. In particular, allowing dilogarithms one
can easily
construct a function of transcendentality $\tau = 4$, which is consistent
with Bose symmetry and collinear constraints. It is given by
\begin{eqnarray}
\label{Delta_case211_dilog}
\Delta_4^{(211,\, {\rm Li_2})} (\rho_{ijkl}) & = &
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}\\
&& \bigg[ f^{ade}f^{cbe}
\Big({\rm Li}_2(1-\rho_{1234}) - {\rm Li}_2(1-1/\rho_{1234})\Big)
\, \ln\rho_{1423}\ \ln\rho_{1342}
\nonumber \\ && + f^{cae}f^{dbe}
\Big({\rm Li}_2(1-\rho_{1423}) - {\rm Li}_2(1-1/\rho_{1423})\Big)
\ln\rho_{1234}\ \ln\rho_{1342}
\nonumber \\ && + f^{bae}f^{cde}
\Big({\rm Li}_2(1-\rho_{1342}) - {\rm Li}_2(1-1/\rho_{1342})\Big)
\,\ln\rho_{1423}\ \ln\rho_{1234}
\bigg] \,. \nonumber
\end{eqnarray}
The key point here is that the function ${\rm Li}_2(1 - \rho_{1234}) -
{\rm Li}_2(1 - 1/\rho_{1234})$ is odd under $\rho_{1234} \to
1/\rho_{1234}$, which allows it to be paired with the antisymmetric
structure constants $f^{ade}$. It is also easy to verify that the
collinear constraints are satisfied.
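Both properties are immediate to confirm numerically (an illustrative sketch;
real $\rho > 0$ is assumed):
\begin{verbatim}
import mpmath as mp

def g(rho):
    # the dilogarithm combination of eq. (Delta_case211_dilog)
    return mp.polylog(2, 1 - rho) - mp.polylog(2, 1 - 1/rho)

rho = mp.mpf('2.7')
print(g(rho) + g(1/rho))     # 0: odd under rho -> 1/rho
for eps in ['1e-2', '1e-4', '1e-6']:
    e = mp.mpf(eps)
    print(g(1 + e)/e)        # -> -2: vanishes linearly at rho = 1
\end{verbatim}
The linear vanishing at $\rho = 1$ is what guarantees the collinear
constraint, in analogy with the power-law vanishing of $\ln\rho$.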
We note that it is also possible to construct a potentially relevant function
containing logarithms with a more complicated kinematic dependence.
Indeed, the structure
\begin{eqnarray}
\label{Delta_case211_mod}
\Delta_4^{(211, \, {\rm mod})} (\rho_{ijkl}) & = &
\, {\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\\ && \Bigg[f^{ade}f^{cbe}
\ln \rho_{1234}\ \ln\left(\frac{\rho_{1234}}
{(1 - \rho_{1234})^2}\right) \,\ln\rho_{1423}\ \ln\rho_{1342}
\nonumber \\ && +
f^{cae}f^{dbe}
\ln \rho_{1423}\ \ln\left(\frac{\rho_{1423}}
{(1 - \rho_{1423})^2}\right)\, \ln\rho_{1234}\ \ln\rho_{1342}
\nonumber \\ && +
f^{bae}f^{cde}
\ln \rho_{1342}\,\,\ln\left(\frac{\rho_{1342}}
{(1 - \rho_{1342})^2}\right) \,\ln\rho_{1423}\ \ln\rho_{1234}
\Bigg] \, \nonumber
\end{eqnarray}
fulfills the symmetry requirements discussed above, because
$\ln\left({\rho_{1234}}/{(1 - \rho_{1234})^2}\right)$ is even under
$\rho_{1234} \to 1/\rho_{1234}$. Thanks to the extra power of the
cross-ratio logarithm, eq.~(\ref{Delta_case211_mod}) also vanishes in all
collinear limits, as required. Logarithms with argument $1 - \rho_{ijkl}$
cannot be directly rejected on the basis of the fact that they induce
unphysical singularities, because $\rho_{ijkl} \to 1$ corresponds to a
physical collinear limit\footnote{Note that the analogous structure
containing $\ln \left({\rho_{1234}}/{(1 + \rho_{1234})^2}\right)$ can be
excluded because the limit $\rho_{ijkl} \to -1$ should not be
singular. Indeed, by construction the variables $\rho_{ijkl}$ always
contain an even number of negative momentum invariants, so their real part
is always positive (although unitarity phases may add up and bring their
logarithm to the second Riemann sheet).}. We conclude that
\eqns{Delta_case211_dilog}{Delta_case211_mod} would be viable based on
symmetry and collinear requirements alone. However, we can exclude them
on the basis of transcendentality: as discussed in \sect{sec:maxtran}, a
function with $h_{\rm tot} = 4$ cannot arise at three loops, because it
cannot be upgraded to maximal transcendentality $\tau=5$ by constant
prefactors.
At transcendentality $\tau = 5$, there are at least two further viable
structures that involve polylogarithms, in which second and
third powers of logarithms are replaced, respectively, by appropriate
combinations ${\rm Li}_2$ and ${\rm Li}_3$. The first structure can be obtained
starting from \eqn{Delta_case122}, and using the same combination of
dilogarithms that was employed in \eqn{Delta_case211_dilog}. One finds
\begin{eqnarray}
\label{Delta_case122_mod}
&& \hspace{5mm} \Delta_4^{(122, \, {\rm Li_2})} (\rho_{ijkl}) \, = \,
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\\ && \hspace{-3mm} \times \bigg[f^{ade}f^{cbe}
\ln\rho_{1234}\,
\Big({\rm Li}_2(1 - \rho_{1342}) - {\rm Li}_2 (1 - 1/\rho_{1342}) \Big)
\Big({\rm Li}_2(1 - \rho_{1423}) - {\rm Li}_2 (1 - 1/\rho_{1423}) \Big)
\nonumber \\ && \hspace{-3mm} + f^{cae}f^{dbe}
\ln\rho_{1423}\,
\Big({\rm Li}_2(1-\rho_{1234}) - {\rm Li}_2(1-1/\rho_{1234})\Big)
\Big({\rm Li}_2(1-\rho_{1342}) - {\rm Li}_2(1-1/\rho_{1342})\Big)
\nonumber \\ && \hspace{-3mm} + f^{bae}f^{cde}
\ln\rho_{1342}\,
\Big({\rm Li}_2(1-\rho_{1234}) - {\rm Li}_2(1-1/\rho_{1234})\Big)
\Big({\rm Li}_2(1-\rho_{1423}) - {\rm Li}_2(1-1/\rho_{1423})\Big)
\bigg] . \nonumber
\end{eqnarray}
Here it was essential to replace both $\ln^2$ terms in order to keep
the symmetry properties in place. Starting instead from
\eqn{Delta_case311}, there is one possible polylogarithmic
replacement, which, however, requires introducing trilogarithms,
because using ${\rm Li}_2$ times a logarithm would turn the odd
function into an even one, which is excluded. One may write instead
\begin{eqnarray}
\label{Delta_case311_mod}
\Delta_4^{(311, \, {\rm Li_3})} (\rho_{ijkl}) & = &
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}\\
&& \times \bigg[f^{ade}f^{cbe}
\Big({\rm Li}_3(1-\rho_{1234}) - {\rm Li}_3(1-1/\rho_{1234})\Big)
\, L_{1423}\, L_{1342}
\nonumber \\
&& \hskip0.4cm +f^{cae}f^{dbe}
\Big({\rm Li}_3(1-\rho_{1423}) - {\rm Li}_3(1-1/\rho_{1423})\Big)
\,L_{1234}\, L_{1342}
\nonumber \\
&& \hskip0.4cm +f^{bae}f^{cde}
\Big({\rm Li}_3(1-\rho_{1342}) - {\rm Li}_3(1-1/\rho_{1342})\Big)
\, L_{1423}\, L_{1234}
\bigg] \, . \nonumber
\end{eqnarray}
Neither \eqn{Delta_case122_mod} nor \eqn{Delta_case311_mod} can be
excluded at present, as they satisfy all available constraints.
We can, however, exclude similar constructions with higher-order
polylogarithms. For example, ${\rm Li}_4$ has transcendentality
$\tau = 4$, so it could be accompanied by at most one logarithm; this
product would not satisfy all collinear constraints.
We do not claim to be exhaustive in our investigation of
polylogarithmic functions; additional possibilities may arise upon allowing
arguments of the polylogarithms that have a different functional
dependence on the cross ratios.
\subsection{Four-loop analysis}
Let us briefly turn our attention to contributions that may arise beyond
three loops. At the four-loop level several new possibilities open up. First,
there are potential quartic Casimirs in $\gamma_K$. Corresponding
corrections to the soft anomalous dimension would satisfy
inhomogeneous differential equations, given in eq.~(5.5) of Ref.~\cite{Gardi:2009qi}.
Beyond that, new types of corrections may appear even if $\gamma_K$
admits Casimir scaling. First, considering the logarithmic expressions of
\eqn{kin_4legs}, purely gluonic webs might give rise to functions
of transcendentality up to $h_{\rm tot} = h_1 + h_2 + h_3 = 7$.
At this level, there are four potential functions: (a) $h_1 = 5$ and
$h_2 = h_3 = 1$; (b) $h_1 = 4$, $h_2 = 2$, $h_3 = 1$; (c) $h_1 = 1$
and $h_2 = h_3 = 3$; (d) $h_1 = 3$ and $h_2 = h_3 = 2$.
Of course, as in the three-loop case, polylogarithmic structures may also
appear, and functions with $h_{\rm tot} \leq 5$ might be present (of the
type already discussed at three loops), multiplied by transcendental
constants with $\tau\geq2$.
It is interesting to focus in particular on color structures that are related
to quartic Casimir operators, which can appear at four loops not only in
$\gamma_K$ but also in four-parton correlations. Indeed, a structure
allowed by Bose symmetry and collinear constraints is given by
\eqn{Delta_a}, where the group theory factor $h^{abcd}$ is generated
by a pure-gluon box diagram attached to four different hard partons, giving
rise to a trace of four adjoint matrices. It is of the form
\begin{equation}
\label{four_loop_example_adjoint}
\Delta_4^{C_{4, A}} (\rho_{ijkl}) \, = \,
{\rm Tr}\left[F^{a}F^{b}F^{c}F^{d}\right]
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\,\,\Big[ \ln\rho_{1234} \, \ln\rho_{1423} \, \ln\rho_{1342}
\Big]^{h} \, ,
\end{equation}
where $F^a$ are the ${\rm SU}(N_c)$ generators in the adjoint
representation, $(F^a)_{bc} = - {\rm i} f^a_{\, \, \, bc}$. This
expression may be relevant {\it a priori} for both odd and even $h$,
projecting respectively on the totally antisymmetric or symmetric
parts of ${\rm Tr}\left[F^{a}F^{b}F^{c}F^{d}\right]$. As noted
above, however, a totally antisymmetric tensor cannot be constructed
with four adjoint indices, so we are left with the completely symmetric
possibility, which indeed corresponds to the quartic Casimir operator.
The transcendentality constraint comes into play here: the only even
integer $h$ that can give transcendentality $\tau \leq 7$ is $h = 2$.
For $\NeqFour$ super-Yang-Mills theory, \eqn{four_loop_example_adjoint}
with $h=2$ can be excluded at four loops, because there is no
numerical constant with $\tau = 1$ that could bring the
transcendentality of \eqn{four_loop_example_adjoint} from 6 up to 7.
On the other hand, in theories with a lower number of supersymmetries,
and at four loops, there are potentially both pure-glue and matter-loop
contributions of lower transcendentality, because only the specific loop
content of $\NeqFour$ super-Yang-Mills theory is expected to be purely
of maximal transcendentality. Thus \eqn{four_loop_example_adjoint} may
be allowed for $h=2$ for generic adjoint-loop contributions (for example
a gluon box in QCD), and analogously
\begin{equation}
\label{four_loop_example_fundm}
\Delta_4^{C_{4, F}} (\rho_{ijkl}) \, = \,
{\rm Tr}\left[t^{a}t^{b}t^{c}t^{d}\right]
\,{\bf T}_1^{a} {\bf T}_2^{b} {\bf T}_3^{c} {\bf T}_4^{d}
\,\,\Big[ \ln\rho_{1234} \, \ln\rho_{1423} \, \ln\rho_{1342}
\Big]^{h} \,,
\end{equation}
for loops of matter in the fundamental representation (with
generators $t^a$), {\it e.g.} from quark box diagrams.
As before, the other power allowed by
transcendentality, $h=1$, is excluded by symmetry,
because there is no projection of
${\rm Tr}\left[t^{a}t^{b}t^{c}t^{d}\right]$
that is totally antisymmetric under permutations.
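This last statement is also easy to confirm by brute force: antisymmetrizing
the fundamental trace over all $4!$ index permutations gives zero. The sketch
below (illustrative only, for ${\rm SU}(3)$ with Gell-Mann generators)
performs this projection numerically:
\begin{verbatim}
import numpy as np
from itertools import permutations

def E(i, j):
    m = np.zeros((3, 3), dtype=complex); m[i, j] = 1; return m

lam = [E(0,1)+E(1,0), -1j*E(0,1)+1j*E(1,0), E(0,0)-E(1,1),
       E(0,2)+E(2,0), -1j*E(0,2)+1j*E(2,0), E(1,2)+E(2,1),
       -1j*E(1,2)+1j*E(2,1), (E(0,0)+E(1,1)-2*E(2,2))/np.sqrt(3)]
t = np.array(lam) / 2                      # fundamental generators t^a

T = np.einsum('aij,bjk,ckl,dli->abcd', t, t, t, t)  # Tr[t^a t^b t^c t^d]

def sign(p):                               # parity of a permutation
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]; p[i], p[j] = p[j], p[i]; s = -s
    return s

antisym = sum(sign(p) * T.transpose(p) for p in permutations(range(4)))
print(np.abs(antisym).max())               # ~ 1e-16: no antisymmetric part
\end{verbatim}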
While \eqn{four_loop_example_adjoint} is excluded by transcendentality
for $\NeqFour$ super-Yang-Mills theory, another construction involving
the quartic Casimir is allowed: \eqn{Delta_gen_symm_alt}
for $h_1 = 2$, $h_2 = h_3 = 1$ can be used, after multiplying it by the
transcendentality $\tau = 3$ constant $\zeta(3)$. Finally, as already
mentioned, there are a number of other purely logarithmic structures
with partial symmetry in each term, as represented
by~\eqns{Delta_gen}{Delta_gen_symm}, that may appear at four loops.
\section{Generalization to $n$-leg amplitudes\label{sec:n-leg}}
The above analysis focused on the case of four partons, because this is
the first case where cross ratios can be formed, and thus $\Delta$ may
appear. However, rescaling-invariant ratios involving more than four
momenta can always be split into products of cross ratios involving
just four momenta. Therefore it is straightforward to generalize
the results we obtained to any $n$-parton process at three loops.
Indeed, contributions to
$\Delta_n$ are simply constructed as a sum over all possible sets of
four partons,
\begin{equation}
\label{Delta_n_legs}
\Delta_n = \sum_{i,j,k,l} \Delta_4(\rho_{ijkl}) \, ,
\end{equation}
just as the sum-over-dipoles formula~(\ref{barS_ansatz}) is written as
a sum over all possible pairs of legs. The indices in the sum in
\eqn{Delta_n_legs} are of course all unequal. Assuming a purely
logarithmic structure, at three loops the function $\Delta_4$ in
\eqn{Delta_n_legs} is given by $\Delta_4^{(122)}$ in \eqn{Delta_case122}
(or, equivalently, $\Delta_4^{(311)}$ in \eqn{Delta_case311}).
Of course the overall prefactor to $\Delta_4$ could still be zero;
its value remains to be determined by an explicit computation.
The total number of terms in the
sum increases rapidly with the number of legs: for $n$ partons, there
are $({n \atop 4})$ different terms.
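The counting that underlies the consistency check below splits these
quadruples according to how many of the two collinear legs they contain; as a
trivial but reassuring sanity check (an illustrative snippet, not part of the
argument):
\begin{verbatim}
from math import comb

# C(n,4) quadruples = (none of legs 1,2) + (exactly one) + (both)
for n in range(5, 12):
    assert comb(n, 4) == comb(n-2, 4) + 2*comb(n-2, 3) + comb(n-2, 2)
print("quadruple bookkeeping consistent for n = 5..11")
\end{verbatim}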
Now we wish to show that this generalization is a consistent one.
To do so, we shall verify that the difference between $\Delta_n$ and
$\Delta_{n - 1}$ in \eqn{Gamma_Sp_explicit},
for the splitting amplitude anomalous dimension,
does not introduce any dependence on the kinematics or color of
any partons other than the collinear pair.
The verification is non-trivial for $n\geq5$ because
$\Delta_{n - 1}$ is no longer zero.
Consider the general $n$-leg amplitude, in which the two legs $p_1$
and $p_2$ become collinear. The terms entering
\eqn{Gamma_Sp_explicit} include:
\begin{itemize}
\item{} A sum over $({n - 2 \atop 4})$ different terms in $\Delta_n$ that
do not involve any of the legs that become collinear. They depend on
the cross ratios $\rho_{ijkl}$ where none of the indices is $1$ or $2$.
However, exactly the same terms appear in $\Delta_{n - 1}$, so
they cancel in \eqn{Gamma_Sp_explicit}.
\item{} A sum over $({n - 2 \atop 2})$ different terms in $\Delta_n$
depending on the variables $\rho_{12ij}$ (and permutations), where
$i, \, j \neq 1, \, 2$. These variables involve the two legs that become
collinear. According to \eqn{Delta_n_legs}, each of these
terms is $\Delta_4$, namely it is given by a sum of terms that admit the
constraint~(\ref{splitting_ampl_constraint}). Therefore each of them is
guaranteed to vanish in the collinear limit, and we can discard them
from \eqn{Gamma_Sp_explicit}. The same argument applies to any
$\Delta_4$ that is consistent at the four-parton level, such as the
polylogarithmic constructions~(\ref{Delta_case122_mod}) and
(\ref{Delta_case311_mod}).
\item{} Finally, $\Delta_n$ brings a sum over
$2 \times ({n - 2 \atop 3})$ terms involving just one leg among
the two that become collinear.
These terms depend on $\rho_{1jkl}$ or $\rho_{2jkl}$,
where $j, \, k, \, l \neq 1, \, 2$. In contrast, $\Delta_{n - 1}$ brings just
one set of such terms, because the $(12)$ leg, $P = p_1 + p_2$, is now
counted once. Recalling, however, that this leg carries the color
charge
\begin{equation}
\label{color_consrv_12}
{\bf T}^{a}\,=\,{\bf T}_1^{a} \,+\, {\bf T}_2^{a} \, ,
\end{equation}
it becomes clear that any term of this sort having a color factor of
the form~(\ref{color}) would cancel out in the difference. Indeed
\begin{align}
\label{collmixed}
\begin{split}
&\Delta_n \left(\rho_{1 j k l}, \rho_{2 j k l} \right)
- \Delta_{n - 1} \left(\rho_{(12) j k l} \right) \\
& \hspace{-2mm} = \,
\sum_{j,k,l} \, h^{abcd} \, {\bf T}_j^{b} {\bf T}_k^{c}
{\bf T}_l^{d} \,
\Bigg[ \underbrace{\,{\bf T}_1^{a} \Delta^{\kin}(\rho_{1jkl})
+ \,{\bf T}_2^{a}
\, \Delta^{\kin}(\rho_{2jkl})}_{\text{from}\, \Delta_n}
- \underbrace{ \,{\bf T}^{a}
\, \Delta^{\kin}(\rho_{(12) j k l})}_{\text{from}\, \Delta_{n - 1}} \Bigg]
\\
& \hspace{-2mm} = \, 0 \, .
\end{split}
\end{align}
To show that this combination vanishes we used \eqn{color_consrv_12}
and the fact that the kinematic factor $\Delta^{\kin}$ in all
three terms is identical because of rescaling invariance, that is,
it depends only on the directions of the partons, which coincide
in the collinear limit.
\end{itemize}
We conclude that \eqn{Delta_n_legs} is consistent with the limit as
any two of the $n$ legs become collinear.
A similar analysis also suggests that \eqn{Delta_n_legs} is
consistent with the triple collinear limit in which $p_1$, $p_2$ and $p_3$
all become parallel. We briefly sketch the analysis.
We assume that there is a universal factorization
in this limit, in which the analog of ${\bf Sp}$ again only depends on the
triple-collinear variables: $P^2\equiv (p_1+p_2+p_3)^2$, which vanishes
in the limit; $2p_1\cdot p_2/P^2$ and $2p_2\cdot p_3/P^2$;
and the two independent longitudinal momentum fractions for the $p_i$, namely
$z_1$ and $z_2$ (and $z_3=1-z_1-z_2$) --- see {\it e.g.}
Ref.~\cite{Catani:2003vu} for a discussion at one loop.
In the triple-collinear limit there are the following types of
contributions:
\begin{itemize}
\item $({n-3 \atop 4})$ terms in $\Delta_n$
that do not involve any of the collinear legs. They cancel
in the analog of~\eqn{Gamma_Sp_explicit} between $\Delta_n$ and
$\Delta_{n-2}$, exactly as in the double-collinear case.
\item $3 \times ({n - 3 \atop 3})$ terms containing cross ratios
of the form $\rho_{1jkl}$, or similar terms with 1 replaced by 2 or 3.
These contributions cancel exactly as in \eqn{collmixed}, except
that there are three terms and the color conservation equation is
${\bf T}^{a}={\bf T}_1^{a}+{\bf T}_2^{a}+{\bf T}_3^{a}$.
\item $3 \times ({n - 3 \atop 2})$ terms containing cross ratios
of the form $\rho_{12kl}$, or similar terms with $\{1,2,3\}$ permuted.
These terms cancel for the same reason as the $\rho_{12ij}$ terms
in the double-collinear analysis, namely one of the logarithms is
guaranteed to vanish.
\item $(n-3)$ terms containing cross ratios of the form $\rho_{123l}$.
This case is non-trivial, because no logarithm vanishes (no cross ratio
goes to 1). However, it is easy to verify that in the limit, each of the
cross ratios that appears depends only on the triple-collinear
kinematic variables, and in a way that is independent of $p_l$.
Therefore the color identity
$\sum_{l\neq1,2,3} {\bf T}_l^{a} = - {\bf T}^{a}$
can be used to express the color, as well as the kinematic dependence,
of the limit of $\Delta_n$ solely in terms of the collinear variables,
as required by universality.
\end{itemize}
Thus all four contributions are consistent with a universal
triple-collinear limit. However, because the last type of contribution
is non-vanishing in the limit, in contrast to the
double-collinear case, the existence of a non-trivial $\Delta_n$
would imply a new type of contribution to the $1/\epsilon$ pole
in the triple-collinear splitting function, beyond that implied by the
sum-over-dipoles formula.
We conclude that \eqn{Delta_n_legs} provides a straightforward and consistent
generalization of the structures found in the four-parton case to $n$
partons. At the three-loop level, if four-parton correlations arise, they
contribute to the anomalous dimension matrix through a sum over
color `quadrupoles' of the form~(\ref{Delta_n_legs}).
At higher loops, of course, structures that directly correlate the colors
and momenta of more than four partons may also arise.
\section{Conclusions\label{sec:conc}}
Building upon the factorization properties of massless scattering
amplitudes in the soft and collinear limits, recent
work~\cite{Gardi:2009qi,Becher:2009qa} determined the principal
structure of soft singularities in multi-leg amplitudes.
It is now established that the cusp anomalous dimension $\gamma_K$ controls
all pairwise interactions amongst the hard partons, to all loops, and for
general~$N_c$. The corresponding contribution to the soft anomalous
dimension takes the elegant form of a sum over color dipoles,
directly correlating color and kinematic degrees of freedom. This
recent work also led to strong constraints on any additional
singularities that may arise, thus opening a range of interesting
questions.
In the present paper we studied multiple constraints on the form of
potential soft singularities that couple directly four hard partons, which
may arise at three loops and beyond.
We focused on potential corrections to the
sum-over-dipoles formula that do not require the presence of higher
Casimir contributions to the cusp anomalous dimension $\gamma_K$.
The basic property of these functions is that they satisfy the
homogeneous set of differential
equations~(\ref{Delta_oureq_reformulated}), and therefore they can be
written in terms of conformally invariant cross
ratios~\cite{Gardi:2009qi}.
Our main conclusion is that indeed, potential structures of this kind
may arise starting at three loops. Their functional dependence on
both color and kinematic variables is, however, severely constrained by
\begin{itemize}
\item{} Bose symmetry;
\item{} Sudakov factorization and momentum-rescaling symmetry,
dictating that corrections must be
functions of conformally invariant cross ratios;
\item{} collinear limits, in which the (expected) universal properties
of the splitting amplitude force corrections to vanish (for $n = 4$
partons) or be smooth (for $n > 4$ partons) in these limits;
\item{} transcendentality, a bound on which is expected to be
saturated at three loops, based on the properties of ${\cal N}= 4$
super-Yang-Mills theory.
\end{itemize}
In the three-loop case, assuming purely logarithmic dependence on the
cross ratios, these constraints combine to exclude all but one
specific structure. The three-loop result for $\Delta_n$ can therefore
be written in terms of the expression $\Delta_4^{(122)}$ in
\eqn{Delta_case122}, up to an overall numerical coefficient.
Because this structure has the maximal possible
transcendentality, $\tau=5$, its coefficient is a
rational number. For all we know now, however, this
coefficient may vanish. It remains for future work to decide
whether this contribution is present or not.
Considering also polylogarithmic functions of conformally
invariant cross ratios in $\Delta_4$, we find that at three loops
at least two additional acceptable functional forms arise,
\eqns{Delta_case122_mod}{Delta_case311_mod}.
The range of admissible functions at four loops is even larger.
A particularly interesting feature at this order is the possible
appearance of contributions proportional to quartic Casimir
operators, not only in the cusp anomalous dimension, but in
four-parton correlations as well.
Explicit computations at three and four loops will probably be
necessary to take the next steps toward a complete understanding of
soft singularities in massless gauge theories.
\vskip0.8cm
{\bf Acknowledgments}
\vskip0.2cm
We thank Thomas Binoth, David Kosower and George Sterman for stimulating
discussions. We thank Thomas Becher and Matthias Neubert for a suggestion
leading to a streamlined analysis in \sect{sec:SA}.
L.~D. and L.~M. thank CERN for hospitality while this work was completed.
This research was supported by the US Department of Energy under contract
DE--AC02--76SF00515, by MIUR (Italy) under contract 2006020509$\_$004,
and by the European Community's Marie-Curie Research Training Network
`Tools and Precision Calculations for Physics Discoveries at
Colliders' (`HEPTOOLS'), under contract MRTN-CT-2006-035505.
\vskip1.2cm
\section{Introduction}
If $P$ is a reversible Markov chain over a sample space $\Omega$, and $\pi$ is a reversibility function (not necessarily a probability distribution), then $P$ is a self-adjoint operator in $\ell^2(\pi)$, the space generated by the inner product
$$<f,g>_{\pi}=\sum_{x \in S} f(x)g(x)\pi(x)$$
induced by $\pi$. If $P$ is tridiagonal operator (i.e. a nearest-neighbor random walk) on $\Omega=\{0,1,2,\dots\}$, then it must have a simple spectrum, and is diagonalizable via orthogonal polynomials as it was studied in the 50's and 60's by Karlin and McGregor, see \cite{km2a}, \cite{szego}. There the extended eigenfuctions $Q_j(\lambda)$ ($Q_0 \equiv 1$) are orthogonal polynomials with respect to a probability measure $\psi$ and
$$p_t(i,j)=\pi_j \int_{-1}^1 \lambda^t Q_i(\lambda) Q_j(\lambda) d\psi(\lambda)~~~\forall i,j \in \Omega, $$
where $\pi_j$ ($\pi_0=1$) is the reversibility measure of $P$.
In this paper we test the possibility of calculating mixing rates using the Karlin-McGregor diagonalization with orthogonal polynomials.
In order to measure the rate of convergence to a stationary distribution, the following distance is used.
\begin{Def}
If $\mu$ and $\nu$ are two probability distributions over a sample space $\Omega$, then the {\it total variation distance} is
$$\| \nu - \mu \|_{TV} = {1 \over 2} \sum_{x \in \Omega} |\nu(x)-\mu(x)|=\sup_{A \subset \Omega} |\nu(A)-\mu(A)|$$
Observe that the total variation distance measures the discrepancy between the distributions on a scale from zero to one.
\end{Def}
\noindent
If $\rho=\sum_{k=0}^{\infty} \pi_k < \infty$, then $\nu={1 \over \rho}\pi$ is the stationary probability distribution. If, in addition, the aperiodic nearest-neighbor Markov chain originates at site $i$, then
the total variation distance between the distribution $\mu_t=\mu_0P^t$ and $\nu$ is given by
\begin{eqnarray*}
\left\|\nu - \mu_t \right\|_{TV}
& = & {1 \over 2} \sum_{j} \pi_j \left|\int_{(-1,1)} \lambda^t Q_i(\lambda) Q_j(\lambda) d\psi(\lambda)\right|,
\end{eqnarray*}
as the measure $\psi$ contains a point mass of weight ${1 \over \rho}$ at $1$,
see \cite{kov}.
The rates of convergence are quantified via mixing times. In the case of a Markov chain over an infinite state space with a unique stationary distribution, the notion of a mixing time depends on the state of origination of the chain.
\begin{Def}
Suppose $P$ is a Markov chain with a stationary probability distribution $\nu$ that commences at $X_0=i$. Given an $\epsilon >0$, the mixing time $t_{mix}(\epsilon)$ is defined as
$$t_{mix}(\epsilon)=\min\left\{t~:~\|\nu-\mu_t\|_{TV} \leq \epsilon \right\}$$
\end{Def}
\vskip 0.2 in
\noindent
In the case of a nearest-neighbor process on $\Omega=\{0,1,2,\dots\}$ commencing at $i$, the corresponding mixing time has the following simple expression in orthogonal polynomials
$$t_{mix}(\epsilon)=\min\left\{t~:~\sum_{j} \pi_j \left|\int_{(-1,1)} \lambda^t Q_i(\lambda) Q_j(\lambda) d\psi(\lambda)\right| \leq 2 \epsilon \right\}$$
Observe that the above expression is simplified when $i=0$. Here we concentrate on calculating mixing times for simple positive recurrent nearest-neighbor Markov chains over $\Omega$, originating from $i=0$. Our main result concerns the distance to stationarity for a simple random walk with a drift.
In the main theorem and its corollary we will explore the following Markov chain
$$P=\left(\begin{array}{ccccc}0 & 1 & 0 & 0 & \dots \\q & r & p & 0 & \dots \\0 & q & r & p & \ddots \\0 & 0 & q & r & \ddots \\\vdots & \vdots & \ddots & \ddots & \ddots\end{array}\right) \qquad q>p,~~~r>0$$
\begin{thm} \label{thm1} Suppose the above Markov chain begins at the origin, $i=0$. Consider the orthogonal polynomials $Q_n$ for the chain. Then the integral
$\int_{(-1,1)} \lambda^t Q_n(\lambda) d\psi(\lambda)$ can be expressed as\\
${(1+q-p)(q+r)-q \over (1+q-p)(q+r)}\cdot\left(-{q \over q+r}\right)^{t+n}
+\left(\sqrt{q \over p}\right)^n \left({p \over q+r}\right) {1 \over 2\pi i}\oint_{|z|=1}{(\sqrt{pq}(z+z^{-1})+r)^t z^n (z-z^{-1}) \over \left(z-\sqrt{p \over q}{r+(1+q-p) \over 2(q+r)}\right)\left(z-\sqrt{p \over q}{r-(1+q-p) \over 2(q+r)}\right)}dz$
and the total variation distance $\left\|\nu - \mu_t \right\|_{TV} $ is bounded above by\\
$$A\left({q \over q+r}\right)^t +B(r+2\sqrt{pq})^t,$$
where $A={(1+q-p)(q+r)-q \over (1+q-p)(1-2p)}$ and $B={\left({p \over q+r}\right)\left(1+{1 \over \sqrt{pq}-p}\right) \over \left(1-\sqrt{p \over q}{r+(1+q-p) \over 2(q+r)}\right)\left(1+\sqrt{p \over q}{r-(1+q-p) \over 2(q+r)}\right)}$.
Therefore, taking $\varepsilon \downarrow 0$, the mixing time $$t_{mix}(\varepsilon)=O\left({\log(\varepsilon) \over \log m(p,q)}\right),$$
where $m(p,q)=\max\left[(r+2\sqrt{pq}),\left({q \over q+r}\right)\right]$.
\end{thm}
Observe that in the above complex integral all three finite poles are located inside the unit circle. Thus we only need to consider a pole at infinity.
The proof is provided in section \ref{proof}. The result in Theorem \ref{thm1} is the first instance in which the Karlin-McGregor orthogonal polynomial approach is used to estimate mixing rates. As suggested in \cite{kov}, we would like the approach to work for a larger class of reversible Markov chains over an infinite state space with a unique stationary distribution.
There is an immediate corollary (see section \ref{proof}):
\begin{cor}
If ${q \over q+r}>r+2\sqrt{pq}$,
$$\left\|\nu - \mu_t \right\|_{TV} \geq A\left({q \over q+r}\right)^t - B(r+2\sqrt{pq})^t$$
for $t$ large enough, i.e. we have a lower bound of matching order.
\end{cor}
Observe that one can easily adjust these results for any origination site $X_0=i$.
In the next section we will compare the above Karlin-McGregor approach to some of the classical techniques for estimating the ``distance to stationarity'' $\left\|\nu - \mu_t \right\|_{TV}$.
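As a numerical illustration of Theorem \ref{thm1} (a sketch only: the chain is
truncated to a finite window, which is an artifact of the check and not of the
theorem; the parameters are those of the worked example in the next section),
one can compare the exact distance to stationarity with the stated bound:
\begin{verbatim}
import numpy as np

p, q = 1/11, 9/11; r = 1 - p - q     # the worked example below
N = 300                              # truncation level (artifact of the check)

P = np.zeros((N, N)); P[0, 1] = 1.0  # transition matrix on {0,...,N-1}
for k in range(1, N):
    P[k, k-1], P[k, k] = q, r
    if k + 1 < N: P[k, k+1] = p
P[N-1, N-1] += p                     # keep rows stochastic at the boundary

pi = np.ones(N)                      # reversibility measure: pi_0 = 1,
pi[1:] = (1/q) * (p/q) ** np.arange(N-1)   # pi_k = (1/q)(p/q)^(k-1)
nu = pi / pi.sum()                   # stationary distribution pi / rho

A = ((1+q-p)*(q+r) - q) / ((1+q-p)*(1-2*p))
B = ((p/(q+r)) * (1 + 1/(np.sqrt(p*q) - p))
     / ((1 - np.sqrt(p/q)*(r + (1+q-p))/(2*(q+r)))
      * (1 + np.sqrt(p/q)*(r - (1+q-p))/(2*(q+r)))))

mu = np.zeros(N); mu[0] = 1.0        # chain started at the origin
for t in range(1, 31):
    mu = mu @ P
    tv = 0.5 * np.abs(nu - mu).sum()
    bound = A*(q/(q+r))**t + B*(r + 2*np.sqrt(p*q))**t
    if t % 5 == 0: print(t, tv, bound)
\end{verbatim}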
\section{Comparison to the other techniques}
For geometrically ergodic Markov chains, several techniques were developed for producing an upper bound on the distance to stationarity; they were designed mainly for sample spaces that are large but finite. These methods are not directly applicable to chains on general state spaces. The coupling method stands out as the most universal. Here we compare the geometric rate in Theorem \ref{thm1} to the one obtained via a classical coupling argument. Then we explain why other geometric ergodicity methods based on renewal theory will not do better than coupling.
See \cite{mt} and \cite{lindvall} for a detailed overview of geometric convergence and coupling.
\subsection{Geometric convergence via coupling}
Consider a coupling process $(X_t,Y_t)$, where $X_0=0$ as in Theorem \ref{thm1}, while $Y_0$ is distributed according to the stationary distribution $\nu={1 \over \rho}\pi$. A classical Markovian coupling construction allows $X_t$ and $Y_t$ to evolve independently until the coupling time $\tau_{coupling}=\min\{t:~X_t=Y_t\}$. It is natural to compare $P(\tau_{coupling}>t)$ to
$P(\tau>t)$, where $\tau=\min\{t:~Y_t=0\}$ is a hitting time, as the chain is positive recurrent.
Now, simple combinatorics implies, for $k \geq n$,
$$P(\tau=k~|~Y_0=n)=\sum_{i,j:~2i+j=k-n} {k! \over i!(i+n)!j!} p^iq^{i+n}r^j$$
Therefore
$$P(\tau>t) \leq {1 \over p\rho} \sum_{k:~k>t}\left( \sum_{i,j,n:~2i+j=k-n} {k! \over i!(i+n)!j!} p^{i+n}q^ir^j \right),$$
where `$\leq$' appears because $\pi_0=1<{1 \over p}$, but it does not change the asymptotic rate of convergence, i.e. we could write `$\approx$' instead of `$\leq$'.
The right hand side can be rewritten as
$${1 \over p\rho} \sum_{k:~k>t} \sum_{j=0}^k \left(\begin{array}{c}k \\j\end{array}\right)r^j(p+q)^{k-j}\ell(k-j)$$
for $\ell(m)=P(Y \geq m/2)$, where $Y$ is a binomial random variable with parameters $\left(m,\widetilde{p}={p \over p+q} \right)$.
Now, by Cram\'er's theorem, $\ell(m) \sim e^{[\log2+{1 \over 2}\log\widetilde{p}+{1 \over 2}\log(1-\widetilde{p})]m}$, and therefore
\begin{equation} \label{tau}
P(\tau>t) \sim {1 \over p\rho} \sum_{k:~k>t}(r+2\sqrt{pq})^k={r+2\sqrt{pq} \over p\rho(\sqrt{q}-\sqrt{p})^2}(r+2\sqrt{pq})^t
\end{equation}
Recall that in the Corollary, if $q$ is sufficiently large relative to $r$ and $p$, then $\left({q \over q+r}\right)^t$ dominates
$(r+2\sqrt{pq})^t$, and the total variation distance $$\left\|\nu - \mu_t \right\|_{TV}=A\left({q \over q+r}\right)^{t} \pm B(r+2\sqrt{pq})^t,$$
where $A$ and $B$ are given in Theorem \ref{thm1} of this paper.
Thus we need to explain why, when $q$ is sufficiently large, the dominating term $\left({q \over q+r}\right)^t$ fails to show up in equation (\ref{tau}). In order to understand why, observe that the second largest eigenvalue $\left(- {q \over q+r} \right)$ originates from the difference between $\tau_{coupling}$ and $\tau$. In fact, $Y_t$ can reach state zero without ever sharing a site with $X_t$ (they will cross each other, of course). Consider the case when $p$ is zero, or close to zero. There the problem essentially reduces to coupling a two-state Markov chain with transition probabilities $p(0,1)=1$ and $p(1,0)={q \over q+r}$. Thus the coupling time will be expressed via a geometric random variable with failure probability ${q \over q+r}$.
Of course, one could make the Markov chain $P$ ``lazier'' by increasing $r$ at the expense of $p$ and $q$, while keeping the ratio ${q \over p}$ fixed, i.e. we can consider $P_{\varepsilon}={1 \over 1+\varepsilon}(P+\varepsilon I)$. This would minimize the chance of $X_t$ and $Y_t$ missing each other, but it also means increasing $(r+2\sqrt{pq})$, and thus slowing down the rate of convergence in (\ref{tau}).
In order to obtain the correct exponents of convergence, we need to {\bf redo the coupling rules} as follows. We now let the movements of $X_t$ and $Y_t$ be synchronized whenever both are not at zero (i.e. $\{X_t,Y_t\} \cap \{0\} =\emptyset$), while letting $X_t$ and $Y_t$ move independently when one of them is at zero, and the other is not. Then at the hitting time $\tau$,
either $X_t=Y_t=0$ and the processes are successfully coupled, or $X_t=1$ and $Y_t=0$. In the latter case we are back to the geometric variable with failure probability ${q \over q+r}$. That is, the only way for $X_t$ and $Y_t$ to couple would be if one of the two is at state $0$ and the other is at state $1$. In set notation, if $\{X_t,Y_t\}=\{0,1\}$, conditioning on $\{X_{t+1},Y_{t+1}\} \not= \{1,2\}$ gives
$$\{X_{t+1},Y_{t+1}\}=
\begin{cases}
\{1\} & \text{ with probability }{r \over q+r}, \\
\{0,1\} & \text{ with probability }{q \over q+r},
\end{cases}$$
When ${q \over q+r}>r+2\sqrt{pq}$, the above modified coupling captures the order $\left({q \over q+r}\right)^t$. The coefficient $A$, however, is much harder to estimate using the coupling approach, while it is immediately provided in Theorem \ref{thm1} and its corollary. Take for example $p={1 \over 11}$, $r={1 \over 11}$ and $q={9 \over 11}$. There ${q \over q+r}>r+2\sqrt{pq}$, and according to the Corollary, the lower bound $A\left({q \over q+r}\right)^{t} - B(r+2\sqrt{pq})^t$ and the upper bound $A\left({q \over q+r}\right)^{t} + B(r+2\sqrt{pq})^t$ are of matching order, so the order of convergence is tight:
$$\left\|\nu - \mu_t \right\|_{TV}={91 \over 171}\left({9 \over 10}\right)^t \pm {39 \over 28} \left({7 \over 11}\right)^t $$
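This closed form is easily probed numerically. The sketch below iterates a truncated version of the chain directly (the truncation level and the reflecting tail are assumptions, harmless for the range of $t$ shown) and compares the total variation distance with the two bounds.
\begin{verbatim}
# Numerical check of the two-sided bound for p = r = 1/11, q = 9/11.
import numpy as np

p, r, q = 1/11, 1/11, 9/11
N = 300                         # truncation level (assumption)

P = np.zeros((N, N))
P[0, 1] = 1.0                   # from the origin the chain moves to 1
for x in range(1, N - 1):
    P[x, x - 1], P[x, x], P[x, x + 1] = q, r, p
P[N - 1, N - 2], P[N - 1, N - 1] = q, r + p   # reflecting tail (assumption)

pi = np.array([1.0] + [p**(m - 1) / q**m for m in range(1, N)])
nu = pi / pi.sum()              # stationary distribution, nu = pi / rho

A, B = 91 / 171, 39 / 28
mu = np.zeros(N); mu[0] = 1.0
for t in range(1, 41):
    mu = mu @ P
    tv = 0.5 * np.abs(nu - mu).sum()
    if t % 10 == 0:
        lo = A * (9 / 10)**t - B * (7 / 11)**t
        hi = A * (9 / 10)**t + B * (7 / 11)**t
        print(f"t={t:2d}  TV={tv:.6f}  bounds=[{lo:.6f}, {hi:.6f}]")
\end{verbatim}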
\subsection{Drift, minorization and geometric ergodicity}
The optimal ``energy function'' $V(x)=\left({q \over p}\right)^{x/2}$ turns the geometric drift inequality in Meyn and Tweedie \cite{mt}, Chapter 15, into an equality:
$$E[V(X_{t+1})~|~X_t=x]=(r+2\sqrt{pq})V(x)+\left(\sqrt{q \over p}- (r+2\sqrt{pq}) \right) 1\!\!1_C(x)$$
thus confirming the geometric convergence rate of $(r+2\sqrt{pq})^t$ for the tail probability $P(\tau_C>t)$, where $C=\{0\}$ is the obvious choice for the ``small set", and $\tau_C$ is the hitting time. Once again all the challenge is at the origin.
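A direct spot check of this equality (using $p(0,1)=1$ at the origin, with arbitrary parameters) is immediate:
\begin{verbatim}
# Spot check of the drift equality with V(x) = (q/p)^(x/2) and C = {0}.
from math import sqrt

p, r, q = 0.1, 0.2, 0.7
V = lambda x: (q / p) ** (x / 2)
lam = r + 2 * sqrt(p * q)

for x in range(6):
    lhs = V(1) if x == 0 else p * V(x + 1) + r * V(x) + q * V(x - 1)
    rhs = lam * V(x) + (sqrt(q / p) - lam) * (1 if x == 0 else 0)
    assert abs(lhs - rhs) < 1e-12
print("drift equality verified for x = 0..5")
\end{verbatim}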
In fact there is only a trivial ``minorization condition" when $C=\{0\}$. The minorization condition reads
$$p(x,A) \geq \epsilon Q(A)~~~\forall x \in C,~A \subset \Omega,$$
where, if $C=\{0\}$, the only choice for the probability measure $Q$ is $Q=\delta_1$, and $\epsilon=1$.
With $\epsilon=1$ the split of the Markov chain is trivial, and as far as the corresponding coupling goes, the only issue would be (as we mentioned before) to compute the tail of the hitting time $\min\{t:~(X_t,Y_t) \in C \times C\}$ when $q$ is large.
If $C=\{0,1,\dots,k\}$ for some $k >0$, there is no minorization condition. In the latter case, estimating the hitting time $\min\{t:~(X_t,Y_t) \in C \times C\}$ is straightforward, but without minorization this is not enough to estimate the tail of the coupling time. The ``splitting technique'' will not work; rather, the coupling approach of the preceding subsection is to be pursued.
The case of the recurrent reflecting random walk (the M/M/1 queue) has been considered as one of the four benchmark examples in the geometric ergodicity theory (see \cite{bax} and references therein). There, in the absence of the second largest eigenvalue of $\left(-{q \over q+r}\right)$, with $r=0$, the rate of $(2\sqrt{pq})^t$ was proven to be optimal (see \cite{lund}).
The methods in the theory of geometric convergence are for the most part based on renewal theory (see \cite{mt}, \cite{bax}, \cite{rose} and references therein), and concentrate mostly on the tail probability of the hitting time $\tau_C$ in the splitting method.
As for the Markov chain $P$ considered in this paper, in the absence of a useful splitting, the approach that works is coupling. While coupling provides the right exponents, it does not necessarily produce tight coefficients.
\section{The proof of Theorem \ref{thm1}} \label{proof}
\begin{proof}
Since we require $r>0$ for aperiodicity, we will need to obtain the spectral measure $\psi$ via an argument similar to that of Karlin and McGregor in \cite{km2a}, where the case of $r=0$ was solved.
The orthogonal polynomials are obtained via solving a simple linear recursion:
$Q_0=1$, $Q_1=\lambda$, and
$Q_n(\lambda)=c_1(\lambda) \rho^n_1(\lambda)+c_2(\lambda)\rho^n_2(\lambda)$,
where $\rho_1(\lambda)={\lambda-r+\sqrt{(\lambda-r)^2-4pq} \over 2p}$ and
$\rho_2(\lambda)={\lambda-r-\sqrt{(\lambda-r)^2-4pq} \over 2p}$
are the roots of the characteristic equation for the recursion,
and $c_1={\rho_2-\lambda \over \rho_2 -\rho_1}$ and $c_2={\lambda-\rho_1 \over \rho_2 -\rho_1}$.
\vskip 0.2 in
\noindent
Now $\pi_0=1$, $\pi_n={p^{n-1} \over q^n}$ ($n\geq 1$) and $\rho={q-p+1 \over q-p}$. Also, we observe that
$$
\begin{cases}
|\rho_2(\lambda)| >\sqrt{q \over p} & \text{ on }[-1,r-2\sqrt{pq}), \\
|\rho_2(\lambda)| <\sqrt{q \over p} & \text{ on }(r+2\sqrt{pq}, 1], \\
|\rho_2(\lambda)|=\sqrt{q \over p} & \text{ on }[r-2\sqrt{pq}, r+2\sqrt{pq}],
\end{cases}
$$
and $\rho_1\rho_2={q \over p}$.\\
The above will help us to identify the point mass locations in the measure $\psi$ since each point mass in $\psi$ occurs when
$\sum_{k} \pi_k Q_k^2(\lambda) < \infty$. Thus we need to find all $\lambda \in (r+2\sqrt{pq}, 1]$ such that
$c_1(\lambda)=0$ and all $\lambda \in [-1,r-2\sqrt{pq})$ such that $c_2(\lambda)=0$. There are two roots, $\lambda=1$ and $\lambda=-{q \over q+r}$.
\vskip 0.2 in
\noindent
We already know everything about the point mass at $\lambda=1$: $Q_k(1)=1$ for all $k \geq 0$, and $\rho=\sum_{k=0}^{\infty} \pi_k Q_k^2(1)={1+q-p \over q-p}$ is the reciprocal of the point mass at $\lambda=1$.
\vskip 0.2 in
\noindent
The only other point mass is at $\lambda=-{q \over q+r}$. One can verify that $\rho_1\left(-{q \over q+r}\right)=-{q \over q+r}$ and $Q_k\left(-{q \over q+r}\right)=\left(-{q \over q+r}\right)^k$, and therefore
$$\sum_{k=0}^{\infty} \pi_k Q_k^2\left(-{q \over q+r}\right)=1+{q \over (q+r)^2-pq}={(1+q-p)(q+r) \over (1+q-p)(q+r)-q}$$
is the reciprocal of the point mass at $\lambda=-{q \over q+r}$.
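Both claims---that $Q_k$ evaluated at $-{q \over q+r}$ is geometric, and the value of the weighted sum---can be verified numerically (arbitrary parameters):
\begin{verbatim}
# Check that Q_k(-q/(q+r)) = (-q/(q+r))^k solves the three-term recursion
# lambda Q_n = q Q_{n-1} + r Q_n + p Q_{n+1}, and check the sum above.
p, r, q = 0.12, 0.18, 0.70
lam = -q / (q + r)
Q = lambda j: lam ** j

for n in range(1, 40):
    residual = lam * Q(n) - (q * Q(n - 1) + r * Q(n) + p * Q(n + 1))
    assert abs(residual) < 1e-12

pi = lambda m: 1.0 if m == 0 else p ** (m - 1) / q ** m
series = sum(pi(j) * Q(j) ** 2 for j in range(400))
closed = (1 + q - p) * (q + r) / ((1 + q - p) * (q + r) - q)
print(series, closed)   # the two agree up to truncation error
\end{verbatim}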
\vskip 0.2 in
\noindent
It follows that the rest of the mass of $\psi$ (other than the two point masses) is spread inside $[r-2\sqrt{pq}, r+2\sqrt{pq}]$. In order to find the density of $\psi$ inside $[r-2\sqrt{pq}, r+2\sqrt{pq}]$ we need to find $(e_0,(P-sI)^{-1}e_0)$ for $Im(s) \not=0$, i.e. the upper left element in the resolvent of $P$. Let $(a_0(s),a_1(s),\dots)^T=(P-sI)^{-1}e_0$, then
$$-sa_0+a_1=1,~~~\text{ and }~~~qa_{n-1}+(r-s)a_n+pa_{n+1}=0$$
Thus $a_n(s)=\alpha_1 \rho_1(s)^n+\alpha_2 \rho_2(s)^n,$
where $\alpha_1={a_0(\rho_2-s) -1 \over \rho_2(s)-\rho_1(s)}$ and $\alpha_2={1-a_0(\rho_1-s) \over \rho_2(s)-\rho_1(s)}$.
\vskip 0.2 in
\noindent
Since $(a_0,a_1,\dots) \in \ell^2(\mathbb{C}, \pi)$,
$$|a_n| \sqrt{q^n \over p^n} \rightarrow 0 \qquad \text{ as } ~~n \rightarrow +\infty$$
Hence when $|\rho_1(s)| \not= |\rho_2(s)|$, either $\alpha_1=0$ or $\alpha_2=0$, and therefore
\begin{equation}\label{a0}
a_0(s)={\chi_{|\rho_1(s)|< \sqrt{q \over p}} \over \rho_1(s)-s}+{\chi_{|\rho_2(s)|<\sqrt{q \over p}} \over \rho_2(s)-s}
\end{equation}
\vskip 0.2 in
\noindent
Now, because of the point masses at $1 $ and $-{q \over q+r}$, $a_0(s)=\int_{(-1,1]}{d\psi(z) \over z-s}$ can be expressed as
$$a_0(s)={q-p \over 1+q-p}\left({1 \over 1-s}\right)+{(1+q-p)(q+r)-q \over (1+q-p)(q+r)}\left({1 \over -{q \over q+r}-s}\right)+\int_{(-1,1)}{\varphi(z)dz \over z-s},$$
where $\varphi(z)$ is an atom-less function. Next we will use the following basic property of the Cauchy transform $Cf(s)={1 \over 2\pi i} \int_{\mathbb{R}}{f(z)dz \over z-s}$, which
can be derived using the Cauchy integral formula or, similarly, an approximation-to-the-identity argument
\footnote{The curve in the integral does not need to be $\mathbb{R}$ for $C_+-C_-=I$ to hold.}:
\begin{equation} \label{cauchy}
C_+-C_-=I
\end{equation}
Here $C_+f(z)=\lim_{s \rightarrow z:~ Im(s)>0} Cf(s)$ and $C_-f(z)=\lim_{s \rightarrow z:~ Im(s)<0} Cf(s)$ for all $z \in \mathbb{R}$.
The above equation (\ref{cauchy}) implies
$$\varphi(x)={ 1\over 2\pi i} \left(\lim_{s=x+i\varepsilon~:~\varepsilon \rightarrow 0+} a_0(s) -\lim_{s=x-i\varepsilon~:~\varepsilon \rightarrow 0+} a_0(s) \right)$$
for all $x \in (-1,1)$.
Recalling (\ref{a0}), we express $\varphi$ as
$\varphi(x)={\rho_1(x)-\rho_2(x) \over 2\pi i(\rho_1(x)-x)(\rho_2(x)-x)}$
for $x \in (r-2\sqrt{pq}, r+2\sqrt{pq})$, which in turn simplifies to
$$\varphi(x)=\begin{cases}
{\sqrt{(x-r)^2-4pq} \over 2\pi i ((r+q)x+q)(1-x)} & \text{ if } x \in (r-2\sqrt{pq}, r+2\sqrt{pq}), \\
0 & \text{ otherwise }
\end{cases}$$
Let $\mathcal{I}=(r-2\sqrt{pq}, r+2\sqrt{pq})$ denote the support interval, and let $1\!\!1_{\cal I}(x)$ be its indicator function.
Here $$\int_{-1}^1 \varphi(x) dx={p \over q+r}$$
and one can check that
$$\psi(x)={q-p \over 1+q-p}\cdot \delta_1(x)+{(1+q-p)(q+r)-q \over (1+q-p)(q+r)}\cdot \delta_{-{q \over q+r}}(x)+{\sqrt{4pq-(x-r)^2} \over 2\pi ((r+q)x+q)(1-x)} \cdot 1\!\!1_{\cal I}(x)$$
integrates to one.
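A quadrature check (arbitrary parameters) confirms the normalization, with the continuous part indeed integrating to $p/(q+r)$:
\begin{verbatim}
# Check that the two point masses plus the continuous density sum to one.
from math import sqrt, pi as PI

p, r, q = 0.12, 0.18, 0.70
mass_1 = (q - p) / (1 + q - p)                                   # at  1
mass_2 = ((1 + q - p) * (q + r) - q) / ((1 + q - p) * (q + r))   # at -q/(q+r)

a, b = r - 2 * sqrt(p * q), r + 2 * sqrt(p * q)
n = 100000
h = (b - a) / n
cont = sum(sqrt(max(4 * p * q - (x - r) ** 2, 0.0))
           / (2 * PI * ((r + q) * x + q) * (1 - x)) * h
           for x in (a + (i + 0.5) * h for i in range(n)))
print(cont, p / (q + r))          # continuous mass
print(mass_1 + mass_2 + cont)     # total mass -> 1
\end{verbatim}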
\vskip 0.2 in
\noindent
Observe that the residues of $g(z)={\sqrt{(z-r)^2-4pq} \over ((r+q)z+q)(1-z)}$ are
$$Res(g(z),1)={q-p \over 1+q-p}~~~\text{ and }~~~
Res\left(g(z),-{q \over q+r}\right)={(1+q-p)(q+r)-q \over (1+q-p)(q+r)}$$
in the principal branch of the $\log$ function.
\vskip 0.2 in
\noindent
Now\\
$\int_{(-1,1)} \lambda^t Q_n(\lambda) d\psi(\lambda)={(1+q-p)(q+r)-q \over (1+q-p)(q+r)}\cdot\left(-{q \over q+r}\right)^{t+n}
+\int_{r-2\sqrt{pq}}^{r+2\sqrt{pq}} \lambda^t (c_1\rho_1^n+c_2\rho_2^n){\rho_1 -\rho_2 \over 2\pi i(\rho_1- \lambda)(\rho_2-\lambda)} d\lambda$
\vskip 0.16 in
\noindent
and therefore, since $c_1={\rho_2-\lambda \over \rho_2 -\rho_1}$ and $c_2={\lambda-\rho_1 \over \rho_2 -\rho_1}$,
\vskip 0.16 in
$\int_{(-1,1)} \lambda^t Q_n(\lambda) d\psi(\lambda)={(1+q-p)(q+r)-q \over (1+q-p)(q+r)}\cdot\left(-{q \over q+r}\right)^{t+n}
+{1 \over 2\pi i}\int_{r-2\sqrt{pq}}^{r+2\sqrt{pq}} \lambda^t \left({\rho_2^n \over \rho_2-\lambda}-{\rho_1^n \over \rho_1-\lambda}\right)d\lambda,$
\vskip 0.16 in
\noindent
where, if we
let $\rho_1=\sqrt{q \over p} z$ for $z$ in the lower semicircle and $\rho_2=\sqrt{q \over p} z$ for $z$ in the upper semicircle, then
\vskip 0.2 in
${1 \over 2\pi i}\int_{r-2\sqrt{pq}}^{r+2\sqrt{pq}} \lambda^t \left({\rho_2^n \over \rho_2-\lambda}-{\rho_1^n \over \rho_1-\lambda}\right)d\lambda
=\left(\sqrt{q \over p}\right)^n {1 \over 2\pi i}\oint_{|z|=1}{(\sqrt{pq}(z+z^{-1})+r)^t z^n \sqrt{pq}(1-z^{-2})dz\over \sqrt{q \over p}z-(\sqrt{pq}(z+z^{-1})+r)}$
$$=\left(\sqrt{q \over p}\right)^n \left({p \over q+r}\right) {1 \over 2\pi i}\oint_{|z|=1}{(\sqrt{pq}(z+z^{-1})+r)^t z^n (z-z^{-1}) \over \left(z-\sqrt{p \over q}{r+(1+q-p) \over 2(q+r)}\right)\left(z-\sqrt{p \over q}{r-(1+q-p) \over 2(q+r)}\right)}dz$$
Here the absolute value of the function in the last integral is bounded by $M(r+2\sqrt{pq})^t$ with
$M={2 \over \left(1-\sqrt{p \over q}{r+(1+q-p) \over 2(q+r)}\right)\left(1+\sqrt{p \over q}{r-(1+q-p) \over 2(q+r)}\right)}$.
Therefore, plugging in the values of $\pi_n$, we show that the distance to stationarity $\left\|\nu - \mu_t \right\|_{TV}={1 \over 2}\sum_{n=0}^{\infty} \pi_n \left|\int_{(-1,1)} \lambda^t Q_n(\lambda) d\psi(\lambda)\right|$ is bounded above by
$$A \left({q \over q+r}\right)^t+B(r+2\sqrt{pq})^t$$
where
$$A={(1+q-p)(q+r)-q \over 2(1+q-p)(q+r)} \sum_{n=0}^{\infty}\pi_n \left({q \over q+r}\right)^n={(1+q-p)(q+r)-q \over (1+q-p)(1-2p)}$$
and
$$B={M \over 2} \left({p \over q+r}\right)\left(1+{1 \over \sqrt{pq}-p}\right)={\left({p \over q+r}\right)\left(1+{1 \over \sqrt{pq}-p}\right) \over \left(1-\sqrt{p \over q}{r+(1+q-p) \over 2(q+r)}\right)\left(1+\sqrt{p \over q}{r-(1+q-p) \over 2(q+r)}\right)}$$
The above upper bound can be improved if one obtains a better estimate of the trigonometric integrals involved in the sum.
\vskip 0.2 in
\noindent
We conclude that
$t_{mix}(\varepsilon)=O\left({\log(\varepsilon) \over \log m(p,q)}\right)$
as $\varepsilon \downarrow 0$.
\end{proof}
\section*{Acknowledgment}
The author would like to acknowledge useful comments on the idea of using orthogonal polynomials for computing mixing times received from R.~Burton, A.~Dembo, P.~Diaconis, M.~Ossiander, E.~Thomann, E.~Waymire and J.~Zu\~{n}iga.
\bibliographystyle{amsplain}
\section{\label{intro}Introduction}
The pioneering work of Kibble \cite{KIBBLE} has shown that topological defects necessarily form at cosmological phase transitions. The type of defect network that is formed, and its basic properties (such as whether or not it is long-lived), will depend on the characteristics of each phase transition, and most notably on the specific symmetry being broken. Understanding these processes, as well as the subsequent evolution of these networks, is a key aspect of particle cosmology. A thorough overview of the subject can be found in the book by Vilenkin and Shellard \cite{VSH}.
Most of the work on defects in the past three decades has focused on cosmic strings. This is justified on the grounds that they are usually cosmologically benign, and are a generic prediction of inflationary models based on Grand Unified Theories \cite{Rachel1,Rachel2} or branes \cite{Branes1,Branes2}. By contrast domain walls and monopoles are cosmologically dangerous (and any models in which they arise are strongly constrained). However, a solid understanding of the latter is still important, as it is the only way one can trust these constraints. Moreover, it is becoming increasingly clear, particularly in the context of models with extra dimensions such as brane inflation, that hybrid defect networks will often be produced. Two examples that have attracted considerable interest are semilocal strings and cosmic necklaces \cite{Dasgupta1,Chen,Dasgupta2}.
This is the second report on an ongoing project which is addressing some of these issues. In a previous work \cite{MONOPOLES} we have developed an analytic model for the evolution of networks of local and global monopoles \cite{DIRAC,THOOFT,POLYAK}. The model is analogous to the velocity-dependent one-scale model for cosmic strings \cite{MS0,MS2}, which has been extensively tested against field theory \cite{ABELIAN,MS3} and Goto-Nambu simulations \cite{MS3,MS4}. In \cite{MONOPOLES} we have also discussed the solutions of this analytic model, and compared them with existing (relatively low-resolution) simulations of global monopoles. Here, after a brief overview of hybrid networks, we extend our analysis to the case of monopoles attached to one string (the so-called hybrid networks), and we also briefly discuss how to apply it to vortons.
\section{\label{review}Overview of Hybrid Networks}
We start with a brief overview of previous results on the evolution of hybrid networks, emphasizing the dynamical aspects we seek to model. A more detailed account can be found in the Vilenkin and Shellard textbook \cite{VSH} and in other references that will be pointed out where appropriate. We will first discuss the case of local symmetries, and then highlight relevant differences in the global case.
\subsection{Local case}
Hybrid networks of monopoles connected by strings will be produced, for example, in the following symmetry breaking scheme \cite{VILENKIN}
\begin{equation}
G\to K\times U(1)\to K\,.
\end{equation}
The first phase transition will lead to the formation of monopoles, and the second produces strings connecting monopole-antimonopole pairs; the corresponding defect masses will be
\begin{equation}
m\sim \frac{4\pi}{e}\eta_m\label{mmass}
\end{equation}
and
\begin{equation}
\mu\sim 2\pi\eta^2_s\,.\label{smass}
\end{equation}
The characteristic monopole radius and string thickness are
\begin{equation}
\delta_m\sim (e\eta_m)^{-1}
\end{equation}
and
\begin{equation}
\delta_s\sim (e\eta_s)^{-1}\,.
\end{equation}
In the simplest models all the (Abelian) magnetic flux of the monopoles is confined into the strings. In this case the monopoles are often dubbed 'beads'. However, in general (and in most realistic) models stable monopoles have unconfined non-Abelian magnetic charges. As in the case of isolated monopoles, the key difference between the two cases is that unconfined magnetic fluxes lead to Coulomb-type magnetic forces between the monopoles.
Up to the second transition the formalism that we have developed for plain monopoles \cite{MONOPOLES} obviously applies, but the hybrid network requires a separate treatment. From the point of view of analytic model-building, the hybrid case presents one crucial difference. In the case of isolated monopoles we saw that their evolution can be divided into a pre-capture and a post-capture period: captured monopoles effectively decouple from the rest of the network, and most of the radiative losses occur in the captured phase, where monopole-antimonopole pairs are bound and doomed to annihilation. This meant that the network's energy losses could be described as losses to bound pairs and we did not need to model radiative losses explicitly. Now compare this to the present scenario: the monopoles are now captured \textit{ab initio} (as soon as the strings form) and therefore we will need to rethink the loss terms---as well as to account for the force the strings exert on the monopoles.
The lifetime of a monopole-antimonopole pair is to a large extent determined by the time it takes to dissipate the energy stored in the string, $\epsilon_s\sim\mu L$ for a string of length $L$. This is because the energy in the string is typically {\bf larger} than (or at most comparable to) the energy of the monopole. Monopoles are pulled by the strings with a force $F_s\sim\mu\sim\eta^2_s$, while the frictional force acting on them is $F_{fri}\sim\theta T^2 v$ (where $\theta$ is a parameter counting the number of degrees of freedom interacting with the monopoles). This friction term is of course already included in the evolution equations of the analytic model \cite{MONOPOLES,MS0,MS2}: the corresponding friction lengthscale is $\ell_f\equiv M/(\theta T^2)\sim\eta_m/(\theta T^2)$.
In a friction-dominated epoch the string tension should be compensated by the friction force, so the velocity of the monopoles can be estimated from
\begin{equation}
m{\dot v}=\mu-\theta T^2v\sim0
\end{equation}
which gives
\begin{equation}
v\sim\frac{\mu}{\theta T^2}\,.
\end{equation}
Note that in addition to the damping due to Hubble expansion and frictional scattering, we now have a third dynamical mechanism, which we may call string forcing. It's important to notice that in the radiation epoch the frictional scattering term has the same time dependence as the Hubble damping term, while in the matter era it is subdominant.
At string formation $F_s\sim F_{fri}$, so friction domination will end very quickly. This differs from the case of plain monopoles, whose evolution is always friction-dominated in the radiation era \cite{MONOPOLES}. At low temperatures the string tension is much greater than the friction force, and a monopole attached to a string moves with a proper acceleration $a=\mu/m\sim\eta^2_s/\eta_m$.
If there are non-confined fluxes, accelerating monopoles can also lose energy by radiating gauge quanta. The rate of energy loss is expected to be given by the classical electromagnetism radiation formula
\begin{equation}
{\dot \epsilon_{gauge}}\sim -\frac{(ga)^2}{6\pi}\sim- \left(\frac{g\mu}{m}\right)^2\sim- \left(\frac{\mu}{\eta_m}\right)^2
\end{equation}
where $g$ is the magnetic charge. This should be compared with the rate of gravitational radiation losses
\begin{equation}
{\dot \epsilon_{grav}}\sim - G\mu^2\,;
\end{equation}
the ratio of the two is therefore
\begin{equation}
\frac{\dot \epsilon_{grav}}{\dot \epsilon_{gauge}}\sim \left(\frac{\eta_m}{m_{Pl}}\right)^2\,,
\end{equation}
so if there are unconfined fluxes the gauge radiation will be dominant (except in the extreme case where the monopoles form at the Planck scale itself). The characteristic timescales for this process to act on a monopole-antimonopole pair attached to a string of length $L$ can therefore be written
\begin{equation}
\tau_{rad}\sim \frac{L}{Q}
\end{equation}
where $Q_{gauge}=(\eta_s/\eta_m)^2$ for gauge radiation and $Q_{grav}=(\eta_s/m_{Pl})^2$ for gravitational radiation.
These radiation losses should be included in the model's evolution equation for the characteristic lengthscale $L$ of the monopoles. It's easy to see that the corresponding term has the form
\begin{equation}
3\frac{dL}{dt}=-L\frac{\dot\rho}{\rho}\sim-L\frac{\dot\epsilon}{\epsilon}\sim\frac{L}{\tau_{rad}}=Q\,.\label{losses}
\end{equation}
Note that in principle these terms should be velocity-dependent (this is discussed in \cite{MS2}). However, we will soon see that in this case the monopoles will become ultra-relativistic ($v\sim1$) shortly after the strings form and therefore the velocity-dependence is not crucial for the analysis.
Another possible dynamical mechanism is that of string intercommuting, and consequently the possibility of energy losses from the production of string loops. Its importance relative to other energy loss mechanisms is much harder to estimate than for ordinary strings, since here we are looking for an indirect effect on the monopole evolution, and probably one can only quantify it by using numerical simulations. What one can say is that, as in the standard case, it should lead to a term in the $L$ equation of the form \cite{MS2}
\begin{equation}
\frac{dL}{dt}=c_{hyb}v
\end{equation}
where the dimensionless coefficient $c_{hyb}$ need not have the same value (or indeed the same order of magnitude) as the standard one. We can now observe that since one expects to be dealing with ultra-relativistic string ($v\sim1$) the velocity dependence will again not be crucial, so we can group this term with that coming from radiation losses (Eq. \ref{losses}) and replace the coefficient $Q$ with an effective $Q_\star$ which will include the effects of gauge radiation (if it exists), gravitational radiation and loop production.
It is expected that monopole-antimonopole pairs will annihilate very quickly once the friction force becomes unimportant. The characteristic monopole velocity can be estimated to be approximately
\begin{equation}
v\sim\left(\frac{\mu L}{m}\right)^{1/2},
\end{equation}
or $v\sim1$ if the above is larger than unity. In the wake of the above discussion the corresponding rate of energy loss is naively estimated to be ${\dot\epsilon}\sim-T^2v^2$. The approximate lifetime of a pair should then be
\begin{equation}
\tau\sim\frac{\mu L}{T^2v^2}\,,
\end{equation}
which in the non-relativistic case can be simply written
\begin{equation}
\tau_{nr}\sim\frac{m}{T^2}\,,
\end{equation}
while for the ultra-relativistic case
\begin{equation}
\tau_{rel}\sim\frac{\eta_s}{T^2}\,.
\end{equation}
In either case monopoles annihilate in a timescale shorter than a Hubble time, which in these units is
\begin{equation}
t\sim\frac{m_{Pl}}{T^2}\,.
\end{equation}
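To make this comparison concrete, one can put in numbers; the minimal sketch below uses Planck units, with purely illustrative symmetry breaking scales and temperature.
\begin{verbatim}
# Pair lifetime versus Hubble time in Planck units (m_Pl = 1); the scales
# eta_m, eta_s and the temperature T are arbitrary examples, with e ~ 1.
from math import pi

m_pl = 1.0
eta_m, eta_s = 1e-3, 1e-4     # monopole and string scales (assumptions)
T = eta_s                     # temperature around string formation
m = 4 * pi * eta_m            # monopole mass

t_hubble = m_pl / T ** 2
tau_nonrel = m / T ** 2
tau_rel = eta_s / T ** 2
print("tau_nr / t_H  =", tau_nonrel / t_hubble)   # << 1
print("tau_rel / t_H =", tau_rel / t_hubble)      # << 1
\end{verbatim}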
\subsection{Global case}
One can also have hybrid networks of global monopoles connected by global strings \cite{VILENKIN}. Just as in the case of plain monopoles \cite{MONOPOLES}, the scale-dependent monopole mass and the different behavior of the forces between monopoles in this case (long-range rather than Coulomb-type) changes the detailed properties of these defects and warrants a separate treatment.
The tension of a global string is
\begin{equation}
F_s\sim 2\pi\eta^2_s\ln{\frac{L}{\delta_s}}\,,
\end{equation}
so now there's an additional logarithmic correction, while the long-range force between the monopoles is
\begin{equation}
F_m\sim 4\pi \eta^2_m\,.
\end{equation}
If $\eta_m \gg \eta_s$ the monopoles initially evolve as if they were free, with $L\sim t$. Note that this scaling law implies that there are some monopole annihilations. The strings become dynamically important when $F_s\sim F_m$, that is
\begin{equation}
\ln{\left(t \eta_s\right)} \sim \frac{\eta^2_m}{\eta^2_s}\,,\label{globstrd}
\end{equation}
at which point they pull the monopole-antimonopole pairs together, and the network is expected to disappear within a Hubble time.
The scale-dependent monopole mass is important for the friction lengthscale, which is now $\ell_f\equiv M/(\theta T^2)\sim\eta_m^2 L/(\theta T^2)$. Another difference is that instead of gauge radiation we now have Goldstone boson radiation, whose rate of energy loss is
\begin{equation}
{\dot \epsilon_{gold}}\sim - \eta_m^2\,;
\end{equation}
notice that this is much stronger than the energy loss rate due to gauge radiation (except if the monopoles and strings form at the same energy scale, in which case they will be comparable) and due to gravitational radiation (except if both form at the Planck scale, in which case they will all be comparable).
This energy loss term should be similarly included in the evolution equation for the characteristic lengthscale $L$. Note that there's a crucial difference between this and the gauge case: here the string energy is typically {\bf smaller} than the monopole energy, $E_m\sim\eta^2_mL$. The string is therefore (to some extent) irrelevant, and in this case the $Q$ coefficient is $Q_{gold}\sim cv$ as in the case of isolated global monopoles \cite{MONOPOLES}. Notice the presence of the velocity-dependence, although one expects that the velocities will always be relativistic. The effect of string loop production can in principle be accounted for by a redefined (effective) coefficient, $c_\star$.
\section{\label{models}Modeling Hybrids}
The above overview leads to the expectation that the network will annihilate shortly after the strings form. In fact we will see below that there are circumstances where the network can survive considerably more than a Hubble time, although it has to be said that we do expect this to be the exception rather than the rule.
We will therefore proceed fairly quickly, discussing only the late-time regime where the force due to the strings is dominating and the hybrid networks are therefore about to annihilate. As discussed in Sect. \ref{review}, in the local case this will happen very shortly after the string-forming phase transition. In the global case, however, there will be an intermediate epoch where the monopoles evolve as if free (because the strings are comparatively very light), and the strings only become dynamically important at an epoch given by Eq. (\ref{globstrd}).
\subsection{Local case}
In this case our evolution equations for the characteristic lengthscale $L$ and RMS velocity $v$ of the monopoles take the form
\begin{equation}
3\frac{dL}{dt}=3HL+v^2L\left(H+\frac{\theta T^2}{\eta_m}\right)+Q_\star
\end{equation}
where $Q_\star$ includes the energy loss terms from gauge radiation (if it exists), gravitational radiation and loop production discussed earlier, possibly with some coefficient of order unity, and
\begin{equation}
\frac{dv}{dt}=(1-v^2)\left[\frac{k_m}{\eta_mL^2}\frac{L}{d_H}+k_s\frac{\eta_s^2}{\eta_m}-v\left(H+\frac{\theta T^2}{\eta_m}\right)\right]\,.
\end{equation}
The velocity equation now has two accelerating terms, due to the strings and to the inter-monopole Coulomb forces, which we parametrize with coefficients $k_s$ and $k_m$, expected to be of order unity. An exception is the Abelian case, where there are no Coulomb forces, so $k_m=0$ in this case.
It's important to realize that friction will play a crucial role in the radiation era, where it is more important than the Hubble damping term (the opposite is true for the matter era). Indeed in the radiation era we can write
\begin{equation}
\frac{1}{\ell_d}=H+\theta\frac{T^2}{\eta_m}=\frac{1}{t}\left(\frac{1}{2}+\theta\frac{m_{Pl}}{\eta_m}\right)\equiv\frac{\lambda_\star}{t}
\end{equation}
where in the last step we have defined an effective coefficient that is usually much larger than unity (except if there were no particles interacting with the string, $\theta=0$). On the other hand, in other cosmological epochs (in particular the matter-dominated era) the friction term is negligible, and we have $\lambda_\star=\lambda$ (where we are defining $a\propto t^\lambda$) as usual.
It is illuminating to start by comparing the various terms in the velocity equation. The ratio of the monopole and string accelerating terms is
\begin{equation}
\frac{F_m}{F_s}=\frac{k_m}{k_s}\frac{1}{Ld_H\eta^2_s}\sim\left(\frac{\delta_s}{L}\right)\left(\frac{\delta_m}{d_H}\right)
\end{equation}
which is always much less than unity (except if $k_s$ happened to be extremely small). The Coulomb forces are always negligible relative to the string forces. This was expected, since as we pointed out earlier the energy in the strings is typically larger than that in the monopoles. A useful consequence is that we do not need to treat the Abelian and non-Abelian cases separately.
Now let us compare the damping and string acceleration terms
\begin{equation}
\frac{F_d}{F_s}=\frac{\lambda_\star}{k_s}\left(\frac{\eta_m}{m_{Pl}}\right)\left(\frac{T}{\eta_s}\right)^2v\sim\frac{\theta}{k_s}\left(\frac{T}{\eta_s}\right)^2v
\end{equation}
where in the last step we assumed that we are in the radiation era. Given that the evolution of the monopoles before string formation leads to a scaling law $v\propto a^{-1}$ for monopole velocities \cite{MONOPOLES}, which are therefore extremely small when the strings form, the above ratio is always less than unity.
This analysis therefore quantitatively confirms the naive expectation that as soon as the strings are formed the string acceleration term will dominate the dynamics and drive the monopole velocity to unity. Recalling that the initial monopole velocity can effectively be taken to be zero, we can write the following approximate solution of the monopole velocity equation (with $f_s=k_s\eta_s^2/\eta_m$ denoting the constant string acceleration term)
\begin{equation}
\ln{\frac{1+v}{1-v}}=2f_s(t-t_s)
\end{equation}
and we can compare the epoch at which the monopoles become relativistic ($t_c$) with that of the string-forming phase-transition ($t_s$), finding
\begin{equation}
\frac{t_c}{t_s}\sim1+\frac{\eta_m}{m_{Pl}}\,, \label{tdecayv}
\end{equation}
so they become relativistic less than a Hubble time after the epoch of string formation; notice that this ratio depends only on the energy scale of the monopoles, and not on that of strings.
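This estimate can be checked by integrating the velocity equation with only the string term kept; the sketch below assumes $k_s=1$, Planck units, and illustrative scales.
\begin{verbatim}
# Integrate dv/dt = (1 - v^2) f_s from rest and locate the epoch at which
# the monopoles become relativistic (v = 0.99 is taken as the threshold).
eta_m, eta_s, m_pl = 1e-2, 1e-3, 1.0   # example scales (assumptions)
f_s = eta_s ** 2 / eta_m               # constant string acceleration term
t_s = m_pl / eta_s ** 2                # string formation (radiation era)

t, v = t_s, 0.0
dt = 1e-3 / f_s
while v < 0.99:
    v += (1 - v ** 2) * f_s * dt
    t += dt

print("numeric  t_c/t_s :", t / t_s)          # ~ 1 + O(1) * eta_m/m_pl
print("estimate t_c/t_s :", 1 + eta_m / m_pl)
\end{verbatim}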
We can now proceed to look for solutions for the characteristic monopole lengthscale $L$, assuming for simplicity that $v=1$ throughout. We can look for solutions of the form $L\propto t^\alpha$ for generic expansion rates ($a\propto t^{\lambda}$), and it is straightforward to see that there are two possibilities. There is a standard linear scaling solution
\begin{equation}
L=\frac{Q_\star}{3(1-\lambda)-\lambda_\star}t
\end{equation}
in which the energy loss terms effectively dominate the dynamics. In this case the monopole density decays slowly relative to the background density,
\begin{equation}
\frac{\rho_m}{\rho_b}\propto t^{-1}\,.
\end{equation}
This scaling law can in principle exist for any cosmological epoch provided $\lambda<3/4$ (for example, in the matter era we have $L=3Q_\star t$). However, in the radiation epoch we would need the unrealistic $\theta m_{Pl}/\eta_m<2$, which effectively would mean that friction is absent ($\theta=0$).
The alternative scaling solution, for epochs when friction dominates over Hubble damping (such as the radiation era) and also for any epoch with $\lambda\ge3/4$ (even without friction) has $L$ growing with a power $\alpha>1$ given by
\begin{equation}
\alpha=\lambda+\frac{1}{3}\lambda_\star=\frac{1}{3}\left(4\lambda+\theta\frac{m_{Pl}}{\eta_m}\right)\,;
\end{equation}
if there is no friction the scaling law can simply be written
\begin{equation}
L\propto a^{4/3}\,,
\end{equation}
from which one sees that although the Hubble term is dominant, the scaling is faster than conformal stretching ($L\propto a$) because the velocities are ultra-relativistic. The important point here is that in this regime $L$ grows faster than $L\propto t$, and the number of monopoles per Hubble volume correspondingly decreases.
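The two branches are easily seen in a direct integration of the $L$ equation in the $v=1$ limit, $3\,dL/dt=(3\lambda+\lambda_\star)L/t+Q_\star$; the sketch below (dimensionless, with arbitrary $Q_\star$) reads off the late-time power.
\begin{verbatim}
# Late-time power alpha of L(t) from 3 dL/dt = (3 lam + lam_star) L/t + Q.
import numpy as np

def late_time_alpha(lam, lam_star, Q=1.0):
    ts = np.geomspace(1.0, 1e6, 300000)
    L, L_mid = 1e-3, None
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        L += ((3 * lam + lam_star) * L / ts[i] + Q) * dt / 3
        if L_mid is None and ts[i + 1] >= 1e4:
            L_mid = L
    # logarithmic slope between t = 1e4 and t = 1e6
    return np.log(L / L_mid) / np.log(1e6 / 1e4)

print(late_time_alpha(2/3, 2/3))   # matter era, no friction: alpha -> 1
print(late_time_alpha(1/2, 5.0))   # radiation era + friction: 1/2 + 5/3
\end{verbatim}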
At the phenomenological level of our one-scale model, this corresponds to the annihilation and disappearance of the monopole network. We can easily estimate the timescale for this annihilation---it will occur when the number of monopoles per Hubble volume drops below unity (or equivalently $L>d_H$). Assuming a lengthscale $L_s=st_s$ at the epoch of string formation (note that the evolution of the monopole network before the strings form is such that $s$ can be much smaller than unity), we easily find
\begin{equation}
\frac{t_a}{t_s}\sim1+\frac{3}{\theta}\left(\frac{2}{s}-1\right)\frac{\eta_m}{m_{Pl}}\,, \label{tdecayr}
\end{equation}
which is comparable to the estimate we obtained using the velocity equation, Eq. (\ref{tdecayv}). Notice that this is a much faster timescale than the one associated with radiative losses, which can therefore be consistently neglected in this case. There's nothing unphysical about this 'superluminal' behaviour, as explained in \cite{EVERETT}. We do have a physical constraint that the timescale for the monopole disappearance should not be smaller than the (initial) length of the string segments, $t_a\ge L_s$, but this should always be the case for the above solutions.
This analysis shows that the monopoles must annihilate during the radiation epoch, if one wants to solve the monopole problem by invoking nothing but a subsequent string-forming phase transition. Any monopoles that survived into the matter era would probably be around today. It also shows that monopoles can be diluted by a sufficiently fast expansion period. Inflation is of course a trivial example of this, but even a slower expansion rate $3/4<\lambda<1$ would be sufficient, provided it is long enough to prevent the monopoles from coming back inside the horizon by the present time.
\subsection{An aside: string evolution}
Notice that in the above we haven't yet said anything about the evolution of the strings. One could try to define a string characteristic lengthscale in the usual way, $\rho_s=\mu_s/L^2_s$. However, one should be careful about doing this, since in this case there is no a priori expectation that the distribution of string segments connecting the pairs of monopoles and antimonopoles will form an effectively Brownian network. (In fact it is quite likely that it doesn't, although this is something that warrants numerical investigation.)
A safer and simpler way of describing them is to look at the evolution of an individual string segment, and then make use of the fact that we already know how the monopole number and characteristic lengthscale evolve. The evolution of a given segment of a local string of length $\ell$ in the context of the VOS model has been studied in \cite{MS1}. In the present notation the evolution equation has the form
\begin{equation}
\frac{d\ell}{dt}=(1-2v^2)H\ell-\frac{\ell}{\ell_f}v^2-Q_\star\,;
\end{equation}
note that as expected energy loss mechanisms reduce the length of the string segment. There's an analogous equation for the string segment's velocity, but we already know we can safely assume $v\sim1$. Using this and the specific form of the friction term, the above equation can in fact be written
\begin{equation}
\frac{d\ell}{dt}=-\frac{\lambda_\star}{t}\ell-Q_\star\, \label{dynforsegments}
\end{equation}
which can easily be integrated to yield (for $\ell=\ell_s=st_s$ at the string formation epoch $t=t_s$)
\begin{equation}
\ell(t)=\ell_s\left(\frac{t_s}{t}\right)^{\lambda_\star}+\frac{Q_\star}{1+\lambda_\star}\left[t_s\left(\frac{t_s}{t}\right)^{\lambda_\star}-t\right]\,.\label{timedecay0}
\end{equation}
We emphasize that for simplicity we are assuming that all string segments are formed with the same length: at a detailed level this is unrealistic, but it is nevertheless sufficient to provide reliable qualitative estimates.
We can now estimate the monopole annihilation epoch by simply looking for $\ell\sim0$. As before, the answer will depend on the relative importance of the radiative loss and friction terms. If friction is negligible and the radiative losses dominate, then we find the radiative decay timescale
\begin{equation}
\frac{t_r}{t_s}\sim\left(1+s\frac{1+\lambda_\star}{Q_\star}\right)^{1/(1+\lambda_\star)}\,,\label{timedecay1}
\end{equation}
which is much larger than unity. On the other hand, if friction is important and the $Q$ term is subdominant, the annihilation timescale is much faster,
\begin{equation}
\frac{t_a}{t_s}\sim1+\frac{1}{\theta}\frac{\eta_m}{m_{Pl}}\,,\label{timedecay2}
\end{equation}
which is again comparable to our previous estimates, given by Eqs. (\ref{tdecayv}) and (\ref{tdecayr}).
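Equation (\ref{timedecay1}) is just the root of Eq. (\ref{timedecay0}), and can be cross-checked by direct integration of Eq. (\ref{dynforsegments}); a dimensionless sketch with arbitrary values follows.
\begin{verbatim}
# Integrate d(ell)/dt = -(lam_star/t) ell - Q and compare the time at which
# ell reaches zero with the closed form; all values are arbitrary examples.
lam_star, Q, t_s, s = 1.5, 0.2, 1.0, 3.0
ell = s * t_s

t, dt = t_s, 1e-5
while ell > 0:
    ell += (-(lam_star / t) * ell - Q) * dt
    t += dt

closed = t_s * (1 + s * (1 + lam_star) / Q) ** (1 / (1 + lam_star))
print("numeric decay time :", t)
print("closed-form root   :", closed)
\end{verbatim}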
Since each piece of string is connecting two monopoles, a very simple estimate of the total length in string in a Hubble volume (and hence of the string density) is obtained multiplying half the number of monopoles in that volume by the typical length of each segment. This leads to
\begin{equation}
\frac{\rho_s}{\rho_m}\propto \frac{\eta^2_s}{\eta_m}\ell(t)
\end{equation}
and as expected the string density decays relative to that of the monopoles.
\subsection{Global case}
In the global case the evolution equations will be \cite{MONOPOLES}
\begin{equation}
3\frac{dL}{dt}=3HL+v^2\frac{L}{\ell_d}+c_\star v
\end{equation}
and
\begin{equation}
\frac{dv}{dt}=(1-v^2)\left[\frac{k_m}{L}\left(\frac{L}{d_H}\right)^{3/2}+\frac{k_s}{L}\frac{\eta^2_s}{\eta^2_m}\ln{\frac{L}{\delta_s}}-\frac{v}{\ell_d}\right]\,,
\end{equation}
and the scale dependence of the monopole mass and string tension imply that this case differs in two ways from the local case.
Firstly, the damping term at the string formation epoch (assumed to be in the radiation epoch) can be written
\begin{equation}
\frac{1}{\ell_d}=H+\theta\frac{T^2}{\eta_m^2L}\sim\frac{1}{t}\left[\frac{1}{2}+\theta\left(\frac{\eta_s}{\eta_m}\right)^2\right]\,.
\end{equation}
The friction term is now subdominant relative to Hubble damping, and of course it decreases faster than it. Friction can therefore be ignored in the analysis.
Secondly, the monopole acceleration due to the (global) strings now has an extra logarithmic correction, but more importantly the logarithmically divergent monopole mass again implies that this force is inversely proportional to $L$ (as opposed to being constant in the local case). The large lengthscale inside the logarithm should be the string lengthscale, but we have replaced it by the monopole one, which we will simply denote $L$.
Starting again with the velocity equation, the ratio of the string and monopole acceleration terms is now
\begin{equation}
\frac{F_s}{F_m}\sim\frac{k_s}{k_m}\frac{\eta^2_s}{\eta^2_m}\frac{d_H^{3/2}}{L^{3/2}}\ln{(L\eta_s)}\,,
\end{equation}
and in particular at the epoch of string formation we have
\begin{equation}
\left(\frac{F_s}{F_m}\right)_{t_s}\sim\frac{k_s}{k_m}\frac{\eta^2_s}{\eta^2_m}\ln{\frac{m_{Pl}}{\eta_s}}\,,
\end{equation}
so now $F_s$ is initially sub-dominant, except if we happen to have $\eta_s\sim\eta_m$. Moreover, given that $L\propto t$ with a proportionality factor not much smaller than unity, while monopoles evolve freely (before the effect of the strings is important), the ratio will only grow logarithmically, and so the effect of the strings might not be felt for a very long time. Specifically this should happen at an epoch
\begin{equation}
\frac{t_r}{t_s}\sim\frac{\eta_s}{m_{Pl}}\exp\left(\frac{\eta^2_m}{\eta^2_s}\right)\,;
\end{equation}
naturally this is exactly the same as Eq. (\ref{globstrd}).
While strings are dynamically unimportant we have $v=const.$ as for free global monopoles, and even when they become important the velocity will still grow very slowly (logarithmically) towards unity. Hence, although the ultimate asymptotic result is the same in both cases ($v\to1$), the timescale involved should be much larger in the global case.
Moreover, recall from \cite{MONOPOLES} that in the gauge case the monopole velocities before string formation were necessarily non-relativistic and indeed extremely small, and it is the strings that make them reach relativistic speeds. In the global case this is usually not so, as the monopoles will typically have significant velocities while they are free (although their magnitude depends on model parameters that need to be determined numerically). We can therefore say that the impact of the forces due to the strings is vastly smaller in the global case. All this is due to the fact that the force due to the strings is inversely proportional to $L$ instead of being constant.
As for the evolution of $L$, at early times (before strings become important) we have $L\propto t$ just like for free global monopoles. Eventually the strings push the monopole velocities close(r) to unity, and in this limit the $L$ evolution equation looks just like that of the local case with the particular choices $\lambda_\star=\lambda$ and $Q_\star=c$. However, we must be careful about timescales, since here the approach to $v=1$ is only asymptotic.
Bearing this in mind, for $\lambda<3/4$ we will still have linear scaling solution
\begin{equation}
L=\frac{c}{3-4\lambda}t\,;
\end{equation}
notice that this is exactly the ultra-relativistic ($v=1$) scaling solution we have described in \cite{MONOPOLES}---cf. Eqs. (70-71) therein. Therefore, if the network happened to be evolving in the other (subluminal) linear scaling solution, the only role of the strings would be to gradually switch the evolution to the ultra-relativistic branch. This scaling law can in principle occur both in the radiation and in the matter eras, and it follows that in this case the monopoles will not disappear at all, but will continue to scale indefinitely (with a constant number per Hubble volume).
In this case an analysis in terms of the length of each string segment would have to take into account an initial distribution of lengths. Moreover, since the initial velocities need not be ultrarelativistic, the larger segments (which will have smaller coherent velocities) should grow at early times, while the smaller ones will shrink. The decay time will obviously depend on the initial size. We note that such a behavior has been seen in numerical simulations of semilocal strings \cite{SEMILOCALSIM}.
The alternative scaling solution, for any epoch with $\lambda\ge3/4$, has $L$ growing as
\begin{equation}
L\propto a^{4/3}\,,
\end{equation}
and again corresponds to the annihilation and disappearance of the monopole network, which would occur at
\begin{equation}
\frac{t_a}{t_r}\sim\frac{1}{\left[(1-\lambda)r\right]^{3/(4\lambda-3)}}\,
\end{equation}
for an initial lengthscale $L_s=rt_r$; given that we now expect $r$ to be not much smaller than unity, this is likely to be very soon after the onset of this scaling regime---that is, very soon after the velocities become ultra-relativistic. (Notice that in the local case the monopoles became ultra-relativistic very soon after the strings formed so $t_c\sim t_s$, but in the present case $t_c\gg t_s$, except if the free monopoles were already evolving in the ultra-relativistic branch.)
\section{\label{topology}Topological Stability}
There is in principle a further energy loss mechanism that we have not discussed so far. These strings are not topologically stable: they can break, producing a monopole-antimonopole pair at the new ends. This process was first discussed in \cite{VILENKIN} and more recently in \cite{LEBLOND}.
This is a tunneling process (usually called the Schwinger process), and its probability per unit string length and per unit time has been estimated to be \cite{VILENKIN,Monin1,Monin2}
\begin{equation}
P\sim \frac{\mu}{2\pi}\exp{\left(-\pi\frac{m^2}{\mu}\right)}\,.
\end{equation}
By substituting Eqs.(\ref{mmass}-\ref{smass}) one notices that in typical GUT models the exponent is very large, say $\sim10^3(\eta_m/\eta_s)^2$. This implies that this probability is negligibly small, even in the most favorable case where the strings and monopoles form at the same energy scale. Therefore, for all practical purposes such strings (whether they are local or global) can be considered stable.
Nevertheless, it is conceivable that in string-inspired models such as brane inflation there are models (or regions of parameter space in some models) where the exponent is not much larger than unity. Here we briefly discuss what happens in this case. A different scenario, where the bead- and string-forming phase transitions are separated by an inflationary epoch (and thus the beads are initially outside the horizon), has been recently discussed in \cite{LEBLOND}.
Again it is simplest to look at the effect of this additional energy loss term on an individual string segment. It's straightforward to show that Eq. (\ref{dynforsegments}) now becomes
\begin{equation}
\frac{d\ell}{dt}=-\frac{\lambda_\star}{t}\ell-Q_\star-P\ell^2\,. \label{dynbreaking}
\end{equation}
Assuming that the exponent in the Schwinger term is of order unity, it's easy to see that for large segments (that is those much larger than the string thickness $\delta_s$) the Schwinger term is the dominant one, and it quickly breaks up the segment into smaller ones. On average (and neglecting the standard damping and energy loss terms) the length of a given segment is expected to shrink as
\begin{equation}
\ell(t)=\frac{\ell_s}{1+P\ell_s(t-t_s)}\,
\end{equation}
and the epoch (relative to that of string formation) at which this process would, on average, have reduced the segment to a size $\delta_s$ is
\begin{equation}
\frac{t_\delta}{t_s}=1+\frac{1}{e}\frac{\eta_s}{m_{Pl}}
\end{equation}
which is typically less than a Hubble time. As the segments shrink, the standard damping and especially the radiative energy loss terms will become more important, and we would eventually switch to a solution of the type discussed earlier, cf. Eq. (\ref{timedecay0}). The segment size when this happens will be very small, so $s\ll1$ in Eq. (\ref{timedecay1}), and these segments should therefore decay in much less than a Hubble time.
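For completeness, the shrinking law quoted above follows from keeping only the Schwinger term in Eq. (\ref{dynbreaking}) and separating variables,
\begin{equation}
\frac{d\ell}{\ell^2}=-P\,dt\quad\Longrightarrow\quad\frac{1}{\ell(t)}=\frac{1}{\ell_s}+P\left(t-t_s\right)\,,
\end{equation}
which inverts to the average decay law given above.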
\section{\label{aside}A digression: vortons}
Since vortons \cite{VORTONS} are to a large extent point-like objects decoupled from the string network that produced them, it would be interesting to use the monopole model to describe the evolution of their overall density.
The most naive approach would be simply to say that there are no inter-vorton or other forces (other than friction and Hubble damping) affecting their dynamics. This immediately implies that there will be no acceleration term in the velocity equation ($k=0$), and hence vortons will necessarily be non-relativistic.
At the same naive level there should be no energy losses to account for in the lengthscale evolution equation. Classically this is justifiable by saying that we can assume that all vortons form around the same epoch (related to the superconducting phase transition, which need not be the same as the string-forming phase transition) and that once formed a vorton stays there forever (or at least has a very long lifetime)---these issues are discussed in \cite{VSH}. Of course we're not addressing the issue of the vorton density at formation (which would set the initial conditions for the model). A different issue which we are also neglecting is that there could well be an energy loss term due to quantum effects.
With these caveats in mind, it follows from our previous work \cite{MONOPOLES} that the evolution equations will be
\begin{equation}
3\frac{dL}{dt}=(3+v^2)HL+v^2\frac{L}{\ell_f}
\end{equation}
and
\begin{equation}
\frac{dv}{dt}=-(1-v^2)v\left(H+\frac{1}{\ell_f}\right)\,.
\end{equation}
Note that here $L$ is a characteristic vorton lengthscale, which is related to the vorton density through
\begin{equation}
\rho_v=\frac{M_v}{L^3}\sim\frac{\eta^2_s\ell_v}{L^3}
\end{equation}
where $\eta_s$ is the string symmetry breaking scale ($\mu\sim\eta^2_s$) and $\ell_v$ is an effective vorton length (not necessarily its radius/size, since there will be a significant amount of energy associated with the charge/current). For simplicity (and consistently with the above approximations) we will further assume that this length is a constant.
The simplest possibility we can consider is the case where friction is negligible ($\ell_f\to\infty$). The analysis of this case is trivial, and it leads to
\begin{equation}
L\propto a\,,\qquad v\propto\frac{1}{a}\propto\frac{1}{L}\,.
\end{equation}
This of course leads to $\rho_v\propto a^{-3}$, which is in fact what has been found in previous analyses, both for generic vortons \cite{BRANDENBERGER} and for the specific case of chiral vortons \cite{CARTERDAVIS}. In either case there are several possibilities for the initial density, depending on details of the underlying particle physics model. There is no published analysis of vorton scaling velocities, although the effect of superconducting currents on loop velocities has been discussed in \cite{MSVORTONS}, and the above scaling law for velocities is well known for defects in condensed matter systems.
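These scaling laws are straightforward to confirm by integrating the two equations above; a dimensionless sketch for the radiation era ($a\propto t^{1/2}$, with arbitrary initial data) follows.
\begin{verbatim}
# Frictionless vorton scaling: check L ~ a and v/sqrt(1-v^2) ~ 1/a.
import numpy as np

lam, v0 = 0.5, 0.3
ts = np.geomspace(1.0, 1e4, 200000)
L, v = 1.0, v0
for i in range(len(ts) - 1):
    t, dt = ts[i], ts[i + 1] - ts[i]
    H = lam / t
    L += (3 + v ** 2) * H * L * dt / 3
    v += -(1 - v ** 2) * v * H * dt

a_ratio = (ts[-1] / ts[0]) ** lam
print("L / a          :", L / a_ratio)                # constant of order 1
print("v * a          :", v * a_ratio)                # tends to the value
print("v0/sqrt(1-v0^2):", v0 / np.sqrt(1 - v0 ** 2))  # printed here
\end{verbatim}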
However it's also important to consider the role of friction, which in this context should not have the same form as the standard case. This is discussed in \cite{CHUDNOVSKY} and also in the Vilenkin and Shellard textbook \cite{VSH}. Because a superconducting string creates a magnetic field
\begin{equation}
B_s(r)\sim2\frac{J}{r}
\end{equation}
($J$ being the current), a string moving through a plasma is shielded by a magnetocylinder containing the magnetic field, which the plasma cannot penetrate. The magnetocylinder radius is
\begin{equation}
r_s\sim\frac{J}{v\sqrt{\rho_p}}\,,
\end{equation}
where $\rho_p$ is the plasma density, and this leads to a friction force per unit length
\begin{equation}
F_p\sim Jv\sqrt{\rho_p}
\end{equation}
which in our notation corresponds to a friction lengthscale
\begin{equation}
\ell_f^{-1}\sim\frac{J\sqrt{\rho_p}}{\eta^2_s}\,.
\end{equation}
Note that the current can have a maximum value
\begin{equation}
J_{max}\sim e\sqrt{\mu}\sim \eta_s\,.
\end{equation}
We can now study the effect of this friction term on the above evolution equations. It's easy to see that the scaling of the characteristic vorton scale is unchanged, in other words we still have $L\propto a$. However, there is an effect on the velocity equation, which now takes the form
\begin{equation}
\frac{dv}{dt}=-(1-v^2)v\left(H+\frac{J\sqrt{\rho_p}}{\eta^2_s}\right)\,.
\end{equation}
Now for a universe with a critical density
\begin{equation}
\rho_p=\frac{3H^2}{8\pi G}
\end{equation}
so in fact the two terms are proportional to the Hubble parameter, and which of the two dominates will depend on the strength of the current $J$. For a generic expansion law $a\propto t^\lambda$ we can write the scaling law for the vorton velocities as
\begin{equation}
v\propto t^{-\beta}
\end{equation}
where
\begin{equation}
\beta=\lambda\left(1+\sqrt{\frac{3}{8\pi}}\frac{Jm_{Pl}}{\eta^2_s}\right)\,.
\end{equation}
For small currents we have $v\propto t^{-\lambda}\propto a^{-1}$ as before, but the effect of the plasma is to reduce the velocities at a much faster rate. Note that for a maximal current the scaling exponent will be approximately $(m_{Pl}/\eta_s)$, which is much larger than unity.
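For orientation, the sketch below evaluates $\beta$ for a few current strengths (Planck units; $\eta_s$ is an arbitrary example scale).
\begin{verbatim}
# Velocity damping exponent beta versus current J (radiation era, lam = 1/2).
from math import sqrt, pi

lam, eta_s, m_pl = 0.5, 1e-4, 1.0
for frac in (0.0, 0.01, 0.1, 1.0):      # J as a fraction of J_max ~ eta_s
    J = frac * eta_s
    beta = lam * (1 + sqrt(3 / (8 * pi)) * J * m_pl / eta_s ** 2)
    print(f"J/J_max = {frac:4.2f}   beta = {beta:.3g}")
\end{verbatim}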
It would be interesting to test this behavior numerically, as well as to understand the magnitude and form of energy loss terms due to quantum effects.
\section{\label{concl}Summary and outlook}
We have applied a recently developed analytic model for the evolution of monopole networks \cite{MONOPOLES} to the case of hybrid networks of monopoles attached to a single string, and also to the case of vortons. We discussed scaling solutions for both local and global hybrid networks, generically confirming the expectation that the network will annihilate shortly after the strings form, but also highlighting the fact that there are circumstances where the network can be long-lived---although it has to be said that we do expect this to be the exception rather than the rule.
Studying the evolution of hybrid defects may seem relatively uninteresting because of their brief existence or even due to the topological instability discussed in Sect. \ref{topology}. However, this study has a second motivation. We have studied local strings attached to local monopoles and global strings attached to global monopoles. These are two simple test cases before tackling a third, more interesting case: local strings attached to global monopoles. This corresponds (modulo a few subtleties) to the case of semilocal strings \cite{SEMILOCAL1,SEMILOCAL2}, which we will address in a subsequent publication.
\begin{acknowledgments}
I am grateful to Ana Ach\'ucarro for valuable discussions and collaboration on related issues, and to Petja Salmi and Jon Urrestilla for many discussions in the early stages of this work. The work of C.M. is funded by a Ci\^encia2007 Research Contract, supported by FSE and POPH-QREN funds. I also thank the Galileo Galilei Institute for Theoretical Physics (where part of this work was done) for the hospitality and the INFN for partial support.
\end{acknowledgments}
\section{\label{sec:Intro}Introduction}
Cold molecules currently attract great attention due to their potential application for precision measurements~\cite{Hudson,Krems:2009}, quantum information schemes~\cite{DeMille,Krems:2009}, degenerate quantum gases with complex interactions~\cite{Baranov,Krems:2009}, and for cold chemistry~\cite{Weck:2006,Bodo:2004,Willitsch:2008,Smith:2008,Krems:2005,Krems:2009}. Besides the approach of forming cold molecules out of a sample of ultracold trapped atoms~\cite{Masnou:2001,Jones:2006,Chin:2009,Krems:2009}, recent experimental efforts mainly focus on developing techniques of slowing or filtering, cooling, and trapping molecules produced in effusive or nozzle beams. Well established techniques are buffer gas cooling \cite{Doyle:1995,Krems:2009}, the deceleration of beams of polar molecules using time-varying electric fields~\cite{Bethlem1:1999,Hutson:2006,Krems:2009}, as well as filtering of polar molecules out of an effusive source using static or time-varying electric fields~\cite{Rangwala:2003,Junglen:2004}. Recently, the method of decelerating supersonic molecular beams has been extended to magnetic~\cite{VanHaecke:2007,Narevicius:2008} and optical fields~\cite{Fulton:2006}. Alternative routes to producing cold molecules have been demonstrated utilizing the kinematics in elastically or reactively colliding molecular beams~\cite{Elioff:2003,liu:2007,Smith:2008}.
Furthermore, producing beams of slow and cold atoms and molecules by mechanical means has been demonstrated~\cite{Gupta:1999,Gupta:2001,Narevicius:2007}. In particular the technique of translating a supersonic jet to low longitudinal velocities by means of a rapidly counter-rotating nozzle, pioneered by Gupta and Herschbach, holds the promise of producing cold, slow and intense beams of nearly any molecular species available as a gas at ambient temperatures~\cite{Gupta:1999, Gupta:2001}. Attempts to improve the original arrangement have been made by the groups of H.-J. Loesch in Bielefeld and of M. DeKieviet in Heidelberg~\cite{LangThesis:2002}. By controlling the sense and speed of rotation of the nozzle as well as by using the seeded beams technique, tunable beam velocities ranging from thousands down to a few tens of meters per second are feasible. Typically, the translational and internal temperatures in the moving frame of the slowed beam are in the range of a few Kelvin and a few tens of Kelvin, respectively. Due to the extremely simple concept, this technique can be realized by a relatively simple mechanical device. Therefore this versatile source of cold molecules may be particularly useful for novel studies of molecular reaction dynamics at low collision energies in the gas phase or with surfaces~\cite{Weck:2005,Weck:2006,Bodo:2004,Liu:2004}.
The principle of operation relies on the Galilei transformation of particle velocities in the moving frame of the rotating nozzle into the resting laboratory frame. As the nozzle moves backwards with respect to the molecular beam motion, the beam velocity $v_s$ is reduced to $v_0=v_s-v_{rot}$. In this way, Gupta and Herschbach succeeded in slowing beams of krypton (Kr) and xenon down to 42 and 59 m/s, respectively, at beam temperatures down to 1.5 and 2.6 K, respectively~\cite{Gupta:2001}. In their prototype setup they used a tapered aluminium barrel with a hollow bore as a rotor. The special aluminium alloy was chosen for its favorable ratio of tensile strength to mass density, allowing high rotational speeds. However, machining such a rotor barrel requires considerable effort. The gas injection was realized by inserting a polymer needle into the rotor. However, erosion of this type of sealing and the resulting gas leakage was found to be a serious limitation. The nozzle orifice was fabricated by gluing a stainless steel disk with a 0.1\,mm aperture close to the tip of the rotor. In this case, swatting of slowly emerging molecules by the rotor arm as it comes back after one full revolution may be an issue at high rotational speeds. Furthermore, a general drawback of producing slow beams using a moving source is the fact that the flux of molecules $\dot{N}_0$ on the beam axis drops sharply with decreasing laboratory beam velocity $v_0$ and with increasing distance $d$ of the detector from the nozzle, $\dot{N}_0(v_{0}, d)\propto (v_0/d)^2$. This is a consequence of the beam spreading in the transverse directions, which becomes increasingly important as $v_0$ decreases and hence the flight time increases. The aim of the improved setup presented in this paper is to remedy the drawbacks mentioned above.
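As a minimal numerical illustration of these two relations (a sketch, not the analysis code used for the measurements), the following snippet evaluates $v_0=v_s-v_{rot}$ and the relative on-axis flux $\propto v_0^2$ at fixed detector distance; the rotor radius and the Ar beam velocity are the values quoted in Secs.~\ref{sec:theory} and \ref{sec:Results}.

```python
# Illustrative sketch (not the authors' code): lab-frame velocity from the
# counter-rotating nozzle, v0 = v_s - v_rot, and the geometric flux scaling
# N0_dot ~ (v0/d)^2; at fixed detector distance d, the d-dependence cancels.
import numpy as np

R = 0.19          # rotor radius in m (value given in Sec. III)
v_s = 579.0       # supersonic Ar beam velocity in m/s at T0 = 323 K

for f_rot in (0.0, 100.0, 200.0, 300.0):   # rotation frequency in Hz
    v_rot = 2.0 * np.pi * R * f_rot        # speed of the rotor tip
    v0 = v_s - v_rot                       # decelerated lab-frame velocity
    flux_rel = (v0 / v_s) ** 2             # on-axis flux relative to f_rot = 0
    print(f"f_rot = {f_rot:5.0f} Hz: v0 = {v0:6.1f} m/s, "
          f"relative flux = {flux_rel:.3f}")
```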
\section{\label{sec:Setup}Experimental setup}
\begin{figure}
\includegraphics[width=6cm]{Fig1_SetupSchematic.eps}
\caption{\label{fig:schematic} Schematic overview of the experimental arrangement used for characterizing the rotating nozzle molecular beam source.}
\end{figure}
The experimental arrangement used for characterizing the new rotating nozzle setup is schematically depicted in Fig.~\ref{fig:schematic}. The upper part represents the moving molecular beam source including a hollow rotor and a skimmer with 1\,mm opening diameter. The nozzle chamber is pumped by an 8000\,l/s oil diffusion pump backed by roots and rotary vane pumps. The typical background gas pressure during operation with argon (Ar) at $p_0=0.3$\,bar nozzle pressure is $p_{bg}\approx 10^{-4}$\,mbar. At a distance of about 20\,mm behind the nozzle, the molecular beam enters an intermediate chamber through the skimmer. The intermediate chamber is pumped by a 600\,l/s turbo pump providing a background gas pressure $p_{int}\approx 5\times 10^{-9}$\,mbar during operation. The intermediate chamber is connected to a detector chamber pumped by a 150\,l/s turbo pump to maintain a background pressure $p_{detect}\lesssim 10^{-8}$\,mbar.
For the purpose of guiding beams of polar molecules, a set of four parallel steel rods is installed around the beam axis between the skimmer and a commercial quadrupole mass spectrometer (QMS) (Pfeiffer QMS200) which we use as a detector (see Sec.~\ref{sec:Guide}). The mounting discs used for holding the rods also serve as apertures for differential pumping between the intermediate chamber and the detector chamber. The total flight distance from the nozzle to the detector amounts to $\ell =35$\,cm. Since the positions of the quadrupole guides and the QMS detector cannot be adjusted, the beam axis is initially aligned by sight by adjusting the position of the skimmer. For additional characterization of the beam, a Pitot tube is installed about $5\,$mm behind the ionizer of the QMS detector. It consists of a cylindrical chamber 4\,cm in diameter and 10\,cm in length having a conical aperture on the side close to the QMS and a cold cathode ion gauge on the opposite side as an end cap. When a continuous molecular beam (nozzle at rest) is directed into the detector chamber, the residual gas pressure inside the Pitot tube rises until equilibrium is reached between the incoming flux and the outgoing flux of molecules effusing out of the Pitot tube aperture according to its molecular flow conductance. Thus, from the difference between the pressures inside the Pitot tube and in the QMS chamber, $\Delta p$, the flux of molecules on the beam axis, $\dot{N}_0$, can be inferred according to $\dot{N}_0=\Delta p A a \bar{v}/(4k_{\mathrm{B}}T)$. Here $A$, $a$, and $\bar{v}$ denote the area of the entrance channel into the Pitot tube, a scaling parameter to account for the channel length, and the mean velocity of the molecules at ambient temperature $T$, respectively.
The on-axis beam density, which is obtained from the molecule flux using $n=\dot{N}_0/(v_0 A)$, is compared to the one determined by the QMS signals in Sec.~\ref{sec:Results}.
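A hedged sketch of this Pitot-tube evaluation is given below; the pressure rise $\Delta p$, the aperture area $A$, the channel parameter $a$ and the beam velocity $v_0$ are assumed example values rather than measured ones.

```python
# Sketch of the Pitot-tube evaluation described above; all inputs marked
# "assumed" are placeholders, not values from the experiment.
import numpy as np

k_B = 1.380649e-23            # J/K
T = 300.0                     # ambient temperature in K
m = 39.95 * 1.66054e-27       # Ar mass in kg
delta_p = 1e-7 * 100.0        # pressure difference in Pa (assumed 1e-7 mbar)
A = np.pi * (2e-3) ** 2       # aperture area in m^2 (assumed 4 mm opening)
a = 1.0                       # channel-length scaling parameter (assumed)
v0 = 400.0                    # beam velocity in m/s (assumed)

v_bar = np.sqrt(8.0 * k_B * T / (np.pi * m))        # mean thermal velocity
N_dot = delta_p * A * a * v_bar / (4.0 * k_B * T)   # on-axis molecular flux
n = N_dot / (v0 * A)                                # on-axis beam density
print(f"N_dot = {N_dot:.2e} 1/s, n = {n:.2e} m^-3")
```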
\begin{figure}
\includegraphics[width=7cm]{Fig2_setup.eps}
\caption{\label{fig:setup} Overview (a) and close-up views (b,c) of the experimental setup. (a) The nozzle orifice is mounted on one tip of the hollow rotor and produces molecule pulses along the beam axis each time it passes by the skimmer. The gas is injected into the rotor through a rotary feedthrough from below. The motor is mounted to a massive brass mount on top of the rotor, to which it is connected by a flexible coupler (silicone tube). (b) Sectional view of the central part of the setup. (c) Close-up view of the rotor tip, which contains a bored thin-walled titanium ferrule as a nozzle.}
\end{figure}
The experimental realization of our rotating source of cold atoms and molecules is depicted in Fig.~\ref{fig:setup} (a). The following main modifications with respect to the work of Gupta and Herschbach have been implemented: The rotor is made of a thin carbon fiber (CFK) tube with 4 mm outer diameter and 38 cm in length. CFK is superior to any metallic material regarding the ratio of tensile strength to density by at least a factor of $4$. Moreover, the small mass of the CFK tube makes it simpler to suppress detrimental mechanical vibrations that may occur due to slight imbalance of the rotor at high speeds of rotation. The CFK tube is sealed on both ends by gluing thin-walled (0.25\,mm) titanium grade 5 ferrules as end-caps into the inner surface of the CFK tube using special epoxy for CFK (Fig.~\ref{fig:setup} (c)). One of the ferrules has a borehole $d=0.1\,$mm in diameter at a distance of 0.5\,mm from the closed end, thus forming a well defined nozzle aperture. A simple alternative that we also tried is capping the CFK tube by gluing circular pieces of Kapton foil to the front and back sides of the tube. Capping the CFK tube this way has proven to be stable enough to withstand the high centrifugal forces acting at rotational speeds up to $f_{rot}=350$\,Hz (21000 rpm). The nozzle orifice was fabricated by inserting a $50\,\mu$m wire half way in between the CFK front side and the Kapton foil prior to gluing. After the epoxy has hardened, the wire is pulled out and an approximately 1\,mm long channel remains. This method makes it easy to realize small nozzle channels of diameters $<100\,\mu$m, which limits the gas load and thus the required pumping speed. Moreover, using this method of fabrication the distance of the nozzle from the tip of the rotor is kept to a minimum, thereby suppressing swatting of slow molecules by the rotor. However, the expansion out of a long nozzle channel is unfavorable compared to a thin-walled orifice as far as adiabatic cooling conditions are concerned. Besides, nozzles fabricated using this procedure vary considerably with respect to length and shape of the nozzle channels. For all measurements presented in this work, bored thin-walled titanium ferrules are used.
The CFK tube is inserted and glued into the core of the rotor consisting of an aluminium or brass cylinder (Fig.~\ref{fig:setup} (b)). The latter has a borehole along its axis for inserting the bored shaft of a rotary feedthrough from below and the shaft of a flexible coupler attached to a motor (faulhaber-group.com) from above. In order to avoid any imbalance it is important to perfectly align the motor axis and the feedthrough axis. This is ensured by inserting the two axes into the same through-hole in the rotor core. The rotor including the metal core is first balanced statically by suspending the rotor core between two needles at the center of gravity and by abrading CFK material from the heavier side of the rotor until it is perfectly balanced. In a second step, the rotor is dynamically balanced by adding screws to the rotor core at different places and by monitoring the vibration spectrum using acceleration sensors attached to the setup. Using this procedure, vibrations at high rotation speeds resulting from the rotor are small compared to vibrations induced by the motor itself. Therefore, additional masses are attached to the motor (Fig.~\ref{fig:setup} (b)), which reduce vibrations at frequencies $\gtrsim 100$\,Hz.
Gas injection into the rotor is realized using a commercial rotary feedthrough with a ferrofluidic seal. The bored shaft of the feedthrough is inserted into the rotor core and sealed with an O-ring. Ferrofluid-sealed feedthroughs (ferrotec.com) feature very high leak tightness ($\lesssim 10^{-11}\,$mbar\,l/s) at low friction torque.
The rotor axis is aligned with the motor axis by connecting the rotary feedthrough and the motor to a U-shaped brass construction. This brass mount is split into two parts with a rubber layer in between acting as a damping element for suppressing vibrations. The brass mount is fixed to a triangular massive metal plate supported by neoprene dampers. In our new setup, the neoprene damper positioned underneath the skimmer resides on a vertical translation stage and the remaining two dampers reside on two horizontal translation stages. The translation stages are actuated by turning knobs outside of the vacuum chamber, which allows full adjustment of the nozzle position with respect to the beam axis even during operation (cf. Fig.~\ref{fig:TwoPeaks}).
In our present setup the maximum speed of rotation is limited to about 350\,Hz (21000\,rpm). The limiting factor so far is mechanical vibrations, which rapidly increase in amplitude at frequencies $\gtrsim 300\,$Hz. The origin of these vibrations is presumably a slight imbalance of the rotor as well as vibrations caused by the motor. These vibrations are found to reduce the lifetime of the ferrofluidic seal of the feedthrough, which is specified for rotation frequencies $\lesssim 166\,$Hz (10000\,rpm). The motor as well as the CFK rotor, however, are able to withstand much higher frequencies, up to 970 and 800\,Hz, respectively.
\section{Analysis of time of flight data}
\label{sec:theory}
\begin{figure}
\includegraphics[width=8cm]{Fig3_TwoPeaks.eps}
\caption{\label{fig:TwoPeaks} Illustration of the dependence of measured signal (right side) on the alignment of the rotating nozzle setup with respect to the beam axis (dotted line). (a) Proper nozzle alignment; When the rotor is set to a perpendicular position with respect to the beam axis, nozzle and skimmer are in line with the beam axis up to the detector. (b) Nozzle shifted above the beam axis; Two pulses are produced each time the nozzle intersects the beam axis at two slightly tilted rotor positions. (c) Nozzle shifted below the beam axis; One pulse is measured with reduced amplitude and width.}
\end{figure}
Although the expansion out of the rotating nozzle is continuous, the beam passing through the skimmer and entering the detector chamber is chopped.
Each time the nozzle comes close to the skimmer, part of the ejected particles travel along trajectories that match the beam axis, determined by the positions of skimmer, quadrupole guides, and QMS detector. Using the vertical and horizontal translation stages, the rotor position can be aligned during rotation of the nozzle to optimize the detector signal. A peculiarity of the rotating-nozzle arrangement is the fact that the signal traces as a function of time change in an asymmetric manner as the rotor is shifted horizontally across the beam axis, as illustrated in Fig.~\ref{fig:TwoPeaks}. The signal traces are recorded using Ar gas expanding out of the nozzle at a nozzle pressure $p_0=200$\,mbar and at a frequency of rotation of 10\,Hz. At this rotation frequency, the flight time of Ar atoms to the detector is short compared to the duration of the nozzle's passage in front of the skimmer. Thus, the recorded signal traces mainly reflect the effective slit opening function given by the skimmer with respect to the moving nozzle. In Fig.~\ref{fig:TwoPeaks} (a), the nozzle is properly aligned such that the signal trace features a slightly asymmetric trapezoidal shape.
As the nozzle is moved across the beam axis by 2\,mm, the measured signal splits into two maxima, as shown in Fig.~\ref{fig:TwoPeaks} (b).
In this geometry, particle trajectories matching the beam axis occur at two different angles of inclination of the rotor with respect to the beam axis.
Misalignment by 2\,mm in the other direction leads to a reduction of the signal width and amplitude, as shown in Fig.~\ref{fig:TwoPeaks} (c). The inclination of the signal envelopes in Fig.~\ref{fig:TwoPeaks} (a) and (b) results from the decreasing solid angle defined by the nozzle and the skimmer aperture as the nozzle recedes from the skimmer.
\begin{figure}
\includegraphics[width=9cm]{Fig4_OpeningFunc.eps}
\caption{\label{fig:Slit} Slit opening function measured at low rotation speeds of the nozzle (10\,Hz) (solid line). The dashed line represents a model curve fitted to the data, consisting of the sum of 3 Gaussian functions (light solid lines).}
\end{figure}
In order to extract the exact values of the beam velocity, temperature, and density from the measured signal traces, the effective slit opening function has to be characterized quantitatively and included into a model for the time-dependent density distribution of the molecular beam. The effective slit opening function for optimum horizontal nozzle alignment measured at low rotor frequency (10\,Hz) is plotted in Fig.~\ref{fig:Slit} (solid line).
At such low rotor frequency, longitudinal dispersion of the beam pulse during the flight time is negligible. This is verified by measuring the signal intensity with the rotor at rest as a function of the rotor position using a micrometer driven manipulator. The resulting curve has a shape identical to the time of flight curves at low rotor velocities. The dashed red line in Fig.~\ref{fig:Slit} represents a fitted model function consisting of the sum of 3 Gaussian functions,
\begin{equation}\label{eq:slit}
s_z(z)=\sum_{i=1}^{3} A_ie^{-\frac{(z-z_i)^2}{\Delta z_i^2}},
\end{equation}
where amplitude factors $A_i$, center positions $z_i$, and widths $\Delta z_i$ are determined by the fit to the experimental data. The density distribution in phase-space of a gas pulse produced by the rotating nozzle is modeled by the expression
\begin{equation}\label{phasespace}
f_0(\vec{r},\vec{v})=N\,f_r(\vec{r})\,f_v(\vec{v}),
\end{equation}
which is normalized to the total number of molecules per bunch, $N$.
The spatial distribution $f_r$ is assumed to be anisotropic according to
\begin{equation}\label{spatial}
f_r(\vec{r})=f_{\perp}(x,y)\,f_z(z).
\end{equation}
The distribution in transverse direction,
\begin{equation}\label{transverse}
f_{\perp}(x,y)=f_{\perp}^0\,e^{-\frac{x^2+y^2}{\Delta r^2}},
\end{equation}
is normalized to $f_{\perp}^0=(\pi\Delta r^2)^{-1}$. The distribution along the beam axis ($z$-axis) is obtained from the slit opening function (Eq.~(\ref{eq:slit})) by rescaling the z-coordinate according to $z\rightarrow z/q$, where $q=|v_s/v_{rot}|$ denotes the ratio between beam velocity $v_s=\sqrt{\kappa/(\kappa-1)}\hat{v}$ and the velocity of the rotor tip, $v_{rot}=2\pi R f_{rot}$. Here, $\kappa=5/3$ for monoatomic gases, $\hat{v}=\sqrt{2k_{\mathrm{B}}T_0/m}$ is the most probable velocity of the molecules inside the nozzle at a temperature $T_0$, and $R=19\,$cm stands for the rotor radius. Thus,
\begin{equation}\label{longitudinal}
f_z(z)=f_z^0/q\,\sum_{i=1}^3 A_ie^{-\frac{(z-z_i q)^2}{(\Delta z_iq)^2}},
\end{equation}
where the amplitude factor $f_z^0$ is given by $f_z^0=(\sqrt{\pi}\sum_{i=1}^3 A_i\Delta z_i)^{-1}$. The total number of molecules per pulse (integral over space of $f_r(\vec{r})$) is a function of speed of rotation,
$N=\dot{N}\Delta t$, where $\dot{N}=A\,p_0/(k_B T_0)\,\hat{v}\,\sqrt{2\kappa/(\kappa+1)}((\kappa+1)/2)^{1/(1-\kappa)}$~\cite{Pauly:2000} is the flux of molecules out of the nozzle orifice of area $A=\pi (d/2)^2$ at pressure $p_0$ and $\Delta t =\Delta z_{slit}/v_{rot}\approx 2\,$cm$/v_{rot}$.
In other words, as the nozzle velocity $v_{rot}$ falls below the beam velocity $v_s$, the molecule bunches become longer than the spatial width of the slit opening function $\Delta z_{slit}$ and the number of molecules per bunch increases accordingly.
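For illustration, the following sketch evaluates $\dot{N}$ and $N=\dot{N}\Delta t$ as printed above for typical operating parameters (Ar at 400\,mbar through a 0.1\,mm orifice); the inputs are representative rather than fitted values.

```python
# Sketch evaluating the nozzle flux N_dot and the molecules per bunch
# N = N_dot * dt, implementing the formulas quoted above.
import numpy as np

k_B = 1.380649e-23
kappa = 5.0 / 3.0                    # monoatomic gas
T0 = 323.0                           # nozzle temperature in K
m = 39.95 * 1.66054e-27              # Ar mass in kg
p0 = 400e2                           # nozzle pressure in Pa (400 mbar)
d = 0.1e-3                           # nozzle diameter in m
A = np.pi * (d / 2.0) ** 2           # nozzle orifice area

v_hat = np.sqrt(2.0 * k_B * T0 / m)  # most probable velocity in the nozzle
N_dot = (A * p0 / (k_B * T0) * v_hat * np.sqrt(2.0 * kappa / (kappa + 1.0))
         * ((kappa + 1.0) / 2.0) ** (1.0 / (1.0 - kappa)))

R = 0.19                             # rotor radius in m
for f_rot in (50.0, 150.0, 300.0):
    v_rot = 2.0 * np.pi * R * f_rot
    dt = 0.02 / v_rot                # slit width ~2 cm divided by tip speed
    print(f"f_rot = {f_rot:5.0f} Hz: N per bunch = {N_dot * dt:.2e}")
```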
The velocity distribution is chosen according to the most frequently used ellipsoidal drifting Maxwellian model $f_v(\vec{v})=f_{v_{\perp}}(v_x,v_y)\times f_{vz}(v_z)$, where
\begin{equation}\label{velocitydist}
f_{v_{\perp}}=f_{v_{\perp}}^0\,e^{-\frac{v_{\perp}^2}{\Delta v_{\perp}^2}},\,\,\mathrm{and}\,\,f_{vz}=f_{vz}^0\,e^{-\frac{(v_z-v_0)^2}{\Delta v_z^2}},
\end{equation}
with normalization constants $f_{v_{\perp}}^0=(\pi\Delta v_{\perp}^2)^{-1}$ and $f_{vz}^0=(\sqrt{\pi}\Delta v_z)^{-1}$~\cite{Scoles:1988, Pauly:2000}. The widths of the velocity distributions, $\Delta v_{\perp}$ and $\Delta v_{z}$, are related to the transverse and longitudinal beam temperatures $T_{\perp}$ and $T_{\|}$ by $T_{\|,\perp}=m \Delta v_{z,\perp}^2 /(2 k_B)$. While $T_{\|}$ results from adiabatic cooling of the gas in the jet expansion, $T_{\perp}$ is an approximate measure of the transverse beam divergence. Typical values of $T_{\|}$ attainable in free jet expansions range between $T_{\|}\sim 1$\,K and $T_{\|}\sim 10$\,K, while the transverse beam divergence corresponds to $T_{\perp}\sim 300\,$K.
During free expansion, in the absence of collisions and external forces, the distribution function remains unchanged according to Liouville's theorem. The time evolution is given by
\begin{equation}\label{phasespacetime}
f(\vec{r},\vec{v},t)=f_0(\vec{r}-\vec{v}t,\vec{v}).
\end{equation}
The spatial density as a function of time is then obtained by integrating over velocities,
\begin{equation}\label{vintegral}
n(\vec{r},t)=\int d^3v f(\vec{r},\vec{v},t).
\end{equation}
The analytic solution of this expression is given by
\begin{equation}\label{eq:densitytime}
n(\vec{r},t)=N n_{\perp}(x,y) n_z(z),
\end{equation}
where
\begin{equation}
n_{\perp}=\pi\left(\frac{1}{\Delta v_{\perp}^2}+\frac{t^2}{\Delta r^2}\right)^{-1}\,e^{-\frac{x^2+y^2}{\Delta r^2+\Delta v_{\perp}^2t^2}}
\end{equation}
and
\begin{equation}\label{eq:zdensitytime}
n_{z}=\sqrt{\pi}\sum_{i=1}^3 A_i\left(\frac{1}{\Delta v_z^2}+\frac{t^2}{\Delta z_i^2}\right)^{-1/2}\,e^{-\frac{(z-z_i-v_0 t)^2}{\Delta z_i^2+\Delta v_z^2t^2}}.
\end{equation}
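A minimal numerical sketch of the on-axis density $n(0,0,\ell,t)$ following Eqs.~(\ref{eq:densitytime})--(\ref{eq:zdensitytime}) as printed, restricted to a single Gaussian slit component, is given below; the parameters $\Delta r$, $z_0$ and $\Delta z$ are assumed placeholders for the fitted slit values.

```python
# Evaluates the analytic on-axis density as printed above for one Gaussian
# slit component; dr and dz are assumed values (0.5 mm, 1 cm).
import numpy as np

def n_axis(t, ell, v0, dv_z, dv_perp, dr, z0, dz):
    """On-axis density (up to the overall factor N) at distance ell."""
    n_perp = np.pi / (1.0 / dv_perp**2 + t**2 / dr**2)
    n_z = (np.sqrt(np.pi) / np.sqrt(1.0 / dv_z**2 + t**2 / dz**2)
           * np.exp(-(ell - z0 - v0 * t)**2 / (dz**2 + dv_z**2 * t**2)))
    return n_perp * n_z

k_B, m = 1.380649e-23, 39.95 * 1.66054e-27
dv_z = np.sqrt(2.0 * k_B * 5.0 / m)       # T_par  = 5 K   -> ~46 m/s
dv_perp = np.sqrt(2.0 * k_B * 323.0 / m)  # T_perp = 323 K -> ~366 m/s

t = np.linspace(1e-4, 5e-3, 1000)         # flight times in s
trace = n_axis(t, ell=0.38, v0=400.0, dv_z=dv_z, dv_perp=dv_perp,
               dr=5e-4, z0=0.0, dz=0.01)
print(f"peak arrival time: {t[np.argmax(trace)]*1e3:.2f} ms")
```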
\begin{figure}
\includegraphics[width=8cm]{Fig5_Faltung.eps}
\caption{\label{fig:Faltung} Simulated density distributions on the beam axis as a function of time of flight at various speeds of rotation of the nozzle. Solid lines are calculated according to Eq.~(\ref{eq:densitytime}) based on the measured slit-opening function; dashed lines reflect the longitudinal velocity distribution rescaled to flight times.}
\end{figure}
Fig.~\ref{fig:Faltung} displays simulated Ar density distributions on the beam axis, $n(0,0,\ell,t)$, according to Eq.~(\ref{eq:densitytime}) for typical beam temperatures $T_{\|}=5\,$K and $T_{\perp}=323\,$K at various speeds of rotation (solid lines). Here, $\ell =0.38\,$m denotes
the total flight distance from the nozzle to the QMS detector. The nozzle speed is $v_{rot}=-179$\,m/s ($-150\,$Hz) in Fig.~\ref{fig:Faltung} (a), 24\,m/s
(20\,Hz) (b), 179\,m/s (150\,Hz) (c), and 358\,m/s (300\,Hz) (d). Besides shifting linearly to longer flight times at higher rotational speeds,
the distribution is broadened at rotation frequencies around 0\,Hz due to the effect of the slit opening function. At higher speeds, the distribution first narrows down
as the nozzle moves faster past the skimmer ($150\,$Hz), and then becomes broader again
as a consequence of longitudinal dispersion, \textit{i.\,e.} due to the fact that longer flight times allow the molecule bunches to longitudinally
broaden due to the finite beam temperature $T_{\|}$. The dashed lines represent the bare velocity distributions horizontally rescaled to flight times
according to $t=\ell /v_z$. Thus, at negative speeds of rotation (the molecular beam is accelerated) the measured signal is predominantly determined
by the shape of the velocity distribution, while at speeds around 0\,Hz the measured trace is strongly distorted by the slit-opening function. At higher rotation frequencies,
the effect of the slit function diminishes and the two distributions eventually converge. Therefore, at rotation frequencies $f_{rot}\gtrsim 200\,$Hz, Eq.~(\ref{eq:zdensitytime})
may be replaced by the simpler expression $n'(t)=A'/t^2\,\exp\left( -(s/t-v_0)^2/\Delta v_z^2\right)$ in the procedure of fitting the measured data.
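A possible implementation of this fitting procedure, sketched here with synthetic data in place of a measured trace, is:

```python
# Fit of the simplified high-frequency expression
# n'(t) = A'/t^2 * exp(-(s/t - v0)^2 / dv_z^2) to a synthetic TOF trace.
import numpy as np
from scipy.optimize import curve_fit

s = 0.38                                   # flight distance in m

def n_simple(t, amp, v0, dv_z):
    return amp / t**2 * np.exp(-((s / t - v0) / dv_z) ** 2)

t = np.linspace(0.5e-3, 6e-3, 300)
rng = np.random.default_rng(0)
data = n_simple(t, 1e-6, 250.0, 40.0)      # "true" parameters
data += 0.02 * data.max() * rng.normal(size=t.size)   # detector noise

(amp, v0, dv_z), _ = curve_fit(n_simple, t, data, p0=(1e-6, 300.0, 50.0))
k_B, m = 1.380649e-23, 39.95 * 1.66054e-27
print(f"v0 = {v0:.1f} m/s, T_par = {m * dv_z**2 / (2.0 * k_B):.2f} K")
```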
\section{\label{sec:Results}Beam parameters}
\begin{figure}
\includegraphics[width=8cm]{Fig6_TOFs.eps}
\caption{\label{fig:TOFs}(a) Measured time of flight density distributions of decelerated argon beams at various speeds of rotation of the nozzle.
(b) Same data as in (a), where the horizontal axis is rescaled to velocities.}
\end{figure}
Typical time of flight distributions measured with Ar at a nozzle pressure $p_0=400$\,mbar are displayed in Fig.~\ref{fig:TOFs} (a) for various speeds
of rotation. In these measurements, the Ar gas pressure inside the gas line was carefully adjusted such that the effective pressure at the nozzle remains
constant. Due to the centrifugal force, the nozzle pressure $p_0$ rises sharply with increasing speed of rotation $v_{rot}$ when the gas-line pressure $p_{0,gas\,line}$ is held constant, according to
$p_{0}=p_{0,gas\,line}\exp\left( m v_{rot}^2/(2 k_B T_0)\right)$~\cite{Gupta:1999, Gupta:2001}. Signal broadening due to the convolution with the slit opening function around 0\,Hz as well as broadening due to longitudinal dispersion at high rotation frequencies
are clearly visible (\textit{cf.} inset of Fig.~\ref{fig:TOFs} (a)).
The time of flight distributions
shown in Fig.~\ref{fig:TOFs} (a) are horizontally rescaled to velocity distributions according to $v=\ell /t$ and replotted in Fig.~\ref{fig:TOFs} (b). In this
representation of the data the signal broadening due to dispersion at high rotation frequencies cancels out. However, broadening due to the slit opening function still remains and masks the real velocity distribution. The absolute densities are estimated by scaling the QMS detector signal to the output of the pressure gauge in the detector chamber.
Besides, densities can be estimated from the beam intensity measurement using the Pitot tube as discussed in Sec.~\ref{sec:Setup}. However, the density values obtained in this way turn out to be about a factor of 10 higher than those from the QMS measurement. This may be due to a feedback effect of the cold cathode ion gauge on the vacuum inside the small volume of the Pitot tube, causing a nonlinear response of the vacuum gauge.
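The gas-line pressure required to keep the effective nozzle pressure constant follows directly from the centrifugal boost factor introduced above; the sketch below assumes Ar at the quoted nozzle temperature.

```python
# Centrifugal correction: to keep the effective nozzle pressure at
# p0_target, the gas-line pressure is reduced by exp(m v_rot^2 / (2 k_B T0)).
import numpy as np

k_B = 1.380649e-23
T0 = 323.0                        # nozzle temperature in K
m = 39.95 * 1.66054e-27           # Ar mass in kg
R = 0.19                          # rotor radius in m
p0_target = 400.0                 # desired effective nozzle pressure in mbar

for f_rot in (0.0, 150.0, 300.0):
    v_rot = 2.0 * np.pi * R * f_rot
    boost = np.exp(m * v_rot**2 / (2.0 * k_B * T0))
    print(f"f_rot = {f_rot:5.0f} Hz: gas-line pressure = "
          f"{p0_target / boost:6.1f} mbar (boost factor {boost:.2f})")
```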
\begin{figure}
\includegraphics[width=8cm]{Fig7_ArPeakDensity.eps}
\caption{\label{fig:PeakDens}Peak values of the time of flight argon density distributions as a function of the rotation frequency. Symbols are the measured values, the solid line represents the result of fitting (Eq.~(\ref{eq:densitytime})) to the data.}
\end{figure}
Strikingly, the Ar signal amplitude rapidly drops by about a factor of 30 when increasing the speed of rotation from 0\,Hz to 300\,Hz. In contrast, when the beam is accelerated by spinning the nozzle in the opposite direction up to 250\,Hz, the signal increases only slightly, by a factor of 2. The dependence of
the measured peak density on the speed of rotation is shown in Fig.~\ref{fig:PeakDens} as filled symbols. The solid line represents maximum values of the model function, $\max\left(n\right)$,
according to Eq.~(\ref{eq:densitytime}). Here, the amplitude factor of the model function, $N$, is scaled by a factor of 0.5 to fit the experimental data. This strong amplitude modulation as a function of sense and speed of rotation is caused by both longitudinal as well as transverse beam dispersion.
Since the rate of the signal drop is determined by the beam temperatures $T_{\|}$ and $T_{\perp}$, it is evident that beam temperatures have to be kept
low by ensuring optimum free jet expansion conditions, \textit{i.\,e.} high effective nozzle pressure $p_{0,nozz}\gtrsim 100\,$mbar at sufficiently low
background pressure $p_{bg}\lesssim 10^{-3}\,$mbar as well as a high quality nozzle aperture and skimmer~\cite{Scoles:1988}.
\begin{figure}
\includegraphics[width=9cm]{Fig8_KrTOF.eps}
\caption{\label{fig:KrTOF}Measured time of flight density distributions of krypton at various speeds of rotation of the nozzle.}
\end{figure}
Since the terminal beam velocity of jet beams is reduced with increasing mass of the beam particles, deceleration of beams of the heavier noble gas atoms krypton (Kr) and xenon to small velocities is achieved at lower speeds of rotation. Beams of Kr, \textit{e.\,g.}, have terminal velocities around $400$\,m/s, such that deceleration down to velocities below 100\,m/s is easily achieved at rotation frequencies $\gtrsim 250$\,Hz, as shown in Fig.~\ref{fig:KrTOF}. However, due to longitudinal dispersion, consecutive beam packets start to overlap at frequencies $\gtrsim 250$\,Hz (``wrap-around effect''), which impedes a quantitative analysis of the time of flight distributions. One way of circumventing this effect could be the implementation of an additional mechanical beam chopper to pick out individual beam packets.
In comparison with other methods of producing beams of slow molecules, our results are quite similar regarding beam densities at low velocities. Using electrostatic quadrupole filters, \textit{e.\,g.}, typical peak densities around $10^9$\,cm$^{-3}$ at velocities around 60\,m/s are achieved~\cite{Motsch:2009}. However, the rotating nozzle setup offers the possibility of tuning beam velocities in a wide range. Due to free jet expansion conditions, the internal degrees of freedom of the molecules are expected to be cooled to the Kelvin range, which can be reached using quadrupole filters only with substantial experimental effort~\cite{buuren:2009}. Besides, the rotating nozzle is suitable for any atomic or molecular substance available in the gas phase at ambient temperatures, in contrast to filtering or deceleration techniques, which rely on particularly suitable Stark or Zeeman effects of molecules in external electrostatic or magnetic fields.
\begin{figure}
\includegraphics[width=8cm]{Fig9_TempVel.eps}
\caption{\label{fig:TempVel} Argon and krypton beam parameters as a function of nozzle pressure.}
\end{figure}
In order to characterize the beam parameters and to test the analytic model, time of flight distributions of decelerated Ar and Kr beams are measured at various expansion pressures and speeds of rotation and are fitted by the model function, Eq.~(\ref{eq:densitytime}). Free fit parameters are the total number of atoms per bunch, $N$, the reduced beam velocity $v_0$, and the longitudinal beam temperature $T_{\|}$. The resulting values for Ar and Kr beams are depicted in Fig.~\ref{fig:TempVel} as symbols. Clearly, the results do not depend on the speed of rotation of the nozzle within the experimental uncertainty. As expected, the peak densities of Ar and Kr beams measured at rotation frequencies 150 and 100\,Hz, respectively, evolve roughly proportionally to the nozzle pressure $p_0$ (Fig.~\ref{fig:TempVel} (a)). The Kr densities fall behind the ones reached with Ar by about a factor of 3 due to the smaller flux of Kr out of the nozzle and due to the smaller longitudinal beam velocity of Kr. The fitted beam velocities shown in Fig.~\ref{fig:TempVel} (b) are found to perfectly match the theoretical values for supersonic beams, $v_s$, provided the nozzle pressure exceeds about 100\,mbar. The latter are given by $v_s=\sqrt{\frac{2\kappa}{\kappa -1}\frac{k_B T_0}{m}\left(1-\left(\frac{p_{bg}}{p_0}\right)^{\frac{\kappa -1}{\kappa}}\right)}$, where $T_0$ and $p_0$ are the nozzle temperature and pressure, $p_{bg}$ is the background gas pressure and $\kappa=c_P/c_V=1+2/f$ is the adiabatic exponent for a gas of particles with $f$ degrees of freedom~\cite{Pauly:2000}. For a monoatomic gas ($\kappa = 5/3$) and low background pressure $p_{bg}\ll p_0$ we obtain the simpler formula $v_s=\sqrt{ 5\,k_B T_0/m}$, which yields $v_s=579\,$m/s for Ar and $v_s=399$\,m/s for Kr at $T_0=323\,$K. At lower nozzle pressure $p_0$, the fitted values of $v_0$ slightly deviate from the model curve, indicating expansion conditions at the transition from the effusive to the free jet expansion regime.
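As a quick numerical check of these quoted values (a sketch using standard constants):

```python
# Terminal velocity v_s = sqrt(5 k_B T0 / m) of a monoatomic supersonic
# beam in the limit p_bg << p0.
import numpy as np

k_B, amu = 1.380649e-23, 1.66054e-27
T0 = 323.0
for gas, m in (("Ar", 39.95 * amu), ("Kr", 83.80 * amu)):
    v_s = np.sqrt(5.0 * k_B * T0 / m)
    print(f"{gas}: v_s = {v_s:.0f} m/s")   # ~579 m/s (Ar), ~399 m/s (Kr)
```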
Fit values of the longitudinal temperature $T_{\|}$ of Ar beams are shown in Fig.~\ref{fig:TempVel} (c) as symbols. The solid lines represent model curves according to two different collision models. Assuming hard-sphere (HS) collisions, the terminal speed ratio $S=v_s/\Delta v$ is given by $S=0.289(p_0 d \sigma/T_0)^{0.4}$, while atom-atom interactions according to the Lennard-Jones (LJ) potential yield $S=0.397(p_0d\epsilon^{1/3}r_m^2/T_0^{4/3})^{0.53}$ \cite{Pauly:2000}. Here, $\sigma=46.3$\,[$50.3$]\,$\mathrm{\AA}^2$ represents the HS cross section, and $\epsilon =141.6\,$[$50.3$]\,K$\times k_{\mathrm{B}}$ and $r_m = 3.76\,$[$4.01$]\,$\mathrm{\AA}$ stand for the potential well depth and the equilibrium distance of the LJ potential for argon [Kr], respectively. In comparison with the experimental data, the model assuming interactions according to the LJ potential appears to yield more accurate values than the one assuming HS collisions. However, the LJ model slightly but systematically underestimates the longitudinal temperatures, whereas the HS model gives too large values. The measured temperature continuously decreases as the nozzle pressure is increased up to about 1\,bar, reaching values $T_{\|}\lesssim 1.5\,$K with both Ar and Kr. Evidently, high nozzle pressures are desirable if low temperatures are to be achieved.
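Combining $v_s=\sqrt{5k_BT_0/m}$ with $T_{\|}=m\Delta v_z^2/(2k_B)$ gives $T_{\|}=5T_0/(2S^2)$; since the unit conventions entering the HS and LJ expressions for $S$ are those of Ref.~\cite{Pauly:2000} and are not restated here, the sketch below takes the speed ratio itself as input.

```python
# Longitudinal temperature implied by a terminal speed ratio S = v_s / dv_z:
# T_par = 5 T0 / (2 S^2) for a monoatomic gas.
def t_parallel(S, T0=323.0):
    """Longitudinal beam temperature in K for terminal speed ratio S."""
    return 5.0 * T0 / (2.0 * S ** 2)

for S in (10.0, 20.0, 30.0):
    print(f"S = {S:4.1f} -> T_par = {t_parallel(S):5.2f} K")
```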
\section{\label{sec:Problems}Detrimental effects}
\begin{figure}
\includegraphics[width=8cm]{Fig10_KrAttenuation.eps}
\caption{\label{fig:KrAtt}Attenuation of the krypton peak beam intensity as a function of nozzle pressure for different speeds of rotation of the nozzle.}
\end{figure}
In order to work at high nozzle pressures while keeping the gas load low, it may be favorable to use smaller apertures, although this comes at the expense of beam intensity. The maximum applicable gas pressure, however, is limited by the pumping speed available in the rotor chamber. In our setup, the rotor-chamber pressure increases roughly linearly from $1.9\times 10^{-5}$\,mbar to $2.2\times 10^{-4}$\,mbar as the Ar pressure at the nozzle rises from 100\,mbar up to 1000\,mbar. Under these vacuum conditions, collisions of beam molecules with those from the residual gas may seriously deplete the beam. This effect is studied by measuring the peak beam density of Kr beams as a function of nozzle pressure up to $p_0\lesssim 2\,$bar at different rotation frequencies, $f=50\,$Hz ($v_0=340\,$m/s), $f=100\,$Hz ($v_0=280\,$m/s), and $f=150\,$Hz ($v_0=220\,$m/s).
As shown in Fig.~\ref{fig:KrAtt}, the Kr beam density is affected by residual-gas collisions as the nozzle pressure exceeds about 600\,mbar. Nozzle pressures higher than about 900\,mbar even lead to the destruction of the beam and beam densities drop again, irrespective of the rotor frequency. Only at high frequencies, leading to beam velocities below about $100$\,m/s, does the position of the maximum of the dependence $n(p_0)$ shift to lower values of $p_0$, due to the fact that slow molecules spend more time inside the region of high residual-gas pressure and are therefore more sensitive to residual-gas collisions. The solid lines in Fig.~\ref{fig:KrAtt} are fit curves according to the model of the peak density (Eq.~(\ref{eq:densitytime})), taking into account a density reduction due to collisions, $n_{red}=n\exp\left( -\sigma n_{bg}v_{rel}\,s/v_0\right)$. Here, the relative velocity between colliding atoms is taken as $v_{rel}=\sqrt{v_0^2+v_{bg}^2}$, $s\approx 2\,$cm is the average flight distance from nozzle to skimmer, and $n_{bg}=p_{bg}/(k_B T)$ and $v_{bg}=\sqrt{8k_B T_0/(\pi m)}$ are the residual gas density and mean thermal velocity, respectively. The Kr-Kr collision cross section $\sigma$ is varied as a free fit parameter, yielding an average value $\sigma = 130\pm 30\,\mathrm{\AA}^2$. This value is in reasonable agreement with the Landau-Lifshitz approximation for the maximum scattering cross section, $\sigma_{max}=8.083\left(C^{(6)}/(\hbar v_{rel})\right)^{2/5}=390\,\mathrm{\AA}^2$, with $C^{(6)}=77.4\,$eV$\,\mathrm{\AA}^6$ for Kr~\cite{Levine:2005}.
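The attenuation factor can be sketched as follows; the rotor-chamber pressure and beam velocities are example values taken from the ranges quoted above.

```python
# Residual-gas attenuation factor n_red/n = exp(-sigma n_bg v_rel s / v0)
# used for the fits in Fig. 10; p_bg is an assumed example value.
import numpy as np

k_B, amu = 1.380649e-23, 1.66054e-27
m = 83.80 * amu                 # Kr mass in kg
T = 300.0
sigma = 130e-20                 # fitted cross section, 130 A^2 in m^2
s = 0.02                        # nozzle-skimmer distance in m
p_bg = 1e-4 * 100.0             # rotor-chamber pressure in Pa (1e-4 mbar)

n_bg = p_bg / (k_B * T)                        # residual gas density
v_bg = np.sqrt(8.0 * k_B * T / (np.pi * m))    # mean thermal velocity
for v0 in (340.0, 220.0, 100.0):
    v_rel = np.sqrt(v0**2 + v_bg**2)
    survival = np.exp(-sigma * n_bg * v_rel * s / v0)
    print(f"v0 = {v0:5.0f} m/s: transmitted fraction = {survival:.2f}")
```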
\begin{figure}
\includegraphics[width=9cm]{Fig11_EffusiveBeam.eps}
\caption{\label{fig:effusive}Comparison of beam densities of the continuous nozzle beam (rotor at rest) and the effusive beam produced by the residual gas in the rotor chamber. Circles depict the data measured using the quadrupole mass spectrometer scaled to densities obtained from the detector chamber pressure measured using a cold cathode pressure gauge. Squares represent the density values obtained from the Pitot tube measurement.
}
\end{figure}
High nozzle pressures, which lead to increased residual-gas pressure in the nozzle chamber, cause another unwanted effect to occur -- the formation of a continuous effusive beam. The latter adds to the pulsed nozzle beam and may reach comparable intensities when the nozzle beam is decelerated to low speeds and low densities. We therefore compare the densities of nozzle beams and effusive beams by recording the detector signal with the nozzle standing properly aligned in front of the skimmer, on the one hand, and with the rotor tilted completely out of the beam axis, on the other. The resulting density values are plotted in Fig.~\ref{fig:effusive}. While the measurement using the Pitot tube yields nozzle beam densities that are higher by a factor of 10, equal densities are found for the effusive beam. Thus, the nozzle beam is higher in density by 3-4 orders of magnitude than the effusive beam.
For decelerated beams of Ar and Kr, however, equal densities of nozzle and effusive beams are reached at beam velocities $v_0\lesssim 60$ and $v_0\lesssim 40$\,m/s, respectively. Provided the expansion is operated in the supersonic regime, the ratio of nozzle and effusive beam densities is independent of the nozzle pressure $p_0$, which is characteristic for such an arrangement based on a continuous gas expansion. This ratio can be improved by increasing the pumping speed in the nozzle chamber. The effusive background can be further suppressed by implementing a mechanical velocity selector synchronized with the rotating nozzle. Therefore, in the near future a chopper wheel will be placed behind the skimmer.
\section{\label{sec:Guide}Electrostatic guiding}
\begin{figure}
\includegraphics[width=9cm]{Fig12_EnhancementTOF.eps}
\caption{\label{fig:EnhancementTOF}Typical CHF$_3$ density distributions as a function of flight time for two different speeds of rotation of the nozzle. The data recorded at 250\,Hz are scaled up by a factor of 400. Signals shown as black lines are recorded with no voltage applied to the guide electrodes; light red lines are obtained at 3.5\,kV electrode voltage.}
\end{figure}
The issue of transverse divergence of decelerated beams can be mitigated for beams of polar molecules by implementing additional guiding elements using inhomogeneous electrostatic fields. This technique has been used in the past for focussing and even state-selecting molecular beams for reaction dynamics experiments~\cite{Bernstein:1982}. More recently, the technique has been used for guiding and trapping of cold polar molecules produced by Stark deceleration or Stark filtering~\cite{Bethlem1:1999,Rangwala:2003,Junglen:2004,Krems:2009}.
In our setup we have implemented an electrostatic quadrupole guide between the skimmer and the ionizer of the QMS. It consists of four 259\,mm long stainless steel rods, 2\,mm in diameter, with a gap of 2\,mm between two diagonally opposing rods. In a first experiment, slow beams of fluoroform (CHF$_3$) molecules have been produced by expanding CHF$_3$ either as a neat gas or seeded in Kr out of the rotating nozzle. CHF$_3$ is a polar molecule with a permanent dipole moment of 1.65\,Debye. Since its mass ($m_{\mathrm{CHF}_3}\approx 70\,$amu) lies close to that of Kr ($m_{\mathrm{Kr}}\approx 84\,$amu), it can be efficiently cooled and decelerated by coexpansion with Kr.
When applying a high voltage $U=3.5\,$kV to a pair of opposing electrodes, the QMS detector signal clearly rises, as depicted in Fig.~\ref{fig:EnhancementTOF}. In these measurements, CHF$_3$ is expanded at a nozzle pressure $p_0=200\,$mbar at rotor frequencies of 75 and 250\,Hz, leading to most probable beam velocities $v_w=450$ and $v_w=165\,$m/s, respectively. Clearly, the enhancement of the QMS signal at an electrode voltage $U=3.5\,$kV relative to the signal without high voltage is larger at low beam velocities, reaching roughly a factor of 5 for $v_w=165\,$m/s. Great care was taken to suppress any direct influence of the electrode voltage on the ionization efficiency of the QMS ionizer.
Similar experiments have also been performed with deuterated ammonia (ND$_3$). In addition to increasing the beam intensity, the quadrupole fields act as quantum state selectors which transmit only those states that are attracted towards the beam axis, where the electric field amplitude is minimal (low-field seeking states). Provided the rotational temperature of the ND$_3$ molecules is comparable to the translational one, \textit{i.\,e.} in the range of a few Kelvin, we may expect to produce ND$_3$ molecules in the state $J=1$, $M=K=-1$ with very high purity~\cite{Bethlem:2002}.
\begin{figure}
\includegraphics[width=9cm]{Fig13_EnhancementVoltage.eps}
\caption{\label{fig:EnhancementVoltage}
Relative efficiency of guiding CHF$_3$ molecules at various beam velocities as a function of electrode voltage.}
\end{figure}
Interestingly, the dependence of the enhancement of peak density on the electrode voltage $U$ features a non-monotonic increase, as shown in Fig.~\ref{fig:EnhancementVoltage}. A first local signal maximum is observed at electrode voltages between 200 and 500\,V, depending on beam velocity. We interpret this behavior in terms of two competing mechanisms: Guiding of molecules in low-field seeking states enhances the detector signal, whereas deflection of high-field seeking molecules away from the beam axis reduces the measured beam density. At low voltages $U\approx 500\,$V, the guiding effect outweighs the deflection, leading to a local density maximum. As the electrode voltage rises, all high-field seeking molecules are expelled from the beam and only low-field seekers are guided to the detector. This guiding effect partly compensates the transverse spreading of the beam, which is most relevant at low longitudinal beam velocities. Consequently, the relative guiding efficiency increases up to a factor of 5 as the beam velocity is reduced down to $v_w=165\,$m/s. The fact that the same qualitative behavior is observed with beams of ND$_3$ supports our interpretation, which does not consider specific Stark shifts of the molecules. In future efforts the transmission of our setup will be further studied both experimentally and using molecular trajectory simulations. Besides, guiding of decelerated molecules around bent electrodes will be implemented to provide state-purified beams of cold molecules. This has the additional advantage of efficiently suppressing the hot effusive background.
\begin{figure}
\includegraphics[width=9cm]{Fig14_EnhancementRot.eps}
\caption{\label{fig:EnhancementRot}Enhancement of the CHF$_3$ peak density as a function of beam velocity for fixed electrode voltage $U=3.5$\,kV.}
\end{figure}
The dependence of the enhancement factor on the beam velocity is illustrated in Fig.~\ref{fig:EnhancementRot}. In this measurement, the electrode voltage is set to the maximum value $U=3.5\,$kV and the beam velocity is tuned from $v_w=500$ down to $v_w=250\,$m/s by changing the frequency of rotation of the nozzle. Further deceleration down to $v_w\approx 150\,$m/s is obtained by seeded expansion of CHF$_3$ in Kr at a mixing ratio 1:2. The enhancement factor continuously rises as the beam velocity is reduced, which highlights the potential of combining decelerated nozzle beams with electrostatic guiding fields.
\section{\label{sec:Conclusion}Conclusion and outlook}
In conclusion, we have presented a versatile apparatus that produces cold beams of accelerated or decelerated molecules based on a rapidly rotating nozzle. Various technical improvements with respect to the original demonstration by Gupta and Herschbach~\cite{Gupta:1999,Gupta:2001} are introduced. In particular, gas injection into the rotor is now realized using a ferrofluid-sealed rotary feedthrough which eliminates gas leakage into the vacuum chamber. Using this setup, beam velocities well below $100$\,m/s and longitudinal beam temperatures down to about 1\,K are achieved. A detailed analytic model to simulate the measured density distributions as functions of flight time is presented. The fundamental drawback of this technique, the sharp drop of beam intensity at slow beam velocities, is mitigated by combining the setup with electrostatic guiding elements, provided the molecules feature a suitable Stark effect. Moreover, Stark guiding goes along with internal-state selectivity. Thus, a significant increase of beam intensity as well as an expected high state-purity is achieved with beams of fluoroform and ammonia molecules.
Thanks to the technical improvements the rotating nozzle setup has evolved into a simple and reliable source of cold molecular beams with tunable velocity. In particular, slow beams of molecules that are not amenable to efficient filtering or deceleration by means of the Stark interaction with external electric fields can now be produced with high intensity. In the near future, we will utilize this beam source for studying reactive collisions between slow molecules and cold atoms produced by other means, \textit{e.\,g.} using a magneto-optical trap. At collision energies down to $\lesssim 1\,$meV, interesting new quantum effects may be expected in the reaction dynamics, \textit{e.\,g.} strong modulations of the scattering cross-section in reactions of the type Li+HF$\rightarrow$LiF+H~\cite{Weck:2005}. Furthermore, intense cold molecular beams with tunable velocity and internal-state purity are of current interest for surface scattering experiments~\cite{Liu:2004}.
\begin{acknowledgments}
We thank H.J. Loesch for ceding to us the main part of the experimental setup. Generous advice by D. Weber, K. Zrost and M. DeKieviet as well as assistance in setting up the apparatus by J. Humburg, Ch. Gl\"uck and O. B\"unermann is gratefully acknowledged. We are grateful for support by the Landesstiftung Baden-W\"urttemberg as well as by DFG.
\end{acknowledgments}
\newpage
\section{Introduction}\label{action}
\par Due to its occurrence in a wide range of natural phenomena and its fundamental importance for surface engineering \cite{Quere, Dorrer}, the behavior of liquid drops on solid surfaces is an active field of research \cite{QuereAnnu}. Individual droplets can serve as ideal chemical reactors \cite{Ajdari} or carriers of information \cite{Prakash}, and they play a central role in ink-jet printing as well as in surface preparation prior to painting or coating.
Although there exist a few solids that are molecularly flat (e.g.\ mica), most solids are rough on the micro scale \cite{Rabbe} so that the well-known Young's law, $\cos\theta_{\mathrm{Y}}=(\sigma_{\mathrm{SV}}-\sigma_{\mathrm{SL}})/\sigma_{\mathrm{LV}}$ \cite{Young}, must be modified in order to take account of surface roughness. Here, $\theta_{\mathrm{Y}}$ is the contact angle on a flat substrate and $\sigma_{\mathrm{LV}}$, $\sigma_{\mathrm{SL}}$ and $\sigma_{\mathrm{SV}}$ are the liquid-vapor, solid-liquid and solid-vapor specific surface free energies, respectively. The simplest and most popular modification of Young's law for rough surfaces dates back to the works of Wenzel \cite{Wenzel} and Cassie and Baxter \cite{Cassie}, where the effect of roughness on wetting is assumed to be a mere change of the average surface areas involved in the problem.
Assuming that the liquid completely penetrates into the roughness grooves (collapsed state), Wenzel obtained $\cos\theta^*_{\mathrm{W}}=r\cos\theta_{\mathrm{Y}}$ for the apparent contact angle $\theta^*_{\mathrm{W}}$ (the roughness factor $r$ is the real solid surface area per unit projected area). Cassie and Baxter, on the other hand, considered the case of a droplet pending on the top of roughness tips (suspended state) and obtained
\begin{equation}
\cos\theta^*_{\mathrm{C}}=\phi\cos\theta_{\mathrm{Y}}-(1-\phi),
\label{eq:Cassie}
\end{equation}
where the roughness density $\phi$ gives the fraction of the
droplet's base area that is in contact with the solid. It is
important to realize that both the Wenzel and the Cassie-Baxter
equations do not explicitly take account of the three phase contact
line structure. This shortcoming may, however, be neglected as
long as the contact area reflects the structure and energetics of
the three phase contact line \cite{Gao2007a}.
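For a concrete comparison of the two states, the following sketch evaluates both apparent contact angles for an illustrative hydrophobic substrate; the values of $\theta_{\mathrm{Y}}$, $r$ and $\phi$ are examples, not parameters of the simulations reported below.

```python
# Wenzel vs. Cassie-Baxter apparent contact angles for example values of
# the roughness factor r and the roughness density phi.
import numpy as np

theta_Y = np.deg2rad(110.0)   # Young angle of the flat substrate (example)
r, phi = 1.5, 0.25            # roughness factor and roughness density

theta_W = np.arccos(r * np.cos(theta_Y))                  # collapsed state
theta_C = np.arccos(phi * np.cos(theta_Y) - (1.0 - phi))  # suspended state
print(f"Wenzel:        {np.degrees(theta_W):6.1f} deg")
print(f"Cassie-Baxter: {np.degrees(theta_C):6.1f} deg")
```

Consistent with the observation quoted above, the suspended-state angle comes out larger than the collapsed-state one for a hydrophobic substrate.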
The suspended state is often separated from the Wenzel state by a finite free energy barrier, which depends both on the droplet size and roughness characteristics \cite{Reyssat1, Jopp, Markus}. On a rough hydrophobic substrate, the apparent contact angle of a droplet in the suspended state is typically higher than in the collapsed state \cite{ Dorrer, Li, Oner}.
Furthermore, the contact angle hysteresis significantly increases when a suspended droplet undergoes a transition to the Wenzel state \cite{Quere,QuereAnnu}. Indicative of stronger pinning \cite{Joanny} of the three phase contact line, this feature reflects itself in a sticky behavior of liquid drops in the collapsed state \cite{Dorrer, Lafuma} as compared to their high mobility in the suspended state.
While droplet behavior on homogeneous roughness has been widely investigated in the literature, only a few works exist dealing with the case of \emph{inhomogeneous} topography \cite{Zhu, Yang, Shastry, Reyssat, Fang}. Indeed, experimental observation of spontaneous motion induced by a roughness gradient is not an easy task \cite{Reyssat}. The authors of \cite{Shastry,Reyssat}, for example, resort to vertically shaking the substrate in order to overcome pinning forces.
Recently, a two-dimensional theoretical model has been proposed to study the present topic \cite{Fang}. To the best of our knowledge, however, computer simulations of the problem have been lacking so far. The present work is aimed at filling this gap. We provide the first direct numerical evidence for spontaneous droplet motion actuated by a gradient of pillar density. Furthermore, we investigate the influence of the specific distribution/arrangement of roughness elements (pillars in our case) on the behavior of the droplet. An important observation is that, depending on the specific arrangement of the pillars, both complete arrest and motion over the entire gradient zone can be observed for the same gradient of pillar density. This underlines the importance of the topography design for achieving high mobility drops.
For the case of mobile drops, we provide a simple model for the dependence of drop velocity on pillar density difference $\Delta \phi = \phi_{\mathrm{Right}}-\phi_{\mathrm{Left}}$. The model accounts for the observed linear dependence and predicts further that the velocity should scale linearly with the liquid-vapor surface tension. This prediction is also in line with our computer simulations for the studied range of parameters.
\section{Numerical Model}
We employ a free energy based two phase lattice Boltzmann (LB) method, first proposed by Swift \cite{Swift}. After Galilean invariance was established \cite{Holdych}, the method was developed further \cite{Briant,Dupuis} in order to take the wetting effect of the solid substrate into account. Since then, the approach has been used to study e.g.\ stability and dynamics of droplets on topographically patterned hydrophobic substrates \cite{Kusumaatmaja,Dupuis,Yeomans}, the effect of chemical surface patterning on droplet dynamics \cite{Kusumaatmaja2,Yeomans} as well as chemical gradient induced separation of emulsions \cite{VarnikF}. A detailed description of the method can be found in \cite{Briant,Dupuis}. For the sake of completeness, however, we present a short overview of the method. Equilibrium properties of the present model can be obtained from a free energy functional
\begin{equation}
\Psi = \int_{V} \left(\psi_{\mathrm{b}}(\rho({\bf r}))+\frac{\kappa}{2} (\partial_{\alpha}\rho({\bf r}))^2 \right) d{\bf r}^3 + \int_{S} \psi_{\mathrm{s}} ds.
\label{eq_free_energy_model}
\end{equation}
In \equref{eq_free_energy_model}, $\psi_{\mathrm{b}}$ is the bulk free energy density of the fluid, $V$ denotes the system volume, $S$ is the substrate surface area and $\rho({\bf r})$ the fluid density at point ${\bf r}$. Considering a simple van der Waals model \cite{Briant}, the bulk free energy per unit volume, $\psi_{\mathrm{b}} (\rho)$, can be given as $\psi_{\mathrm{b}} (\rho) = p_{\mathrm{c}} (\nu_\rho+1)^2 (\nu_\rho^2-2\nu_\rho+3-2\beta\nu_T)$ in which $\nu_\rho = (\rho-\rho_{\mathrm{c}})/\rho_{\mathrm{c}}$ and $\nu_T = (T_{\mathrm{c}}-T)/T_{\mathrm{c}}$ are the reduced density and reduced temperature, respectively. The critical density, pressure, and temperature are set to $\rho_{\mathrm{c}}=7/2$, $p_{\mathrm{c}}=1/8$, and $T_{\mathrm{c}}=4/7$, respectively. Below $T_{\mathrm{c}}$, the model describes liquid-vapor coexistence with related equilibrium densities $\rho_{\mathrm{L,V}}=\rho_{\mathrm{c}}(1\pm\sqrt{\beta\nu_{T}})$.
The parameter $\beta$ is related to the interface thickness $\xi$ and the surface tension $\sigma$ via $\xi=\sqrt{\kappa\rho_{\mathrm{c}}^{2}/(4\beta\nu_T p_{\mathrm{c}})}$ and $\sigma=(4/3)\sqrt{2\kappa p_{\mathrm{c}}}\,(\beta\nu_T)^{3/2}\rho_{\mathrm{c}}$. When combined with an appropriate variation of $\kappa$, it allows one to vary the surface tension and interface width independently. Using the Cahn-Hilliard approach, the surface free energy per unit area, $\psi_{\mathrm{s}}$, is approximated as $-\phi_{1}\rho_{\mathrm{s}} $, where $\rho_{\mathrm{s}}$ is the density of the fluid on the solid substrate and $\phi_{1}$ is a constant which can be used to tune the contact angle. Minimizing the free energy functional $\Psi $, \equref{eq_free_energy_model}, subject to the condition $\psi_{\mathrm{s}}=-\phi_{1}\rho_{\mathrm{s}} $ leads to an equilibrium boundary condition for the spatial derivative of the fluid density in the direction normal to the substrate, $\partial_{\perp}\rho=-\phi_{1}/\kappa$. The parameter $\phi_{1}$ is related to the Young contact angle via
\begin{eqnarray}
\phi_{1}=2\beta\nu_{T}\sqrt{2p_{\mathrm{c}}\kappa}\, \mathrm{sign}\left(\frac{\pi}{2}-\theta_{\mathrm{Y}}\right)\sqrt{\cos\frac{\alpha}{3}\left(1-\cos\frac{\alpha}{3}\right)},
\label{eq02}
\end{eqnarray}
where $\alpha=\cos^{-1}(\sin^{2}\theta_{\mathrm{Y}})$, $\theta_{\mathrm{Y}}$ is the equilibrium Young contact angle, and ``sign'' denotes the sign function. All the quantities in this paper are given in dimensionless lattice Boltzmann units.
The LB relaxation time is set to $\tau=0.8$ and the temperature is fixed to $T=0.4$. For a typical choice of $\beta=0.1$, for example, this leads to the equilibrium liquid and vapor densities $\rho_{\mathrm{L}} \approx$ 4.1 and $\rho_{\mathrm{V}} \approx$ 2.9. Depending on the case of interest, $\kappa$ lies in the range $[0.002, 0.008]$ and the size of the simulation box is varied with values around $L_{x}\times L_{y}\times L_{z} =$ 125 $\times$ 90 $\times$ 90 lattice nodes for spherical, and $L_{x}\times L_{y}\times L_{z} =$ 125 $\times$ 25 $\times$ 90 for cylindrical droplets. Periodic boundary conditions are applied in the $x$ and $y$-directions.
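As a consistency check, the following sketch evaluates the coexistence densities, surface tension and interface width implied by these parameters, using the expressions given above; for $\kappa=0.002$ it reproduces the surface tension $\sigma_0\approx 5.4\times 10^{-4}$ quoted below.

```python
# Equilibrium properties of the free-energy LB model for the quoted
# parameters (beta = 0.1, T = 0.4, kappa in [0.002, 0.008]); all values
# are in lattice Boltzmann units.
import numpy as np

rho_c, p_c, T_c = 3.5, 0.125, 4.0 / 7.0
T, beta = 0.4, 0.1
nu_T = (T_c - T) / T_c                          # reduced temperature (0.3)

rho_L = rho_c * (1.0 + np.sqrt(beta * nu_T))    # liquid density, ~4.1
rho_V = rho_c * (1.0 - np.sqrt(beta * nu_T))    # vapor density,  ~2.9
print(f"rho_L = {rho_L:.2f}, rho_V = {rho_V:.2f}")
for kappa in (0.002, 0.008):
    sigma = (4.0 / 3.0) * np.sqrt(2.0 * kappa * p_c) * (beta * nu_T) ** 1.5 * rho_c
    xi = np.sqrt(kappa * rho_c**2 / (4.0 * beta * nu_T * p_c))
    print(f"kappa = {kappa}: sigma = {sigma:.2e}, xi = {xi:.2f}")
```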
\section{Discussion and Results}
As mentioned above, in the experimental reports which have considered the motion of a suspended droplet on surfaces with a gradient of texture, the behavior of droplets is not unique \cite{Yang,Zhu,Shastry,Reyssat}. This indicates that the roughness factor as well as the roughness density are not sufficient for a full characterization of a rough surface. Although \equref{eq:Cassie} predicts a decrease of the effective contact angle upon an increase of roughness density and hence a driving force along the gradient of $\phi$, the contact angle hysteresis~\cite{Kusumaatmaja3} may be strong enough to prevent spontaneous droplet motion \cite{Reyssat}.
The present work underlines this aspect by explicitly showing that
the behavior of a droplet on substrates patterned by pillar
microstructure with the same pillar density gradient, but
different pillar geometries (e.g.~rectangular posts with different
pillar width and spacing) can be qualitatively different. While in
the one case the droplet spontaneously moves due to the roughness
gradient induced driving force, it may become arrested if an
unfavorable geometry is chosen.
A simple way to study the effect of a gradient texture is to introduce an abrupt (stepwise) change in the roughness (pillar) density along a given spatial direction. Adopting this choice, we design a substrate divided into two regions, each with a constant pillar density. In order to underline the crucial role of the pillar arrangement for the behavior of the droplet, we consider two different realizations of the same pillar density gradient, as shown in \figref{fig:substrates}.
\begin{figure}
\includegraphics[width=4.25 cm]{Sub_R_L1.eps}
\includegraphics[width=4.25 cm]{Sub_A_L1.eps}
\caption{Top view of two step gradient substrates. In the left panel (referred to as case A), the pillar density on the left side ($x<50$) is $\phi_{\mathrm{Left}}=0.187$ (square posts of length $a=b=3$) while it is set to $\phi_{\mathrm{Right}}=0.321$ on the right side ($x>50$, rectangular posts of length $a=9$ and width $b=3$). The spacing of the pillars in the $x$-direction is $d_x=5$ and in the $y$-direction is $d_y=3$ everywhere on the substrate. The height of the posts is $c=6$. The right panel (case B) is obtained from A by shifting the posts in each second row horizontally by an amount of $(a+d_x)/2$ with $d_x=5$, $a=3$ for $x<50$ and $a=9$ for $x>50$. All lengths are given in LB units.}
\label{fig:substrates}
\end{figure}
Using the two substrates shown in \figref{fig:substrates}, we performed a series of lattice Boltzmann simulations placing, at time $t=0$, a spherical liquid droplet close to the top of the border line separating the two regions of different pillar density. A close look at the left panels in \figref{fig:sph_drop_snapshots} reveals that, for both substrates A and B, the presence of a roughness gradient leads to an asymmetric spreading of the droplet. However, despite this similarity of the dynamics at the early stages of spreading, the long time behavior of the droplet strongly depends on the specific arrangement of pillars. In particular, in the case of substrate A, the droplet is stopped within the gradient zone, while in the case of substrate B it completely reaches the more favorable region of higher $\phi$.
\begin{figure}
\includegraphics[width=4.25 cm]{R_S_L_setup1.eps}
\includegraphics[width=4.25 cm]{R_S_L_final1.eps}
\includegraphics[width=4.25 cm]{A_S_L_setup1.eps}
\includegraphics[width=4.25 cm]{A_S_L_final1.eps}
\caption{Initial setup and final states of a spherical droplet on substrates with an abrupt (step-wise) change of pillar density. The cases of substrates A and B (see \figref{fig:substrates}) are compared.}
\label{fig:sph_drop_snapshots}
\end{figure}
In order to study the effect of droplet shape on the above phenomenon, we also performed a set of simulations using a cylindrical droplet instead of a sphere. The results of these simulations, shown in \figref{fig:cyl_drop_snapshots}, are in line with the case of spherical droplets.
\begin{figure}
\includegraphics[width=4.25 cm]{R_C_L_setup1.eps}
\includegraphics[width=4.25 cm]{R_C_L_final1.eps}
\includegraphics[width=4.25 cm]{A_C_L_setup1.eps}
\includegraphics[width=4.25 cm]{A_C_L_final1.eps}
\caption{The same set of simulations as in \figref{fig:sph_drop_snapshots} but for the case of cylindrical droplets.}
\label{fig:cyl_drop_snapshots}
\end{figure}
For further considerations, we use substrate type B. The left panel of \figref{fig05} depicts the $xz$-cross section through the center of mass of a spherical drop ($R_{\Omega}=40$) at different times during its motion over the step gradient zone (the pillar densities on the left and right halves of the substrate are fixed to $\phi_{\mathrm{Left}}=0.187$ and $\phi_{\mathrm{Right}}=0.375$). The interested reader can see the motion of the droplet in the supplementary movie. The corresponding footprint of the droplet (three phase contact line) is shown in the right panel of \figref{fig05}.
\begin{figure}
\includegraphics[width=4.2 cm]{xz_cut_40.eps}
\includegraphics[width=4.2 cm]{footprint_40.eps}
\caption{The $xz$-cross section of the liquid-vapor interface (left) and the corresponding footprint (right) of a spherical droplet
on a step gradient substrate. In both panels, the time increases from left to right: $t=5\times10^{4}$, $6\times10^{5}$,
$1.2\times10^{6}$ and $2\times10^{6}$.}
\label{fig05}
\end{figure}
The footprint of the droplet reflects the geometry (shape and arrangement) of the posts. A trend towards increasing contact area is also observed, in accordance with the lower effective contact angle in the right region (higher pillar density). A closer look at the footprints in \figref{fig05} (right panel) shows how the chessboard-like arrangement of the posts allows the droplet to find the neighboring posts in the gradient direction.
\begin{figure}
\begin{center}
\includegraphics[ width=7 cm]{x_t_8fig_1.eps}
\end{center}
\caption{The $x$-component of the center of mass position versus time for a cylindrical droplet using $\phi_{\mathrm{Left}}=0.187$ and $\phi_{\mathrm{Right}}=0.321$, $0.333$, $0.35$ and $0.375$. The two groups of curves belong to two different surface tensions of $\sigma_0=5.4\times 10^{-4}$ (LB units) (right; also labeled as (1) for further reference) and $4\sigma_0$ (left).
}
\label{fig:cyl_cm_motion}
\end{figure}
Next we create substrate patterns of type B with various values of $\Delta \phi$ by keeping $\phi_{\mathrm{Left}}$ unchanged and varying $\phi_{\mathrm{Right}}$. Results on the dynamics of a cylindrical drop on such texture gradient substrates are shown in \figref{fig:cyl_cm_motion}. A survey of the center of mass position versus time in \figref{fig:cyl_cm_motion} reveals that the droplet's position first grows linearly in time until it reaches a constant value. The plateau corresponds to the case where the droplet has completely left the region of lower pillar density. Since no driving force exists in this state, the droplet velocity vanishes due to viscous dissipation.
\begin{figure}
\begin{center}
\includegraphics[width=4.25 cm]{velocity5.eps}
\includegraphics[width=4.25 cm]{velocity2.eps}
\end{center}
\caption{Left: Droplet's center of mass velocity versus the difference in pillar density, $\Delta \phi = \phi_{\mathrm{Right}}-\phi_{\mathrm{Left}}$, extracted from the linear part of the center of mass motion (see e.g.\ \figref{fig:cyl_cm_motion}). Results for three different liquid-vapor surface tensions are depicted. From top to bottom: $4\sigma_0$, $2\sigma_0$ and $\sigma_0$, where $\sigma_0=5.4\times 10^{-4}$ (LB units). In all cases, a linear variation is seen in accordance with the simple model, \equref{eq:velocity}.
Right: A further test of \equref{eq:velocity}, where the dependence of droplet velocity on surface tension is shown for a fixed $\Delta \phi$.}
\label{fig:velocity}
\end{figure}
Using the linear part of the data shown in \figref{fig:cyl_cm_motion}, we define an average velocity for the motion of the droplet's center of mass under the action of texture gradient forces. Importantly, \figref{fig:cyl_cm_motion} reveals the strong effect of the surface tension on droplet dynamics. Both the absolute value of the droplet velocity for a given $\Delta \phi$ and the slope of the data depend significantly upon $\sigma$.
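As a concrete illustration of this procedure, the average velocity can be obtained from a least-squares fit to the linear part of the center of mass trajectory. The following minimal sketch (with a hypothetical data file and a hand-picked fit window, neither of which is taken from our actual runs) shows one way to do this:
\begin{verbatim}
import numpy as np

# x_cm.dat is a hypothetical two-column file: time, x-position of the
# droplet's center of mass (file name and fit window are illustrative).
t, x = np.loadtxt("x_cm.dat", unpack=True)

# Restrict the fit to the linear part of the motion, i.e. before the
# droplet has completely left the low-density region (plateau).
mask = (t > 2e5) & (t < 1e6)
u, x0 = np.polyfit(t[mask], x[mask], 1)  # slope = average velocity

print("average velocity u =", u)
\end{verbatim}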
In order to rationalize these observations, we provide a simple model based on scaling arguments. Noting that the flow we consider is in the viscous regime, we neglect inertial terms in the Navier-Stokes equation and write for the steady state $0= -\nabla p + \eta \Delta u$, where $u$ is the fluid velocity, $p$ is the hydrostatic pressure and $\eta$ the viscosity. The velocity $u$ varies only over a distance of the order of the droplet radius, hence $\Delta u \sim u/R^2$. On the other hand, $\nabla p \sim - dp_{\mathrm{Laplace}}(\theta^*_{\mathrm{C}})/R = (\sigma/R^2) dR/R$, assuming that the driving force originates from the Laplace pressure variation (over a length of the order of $R$) within the droplet. For the case of a cylindrical droplet of unit length, the condition of constant droplet volume, $\Omega=R^2 [\theta^*_{\mathrm{C}}- \sin(2\theta^*_{\mathrm{C}})/2]$, \equref{eq:Cassie} and some algebra lead to $dR/R \sim (\pi-\theta^*_{\mathrm{C}}) d \phi \sim \sqrt{\phi} d\phi$ (the relation $\pi-\theta^*_{\mathrm{C}}\sim \sqrt{\phi}$ follows from \equref{eq:Cassie} assuming $\theta^*_{\mathrm{C}}$ close to $\pi$ \cite{Reyssat}). Putting it all together, and after a change of notation $d\phi \equiv \Delta \phi =\phi_{\mathrm{Right}}-\phi_{\mathrm{Left}}$, we arrive at $\eta u/R^2 \sim (\sigma/R^2) \sqrt{\phi} \Delta \phi$. Hence,
\begin{equation}
u \sim \frac{\sigma}{\eta} \sqrt{\phi} \Delta \phi.
\label{eq:velocity}
\end{equation}
Interestingly, despite different mechanisms at work, both \equref{eq:velocity} and Eq.\ (5) in \cite{Reyssat} predict a linear dependence of droplet velocity on $\Delta \phi$. In \cite{Reyssat}, the lateral velocity is estimated from roughness gradient induced asymmetry of dewetting of a droplet, flattened due to impact. The situation we consider is different. There is no impact and hence a related flattening is absent in the present case. Furthermore, the dynamics we study is in the viscous regime whereas the high impact velocity in \cite{Reyssat} supports the relevance of inertia. These differences show up in different predictions regarding the dependence of the droplet velocity on surface tension, fluid viscosity and density. While Eq.\ (5) in \cite{Reyssat} predicts a dependence on the square root of $\sigma$, \equref{eq:velocity} suggests that, in our case, a linear dependence on $\sigma$ should be expected.
We therefore examine \equref{eq:velocity} not only with regard to the relation between droplet velocity $u$ and difference in roughness density $\Delta \phi$ (left panel in \figref{fig:velocity}) but also check how $u$ changes upon a variation of the surface tension $\sigma$ for a fixed $\Delta\phi$. Results of this latter test are depicted in the right panel of \figref{fig:velocity}, confirming the expected linear dependence of $u$ on $\sigma$. It is noteworthy that $\sigma$ in the right panel of \figref{fig:velocity} varies roughly by a factor of 10 so that a square root dependence can definitely be ruled out.
It is worth emphasizing that the above discussed linear relation between droplet velocity $u$ and difference in pillar density $\Delta \phi$ is expected to hold as long as pinning forces are weaker than the texture gradient induced driving force. In this case, the specific details of pillar arrangement seem to modify the prefactors entering the scaling relation, \equref{eq:velocity}, but not the predicted linear law. \Figref{fig:velocityB} is devoted to this aspect. In this figure, the dependence of $u$ on $\Delta \phi$ is compared for two slightly different ways of realizing $\Delta\phi$: In (1) $\phi_{\mathrm{Left}}=0.187$ while $\phi_{\mathrm{Left}}=0.2$ in (2). All other aspects/parameters are identical. As a consequence, since the list of investigated $\phi_{\mathrm{Right}}$ is exactly the same, slightly higher values of $\Delta \phi$ are realized in (1) as compared to (2). If the details of pillar arrangement were unimportant, all the velocity data obtained from these two series of simulations should lie on the same line. As shown in \figref{fig:velocityB}, this is obviously not the case. Rather, the linear relation between $u$ and $\Delta\phi$ seems to hold independently for each studied case.
\begin{figure}
\begin{center}
\includegraphics[width=4.25 cm]{x_t_4fig_2.eps}
\includegraphics[width=4.25 cm]{velocity4.eps}
\end{center}
\caption{Left: The $x$-component of the center of mass position versus time for a cylindrical droplet using, $\phi_{\mathrm{Left}}=0.2$ and $\phi_{\mathrm{Right}}=0.321$, $0.333$, $0.35$ and $0.375$ [$\sigma=\sigma_0=5.4\times 10^{-4}$ (LB units)]. Right: Droplet's center of mass velocity extracted from the linear part of the data shown in the left panel and in \figref{fig:cyl_cm_motion} (labeled as (1)).}
\label{fig:velocityB}
\end{figure}
Next we examine how the droplet's contact area changes with time as the droplet moves on the gradient zone. We determine this quantity by simply counting the number of grid points beneath the droplet (see the sketch below). The time evolution of the contact area is compared to that of the center of mass position in \figref{fig06} (left panel) for a spherical droplet of radius $R_{\Omega}=36$. In contrast to the center of mass position, which increases monotonically with time, the area beneath the droplet exhibits irregularities and oscillations.
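A minimal sketch of this counting procedure (with an assumed density array and threshold, both illustrative rather than taken from our code) could read:
\begin{verbatim}
import numpy as np

def contact_area(rho, rho_c):
    # rho: 3d array of fluid density on the LB grid, with z = 0 the
    # substrate plane; rho_c: a threshold marking the liquid phase.
    # The contact area is the number of substrate grid points covered
    # by liquid (names and conventions are illustrative).
    return np.count_nonzero(rho[:, :, 0] > rho_c)
\end{verbatim}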
\begin{figure}
\includegraphics[ width=4.2 cm]{x_area_t_36.eps}
\includegraphics[width=4.2 cm]{foot_36_t.eps}
\caption{Left: The contact area and the $x$-component of droplet's center of mass position versus time. Right: Footprint of the droplet on the step gradient substrate for times {$t_{\mathrm{A}}$, $t_{\mathrm{B}}$ and $t_{\mathrm{C}}$} corresponding to the extrema of the contact area as labeled in the left panel.}
\label{fig06}
\end{figure}
We presume that these irregularities and oscillations are closely related to the dynamics of the three phase contact line. This idea is confirmed by the plot in the right panel of \figref{fig06}, where the droplet's footprint is shown for times $t_{\mathrm{A}}$, $t_{\mathrm{B}}$ and $t_{\mathrm{C}}$ corresponding to the three extrema in the contact area labeled by A, B and C. As seen from this plot, the increase of the droplet's base area between times $t_{\mathrm{A}}$ and $t_{\mathrm{B}}$ is accompanied by a significant motion of the three phase contact line on the right side of the footprint while it remains essentially pinned to the pillars on the left side (with the exception of depinning from the leftmost pillar). The state B is, however, energetically unfavorable due to a stretched shape of the contact line. The transition from B to C reduces this asymmetry, thereby leading to a smaller contact area at $t_{\mathrm{C}}$. We emphasize here that such local events are not included in the Cassie-Baxter picture, \equref{eq:Cassie}.
\section{Conclusion}
We use a two-phase lattice Boltzmann model to study the dynamic behavior of suspended droplets on patterned hydrophobic substrates with a step-wise change in pillar density. We show that the specific arrangement of pillars may play a significant role for the dynamics of the droplet on such substrates. In particular, varying the pillar arrangement while keeping the gradient of pillar density unchanged (\figref{fig:substrates}), we show that both full transport over the gradient zone as well as complete arrest between the two regions of different pillar density may occur (\figsref{fig:sph_drop_snapshots}{fig:cyl_drop_snapshots}).
The relation between the droplet motion and the gradient of pillar density is investigated, revealing a linear dependence for the range of parameters studied (\figsref{fig:cyl_cm_motion}{fig:velocity}). A simple model is provided based on the balance between the viscous dissipation and the driving force, the latter assumed as the gradient of the internal droplet (Laplace) pressure (\equref{eq:velocity}). The model not only reproduces the observed dependence on the pillar density gradient but also predicts a linear dependence of the steady state droplet velocity on the surface tension. This prediction is in line with results of lattice Boltzmann simulations, where the surface tension is varied by roughly a factor of 10 (\figref{fig:velocity}).
Moreover, comparing droplet dynamics for two slightly different ways of realizing the gradient of texture, it is shown that the gradient in pillar density does not uniquely determine the droplet velocity. Rather, the way this gradient is implemented also matters to some extent (\figref{fig:velocityB}).
A detailed survey of the contact line dynamics is also provided revealing interesting pinning and depinning events leading to small amplitude oscillations of the droplet's contact area during its motion over the gradient zone (\figref{fig06}).
\section{Acknowledgments}
We thank David Qu\'er\'e for sending us a version of his recent manuscript on gradient of texture and Alexandre Dupuis for
providing us with a version of his LB code. N.M. gratefully acknowledges the grant provided by the Deutsche Forschungsgemeinschaft (DFG) under the number Va 205/3-3. ICAMS gratefully acknowledges funding from ThyssenKrupp AG, Bayer MaterialScience AG, Salzgitter Mannesmann Forschung GmbH, Robert Bosch GmbH, Benteler Stahl/Rohr GmbH, Bayer Technology Services GmbH and the state of North-Rhine Westphalia as well as the European Commission in the framework of the European Regional Development Fund (ERDF).
\section{Introduction}
Mixing of species (e.g. contaminants, tracers and particles)
and thermodynamical quantities (e.g. temperature) is dramatically
influenced by fluid flows \cite[][]{D05}.
Controlling the rate of mixing
in a flow is an objective of paramount importance
in many fields of science and technologies with wide-ranging consequences
in industrial applications \cite[][]{WMD01}. \\
The difficulties of the problem come from the
intricate nature of the underlying fluid flow,
which involves many active nonlinearly coupled
degrees of freedom \cite[][]{F95}, and from the poor understanding of the
way in which the fluid is coupled to the transported quantities.
The problem is even more difficult when the transported quantity reacts back
to the flow field thus affecting its dynamics. An instance is provided by the
heat transport in convection \cite[][]{S94}.\\
Mixing emerges as a final stage of successive hydrodynamic instabilities
\cite[][]{DR81}
eventually leading to a fully developed turbulent stage. The possibility
of controlling such instability mechanisms thus allows one to have
a direct control on the mixing process. In some cases the challenge is
to enhance the mixing process by stimulating the turbulence transition,
in yet other cases the goal is to suppress deleterious instabilities
and the ensuing turbulence. Inertial confinement fusion \cite[][]{CZ02}
is an example whose success relies on the control of the famous
Rayleigh--Taylor (RT) instability occurring when a heavy, denser, fluid
is accelerated into a lighter one.
For a fluid in a gravitational field,
such an instability was first described by Lord Rayleigh in the 1880s
\cite[][]{R883} and later generalized to all accelerated fluids by
Sir Geoffrey Taylor in 1950 \cite[][]{T50}.
Our attention here is focused on RT instability with the aim
of enhancing the perturbation growth-rate in its early stage of
evolution. The idea is to inject polymers into the fluid and to study
on both analytical and numerical
grounds how the stability of the resulting viscoelastic
fluid is modified.
Similar problems were already investigated in more specific contexts,
including RT instability of viscoelastic fluids with suspended particles
in porous medium with a magnetic field \cite[][]{SR92} and RT linear
stability analysis of viscoelastic drops in high-speed airstream
\cite[][]{JBF02}. We also mention that viscoelasticity is
known to affect other kinds of instabilities, including
Saffman--Taylor instability \cite[][]{W90,C99},
Faraday waves \cite[][]{WMK99,MZ99},
the stability of Kolmogorov flow \cite[][]{BCMPV05},
Taylor--Couette flow \cite[][]{LSM90,GS96},
and Rayleigh--B\'enard problem \cite[][]{VA69,ST72}.
The paper is organized as follows. In Sec.~2 the basic equations ruling
the viscoelastic immiscible RT system are introduced
together with the phase-field approach. In Sec.~3 the linear analysis
is presented and the analytical results shown and discussed
in Sec.~4. The resulting scenario
is corroborated in Sec.~5 by means of direct numerical simulations of the
original field equations.
\section{Governing equations}
The system we consider is composed of two incompressible fluids
(labeled by 1 and 2) having different densities,
$\rho_1$ and $\rho_2 > \rho_1$, and different dynamical viscosities,
$\mu_1$ and $\mu_2$, with the denser fluid placed above
the less dense one. For more generality, the two fluids are supposed
to be immiscible so that the surface tension on the interface separating
the two fluids will be explicitly taken into account.
The effect of polymer additives is studied here within the framework of the
Oldroyd-B model \cite[][]{O50,H77,BHAC87}.
In this model polymers are treated as elastic
dumbbells, i.e. identical pairs of microscopic beads connected by harmonic
springs. Their concentration is supposed to be
low enough to neglect polymer-polymer interactions.
The polymer solution is then regarded as a continuous medium,
in which the reaction of polymers on the flow is
described as an elastic contribution to the
total stress tensor of the fluid \cite[see \eg ][]{BHAC87}.
In order to describe the mixing process of the resulting viscoelastic
immiscible fluids we follow the phase-field approach
(for a general description of the method see, \eg ,~\cite{B02, CH58},
and for application
to multiphase flows see, \eg ,~\cite{BCB03,DSS07,M07,CMMV09}).
Here, we only recall that
the basic idea of the method is to treat
the interface between two immiscible fluids as a thin mixing layer
across which physical properties vary steeply but continuously.
The evolution of the mixing layer is ruled by an order parameter
(the phase field) that obeys a Cahn--Hilliard equation \cite[][]{CH58}.
One of the advantages of the method is that the boundary conditions at
the fluid interface need not be specified, being encoded in the
governing equations.
From a numerical point of view, the method makes it possible to
avoid a direct tracking of the interface and it easily
produces the correct interfacial tension from the mixing-layer free energy.
To be more specific, the evolution of the viscoelastic binary fluid
is described by the system of differential equations
\begin{equation}
\rho_0
\left(\partial_t {\bm v} + {\bm v} \cdot{\bm \partial} {\bm v}\right) =
-{\bm \partial} p + {\bm \partial} \cdot (2 \mu {\bm e} )
+ A \rho_0 {\bm g} \phi
-\phi {\bm \partial} {\cal M}
+{2 \mu \eta \over \tau}
{\bm\partial}\cdot ({\bm \sigma}-\mathbb{I})
\label{eq1}
\end{equation}
\begin{equation}
\partial_t \phi + {\bm v} \cdot {\bm \partial} \phi = \gamma
\partial^2 {\cal M}
\label{eq2}
\end{equation}
\begin{equation}
\partial_t {\bm \sigma}+{\bm v} \cdot{\bm \partial} {\bm \sigma} =
({\bm \partial}{\bm v})^T \cdot{\bm \sigma}+
{\bm \sigma}\cdot{\bm \partial}{\bm v}-{2 \over \tau}({\bm \sigma}-
\mathbb{I})
\,\,\, .
\label{eq3}
\end{equation}
Eq.~(\ref{eq1}) is the usual Boussinesq Navier--Stokes equation
\cite[][]{KC01} with two additional stress contributions.
The first one arises
at the interface where
the effect of surface tension enters into play \cite[][]{B02,YFLS04,BBCV05};
the last term represents the polymer back-reaction to the flow field
\cite[][]{BHAC87}.\\
In (\ref{eq1}),
we have defined $\rho_0=(\rho_1 + \rho_2)/2$, $\bf{g}$ is the gravitational
acceleration pointing along the $y$-axis,
$A\equiv(\rho_2-\rho_1)/(\rho_2+\rho_1)$
is the Atwood number, $e_{ij}\equiv\left(
\partial_i v_j + \partial_j v_i \right) / 2$ is the rate of strain tensor and
$\mu=\mu(\phi)$ is the dynamical viscosity field parametrically defined as
\cite[][]{LS03}
\begin{equation}
\frac{1}{\mu} = \frac{1+\phi}{2 \mu_1}+\frac{1-\phi}{2 \mu_2}
\label{eq4}
\end{equation}
$\phi$ being the phase field governed by (\ref{eq2}).
The phase field $\phi$ is representative of density fluctuations and
we take $\phi=1$ in the regions of density $\rho_1$
and $\phi=-1$ in those of density $\rho_2 \ge \rho_1$.
${\bm \sigma}\equiv \frac{\langle{\bm R}{\bm R} \rangle}{R_0^2}$
is the polymer conformation tensor, ${\bm R}$ being the end-to-end
polymer vector ($R_0$ is the polymer length at equilibrium),
the parameter $\eta$ is proportional to polymer concentration and
$\tau=\tau(\phi)$ is the (slowest) polymer relaxation time which, according
to the Zimm model \cite[][]{DE86}, is assumed to be proportional to
the viscosity $\mu$ (therefore we have $\tau=\tau_1$ for $\phi=1$ and
$\tau=\tau_2$ for $\phi=-1$ with $\mu(\phi)/\tau(\phi)$ constant).
Finally, $\gamma$ is the mobility and ${\cal M}$ is the chemical
potential defined in terms of
the Ginzburg--Landau free energy ${\cal F}$ as \cite[][]{CH58,B02,YFLS04}
\begin{equation}
{\cal M} \equiv \frac{\delta {\cal F} }{\delta \phi}\qquad\mbox{and}\qquad
{\cal F}[\phi] \equiv \lambda \int_{\Omega} \mathrm{d}\bm{x} \;
\left( \frac{1}{2} |{\bm \partial} \phi|^2+ V(\phi) \right) \, ,
\label{eq5}
\end{equation}
where $\Omega$ is the region of space occupied by the system, $\lambda$
is the magnitude of the free-energy and the potential $V(\phi)$ is
\begin{equation}
V(\phi)\equiv \frac{1}{4 \epsilon^2} (\phi^2 -1 )^2
\label{eq6}
\end{equation}
where $\epsilon$ is the capillary
width, representative of the interface thickness.\\
The unstable equilibrium state with heavy fluid placed on the top of
light fluid is given by
\begin{equation}
\bm{v}=\bm{0}\, , \quad \phi(y)=-\tanh\left (\frac{y}
{\epsilon\sqrt{2}}\right )\qquad \mbox{and}\qquad {\bm \sigma}=\mathbb{I}
\label{eq7}
\end{equation}
corresponding to a planar interface of width
$\epsilon$ with polymers having their equilibrium length $R_0$.
In this case, the surface tension, ${\cal S}$, is given by
\cite[see, for example,][]{LL00}:
\begin{equation}
{\cal S} \equiv \lambda \int_{-\infty}^{+\infty} dy \;\left( \frac{1}{2}
|{\bm \partial} \phi|^2+ V(\phi)
\right) = \frac{2\lambda \sqrt{2}}{3\epsilon} \, .
\label{eq8}
\end{equation}
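The value of the profile integral in (\ref{eq8}) can be verified symbolically; the following minimal sketch (purely a consistency check of ours) uses the substitution $u=y/(\epsilon\sqrt{2})$, after which each of the two terms in the integrand equals $\mathrm{sech}^4(u)/(4\epsilon^2)$, so everything reduces to $\int \mathrm{sech}^4 u \, du$:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u', real=True)

# sech(u)^4 = (1 - tanh(u)^2)^2
F = sp.integrate((1 - sp.tanh(u)**2)**2, u)        # antiderivative
I = sp.limit(F, u, sp.oo) - sp.limit(F, u, -sp.oo)
print(I)  # 4/3; with the prefactors this gives S = 2*sqrt(2)*lambda/(3*eps)
\end{verbatim}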
The sharp-interface limit is obtained by taking
$\lambda$ and $\epsilon$ to zero,
keeping $\cal{S}$ fixed to the value prescribed by surface tension
\cite[][]{LS03}.
\section{Linear stability analysis}
Let us now impose a small perturbation on the
interface separating the two fluids.
Such a perturbation displaces the phase field from
the previous equilibrium configuration, which minimizes the free energy
(\ref{eq5}), to a new configuration for which, in general,
$\mathcal{M} \neq 0$. We want to determine how the perturbation evolves
in time.
Focusing on the two-dimensional case (corresponding to translational invariant
perturbations along the $z$ direction), let us
denote by $h(x,t)$ the perturbation imposed to the planar interface
$y=0$ in a way that we can rewrite the phase-field $\phi$ as:
\begin{equation}
\phi = f\left(\frac{y-h(x,t)}{\epsilon \sqrt{2}}\right)\, ,
\label{eq9}
\end{equation}
where $h$ can be larger than $\epsilon$, yet it has to
be smaller than the scale of variation of $h$ (small amplitudes).
In this limit we assume the interface to be locally in equilibrium,
i.e. $\partial^2 f/\partial y^2 = V'(f)$, and thus $f(y)=-\tanh(y)$
and therefore ${\cal M} = - \lambda \frac{\partial^2 f}{\partial x^2}$
($'$ denotes derivative with respect to the argument).
Linearizing the momentum equation for small interface
velocity we have
\begin{equation}
\rho_0 \partial_t v_y = - \partial_y p - \phi \partial_y {\cal M} -
A g \rho_0 \phi + {2 \mu \eta \over \tau}\partial_i\sigma_{i2}
+\mu \left(\partial_x^2 + \partial_y^2 \right)v_y
+2(\partial_y v_y)\partial _y\mu \,.
\label{eq10}
\end{equation}
Integrating over the vertical direction and using integration by parts
one gets
\begin{equation}
\rho_0 \partial_t q = {\cal S} \frac{\partial^2 h}{\partial x^2}+
2 A g \rho_0 h + {2 \mu \eta \over \tau} \Sigma +Q
\label{eq11}
\end{equation}
where we have defined
\begin{equation}
Q\equiv \int_{-\infty}^{+\infty}\mu
\left (\frac{\partial^2 }{\partial x^2} -
\frac{\partial^2 }{\partial y^2}\right )v_y\,dy\qquad
q \equiv \int_{-\infty}^{\infty} v_y\,dy \qquad
\Sigma\equiv \int_{-\infty}^{\infty}\partial_x \sigma_{12}\,dy \, ,
\label{eq12}
\end{equation}
and we have used the relations
$\int (f')^2 dy = 2 \sqrt{2}/(3 \epsilon)$,
$\int f f''' dy = 0$, $\int f dy = 2 h$.
Note that, unlike what happens in the inviscid case,
Eq.~(\ref{eq11}) does not involve solely
the field $q$ but also second-order derivatives of $v_y$.
In order to close the equation,
let us resort to a potential-flow description.
The idea is to evaluate $Q$ for a potential flow $v_y$ and then to
plug $Q=Q^{pot}$ into (\ref{eq11}) \cite[][]{M93}.
The approximation is justified when viscosity is sufficiently small
and its effects are confined in a narrow region around the interface.
Because for a potential flow $\partial^2 {\bm v}=0$ we have
\begin{equation}
Q^{pot}=
2 \int_{-\infty}^{+\infty}\mu {\partial^2 v_y \over \partial x^2}\, dy=
2 \int_{-\infty}^{0}\mu {\partial^2 v_y \over \partial x^2}\, dy +
2 \int_{0}^{\infty}\mu {\partial^2 v_y \over \partial x^2}\, dy =
(\mu_1+\mu_2){\partial^2 q \over \partial x^2} \, .
\label{eq14}
\end{equation}
Substituting in (\ref{eq11}) and defining $\nu=(\mu_1+\mu_2)/(2 \rho_0)$
one finally obtains
\begin{equation}
\partial_t q =
{{\cal S} \over \rho_0} {\partial^2 h \over \partial x^2}+ 2 A g h +
{2 \mu \eta \over \tau \rho_0} \Sigma +
2 \nu {\partial^2 q \over \partial x^2} \, .
\label{eq15}
\end{equation}
Let us now exploit the equation (\ref{eq2}) for the phase field to relate
$v_y$ to $h$. For small amplitudes, we have:
\begin{equation}
\partial^2{\cal M} = {\lambda \over \epsilon \sqrt{2}}
\left [f'\frac{\partial^4h}{\partial x^4} + {1 \over 2\epsilon^2}
f'''\frac{\partial^2 h}{\partial x^2} \right ]
\label{eq16}
\end{equation}
and therefore, from (\ref{eq2})
\begin{equation}
- {1 \over \epsilon} f' \partial_t h + v_y {1 \over \epsilon} f' =
{\gamma \lambda \over \epsilon} \left [f' \partial_x ^4 h +
{1 \over 2\epsilon^2} f''' \partial_x^2 h \right ] \, .
\label{eq17}
\end{equation}
Integrating over $y$, observing that $-f'/(2\sqrt{2}\epsilon)$
approaches $\delta(y-h)$ as $\epsilon \to 0$ and using the limit
of sharp interface ($\gamma \lambda \to 0$) one obtains
\begin{equation}
\partial_t h = v_y(x,h(t,x),t) \equiv v_y^{(int)}(x,t) \, .
\label{eq18}
\end{equation}
The equation for the perturbation $\sigma_{12}$ of the conformation
tensor is obtained by linearizing (\ref{eq3}) around
$\sigma_{\alpha \beta}=\delta_{\alpha \beta}$
\begin{equation}
\partial_t \sigma_{12}= \partial_{x} v_{y}+ \partial_{y} v_{x} -
\frac{2}{\tau}\sigma_{12}
\label{eq19}
\end{equation}
from which, exploiting incompressibility, we obtain
\begin{equation}
\partial_t \partial_x \sigma_{12}=
(\partial_{x}^2 - \partial_{y}^2) v_{y} -
{2 \over \tau} \partial_x \sigma_{12} -
2 \sigma_{12} \partial_x {1 \over \tau}\, .
\label{eq20}
\end{equation}
For small amplitude perturbations the last term, which is proportional to
$\sigma_{12} \partial_x \phi$, can be neglected at the leading order.
Integrating over $y$ and using again the potential flow approximation
one ends up with
\begin{equation}
\partial_t \Sigma=2 \partial_x^2 q - {2 \over \bar{\tau}} \Sigma
- ({1 \over \tau_1}-{1 \over \tau_2}) \int dy \phi \partial_x \sigma_{12}
\, ,
\label{eq21}
\end{equation}
where we have introduced $\bar{\tau}=2 \tau_1 \tau_2/(\tau_1 + \tau_2)$.
In conclusion, we have the following set of equations (in the $(x,t)$
variables) for the
linear evolution of the Rayleigh--Taylor instability in a
viscoelastic flow
\begin{equation}
\left \{
\begin{array}{lll}
\partial_t h & = & v_y^{(int)} \\
\partial_t q & = &
{{\cal S} \over \rho_0} \partial_x^2 h + 2 A g h +
{2 \nu \eta c \over \bar{\tau}} \Sigma + 2 \nu \partial_x^2 q \\
\partial_t \Sigma & = & 2 \partial_x^2 q - {2 \over \bar{\tau}} \Sigma
- ({1 \over \tau_1}-{1 \over \tau_2}) \int dy \phi \partial_x \sigma_{12}
\, .
\end{array}
\right .
\label{eq22}
\end{equation}
where $c=4 \mu_1 \mu_2/(\mu_1+\mu_2)^2 \le 1$.
\section{Potential flow closure for the interface velocity}
The set of equations (\ref{eq22}) is not closed because
of the presence of the interface velocity $v_y^{(int)}$
and of the integral term in the equation for $\Sigma$.
In order to close the system we exploit again the potential
flow approximation for which $v_y=\partial_y \psi$.
Taking into account the boundary condition for $y\to \infty$, the potential
can be written (e.g.~for $y\ge 0$) as
\begin{equation}
\psi(x,y,t)=\int_0^{\infty} e^{-k y+i k x} \hat{\psi}(k,t) dk + c.c.
\label{eq23}
\end{equation}
where ``\^{ }'' denotes the Fourier transform, and therefore
\begin{equation}
v_y(x,y,t)= - \int_0^{\infty} k e^{-k y+i k x} \hat{\psi}(k,t) dk + c.c.
\label{eq24}
\end{equation}
\begin{equation}
q(x,t)= - 2 \int_0^{\infty} e^{i k x} \hat{\psi}(k,t) dk + c.c.
\label{eq25}
\end{equation}
and taking a flat interface, $y=0$, at the leading order
\begin{equation}
v_y^{(int)}(x,t)= - \int_0^{\infty} k e^{i k x} \hat{\psi}(k,t) dk + c.c.
\label{eq26}
\end{equation}
Assuming consistently that also
\begin{equation}
\sigma_{12}(x,y,t)= \int_0^{\infty} e^{-k y+i k x} \hat{\sigma}_{12}(k,t) dk + c.c. \, ,
\label{eq26b}
\end{equation}
in the limit of small amplitudes one has
$\int dy \phi \partial_x \sigma_{12}=0$ and
the set of equation (\ref{eq22}) for the Fourier coefficients becomes
\begin{equation}
\left \{
\begin{array}{lll}
\partial_t \hat{h} & = & {k \over 2} \hat{q} \\
\partial_t \hat{q} & = &
- {{\cal S} \over \rho_0} k^2 \hat{h} + 2 A g \hat{h} +
{2 \nu c \eta \over \bar{\tau}} \hat{\Sigma} - 2 \nu k^2 \hat{q} \\
\partial_t \hat{\Sigma} & = & - 2 k^2 \hat{q} - {2 \over \bar{\tau}} \hat{\Sigma} \, .
\end{array}
\right .
\label{eq27}
\end{equation}
Restricting first to the case without polymers ($\eta=0$),
the growth rate $\alpha_N$
of the perturbation is obtained by looking for a solution
of the form $\hat{h} \sim e^{\alpha_N t}$ which gives
\begin{equation}
\alpha_N = - \nu k^2 + \sqrt{\omega^2 + (\nu k^2)^2}
\label{eq28}
\end{equation}
where we have defined
\begin{equation}
\omega=\sqrt{A g k - {{\cal S} \over 2 \rho_0} k^3} \, .
\label{eq29}
\end{equation}
The expression (\ref{eq29}) is the well-known growth rate for a
Newtonian fluid in the limit of zero viscosity \cite[][]{C61},
while (\ref{eq28}) is a known upper bound to the growth rate for
the case with finite viscosity \cite[][]{MMSZ77}.
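For orientation, the Newtonian bound (\ref{eq28}) is straightforward to evaluate numerically. The following minimal sketch (ours, using for illustration the parameter values quoted for the simulations of Sec.~5) locates the continuum maximum of $\alpha_N(k)$, which lies close to the $k=1$ mode used below:
\begin{verbatim}
import numpy as np

Ag, S_rho0, nu = 0.31, 0.019, 0.3     # A g, S/rho_0 and nu as in Sec. 5

k = np.linspace(1e-3, 4.0, 4000)
omega2 = Ag*k - 0.5*S_rho0*k**3                       # omega^2, eq. (29)
alphaN = -nu*k**2 + np.sqrt(omega2 + (nu*k**2)**2)    # eq. (28)

i = np.argmax(alphaN)
print("k_max =", k[i], " alpha_N =", alphaN[i])
\end{verbatim}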
Let us now consider the case with polymers, i.e. $\eta>0$.
The growth rate $\alpha$ is given by the solution of
\begin{equation}
(\alpha \bar{\tau})^3 + 2 (\alpha \bar{\tau})^2 (1+\nu k^2 \bar{\tau})+\alpha
\bar{\tau}
\left[4 \nu (1+c \eta) k^2 \bar{\tau} -\omega^2 \bar{\tau}^2 \right]-2 \omega^2 \bar{\tau}^2=0 \, .
\label{eq30}
\end{equation}
The general solution is rather complicated and not very enlightening.
In the limit of stiff polymers, $\bar{\tau} \to 0$, one gets
\begin{equation}
\alpha_0 \equiv \lim_{\bar{\tau} \to 0} \alpha = - \nu(1+c \eta) k^2 + \sqrt{\omega^2 + [\nu(1+c \eta) k^2]^2} \, .
\label{eq31}
\end{equation}
Comparing with (\ref{eq28}) one sees that in this limit polymers
simply renormalize solvent viscosity. This result is in agreement
with the phenomenological definition of $c \eta$ as the zero-shear
polymer contribution to the total viscosity of the mixture \cite[][]{V75}.
Therefore, in order to quantify the effects of elasticity on RT instability,
the growth rate for viscoelastic cases at finite $\bar{\tau}$
has to be compared with the Newtonian case with renormalized
viscosity $\nu(1+c \eta)$.
Another interesting limit is $\bar{\tau} \to \infty$. In this case from
(\ref{eq30}) one easily obtains that the growth rate
coincides with that of the
pure solvent (\ref{eq28}), i.e. $\alpha_{\infty} = \alpha_N$.
The physical interpretation is that in the limit $\bar{\tau} \to \infty$
and at finite time for which polymer elongation is finite, the last
term in (\ref{eq1}) vanishes and one recovers the Newtonian
case without polymers (i.e. $\eta=0$).
Of course, this does not mean that in general polymer effects
for high elasticity disappear.
Indeed in the long-time limit polymer elongation is able to
compensate the $1/\tau$ coefficient and in the late, non-linear
stages, one expects to observe strong polymer effects at high elasticity.
From equation (\ref{eq30}) one can easily show (using implicit differentiation)
that $\alpha(\bar{\tau})$ is a monotonic function and, because
$\alpha_\infty \ge \alpha_0$, the instability growth rate
increases with the elasticity, or the Deborah number, here defined
as $De \equiv \omega \bar{\tau}$.
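This monotonic behavior is easy to check numerically by solving the cubic (\ref{eq30}) for its unique positive real root. The sketch below (ours) does so for the same illustrative parameters as above, with $c=1$ and $\eta=0.3$:
\begin{verbatim}
import numpy as np

k, Ag, S_rho0 = 1.0, 0.31, 0.019
nu, c, eta = 0.3, 1.0, 0.3
omega = np.sqrt(Ag*k - 0.5*S_rho0*k**3)

def growth_rate(tau):
    # cubic (30) in the variable x = alpha*tau
    w = omega*tau
    coeffs = [1.0,
              2.0*(1.0 + nu*k**2*tau),
              4.0*nu*(1.0 + c*eta)*k**2*tau - w**2,
              -2.0*w**2]
    roots = np.roots(coeffs)
    x = max(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)
    return x/tau

for De in (0.1, 1.0, 10.0, 100.0):
    print(De, growth_rate(De/omega))   # alpha increases with De
\end{verbatim}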
The case of stable stratification, $g \to -g$, is obtained
by $\omega^2 \to -\omega^2$ neglecting surface tension.
In this case (\ref{eq30}) has no solution for positive $\alpha$,
therefore polymers alone cannot induce instabilities in a stably
stratified fluid.
\section{Numerical results}
The analytical results obtained in the previous Sections
are not exact as they are based on a closure obtained
from the potential flow approximation. While this approximation is consistent
in the inviscid limit $\nu=0$ (where it gives the
correct result (\ref{eq29}) for a Newtonian fluid), for finite
viscosity it gives
a known upper bound to the actual growth rate of
the perturbation \cite[][]{MMSZ77} (this is because the potential
flow approximation underestimates the role of viscosity, which
reduces the instability). Nonetheless, in the case of a
Newtonian fluid this upper bound is known to be a good approximation
of the actual value of the growth rate measured in
numerical simulations \cite[][]{MMSZ77}.
Because both $\bar{\tau} \to 0$ and $\bar{\tau} \to \infty$ limits
correspond to Newtonian fluids, we expect that also in
the viscoelastic case the potential flow description is a good approximation.
To investigate this important point, we have performed a
set of numerical simulations of the full model (\ref{eq1}-\ref{eq3})
in the limit of constant viscosity and relaxation time
(i.e. $\mu_1=\mu_2$, $c=1$ and $\tau_1=\tau_2=\bar{\tau}$)
in two dimensions by means of a standard, fully dealiased,
pseudospectral method on a square doubly periodic domain.
The resolution of the simulations is $1024\times 1024$ collocation points
(a comparative run at double resolution
did not show substantial modifications on the results).
More details on the numerical simulation method can be found
in \cite{CMV06} and \cite{CMMV09}.
The basic state corresponds to a zero velocity field,
a hyperbolic-tangent profile for the phase field and a uniform
distribution of polymers in equilibrium,
according to (\ref{eq7}).
The interface of the basic state is perturbed
with a sinusoidal wave at wavenumber $k$ (corresponding to
maximal instability for the linear analysis)
of amplitude $h_0$ much smaller than the wavelength
($k h_0 =0.05$).
The growth rate $\alpha$ of the perturbation is measured
directly by fitting the height of the perturbed interface at
different times with an exponential law.
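In practice this amounts to a linear regression of $\log h(t)$ against $t$ within the linear-instability window; a minimal sketch (with a hypothetical data file and an illustrative cutoff, not our actual analysis script) reads:
\begin{verbatim}
import numpy as np

# h.dat is a hypothetical two-column file: time, interface amplitude h(t)
t, h = np.loadtxt("h.dat", unpack=True)

# fit log h = log h0 + alpha*t in the early, exponential stage
mask = h < 0.1*h.max()        # illustrative cutoff for the linear regime
alpha, logh0 = np.polyfit(t[mask], np.log(h[mask]), 1)
print("growth rate alpha =", alpha)
\end{verbatim}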
For given values of $A\,g$, $\mathcal{S}/ \rho_0$, $\nu$ and $\eta$,
this procedure is repeated for different values of $\bar{\tau}$ at the
maximal instability wavenumber $k$ (which, for
the range of parameters considered here, is always $k=1$, i.e.
it is not affected by elasticity).
Figure~\ref{fig1} shows the results for two sets of runs at
different values of $\eta$ and $\nu$.
As discussed above, we find that
the theoretical prediction given by (\ref{eq30}) is indeed an upper
bound for the actual growth rate of the perturbation.
Nevertheless, the bound gives growth rates which are quite close
to the numerically estimated values (the error is of the order of
$10\%$).
The error is smaller for the runs having a larger value of $\eta$ and
$\nu$, as was already discussed by \cite{CMMV09}.
\begin{figure}
\centering
\includegraphics[scale=0.8]{fig.eps}
\caption{The perturbation growth-rate $\alpha$ normalized
with the inviscid growth rate $\omega$ (\ref{eq29}) as a
function of the Deborah number $De=\omega \bar{\tau}$.
Points are the results of numerical simulations of the
full set of equations (\ref{eq1}-\ref{eq3}), lines represent
the theoretical predictions obtained from (\ref{eq30}).
The values of parameters are: $c=1$, $k=1$, $A g=0.31$,
$\mathcal{S} / \rho_0=0.019$ and $\eta=0.3$, $\nu=0.3$
(upper points and line) and $\eta=0.5$, $\nu=0.6$ (lower
points and line).}
\label{fig1}
\end{figure}
Both theoretical and numerical results show that the effect of
polymers is to increase the perturbation growth-rate:
$\alpha$ grows with the elasticity and saturates for sufficiently large values
of $De$.
\section{Conclusions and perspectives}
We investigated the role of polymers on the linear phase of the
Rayleigh--Taylor instability in an Oldroyd-B viscoelastic model.
In the limit of vanishing Deborah number (i.e.
vanishing polymer relaxation time) we recover a known
upper bound for the growth rate of the perturbation in a
viscous Newtonian fluid with modified viscosity.
For finite elasticity,
the growth rate is found to increase monotonically with
the Deborah number reaching the solvent limit for high Deborah numbers.
Our findings are corroborated by a set of direct numerical simulations on the
viscoelastic Boussinesq Oldroyd-B model.
Our analysis has been confined to the linear phase of the perturbation
evolution. When the perturbation amplitude becomes sufficiently large,
nonlinear effects enter into play and a fully developed turbulent regime
rapidly sets in \cite[][]{CC06,VC09,BMMV09}.
In the turbulent stage we expect more dramatic effects of polymers.
In turbulent flows,
a spectacular consequence of viscoelasticity induced by polymers
is the drag reduction effect: addition
of minute amounts (a few tens of p.p.m. in weight)
of long-chain soluble polymers
to water leads to a strong reduction (up to $80\%$)
of the power necessary to maintain a
given throughput in a channel \cite[see \eg ][]{T49,V75}.
We conjecture that a similar phenomenon
might arise also in the present context.
Heuristically, the RT system can indeed be
assimilated to a channel inside
which vertical motion of thermal plumes
is maintained by the available potential energy.
This analogy suggests the possibility
of observing in the viscoelastic RT system a ``drag'' reduction
(or mixing enhancement) phenomenon,
i.e. an increase of the velocity of thermal plumes with respect to the
Newtonian case.
Whether or not this picture does apply
to the fully developed turbulence regime
is left for future research.
We thank anonymous Referees for useful remarks.
\section{Introduction}
UV divergences in quantum field theories can be
regularized by introducing higher derivatives
in the kinetic term so that the propagator
approaches $0$ faster than $1/k^2$ at large momenta $k$.
But this is usually done at the cost of unitarity.
For example, by the Pauli-Villars regularization \cite{2,3}
the propagator is modified as
\begin{equation}
\frac{1}{k^2+m^2}\longrightarrow\frac{1}{(k^2+m^2)(k^2+M^2)}
=\frac{1}{M^2-m^2}\left(
\frac{1}{k^2+m^2}-\frac{1}{k^2+M^2}
\right).
\end{equation}
This propagator behaves as $1/k^4$ at large $k$,
and hence alleviates the UV divergence of a Feynman diagram.
However, the norm of the propagating mode
at $k^2 = -M^2$ is negative
due to the minus sign of the pole at $M^2$.
Unitarity is violated for energies beyond the ghost mass $M$.
In \cite{1}, a higher derivative correction to the propagator
of the form
\begin{equation}
f(k^2)=\sum_{n=0}^{\infty}
\frac{c_n}{k^2+m_n^2},\quad\quad c_n>0\quad \forall n
\label{propagator}
\end{equation}
is considered.
Due to the condition $c_n > 0$,
Cutkosky's rules \cite{4} ensure
perturbative unitarity for generic Feynman diagrams.
\footnote{
In the check of unitarity, only poles with masses
lower than the center of mass energy $E$ need to be considered,
and thus the fact that there are infinitely many poles
in the propagator is irrelevant.
}
In this paper we adopt the same type of propagators,
thus our theories are automatically unitary.
The models we consider are also manifestly Lorentz invariant.
Hence, in the following
we will focus our attention on the removal of UV divergence.
In \cite{1} it was shown that
to avoid UV divergence in the four dimensional $\phi^4$ theory,
the following conditions are sufficient:
\begin{subequations}
\label{condition}
\begin{align}
&\sum_n c_n m_n^2=0, \\
&\sum_n c_n=0.
\end{align}
\end{subequations}
Since all the parameters $c_n$ must be greater than zero
to ensure unitarity,
these conditions look impossible to satisfy.
The trick is that, since there is an infinite number of $c_n$'s,
analytic continuation can be used \cite{1} to
satisfy both conditions in eq. (\ref{condition}).
For example, with two constant parameters $z > 0$ and $a > 0$,
let
\begin{subequations}
\begin{align}
c_0&=\frac{1}{1-e^{-z}},\\
c_n&=e^{zn}\qquad n\ge 1,\\
m^2_0&=\frac{1-e^{-z}}{1-e^{-(z+a)}},\\
m^2_n&=e^{an},\qquad n\ge 1,
\end{align}
\end{subequations}
then one can analytically continue the infinite sum
to a simple form
\begin{subequations}
\label{zeta}
\begin{align}
\sum_{n=0}^{\infty} c_n &=
\frac{1}{1-e^{-z}} + \sum_{n=1}^\infty e^{zn}
=\frac{1}{1-e^{-z}} + \frac{e^z}{1-e^z} = 0, \\
\sum_{n=0}^{\infty} c_n m_n^2
&= \frac{1}{1-e^{-(z+a)}} + \sum_{n=1}^{\infty} e^{n(z+a)}
= \frac{1}{1-e^{-(z+a)}} + \frac{e^{z+a}}{1-e^{z+a}} = 0.
\end{align}
\label{examplealc}
\end{subequations}
Note that the two infinite series above diverge
if $z > 0$ or $(z+a) > 0$, respectively,
but we define them by analytic continuation.
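The cancellation can also be checked symbolically. The following minimal sketch (ours, not taken from \cite{1}) simply adds the $n=0$ term to the analytically continued geometric series:
\begin{verbatim}
import sympy as sp

z, a = sp.symbols('z a', positive=True)

# For |e^z| < 1 one has sum_{n>=1} e^{zn} = e^z/(1 - e^z); the same
# closed form defines the sum by analytic continuation for z > 0.
S = 1/(1 - sp.exp(-z)) + sp.exp(z)/(1 - sp.exp(z))
print(sp.simplify(S))    # 0,  i.e.  sum_n c_n = 0

T = 1/(1 - sp.exp(-(z + a))) + sp.exp(z + a)/(1 - sp.exp(z + a))
print(sp.simplify(T))    # 0,  i.e.  sum_n c_n m_n^2 = 0
\end{verbatim}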
Three important issues must be addressed immediately.
First, it is important that the analytic continuation can
be carried out consistently throughout all calculations.
This will be the main concern when we give a prescription
for the computation of Feynman diagrams.
Secondly, some of the readers may be uncomfortable with
this analytic continuation,
in the absence of an intuitive physical interpretation.
However, we will point out that
a similar analytic continuation is naturally incorporated in string theory
from the viewpoint of the worldsheet theory.
It will be very interesting to construct
an analogous worldsheet theory that will
directly justify the analytic continuation used in our models.
But we shall leave this problem for the future.
Finally, while there is an infinite number of poles
in the propagator,
this theory is also equivalent to a theory with
an infinite number of scalar fields with masses $m_n$.
If $m^2_n \gg m^2_0$ for all $n > 0$,
the low energy behavior of this theory
is approximated by an ordinary scalar field theory
with a single scalar field with mass $m_0$.
In this paper we will discuss a generic scalar field theory
in general even dimensions.
After studying the relations among interaction vertices, internal lines,
external lines and loops in Feynman diagrams,
we enumerate the conditions sufficient to eliminate
all superficial divergences to ensure UV-finiteness.
In the last section, we will discuss the physical meaning of
analytic continuation, making an analogy with string theory.
\section{$\phi^n$ theory in 4 dimensions}
In this section we study the patterns of UV-divergence
in a $\phi^n$ theory in 4 dimensional space-time,
and list all the conditions needed to eliminate all the UV-divergences.
Roughly speaking, the more divergent a Feynman diagram is,
the more conditions we need to make it finite.
Thus we are particularly interested in the most divergent diagrams
in order to find all the conditions needed to guarantee UV finiteness.
For the sake of simplicity,
we assume that there is a unique $\phi^n$ interaction in the theory.
Nevertheless, our conclusion will also apply to more general theories
including $\phi^{n-2}, \cdots, \phi^4$ interactions,
since one can always construct the most divergent diagrams with
$\phi^n$ interactions alone.
In a diagram with superficial divergence of dimension $D$,
in general there are divergent terms proportional to \cite{1}
\begin{equation}
\sum_n c_n \Lambda^D,\quad\sum_nc_nm_n^2\Lambda^{D-2},
\quad \cdots, \quad
\sum_nc_nm_n^{D-2}\Lambda^{2},
\quad\sum_nc_nm_n^{D}\log(\Lambda^2).
\end{equation}
In 4 dimensions,
the superficial divergence $D$ is determined by
the number of loops $L$ and
the number of internal lines (propagators) $I$ as
\begin{equation}
D=4L-2I.
\label{DLI}
\end{equation}
On the other hand, the number of loops $L$
is related to the number of vertices $V$ and internal lines $I$ via
\footnote{
This equality does not apply to the one-loop diagram
without vertices ($I = 1$, $V = 0$ and $L = 1$).
This is because the propagator in the loop does not
have its endpoints ending on vertices.
}
\begin{equation}
L=I-V+1.
\label{loopandvertex}
\end{equation}
This equation can be understood as follows.
The calculation of a Feynman diagram with $L$ loops
always turns out to be an integration over
$L$ free momentum parameters ($p_1, \cdots, p_L$).
On the other hand,
the number of free momentum parameters
should also equal the total number of momenta $I$
assigned to each propagator ($q_1, \cdots, q_I$)
minus the number of constraints $V$
for the momentum conservation at each vertex.
However, the constraints of momentum conservation
at all vertices are not linearly independent.
The number $1$ on the right hand side of (\ref{loopandvertex})
corresponds to the momentum conservation
of the whole diagram,
which is automatically satisfied by the assignment of external momenta.
Another equality that will be used later is
\begin{equation}
E = nV - 2I,
\label{EVI}
\end{equation}
where $E$ is the number of external lines
and $n$ the number of legs of each interaction vertex.
Using the relations above, we can express $D$ as
\begin{equation}
D=(n-4)V-E+4.
\end{equation}
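These counting rules are simple enough to automate. The following toy bookkeeping helper (ours, not code from \cite{1}) evaluates $D$ from $n$, $V$ and $E$ and checks the relations above:
\begin{verbatim}
def superficial_divergence(n, V, E):
    # D of a phi^n diagram in 4 dimensions with V vertices and E
    # external legs, using E = nV - 2I, L = I - V + 1 and D = 4L - 2I.
    assert (n*V - E) % 2 == 0
    I = (n*V - E)//2          # internal lines
    L = I - V + 1             # independent loops
    D = 4*L - 2*I
    assert D == (n - 4)*V - E + 4
    return D

print(superficial_divergence(4, 1, 2))                       # 2 (one loop)
print([superficial_divergence(6, V, 2) for V in (1, 2, 3)])  # [4, 6, 8]
\end{verbatim}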
In the 4 dimensional $\phi^4$ theory,
$D$ only depends on the number of external lines $E$ as $D = 4-E$,
and is thus bounded from above by $D\leq 4$.
This is why we only need two conditions (\ref{condition})
to eliminate the divergences of $D = 4$ and $D = 2$.
For $\phi^n$ theories with $n>4$,
the larger $V$ is, the higher the superficial divergence $D$ can be.
{\em A priori} this may force us to impose infinitely many conditions
of the form
\begin{equation}
\sum_n c_n m_n^{2r}=0
\label{typicalcond}
\end{equation}
with $r = 0, 1, 2, \cdots, \infty$.
However the divergence of a diagram can sometimes be decomposed
into lower dimensional divergences.
For example, in the $\phi^4$ theory,
there is a diagram with superficial divergence $D=4$
(see Fig.\ref{scalartwoloops}),
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{scalartwoloops.pdf}
\end{center}
\caption{A two-loop diagram in the 4 dimensional $\phi^4$ theory with superficial divergence $D=4$, whose divergence factorizes into that of two separable one-loop subdiagrams.}
\label{scalartwoloops}
\end{figure}
but since the two loops are separable,
this diagram only needs a single condition of dimension $2$
(\ref{condition}a) to avoid the divergence.
From section 3.4 of \cite{1},
a generic Feynman diagram with $L$ loops is of the form
\begin{equation}
\mathcal{M}=\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int d^4p_1\cdots \int d^4 p_L \prod_{i=1}^I\frac{1}{q_i^2+m_{n_i}^2},
\end{equation}
where $q_i$ is the momentum of the $i$-th internal line,
which is a linear combination of the loop momenta $p_j$
and the momenta of external lines $k_i$.
Using Feynman's parameters, this quantity can be rewritten as
\begin{equation}
\mathcal{M}\propto\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int_0^1 d\alpha_1 \cdots \int_0^1 d\alpha_I\delta(\alpha_1+\cdots+\alpha_I)
\int d^4p_1\cdots \int d^4 p_L
\frac{1}{\left(\sum_{i=1}^I\alpha_i(q_i^2+m_{n_i}^2)\right)^I}.
\label{Mgeneral}
\end{equation}
By shifting the loop momenta $p_j\rightarrow p'_j$, the integrand can be simplified as
\begin{equation}
\frac{1}{\left(\sum_{j=1}^L\beta_jp_j'^2+\Delta\right)^I},
\end{equation}
where $\beta_j$'s are functions of the parameters $\alpha_i$, and
\begin{equation}
\Delta=\Delta_0+\sum_{i,j=1}^E A_{ij}k_ik_j,
\quad \Delta_0=\sum_{i=1}^I\alpha_im_{n_i}^2,
\label{DD}
\end{equation}
where $A_{ij}$'s are functions of the Feynman parameters $\alpha_i$.
Before summing over $n_1, \cdots, n_I$,
each integral in (\ref{Mgeneral}) is potentially divergent.
Our prescription of calculation is to first regularize all divergent integrals
by dimensional regularization $d= 4-\epsilon$,
and after imposing the conditions (\ref{condition}),
we take the limit $\epsilon\rightarrow 0$ to obtain the final result.
One could also apply other regularization schemes instead of
dimensional regularization.
It was shown in \cite{1} that, for the diagrams we computed explicitly,
various different regularization methods give exactly the same result.
This may be a general feature of our models,
although a rigorous proof is yet to be given.
By the general formula of dimensional regularization \cite{7}
\begin{equation}
\int \frac{d^dl}{(2\pi)^d}\frac{1}{(l^2+\nu^2)^n}=\frac{1}{(4\pi)^{d/2}}\frac{\Gamma(n-\frac{d}{2})}{\Gamma(n)}\left(\frac{1}{\nu^2}\right)^{n-\frac{d}{2}},
\end{equation}
apart from the integration over $\alpha$'s,
equation (\ref{Mgeneral}) can be integrated over the loop momenta $p'_i$ one by one as
\begin{eqnarray}
&&\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int d^d p_1\cdots \int d^d p_L
\frac{1}{\left(\sum_{j=1}^L\beta_jp_j'^2+\Delta\right)^I} \nonumber \\
&\propto&\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int d^d p_1\cdots \int d^d p_{L-1}
\frac{1}{\left(\sum_{j=1}^{L-1}\beta_jp_j'^2+\Delta\right)^{I-d/2}}
\Gamma\left(I-\frac{d}{2}\right) \nonumber \\
&\propto&\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int d^d p_1\cdots \int d^d p_{L-2}
\frac{1}{\left(\sum_{j=1}^{L-2}\beta_jp_j'^2+\Delta\right)^{I-2d/2}}
\Gamma\left(I-\frac{d}{2}\right)
\frac{\Gamma(I-\frac{2d}{2})}{\Gamma(I-\frac{d}{2})}. \nonumber \\
\label{Mgeneral2}
\end{eqnarray}
It is easy to see that
the Gamma function appearing in the denominator
after integrating over a loop momentum always cancels
the numerator produced by the previous integral.
After we integrate over all loop momenta,
the final result of (\ref{Mgeneral2}) is proportional to
\begin{eqnarray}
&&
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\left(\frac{1}{\Delta}\right)^{I-Ld/2}
\Gamma\left(I-\frac{Ld}{2}\right) \nonumber \\
&=&
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\left(\frac{1}{\Delta}\right)^{\frac{\epsilon L}{2}-\frac{D}{2}}
\Gamma\left(\frac{\epsilon L}{2}-\frac{D}{2}\right) \nonumber \\
&\approx&
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I} \Delta^{D/2}
\left[1+\frac{\epsilon L}{2}\log{\left(\frac{1}{\Delta}\right)}
+{\cal O}(\epsilon^2)\right]
\frac{(-1)^{D/2}}{\left(\frac{D}{2}\right)!}
\left[\frac{2}{L\epsilon}+\left(-\gamma+\sum_{k=1}^{D/2}\frac{1}{k}\right)
+{\cal O}(\epsilon)\right] \nonumber \\
&\propto&
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I} \Delta^{D/2}
\left[\frac{2}{L\epsilon}+
\log{\left(\frac{1}{\Delta}\right)}+
\left(-\gamma+\sum_{k=1}^{D/2}\frac{1}{k}\right)+{\cal O}(\epsilon)\right],
\label{Mgeneral3}
\end{eqnarray}
where $D$ is the superficial degree of divergence and $\epsilon=4-d$.
The UV divergence of the diagram is summarized in
the first term in (\ref{Mgeneral3}),
which diverges in the limit $\epsilon\rightarrow 0$.
To eliminate this UV divergence, we need
\begin{equation}
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}\Delta^{D/2}=0.
\label{20}
\end{equation}
If this condition is satisfied,
the third term also vanishes and
the second term in (\ref{Mgeneral3})
contributes to the finite part of the amplitude
\begin{equation}
\mathcal{M}\propto\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\int_0^1 d\alpha_1 \cdots \int_0^1 d\alpha_I\delta(\alpha_1+\cdots+\alpha_I)
\Delta^{D/2}\log{\left(\frac{1}{\Delta}\right)}.
\end{equation}
As we look at diagrams with
higher and higher superficial divergence $D$,
there is a chance of finding new conditions
of the form (\ref{typicalcond}) with larger and larger values of $r$
in order for (\ref{20}) to remain valid.
To understand the precise connection between $D$ and the values of $r$,
we decompose (\ref{20}) into equations of the form (\ref{typicalcond})
with different values of $r$.
But we only care about the largest value of $r$, $r_{max}$
(or the largest power of the masses $m_n$),
since all conditions of the form (\ref{typicalcond}) with $r < r_{max}$
are needed anyway for all diagrams to be UV finite.
According to (\ref{DD}), eq. (\ref{20}) can be expanded
(note that $D$ is always even, see (\ref{DLI})) as
\begin{equation}
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\left(\Delta_0^{D/2} +
\frac{D}{2} \Delta_0^{D/2-1}\left(\sum_{i, j=1}^E A_{ij}k_i k_j\right)
+ C^{D/2}_2 \Delta_0^{D/2-2}\left(\sum_{i, j=1}^E A_{ij}k_i k_j\right)^2
+ \cdots
\right),
\end{equation}
where $C^{D/2}_2 = \frac{(D/2)(D/2-1)}{2}$,
and the largest power of $m_n^2$ in (\ref{20}) resides in the term
\begin{equation}
\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}\Delta_0^{D/2}
=\sum_{n_1\cdots n_I}c_{n_1}\cdots c_{n_I}
\left(\sum_{i=1}^I\alpha_im_{n_i}^2\right)^{D/2}=0.
\label{n1n2n3am}
\end{equation}
Apparently, the conditions (\ref{condition})
($\sum c_n=0$ and $\sum c_n m_n^2=0$)
needed for the $\phi^4$ theory
must also be needed for $\phi^n$ theories with $n > 4$.
Thus we can first remove all the terms in (\ref{n1n2n3am})
that already vanish due to these conditions.
This means that in the expansion of $(\sum_{i=1}^I\alpha_im_{n_i}^2)^{D/2}$,
we must be able to associate at least two factors of $m_{n_i}^2$
to each $c_{n_i}$ in order for a particular term to survive.
However,
since each term in the expansion of $(\sum_{i=1}^I\alpha_im_{n_i}^2)^{D/2}$
is a product of $D/2$ powers of $\alpha_im_{n_i}^2$,
and there are $I$ possible values of the index $i$ on $c_{n_i}$ to check,
it will not be possible to associate two or more factors of $m_{n_i}^2$
for all values of $i$ if
\begin{equation}
2I > D/2=\frac{4L-2I}{2}=2L-I.
\end{equation}
As a result there will be no condition other than
$\sum c_n=0$ and $\sum c_n m_n^2=0$ if
\begin{equation}
3I > 2L.
\end{equation}
Combining this with eq. (\ref{loopandvertex})
leads to a trivial condition
\begin{equation}
V+I/2 > 1.
\label{Vge1}
\end{equation}
This condition is violated only by the one-loop diagram
without vertex ($V=0$),
which is already considered in the $\phi^4$ theory
and vanishes under the conditions (\ref{condition}).
Thus we have proven that in 4 dimensions
all $\phi^n$ theories are UV finite
if the propagator (\ref{propagator})
satisfies the conditions (\ref{condition}).
\section{$\phi^n$ theory in arbitrary even dimensions}
In general, the relation between the superficial divergence $D$
and space-time dimension $d$ is
\begin{equation}
D=dL-2I.
\label{DdLI}
\end{equation}
In this paper we restrict our discussion to
the cases of even dimensional space-time.
The reason is that odd dimensions may lead to
odd values of superficial divergence $D$,
and $\Delta^{D/2}$ is no longer a polynomial of $\Delta_0$.
Repeating the arguments in the previous section
for a generic even dimension $d$,
we find (\ref{Vge1}) replaced by
\begin{equation}
V > 1+\left(1-\frac{6}{d}\right) I.
\end{equation}
This condition can be easily violated when $d > 4$.
For example, the simple $\phi^4$ one-loop diagram
in Fig.\ref{oneloop5D} for 6 dimensional spacetime has
a superficial divergence of $6\times 1-2\times 1=4$.
Clearly we need one more condition $\sum c_n m_n^4=0$
in addition to (\ref{condition}).
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{oneloop5D.pdf}
\end{center}
\caption{The one-loop $\phi^4$ diagram, which in 6 dimensional space-time has superficial divergence $D=4$.}
\label{oneloop5D}
\end{figure}
The next question is:
for given $n$ and $d>4$,
do we need infinitely many conditions to ensure
that all diagrams are finite,
or does a finite number of conditions suffice to avoid all UV divergences?
To answer this question,
we revisit eq. (\ref{n1n2n3am}) in more detail.
If we impose sufficiently many conditions
of the form (\ref{typicalcond}) to ensure
that (\ref{n1n2n3am}) vanishes,
there would be no UV divergences.
If $2I\leq D/2$,
there are terms in (\ref{n1n2n3am})
with a factor of $m_{n_i}$ to the 4th or higher power
associated with each factor $c_{n_i}$,
and thus we need the condition $\sum c_n m_n^4=0$
in order to remove such terms.
Similarly, if $3I\leq D/2$,
we also need $\sum c_n m_n^6=0$, and so on.
In general, for a Feynman diagram with superficial divergence $D$
and $I$ internal lines,
we need conditions (\ref{typicalcond})
with $r = 1, 2, \cdots, [D/(2I)]$,
where $[D/(2I)]$ denotes the integer part of $D/(2I)$.
Therefore, we are interested in the maximal value of
$[D/(2I)]$ for a $\phi^n$ theory in $d$ dimensions with given $n$ and $d$.
If the set of $[D/(2I)]$ for all Feynman diagrams
is unbounded from above,
we need an infinite number of conditions.
Using (\ref{DdLI}), and then (\ref{loopandvertex}),
one can express $D$ in terms of $V$ and $I$ as
\footnote{
Again we are excluding the diagram without vertices.
It can be checked separately that
the conditions we will impose later will also ensure
that these diagrams are free of UV divergences.
}
\begin{equation}
D = dL - 2I = d(I-V+1) - 2I = (d-2)I - d(V-1) \leq (d-2)I.
\end{equation}
This implies that there is an upper bound on $[D/(2I)]$, i.e.
\begin{equation}
\frac{D}{2I} \leq \frac{d-2}{2}.
\label{choiceofk}
\end{equation}
This means that for any given $n$ and space-time dimension $d$,
we only need the conditions
\begin{equation}
\sum_n c_n m_n^{2r}=0 \qquad \mbox{for} \qquad r = 0, 1, \cdots, \frac{d-2}{2}.
\label{generalcond}
\end{equation}
Remarkably, this condition is independent of $n$.
It follows that, for given dimension $d$,
the same propagator that satisfies (\ref{generalcond})
suits all polynomial interactions of $\phi$.
As it was commented in \cite{1},
the desired propagators satisfying all the conditions
are easy to construct.
Here we give a systematic way to construct
propagators satisfying (\ref{generalcond}) for generic $d$.
With a set of $d/2$ positive parameters $x_i$,
we define
\begin{subequations}
\label{example}
\begin{align}
c_n&=\left[
1+x_1(n+1)+x_2(n+2)(n+1)+\cdots
+x_{d/2}\frac{(n+d/2)!}{n!}
\right]e^{zn}, \\
m^2_n&=e^{an}
\end{align}
for $n = 0, 1, 2, \cdots$.
\end{subequations}
Denoting $\rho\equiv e^{z+ar}$ for convenience,
we carry out the infinite sum $\sum c_nm^{2r}_n$
first assuming that $\rho<1$,
and then we analytically continue it back to $\rho>1$.
The result of $\sum c_nm^{2r}_n$ is
\begin{subequations}
\begin{align}
\sum_{n=0}^\infty c_nm_n^{2r}&=\frac{1}{1-\rho}+
x_1\frac{d}{d\rho}\left(\frac{1}{1-\rho}\right)+
x_2\frac{d^2}{d\rho^2}\left(\frac{1}{1-\rho}\right)+
\cdots+x_{d/2}\frac{d^{\frac{d}{2}}}{d\rho^{\frac{d}{2}}}\left(\frac{1}{1-\rho}\right)\\
&=\frac{1}{\xi}+\frac{x_1}{\xi^2}+\frac{x_2}{\xi^3}+
\cdots+\frac{x_{d/2}}{\xi^{d/2+1}} \equiv h(\xi),
\end{align}
\end{subequations}
where $\xi\equiv\frac{1}{1-\rho}$, which is strictly negative when $\rho>1$.
We have sufficient parameters $\{x_1,x_2\cdots x_\frac{d}{2}\}$
to fix the $d/2$ roots of $\xi$ at desired positions
$\{-\xi_1,-\xi_2,\cdots,-\xi_{d/2}\}$ ($\xi_i$'s are positive).
We can find the correspondence between $x_i$'s and $\xi_i$'s from
\begin{equation}
\xi^{d/2+1}h(\xi) =
c(\xi+\xi_1)(\xi+\xi_2)\cdots(\xi+\xi_{d/2}),
\label{findx}
\end{equation}
where $c$ is an arbitrary real parameter.
Apparently all $x_i$'s are positive because the polynomial (\ref{findx})
has no negative coefficients.
As a result, all $c_n$'s are positive and unitarity is preserved.
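For $d=4$, for example, the construction amounts to fixing two roots. In the
following numerical sketch (ours; $z=a=1$ is an arbitrary choice), the roots
are placed at $\xi_i=-1/(1-e^{z+a(i-1)})$, i.e. precisely at the continued
values of $\xi$ corresponding to $r=0,1$, so that $h$ vanishes there; the
$x_i$ then follow from (\ref{findx}) as elementary symmetric polynomials of
the $\xi_i$:
\begin{verbatim}
import numpy as np

d, z, a = 4, 1.0, 1.0                          # illustrative choice
rs = np.arange(d // 2)                         # r = 0, ..., (d-2)/2
xi_i = -1.0 / (1.0 - np.exp(z + a * rs))       # desired (positive) roots
# xi^{d/2+1} h(xi) = (xi + xi_1)(xi + xi_2) with c = 1, hence:
x1, x2 = xi_i.sum(), xi_i.prod()               # both positive

def h(xi):
    """Analytic continuation of sum_n c_n m_n^{2r}, xi = 1/(1 - rho)."""
    return 1.0 / xi + x1 / xi**2 + x2 / xi**3

for r in rs:
    rho = np.exp(z + a * r)                    # rho > 1: continued regime
    print(r, h(1.0 / (1.0 - rho)))             # ~0: sum c_n m_n^{2r} = 0
\end{verbatim}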
\section{Analytic continuation and string theory}
\label{AC}
\subsection{Analytic continuation}
It might appear strange to some readers that
the analytic continuation of a parameter in the propagator is used
to eliminate UV divergences.
What is the physical meaning of this analytic continuation?
We will try to give some hint to answering this question.
First, analytic continuation means the extension of the domain
of a function $f(x)$ under the requirement of analyticity.
For example, if we define $f(x)$ by the series
\begin{equation}
f(x)=1+x+x^2+x^3+\cdots=\sum_{n=0}^\infty x^n,
\label{seriessol}
\end{equation}
the domain of $f(x)$ should be restricted to $(-1,1)$
because the radius of convergence is $1$.
However, we can extend the definition of $f(x)$
by analytic continuation to the whole complex plane
except the point at $x=1$, so that
\begin{equation}
f(x)=\frac{1}{1-x}, \qquad (x\in\mathbb{C}, \quad x\neq 1).
\label{exact}
\end{equation}
In mathematical manipulations of physical equations,
there is a physical reason for analytic continuation.
Due to the use of certain computational techniques
or one's choice of formulation,
the validity of some mathematical expressions may be restricted,
but often the physical quantities we are computing
could be well-defined with a larger range of validity.
Relying on the analyticity of the physical problem,
analytic continuation allows us to retrieve the full range of validity
of our results, even though the validity of derivation is more restricted.
As an example, imagine that in a physical problem,
we need to solve the following differential equation
\begin{equation}
(1-x)f'(x)-f(x)=0.
\end{equation}
One might try to solve this differential equation as an expansion
\begin{equation}
f(x)=f_0+f_1x+f_2x^2+\cdots,
\end{equation}
and obtain some recursion relations which results in the solution (\ref{seriessol}),
up to an overall constant.
If one analytically continues this result to (\ref{exact}),
one can directly check that it is the correct solution of the differential equation
even for $x$ outside the range $(-1,1)$.
The appearance of the series (\ref{seriessol})
and the convergence condition $|x| < 1$ is merely an artifact of
the technique used in the derivation.
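This toy example can be checked with a few lines of computer algebra
(a minimal sympy sketch of the argument above):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x)                      # the analytically continued solution
print(sp.simplify((1 - x) * sp.diff(f, x) - f))   # 0: solves the ODE
print(sp.series(f, x, 0, 5))         # 1 + x + x**2 + x**3 + x**4 + ...
\end{verbatim}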
\subsection{One-loop diagrams in string theory}
In this subsection,
we shall review how UV divergence is avoided in string theory
via analytic continuation.
Apart from factors involving vertex operators,
the formula for the amplitudes of one-loop diagrams
contain a common factor \cite{6}
\begin{equation}
A_0=\int_0^1d\omega \; \mathrm{Tr} \omega^{L_0-2},
\label{openstring1}
\end{equation}
where $L_0=\frac12p^2+N$.
This factor comes from the self energy diagram of an open string.
\begin{figure}[h]
\begin{center}
\includegraphics[]{worldsheet1.pdf}
\end{center}
\caption{The open-string one-loop (self-energy) worldsheet: a cylinder traversed in proper time $-\ln\omega$.}
\label{worldsheet1}
\end{figure}
The trace of eq. (\ref{openstring1})
includes summation over each state in the spectrum
and integration of energy-momentum.
The factor $\omega^{L_0}$ is an operator that
propagates a string through a proper time of length $(-\ln\omega)$
(which is positive for $\omega<1$).
The regime $\omega\rightarrow 1$ corresponds to a very short proper time,
and thus a very narrow cylinder (see Fig.\ref{worldsheet1});
this is thus the ultraviolet regime.
To take a closer look at the UV behavior of $A_0$ (\ref{openstring1}),
one can formally compute $A_0$ as
\footnote{
For bosonic strings, the term $0^{L_0-1}$ also diverges.
But our attention is on the UV divergent terms
due to integration over the regime $\omega \sim 1$.
}
\begin{eqnarray}
A_0&=&\mathrm{Tr}\frac{1}{L_0-1}\omega^{L_0-1}|_0^1 \nonumber \\
&=&\int d^D p\;\sum_N \frac{c_N}{p^2+N-1}(1^{L_0-1}-0^{L_0-1}) \nonumber \\
&=&\int d^D p\;\sum_N \frac{c_N}{p^2+N-1}.
\label{stringdivergence}
\end{eqnarray}
The factor $c_N$ comes from the symmetry factor of particles
depending on their spin.
Here we notice that
the momentum integration leads to a UV divergence
for each particle propagator.
Naively, the sum over the contributions from infinitely many particles
can only make the UV divergence infinitely worse.
But it is well known that string theory is free from UV divergence.
We will see below that the trick is analogous to the analytic continuation
we used to regularize the scalar field theories.
Let us recall that string theory solves this UV problem
by conformal symmetry and open-closed string duality.
By scaling symmetry,
a cylinder with length $\frac{2\pi^2}{-\ln\omega}$ (see Fig. \ref{worldsheet2})
and circumference $2\pi$ is equivalent to Fig. \ref{worldsheet1}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.67]{worldsheet2.pdf}
\end{center}
\caption{The conformally equivalent cylinder of circumference $2\pi$ and length $\frac{2\pi^2}{-\ln\omega}$, reinterpreted as a closed string propagating over a long proper time as $\omega\rightarrow1$.}
\label{worldsheet2}
\end{figure}
We can look at this diagram with a different perspective,
interpreting it as a propagating closed string over a proper time
$\frac{2\pi^2}{-\ln\omega}$.
When $\omega$ is close to 1, the cylinder is very long.
The $\omega\rightarrow 1$ regime is no longer the ultraviolet regime,
but the infrared regime.
Defining $-\ln q\equiv\frac{2\pi^2}{-\ln\omega}$,
let us consider a closed string propagating for the proper time $(-\ln q)$.
The amplitude for this process is of the form
\begin{equation}
A_0\sim\int_0\frac{dq}{q}\; U,
\label{closedoneloop}
\end{equation}
where $U$ is the evolution operator that carries the initial state to the final state
\begin{equation}
U=\langle f | e^{(L_0+\tilde{L_0}-2)\ln q} | i \rangle.
\end{equation}
In the infrared regime $\omega\rightarrow 1$,
a closed string with zero momentum propagates for a very long proper time.
It follows that the states $|i\rangle$ and $|f\rangle$ are of nearly zero momentum,
and classical trajectories dominate the path integral.
Since the equation of motion and level matching condition
$L_0=\tilde{L}_0$ are satisfied for the classical trajectories,
we expect that
\begin{equation}
U\sim\langle f | 0\rangle
\langle 0| e^{(2L_0-2)\ln q} |0\rangle
\langle 0|i\rangle.
\end{equation}
As there is a factor of $1/q$ in the integrand of (\ref{closedoneloop}),
the only divergence in $A_0$ comes from the tachyon ($L_0=0$),
which can be removed in superstring theories,
and the infrared divergence of the dilaton ($L_0=1$),
which is analogous to the IR divergence typical in massless field theories.
Therefore, one concludes that the UV divergence
in an open superstring one-loop diagram disappears
if we compute it in the closed string picture.
Since it is the string worldsheet duality that allows us to identify
the ill-defined expansion of $A_0$ in (\ref{stringdivergence})
with the UV-finite closed string tree level diagram,
we want to take a closer look at the string worldsheet duality
and compare it with our trick of analytic continuation.
\subsection{$s-t$ duality and analytic continuation}
The simplest example of string worldsheet duality is
the $s-t$ duality of open-string 4-point amplitudes.
Consider the 4-point tree-level scattering amplitudes in the $t$-channel
\begin{equation}
A(s,t)=-\sum_J\frac{g^2_J(-s)^J}{t-M^2_J},
\label{eqn41}
\end{equation}
and the $s$-channel
\begin{equation}
A'(s,t)=-\sum_J\frac{g^2_J(-t)^J}{s-M^2_J}.
\label{eqn42}
\end{equation}
Note that in order for the two quantities to be identical $A(s, t) = A'(s, t)$,
the sum $\sum_J$ must be an infinite series
and the masses $M_J$ and couplings $g_J$ must be fine-tuned.
The parameters are chosen such that
\begin{subequations}
\begin{align}
A(s,t)&=-\sum_{n=0}^{\infty}
\frac{(\alpha(s)+1)(\alpha(s)+2)\cdots(\alpha(s)+n)}{n!}\frac{1}{(\alpha(t)-n)},\\
A'(s,t)&=-\sum_{n=0}^{\infty}
\frac{(\alpha(t)+1)(\alpha(t)+2)\cdots(\alpha(t)+n)}{n!}\frac{1}{(\alpha(s)-n)},
\end{align}
\end{subequations}
where $\alpha(s) = \alpha' s + \alpha_0$.
Since $\alpha'>0$, for $s>0$ and $t>0$, using the recursion relation of the gamma function
\begin{equation}
\Gamma(n+1)=n\Gamma(n),
\end{equation}
the amplitudes can also be written as
\begin{subequations}
\begin{align}
A(s,t)&=-\sum_{n=0}^{\infty}
\frac{\Gamma(\alpha(s)+n+1)}{\Gamma(\alpha(s)+1)n!}\frac{1}{(\alpha(t)-n)},\\
A'(s,t)&=-\sum_{n=0}^{\infty}
\frac{\Gamma(\alpha(t)+n+1)}{\Gamma(\alpha(t)+1)n!}\frac{1}{(\alpha(s)-n)}.
\end{align}
\label{eqn45}
\end{subequations}
These expressions seem divergent at first sight:
by Stirling's formula,
the ratio of gamma functions in each term grows like $n^{\alpha(s)}$ or $n^{\alpha(t)}$,
while the remaining factor $1/(\alpha(t)-n)$ or $1/(\alpha(s)-n)$ only decays like $1/n$.
But if we first assume that $\alpha' < 0$,
the series (\ref{eqn45}) converge to the form of an expansion of the beta function
\begin{equation}
B(x,y)=\sum_{n=0}^\infty\frac{\Gamma(n-y+1)}{n!\Gamma(-y+1)}
\frac{1}{x+n},\qquad x,y>0.
\end{equation}
Then we can analytically continue the quantities back to $\alpha'>0$,
and see that
both $A(s,t)$ and $A'(s,t)$ in (\ref{eqn45}) can be expressed
by the well-known Veneziano amplitude
\begin{equation}
A(s,t)=\frac{\Gamma(-\alpha(s))\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))}.
\end{equation}
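This identification is easy to check numerically in the convergence regime.
The sketch below (our illustration; $\alpha(s)=-2.5$ and $\alpha(t)=-0.7$ are
arbitrary values with both arguments negative) sums the $t$-channel series
(\ref{eqn45}) and compares it with the Veneziano formula:
\begin{verbatim}
from mpmath import mp, gamma, factorial, nsum, inf

mp.dps = 25
a_s, a_t = mp.mpf('-2.5'), mp.mpf('-0.7')   # illustrative values

series = -nsum(lambda n: gamma(a_s + n + 1)
               / (gamma(a_s + 1) * factorial(n)) / (a_t - n), [0, inf])
veneziano = gamma(-a_s) * gamma(-a_t) / gamma(-a_s - a_t)
print(series)      # the two outputs agree to working precision,
print(veneziano)   # confirming the beta-function resummation
\end{verbatim}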
In the sample calculation above,
we reminded ourselves that the worldsheet duality,
which is at the heart of UV-finiteness of string theory,
is also a result of analytic continuation --
the same trick we used to remove UV divergences in our field theory models.
The duality that interchanges
a one-loop open string diagram with a tree-level closed string diagram
is a result of the Wick rotation on the worldsheet.
The Wick rotation is an analytic continuation.
The infinite number of poles, fine-tuned masses and couplings
in string theory are all reminiscent of
our choice of the propagator (\ref{propagator}).
The key ingredients that allow us to remove UV divergences
are exactly the same in our model and in string theory.
The only difference is that in string theory the (much finer) fine-tuning
leads to a large symmetry (conformal symmetry),
and is capable of removing UV divergences
even in the presence of vector and tensor fields.
It is tempting to make the conjecture that
the fine-tuning conditions (\ref{generalcond})
also correspond to some symmetries.
We leave this question for future investigation.
\section{Conclusion}
Let us summarize our results.
For a scalar field theory in $d$-dimensional spacetime
($d$ must be even)
with an action of the form
\begin{equation}
\label{S1}
S = \int d^d x \; \left( \frac{1}{2} \phi f^{-1}(-\partial^2) \phi
- V(\phi) \right),
\end{equation}
where $V(\phi)$ is a polynomial of $\phi$ of arbitrary order,
and the function $f(-\partial^2)$ in the kinetic term is given in (\ref{propagator}),
with the conditions in (\ref{generalcond}) satisfied,
the theory is {\em UV-finite, unitary and Lorentz invariant}
to all orders in the perturbative expansion.
Remarkably, the conditions (\ref{generalcond})
are independent of the order of the polynomial interactions.
It should be straightforward to generalize our discussion above
to scalar field theories (\ref{S1}) with more than one scalar fields $\phi_a$
with polynomial type interactions.
The prescription for calculating Feynman diagrams is
to first use dimensional regularization to regularize
integrals over internal momenta,
and then impose the conditions (\ref{generalcond})
to remove all UV divergences in the limit $\epsilon \rightarrow 0$.
The infinite sums involved in the calculation are dealt with via
analytic continuation.
Roughly speaking,
the conditions (\ref{generalcond}) remove
the first $d/2$ terms in the large $k$ expansion of the propagator
\begin{equation}
f(k^2) = \sum_n \frac{c_n}{k^2+m_n^2} =
\frac{\sum_n c_n}{k^2} - \frac{\sum_n c_n m_n^2}{k^4}
+ \frac{\sum_n c_n m_n^4}{k^6} - \cdots.
\end{equation}
Hence the propagator goes to zero as fast as $1/k^{d+2}$
as $k \rightarrow \infty$,
removing UV divergences for all diagrams.
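This falloff is immediate to confirm numerically; e.g., for $d=4$ with the toy
spectrum $c_n=(1,-2,1)$, $m_n^2=(1,2,3)$ used above (our illustration), the
surviving leading term is $\sum_n c_n m_n^4/k^6=2/k^6$:
\begin{verbatim}
import numpy as np

c  = np.array([1.0, -2.0, 1.0])   # sum(c) = 0, sum(c * m2) = 0
m2 = np.array([1.0,  2.0, 3.0])

f = lambda k2: np.sum(c / (k2 + m2))
for k in [10.0, 100.0, 1000.0]:
    print(k, f(k * k) * k**6)     # -> 2 as k grows: f ~ 1/k^{d+2}
\end{verbatim}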
Since the propagator $f(k^2)$ is the same as the sum over
ordinary propagators of particles of mass $m_n$
with a normalization constant $c_n$,
the perturbation theory of (\ref{S1}) is the same as that of the action
\begin{equation}
\label{S2}
S' = \int d^d x \; \left( \sum_n \frac{1}{2c_n} \phi_n (-\partial^2 + m_n^2) \phi_n
- V({\sum_n \phi_n}) \right).
\end{equation}
The same action can also be written as
\begin{equation}
\label{S3}
S'' = \int d^d x \; \left(
\sum_n \frac{1}{2} \hat{\phi}_n (-\partial^2 + m_n^2) \hat{\phi}_n
- V({\sum_n \sqrt{c_n} \hat{\phi}_n}) \right),
\end{equation}
where $\hat{\phi}_n = \phi_n/\sqrt{c_n}$.
Therefore, the nonlocal scalar field theory (\ref{S1}) is equivalent to
a theory of infinitely many scalar fields with fine tuned masses
and coupling constants.
The analogy between string theory and the higher derivative theory
defined by the propagator (\ref{propagator})
was made in Sec. \ref{AC}.
While the worldsheet conformal symmetry justifies
the analytic continuation and fine tuning of the mass spectrum
in string theory,
it would be of crucial importance to search for
a symmetry principle underlying
the fine-tuning conditions (\ref{generalcond}).
We notice that the partition function
\begin{equation}
Z[J_n] = \int \prod_n D\hat{\phi}_n \;
e^{-S''[\hat{\phi}_n]+\int d^d x \; \hat{\phi}_n J_n}
\end{equation}
has the algebraic property
\begin{equation}
Z[J_n + \alpha \sqrt{c_n}m_n^{2r+2}] =
e^{\int d^d x \; \sum_n (\frac{1}{2} \alpha^2 c_n m_n^{4r+2}
+ \alpha \sqrt{c_n} m_n^{2r} J_n)}
Z[J_n]
\qquad
(r = 0, 1, \cdots, (d-2)/2),
\end{equation}
which implies that the quantity
\begin{equation}
\tilde{Z}[J_n] \equiv
e^{-\int d^d x \; \sum_n \frac{1}{2m_n^2} J_n^2} Z[J_n]
\end{equation}
is invariant under the transformation
\begin{equation}
J_n \rightarrow J_n + \alpha \sqrt{c_n} m_n^{2r+2} \qquad
(r = 0, 1, \cdots, (d-2)/2).
\end{equation}
However,
the physical significance of this algebraic property
and the underlying symmetry principle
still remain mysterious.
Another direction for future study is to extend our results to
field theories of various spins.
It will be very interesting to generalize
our approach to incorporate gauge fields, the graviton,
and even higher spin fields.
\subsection*{Acknowledgments}
The authors thank Chuang-Tsung Chan, Chien-Ho Chen, Ru-Chuen Hou,
Yu-Ting Huang, Takeo Inami, Hsien-Chung Kao, Yeong-Chuan Kao,
Yutaka Matsuo, Darren Sheng-Yu Shih and Chi-Hsien Yeh
for helpful discussions.
This work is supported in part by the National Science Council,
and the National Center for Theoretical Sciences, Taiwan, R.O.C.
\section{Ginzburg-Landau Theory: Phenomenological Analysis}
Eq.~(1) of the main text is obtained by demanding that the free energy density is a real functional transforming according to the trivial irreducible representation (IR) of the ensuing point group. Here, we assume a system with tetragonal and inversion symmetries present, which is described by the D$_{\rm 4h}$ point group symmetry. The free energy density transforms according to the A$_{\rm 1g}$ IR of D$_{\rm 4h}$ and is here also assumed invariant under time reversal.
\subsection{Equation of Motion}
The equation describing the nematic field is found via the Euler-Lagrange equation of motion (EOM):
\begin{eqnarray}
\frac{\partial{\cal F}}{\partial N}=\partial_x\frac{\partial{\cal F}}{\partial(\partial_xN)}+\partial_y\frac{\partial{\cal F}}{\partial(\partial_yN)}
\end{eqnarray}
\noindent and reads:
\begin{eqnarray}
&&\big[\alpha(T-T_{\rm nem})-c\bm{\nabla}^2\big]N(\bm{r})+\beta[N(\bm{r})]^3\quad\nonumber\\
&&\qquad\qquad=-g\left(\partial_x^2-\partial_y^2\right)V(\bm{r})\,.\phantom{\dag}\quad\qquad\label{eq:EOMfull}
\end{eqnarray}
\noindent From the above, one notes that if the potential $V(\bm{r})$ is homogeneous, i.e. $V(\bm{r})=V$, the EOM includes only derivatives of $N$ and no other spatially-dependent functions or source terms. Thus, for an infinite (bulk) system $N(\bm{r})=N$. When $T>T_{\rm nem}$, the appearance of ne\-ma\-ti\-ci\-ty is disfavored and, thus, $N=0$ in the bulk. In contrast, the presence of an inhomogeneous potential functions as a source of nematicity and allows for non-zero inhomogeneous solutions of $N(\bm{r})$.
\subsection{Case Study: Single Impurity for $\bm{T>T_{\rm nem}}$}
Above $T_{\rm nem}$, we drop the cubic term in the EOM in Eq.~\eqref{eq:EOMfull} above, and obtain Eq.~(2) of the manuscript. For a potential satisfying $V(\bm{q})=V(|\bm{q}|)$, we set $q_x=q\cos\theta$, $q_y=q\sin\theta$, $x=r\cos\phi$ and $y=r\sin\phi$, and find:
\begin{eqnarray}
N(r,\phi)=
\cos(2\phi)\int_{+\infty}^0\frac{q{\rm d}q}{2\pi}\phantom{.}\frac{g}{c}\frac{q^2V(q)}{q^2+\xi_{\rm nem}^{-2}}J_2(qr)\,,\label{eq:angular_profile}
\end{eqnarray}
\noindent with $J_2(z)$ the respective Bessel function of the first kind. One notes the distinctive angular dependence of the spatial profile of the induced nematic order, which is fixed by the B$_{\rm 1g}$ IR of $N$, the $A_{\rm 1g}$ IR of $V$, and the fourfold-symmetric impurity profile. We resolve the radial dependence in the case $V(\bm{r})=V/r$, and find:
\begin{align}
N(r,\phi)=\frac{\gamma}{c}\left[I_{2}\left(\frac{r}{\xi_{\rm nem}}\right)-L_{-2}\left(\frac{r}{\xi_{\rm nem}}\right)\right]\cos(2\phi),\label{eq:profile}
\end{align}
\noindent where we introduced the modified Bessel and Struve functions, and defined $\gamma=-\pi gV/(2\xi_{\rm nem})$. Notably, the decaying function in the brac\-kets yields $\approx 1/2$ for $r=\xi_{\rm nem}$.
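The profile \eqref{eq:profile} can be evaluated directly with standard special-function routines. A minimal sketch (ours; the prefactor $\gamma/c$ is set to unity) reproduces the quoted value of the bracket at $r=\xi_{\rm nem}$ and the B$_{\rm 1g}$ nodes along the diagonals:
\begin{verbatim}
import numpy as np
from scipy.special import iv, modstruve   # I_nu and modified Struve L_nu

def bracket(u):
    """Radial factor I_2(u) - L_{-2}(u), with u = r/xi_nem."""
    return iv(2, u) - modstruve(-2, u)

print(bracket(1.0))                       # ~1/2 at r = xi_nem, as quoted
r = np.linspace(0.5, 5.0, 4)
print(bracket(r) * np.cos(2 * 0.0))       # maximal along the x axis
print(bracket(r) * np.cos(2 * np.pi/4))   # zero along the diagonals
\end{verbatim}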
\subsection{Non-Induction of Net Nematicity by a C$_4$-Symmetric Potential}
In this section we explore whether there exists a term in the Ginzburg-Landau expansion which can induce a nonzero $N(\bm{q}=\bm{0})$ for an impurity-potential profile which preserves C$_4$ symmetry. Consider the most general term:
\begin{eqnarray}
\int {\rm d}\bm{r}\phantom{.}[N(\bm{r})]^{n}[V(\bm{r})]^m\big(\partial_x^2-\partial_y^2\big)^{\ell}V(\bm{r})
\end{eqnarray}
\noindent where, if $\ell$ is odd, then $n=\ell+2\mathbb{N}$. We fix the spatial profile of $V$ to be C$_4$-symmetric. The above general term can be mapped to two distinct types of couplings:
\begin{eqnarray}
\int {\rm d}\bm{r}\phantom{.}[N(\bm{r})]^{2n} \phantom{\dag}{\rm and}\phantom{\dag} \int {\rm d}\bm{r}\phantom{.}[N(\bm{r})]^{2n+1}\big(\partial_x^2-\partial_y^2\big)V(\bm{r})\,.
\end{eqnarray}
\noindent The respective equations of motion read:
\begin{align}
N(\bm{r})\propto [N(\bm{r})]^{2n-1}\phantom{.}{\rm and}\phantom{.} N(\bm{r})\propto [N(\bm{r})]^{2n}\big(\partial_x^2-\partial_y^2\big)V(\bm{r})\,.
\end{align}
\noindent We Fourier transform the first equation and find:
\begin{eqnarray}
&&N(\bm{q}=\bm{0})\propto\nonumber\\&&\int {\rm d}\bm{p}_1\ldots {\rm d}\bm{p}_{2n-1} N(\bm{p}_1)\ldots N(\bm{p}_{2n-1})\delta\left(\sum_s^{2n-1}\bm{p}_s\right)\,.\qquad
\end{eqnarray}
\noindent Assuming that the components appearing on the rhs are given by:
\begin{eqnarray}
\bar{N}(\bm{q})=\frac{g}{c}\frac{\big(q_x^2-q_y^2\big)V(\bm{q})}{\bm{q}^2+\xi_{\rm nem}^{-2}}\equiv
\cos(2\theta)\phantom{.}\frac{g}{c}\frac{q^2V(q)}{q^2+\xi_{\rm nem}^{-2}}\,,
\end{eqnarray}
\noindent where we set $q_x=q\cos\theta$ and $q_y=q\sin\theta$, we find that the angular part of the integral is proportional to:
\begin{eqnarray}
&&\int_0^{2\pi}{\rm d}\theta_1\ldots\int_0^{2\pi}{\rm d}\theta_{2n-2}\cos(2\theta_1)\ldots\cos(2\theta_{2n-2})\cdot\qquad\nonumber\\
&&\left[\sum_{s=1}^{2n-2}p_s^2\cos\big(2\theta_s\big)+\sum_{s\neq\ell}^{2n-2}p_sp_\ell\cos\big(\theta_s+\theta_{\ell}\big)\right]=0\,,\qquad
\end{eqnarray}
\noindent where we set $\cos\theta_s=p_{s,x}/p_s$ and $\sin\theta_s=p_{s,y}/p_s$, with $p_s=|\bm{p}_s|$. A similar treatment for the remaining equation also yields zero. This result naturally confirms that a C$_4$-symmetric spatial profile for the impurity potential cannot lead to net nematicity.
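For the smallest nontrivial case ($n=2$, i.e. two remaining angular integrations), the vanishing of the angular integral above can also be confirmed by computer algebra (a minimal sympy sketch):
\begin{verbatim}
import sympy as sp

t1, t2, p1, p2 = sp.symbols('theta1 theta2 p1 p2', real=True)
integrand = sp.cos(2*t1) * sp.cos(2*t2) * (
    p1**2 * sp.cos(2*t1) + p2**2 * sp.cos(2*t2)
    + 2 * p1 * p2 * sp.cos(t1 + t2))
print(sp.integrate(integrand,
                   (t1, 0, 2*sp.pi), (t2, 0, 2*sp.pi)))   # 0
\end{verbatim}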
\subsection{Case Study: Single Impurity for $\bm{T<T_{\rm nem}}$}
In order to explain the elongated clover-like spatial profile induced by the impurity in the bulk nematic phase ($T< T_{\rm nem}$), we need to include higher order terms in the free energy described by Eq.~\eqref{eq:EOMfull} of the SM. To demonstrate how this elongation comes about, it is sufficient to solely retain the first term of Eq.~(5) presented in the main text. The modified EOM has the form:
\begin{eqnarray}
&&\quad\big[\alpha(T-T_{\rm nem})-c\bm{\nabla}^2\big]N(\bm{r})+\beta[N(\bm{r})]^3\quad\nonumber\\
&&\phantom{\dag}=-g\big(\partial_x^2-\partial_y^2\big)V(\bm{r})+g'N(\bm{r})V(\bm{r})\,.\qquad\qquad\label{eq:EOMmod}
\end{eqnarray}
\noindent We separate the nematic field into two parts, i.e. $N(\bm{r})=N_{\rm B}+\delta N(\bm{r})$. Here, $N_{\rm B}$ denotes the value of the bulk nematic order parameter and is given by $\beta N_{\rm B}^2=\alpha(T_{\rm nem}-T)$ for $T<T_{\rm nem}$, while $\delta N(\bm{r})$ denotes the contribution stemming from the presence of the impurity. For $|\delta N(\bm{r})|\ll|N_{\rm B}|$ we linearize the above EOM and obtain:
\begin{eqnarray}
&&\big[2\alpha(T_{\rm nem}-T)/c-\bm{\nabla}^2\big]\delta N(\bm{r})\qquad\nonumber\\
&&\phantom{\dag}=-\frac{g}{c}\left(\partial_x^2-\partial_y^2\right)V(\bm{r})+\frac{g'}{c}N_{\rm B}V(\bm{r})\,.
\end{eqnarray}
\noindent In the above, we retained the terms which lead to a $\delta N(\bm{r})$ that is linear in the strength of the impurity potential. Within this assumption, we dropped the term $\delta N(\bm{r})V(\bm{r})$, which leads to higher-order contributions with respect to the impurity-potential strength. Following the same line of argument as the one leading to Eq.~\eqref{eq:angular_profile}, we obtain an isotropic contribution superimposed on the usual $\cos(2\phi)$ form:
\begin{eqnarray}
\delta N(r,\phi)&=&\cos(2\phi)\int_{+\infty}^0\frac{q{\rm d}q}{2\pi}\phantom{.}\frac{g}{c}\frac{q^2V(q)}{q^2+\xi_{\rm nem}^{-2}}J_2(qr)\quad\nonumber\\
&&\phantom{\dag}-N_{\rm B}\int^{0}_{+\infty}\frac{q{\rm d}q}{2\pi}\phantom{.}\frac{g'}{c}\frac{V(q)}{q^2 + \xi_{\rm nem}^{-2}}J_0(qr)\,,\quad
\end{eqnarray}
\noindent with the coherence length being given now by $\xi_{\rm nem}^{-2}=2\alpha(T_{\rm nem}-T)$ due to the contribution of the quartic term of the free energy. In connection to Eq.~\eqref{eq:profile} of the SM, we find that for $V(\bm{r})=V/r$:
\begin{eqnarray}
\delta N(r,\phi)&=&\frac{\gamma}{c}\left[I_{2}\left(\frac{r}{\xi_{\rm nem}}\right)-L_{-2}\left(\frac{r}{\xi_{\rm nem}}\right)\right]\cos(2\phi)\nonumber\\
&-&\frac{\gamma'}{c}\left[I_{0}\left(\frac{r}{\xi_{\rm nem}}\right)-L_{0}\left(\frac{r}{\xi_{\rm nem}}\right)\right]N_{\rm B}\nonumber\\
&\equiv&f(r)\cos(2\phi)+h(r)N_{\rm B}
\end{eqnarray}
\noindent with $\gamma'=-\pi g'V\xi_{\rm nem}/2$. From the above, one can read off the decaying functions $f(r)$ and $h(r)$ discussed in the main text. This spatial profile does indeed lead to a profile on the same form as the anisotropic induced order in Fig.~2(b) of the main text. Furthermore, note that it is the presence of the nonzero bulk nematic order parameter $N_{\rm B}$, that induces the anisotropy.
\section{Interaction in the Nematic Channel and Mean-Field Theory Decoupling}
We assume the presence of the interaction
\begin{eqnarray}
\widehat{\cal H}_{\rm int}=-V_{\rm nem}\sum_{\bm{R}}\widehat{{\cal O}}_{\bm{R}}^2/2\,,
\end{eqnarray}
\noindent which contributes to the desired nematic channel. In the above, we have introduced:
\begin{eqnarray}
\widehat{{\cal O}}_{\bm{R}}=\sum_{\bm{\delta}}^{\pm\hat{\bm{x}},\pm\hat{\bm{y}}}f_{\bm{\delta}}
\left(c^{\dag}_{\bm{R}+\bm{\delta}}c_{\bm{R}}+c^{\dag}_{\bm{R}}c_{\bm{R}+\bm{\delta}}\right)\,,
\end{eqnarray}
\noindent where we have defined the form factor $f_{\pm\hat{\bm{x}}}=-f_{\pm\hat{\bm{y}}}=1/4$. Note that the lattice constant has been set to unity. We perform a mean-field decoupling of the interaction in the direct channel by introducing the nematic order parameter $N_{\bm{R}}=-V_{\rm nem}\big<\widehat{{\cal O}}_{\bm{R}}\big>$. The latter steps led to Eq.~(9) of the main text.
In wavevector space we have $N_{\bm{q}}=\sum_{\bm{R}}N_{\bm{R}}e^{-i\bm{q}\cdot\bm{R}}$ and the complete mean-field Hamiltonian reads:
\begin{align}
\widehat{{\cal H}}=\frac{1}{\cal N}\sum_{\bm{q},\bm{k}}c^{\dag}_{\bm{k}+\bm{q}/2}
\big(\varepsilon_{\bm{k}}{\cal N}\delta_{\bm{q},\bm{0}}+V_{\bm{q}}+N_{\bm{q}}f_{\bm{q},\bm{k}}\big)c_{\bm{k}-\bm{q}/2}
\end{align}
\noindent with ${\cal N}$ being the number of lattice sites, while the nematic form factor in wavevector space takes the form:
\begin{eqnarray}
f_{\bm{q},\bm{k}}=\frac{f_{\bm{k}+\bm{q}/2}+f_{\bm{k}-\bm{q}/2}}{2}\phantom{\dag} {\rm with}\phantom{\dag} f_{\bm{k}}=\cos k_x-\cos k_y\,.\phantom{.}
\end{eqnarray}
\noindent The mean-field Hamiltonian has to be supplemented with the self-consistency equation for the nematic order parameter, which reads
\begin{eqnarray}
N_{\bm{q}}
&=&-V_{\rm nem}\sum_{\bm{k}}f_{\bm{q},\bm{k}}\big<c^{\dag}_{\bm{k}-\bm{q}/2}c_{\bm{k}+\bm{q}/2}\big>\nonumber\\
&\equiv&-V_{\rm nem}T\sum_{k_n,\bm{k}}f_{\bm{q},\bm{k}}G_{\bm{k}+\bm{q}/2,k_n;\bm{k}-\bm{q}/2,k_n}
\end{eqnarray}
\noindent where we introduced the full single-particle fermionic Matsubara Green function:
\begin{eqnarray}
G_{\bm{k}+\bm{q}/2,k_n;\bm{k}-\bm{q}/2,k_n}=-\big<c_{\bm{k}+\bm{q}/2,k_n}c^{\dag}_{\bm{k}-\bm{q}/2,k_n}\big>\,.
\end{eqnarray}
\noindent In the above, $k_n=(2n+1)\pi T$ ($k_{\rm B}=1$) and the Matsubara Green function for the free electrons has the form $G^0_{\bm{k},k_n}=1/(ik_n-\varepsilon_{\bm{k}})$. The above construction allows us to employ Dyson's equation in order to perform an expansion of the rhs of the self-consistency equation with respect to the nematic order parameter and/or the impurity potential.
\section{Ginzburg-Landau Theory: Microscopic Analysis}
Given the above, here we show how the electro-nematic coefficient $g$ relates to the microscopic parameters for the disorder-free microscopic model under consideration. We employ a perturbative expansion by employing the Dyson equation for the full Matsubara Green function which reads:
\begin{eqnarray}
&&G_{\bm{k}+\bm{q}/2,k_n;\bm{k}-\bm{q}/2,k_n}=G^0_{\bm{k},k_n}\delta_{\bm{q},\bm{0}}\nonumber\\
&+&G^0_{\bm{k}+\bm{q}/2,k_n}\sum_{\bm{p}}U_{\bm{p};\bm{k}+\bm{q}/2}G_{\bm{k}+\bm{q}/2-\bm{p},k_n;\bm{k}-\bm{q}/2,k_n}\,,\qquad
\end{eqnarray}
\noindent where we introduced $U_{\bm{q};\bm{k}}=\big(V_{\bm{q}}+N_{\bm{q}}f_{\bm{q},\bm{k}}\big)/{\cal N}$. We obtain the lowest order contribution of $U$ by replacing the full Green function on the rhs by the bare one. We find:
\begin{eqnarray}
g_{\bm{q}}
=-\frac{T}{\cal N}\sum_{k_n,\bm{k}}f_{\bm{q},\bm{k}}G^0_{\bm{k}+\bm{q}/2,k_n}G^0_{\bm{k}-\bm{q}/2,k_n}\,.
\end{eqnarray}
\noindent To facilitate the calculations, we consider the continuum limit of our model and assume spinless single-band electrons with a parabolic dispersion $\varepsilon(\bm{k})=E_F\big[\left(k/k_F\right)^2-1\big]$ with $\bm{k}=(k_x,k_y)$, $k=|\bm{k}|$ and set $f(\bm{k})=\big(k_x^2-k_y^2\big)/k_F^2$. The quantity of interest, after taking into account the symmetries of $\varepsilon(\bm{k}),f(\bm{k})$ and restricting up to second order terms in $\bm{q}$, reads:
\begin{eqnarray}
&&g(\bm{q})\approx
-\int \frac{{\rm d}\bm{k}}{(2\pi)^2}\bigg\{n_F'[\varepsilon(k)]\nonumber\\
&&\quad+\big[f(\bm{k})\big]^2\frac{1}{3}E_F^2n_F'''[\varepsilon(k)]\bigg\}f(\bm{q}/2)
\equiv g\big(q_x^2-q_y^2\big)\,.\qquad
\end{eqnarray}
\section{Self-Consistent Calculation of the Nematic Order Parameter}
By means of the microscopic Hamiltonian in Eq.~(10) of the main text, we calculate the nematic order pa\-ra\-me\-ter self-consistently until we reach an accuracy of $10^{-6}$, while keeping the electron density fixed. The expectation values entering in the order parameter and the electron density are calculated by expressing the fermionic field operators in the diagonal basis of the Hamiltonian $c_{\bm{R}}=\sum_{m}\gamma_m\langle m|\bm{R}\rangle$ with the defining equation $\widehat{\cal H}\gamma^{\dagger}_m|0\rangle=E_m|m\rangle$. This leads to the following simplified expressions for the order parameter, and electron density, respectively:
\begin{align}
N_{\bm{R}}&=-V_{\rm nem}\sum_{\bm{\delta},\,m}f_{\bm{\delta}}\langle \bm{R}+\bm{\delta}|m\rangle n_{F}(E_m)\langle m|\bm{R}\rangle+{\rm c.c.}\,,\nonumber
\\
\langle n\rangle&=\frac{1}{\mathcal{N}}\sum_{m}n_{F}(E_m)\,.
\end{align}
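A minimal sketch of this self-consistency loop is given below (our illustration: an $L\times L$ lattice with periodic boundaries, a single on-site impurity, illustrative parameter values, a fixed chemical potential $\mu$ in place of a fixed density, and simple mixing for stability):
\begin{verbatim}
import numpy as np

L, t, V_nem, V_imp, T, mu = 10, 1.0, 1.0, 2.0, 0.1, -0.5
bonds = [(1, 0, 0.25), (0, 1, -0.25)]       # (dx, dy, f_delta) for +x, +y
idx = lambda x, y: (x % L) * L + (y % L)

N = np.zeros(L * L)                         # nematic order parameter N_R
for it in range(1000):
    H = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            for dx, dy, f in bonds:
                j = idx(x + dx, y + dy)
                hop = -t + (N[i] + N[j]) * f
                H[i, j] += hop
                H[j, i] += hop
    H[idx(L // 2, L // 2), idx(L // 2, L // 2)] += V_imp  # one impurity
    E, U = np.linalg.eigh(H)                # columns of U are the |m>
    occ = 1.0 / (np.exp((E - mu) / T) + 1.0)
    N_new = np.zeros(L * L)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            for dx, dy, f in bonds:
                j = idx(x + dx, y + dy)
                bond = 2.0 * (U[i] * occ * U[j]).sum()  # <c+_i c_j> + c.c.
                N_new[i] += -V_nem * f * bond
                N_new[j] += -V_nem * f * bond
    if np.max(np.abs(N_new - N)) < 1e-6:    # accuracy quoted in the text
        break
    N = 0.5 * (N + N_new)                   # simple mixing for stability
print(it, N[idx(L // 2, L // 2 + 1)])       # induced nematicity near impurity
\end{verbatim}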
\section{Disorder-Modified Stoner Criterion and the Resulting $\bm{T_{\rm nem}^{\rm imp}}$}
In the presence of dilute and uncorrelated identical impurities, disorder may enhance $T_{\rm nem}$. This was shown in the main text by investigating the modified nematic Stoner criterion. In Fig.~1 of the SM, we provide additional results for other electron-density values. The electron density is calculated via:
\begin{align}
\langle n\rangle=\frac{1}{\mathcal{N}}\sum_{\bm{k}}\int_{-\infty}^{\infty}\frac{{\rm d}\varepsilon}{2\pi}\frac{1}{\tau_{\bm{k}}}\frac{n_{F}(\varepsilon)}{(\varepsilon-\varepsilon_{\bm{k}})^2+1/(2\tau_{\bm{k}})^2},
\end{align}
\noindent which recovers its usual form $\langle n\rangle=\sum_{\bm{k}}n_{F}(\varepsilon_{\bm{k}})/\mathcal{N}$ in the disorder-free case, i.e. $\tau_{\bm{k}}\rightarrow \infty$. For these calculations, finite-size effects are negligible for $\mathcal{N} \sim 40\times 10^{3}$.
In Fig.~1 we demonstrate two typical situations, in which, $T_{\rm nem}$ becomes either enhanced or reduced. This is reflected in the behavior of the quantity $\delta\chi_{\rm nem}/\chi^0_{\rm nem}\equiv(\chi_{\rm nem}^{\rm imp}-\chi^0_{\rm nem})/\chi^0_{\rm nem}$ which is depicted. We first focus on $n_{\rm imp}$ in the vicinity of $5\%$, i.e. the optimal value discussed in the main text.
For the value $\langle n\rangle=0.51$ of the electron density, the Fermi energy is tuned very near the van Hove singu\-la\-ri\-ty (see Figs.~1(a,b)), which constitutes the sweet spot for the development of the nematic order parameter in the absence of disorder, since there, $\chi^0_{\rm nem}$ obtains its ma\-xi\-mum value. From Fig.~1(c) we find that introducing disorder worsens the tendency of the system to develop a nematic order parameter as reflected in the negative va\-lues of $\delta\chi_{\rm nem}/\chi^0_{\rm nem}$. The addition of disorder broadens the spectral function, and the density of states (DOS) unavoidably becomes lowered, since contributions from low DOS $\bm{k}$ points are taken into account. In contrast, in the case $\langle n\rangle=0.53$ discussed in the main text, and also shown here, the broadening allows the DOS to increase by picking up contributions from the van Hove singularity, while at the same time avoiding significant contributions from other low DOS $\bm{k}$ points. Increasing the electron density to $\langle n\rangle=0.55$ shifts the Fermi level further away from the van Hove singularity and thus reduces its favorable impact on the DOS. As a result, the nematic susceptibility drops and $\delta\chi_{\rm nem}/\chi^0_{\rm nem}$ is negative.
The balance between the contributions to the DOS originating from the van Hove singularity and the low DOS $\bm{k}$ points is controlled by the concentration of impurities. Varying $n_{\rm imp}$ leads to a modification of the re\-la\-ti\-ve strength of the two competing contributions and allows the sign changes of $\delta\chi_{\rm nem}/\chi^0_{\rm nem}$ which are shown in Fig.~1(c) for $\langle n\rangle=0.55$.
\begin{figure}[b!]
\centering
\includegraphics[width = \columnwidth]{mod_stoner_T_0p75_away}
\caption{(a) Energy dispersion along the ${\rm \Gamma}-{\rm X}$ line, (b) Fermi line in $(k_x,k_y)$ space and (c) $\delta\chi_{\rm nem}/\chi^0_{\rm nem}$ as a function of $n_{\rm imp}$, all shown for different electron fillings $\langle n\rangle=0.51,0.53,0.55$. Panel (c) reveals that disorder always has a negative impact on the nematic susceptibility when the Fermi level is tuned very near the van Hove singularity, as inferred for $\langle n\rangle=0.51$. When the Fermi level is tuned sufficiently away from the van Hove singularity, the resulting nematic susceptibility can be either enhanced or reduced depending on the relative strength of the contributions to the density of states (DOS) stemming from the van Hove singularity and the low DOS $\bm{k}$ points. This ratio is controlled by the concentration of impurities $n_{\rm imp}$.}
\end{figure}
\end{document}
\section{Introduction}
\label{intro}
Computer science has a rich history of problem solving through computational procedures seeking to minimize an objective function maximized by another procedure. For example, chess programs date back to 1945~\cite{zuse45chess}, and for many decades have successfully used a recursive minimax procedure with continually shrinking look-ahead, e.g.,~\cite{wiener1965}.
Game theory of adversarial players originated in 1944~\cite{morgenstern1944}.
In the field of machine learning, early adversarial settings include reinforcement learners playing against themselves~\cite{Samuel:59} (1959), or the evolution of parasites in predator-prey games, e.g.,~\cite{hillis1990co,seger1988parasites} (1990).
In 1990, a new type of adversarial technique
was introduced in the field of {\em unsupervised or self-supervised artificial neural networks} (NNs)~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab} (Sec. \ref{ac}). Here a {\em single} agent has two separate learning NNs. Without a teacher, and without external reward for achieving user-defined goals, the first NN somehow generates outputs. The second NN learns to predict consequences or properties of the generated outputs, minimizing its errors, typically by gradient descent. However, the first NN {\em maximizes} the objective function {\em minimized} by the second NN, effectively trying to generate data from which the second NN can still learn
to improve its predictions.
This survey will review such {\em unsupervised minimax} techniques, and relate them to each other. Sec. \ref{ac} focuses on unsupervised Reinforcement Learning (RL) through {\em Artificial Curiosity} (since 1990). Here the prediction errors are (intrinsic) reward signals maximized by an RL controller. Sec. \ref{special}
points out that {\em Generative Adversarial Networks} (GANs, 2010-2014) and its variants are special cases of this approach. Sec. \ref{brains} discusses a more sophisticated adversarial approach of 1997. Sec. \ref{pm}
addresses unsupervised encoding of data through {\em Predictability Minimization} (PM, 1991), where the predictor's error is maximized by the encoder's feature extractors. Sec. \ref{convergence} addresses issues of convergence.
For historical accuracy, I will sometimes refer not only to peer-reviewed publications but also to technical reports, many of which turned into reviewed journal articles later.
\newpage
\section{Adversarial Artificial Curiosity (AC, 1990)}
\label{ac}
In 1990, unsupervised or self-supervised adversarial NNs were
used to implement {\em curiosity}~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}
in the general context of exploration in
RL~\cite{Kaelbling:96,Sutton:98,wiering2012}
(see Sec. 6 of~\cite{888} for a survey of deep RL).
The goal was to overcome drawbacks of traditional reward-maximizing RL machines
which use naive strategies (such as random action selection) to explore their environments.
The basic idea is: An RL agent with a predictive NN world model maximizes intrinsic reward obtained for
provoking situations where the error-minimizing world model still has high error
and can learn more.
I will refer to this approach as {\em Adversarial Artificial Curiosity} (AC) of 1990, or
AC1990 for short, to distinguish it from our later types of
Artificial Curiosity since 1991 (Sec. \ref{ac1991}).
In what follows, let $m,n,q$ denote positive integer constants.
In the AC context, the first NN is often called the controller C.
C may interact with an environment through
sequences of interactions called {\em trials} or {\em episodes}.
During the execution of a single interaction of any given trial,
C generates an output vector $x \in \mathbb{R}^n$. This may influence
an environment, which produces a reaction to $x$ in form of an
observation $y \in \mathbb{R}^q$.
In turn, $y$ may affect C's inputs during the next interaction if there is any.
In the first variant of AC1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab},
C is recurrent, and thus a general purpose computer.
Some of C's adaptive recurrent units are
mean and variance-generating Gaussian units,
such that C can become a {\em generative model} that produces a probability distribution over outputs---see
Section {\em ``Explicit Random Actions versus Imported Randomness''}~\cite{Schmidhuber:90diffenglish} (see also~\cite{Schmidhuber:91nips,Williams:88b}).
(What these stochastic units
do can be equivalently accomplished by having C perceive pseudorandom numbers or noise,
like the generator NNs of GANs~\cite{goodfellow2014generative}; Sec. \ref{special}).
To compute an output action during an interaction, C updates all its NN unit activations
for several discrete time steps in a row---see Section {\em ``More Network Ticks than Environmental Ticks''}~\cite{Schmidhuber:90diffenglish}.
In principle, this allows for computing highly nonlinear, stochastic mappings from
environmental inputs (if there are any) and/or from internal ``noise'' to outputs.
The second NN is called the world model M~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sandiego,Schmidhuber:91nips,ha2018world}.
In the first variant of AC1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}, M is also recurrent, for reasons of generality.
M receives C's outputs $x \in \mathbb{R}^n$ as inputs and predicts their visible
environmental effects or consequences
$y \in \mathbb{R}^q$.
According to AC1990, M {\em minimizes} its prediction errors by gradient descent, thus becoming a better predictor. In absence of external reward, however, the adversarial C tries to find actions that {\em maximize} the errors of M: {\em M's errors are the intrinsic rewards of C.} Hence C {\em maximizes}
the errors that M {\em minimizes}.
The loss of M is the gain of C.
Without external reward, C is thus intrinsically motivated to invent novel action sequences or experiments that yield data that M still finds surprising, until the data becomes familiar and boring.
The 1990 paper~\cite{Schmidhuber:90diffenglish}
describes gradient-based learning methods for both C and M.
In particular, {\em backpropagation~\cite{Linnainmaa:1970,Linnainmaa:1976} through the model M down into the controller C} (whose outputs are inputs to M) is used to compute weight changes for C, generalizing previous work on feedforward networks~\cite{Werbos:89identification,Werbos:87specifications,Munro:87,JordanRumelhart:90}.
This is closely related to how
the code generator NN of {\em Predictability Minimization} (Sec. \ref{pm}) can be trained by backpropagation through its predictor NN~\cite{Schmidhuber:91predmin,Schmidhuber:92ncfactorial,Schmidhuber:96ncedges},
and to how
the GAN generator NN (Sec. \ref{special}) can be trained by backpropagation through its discriminator NN~\cite{olli2010,goodfellow2014generative}.
Furthermore, the concept of {\em backpropagation through random number generators}~\cite{Williams:88b} is used
to derive error signals even
for those units of C that are stochastic~\cite{Schmidhuber:90diffenglish}.
However, the original AC1990 paper
points out that the basic ideas of AC are not limited to particular learning algorithms---see Section {\em ``Implementing Dynamic Curiosity and Boredom''}~\cite{Schmidhuber:90diffenglish}.
Compare more recent summaries and later variants / extensions of AC1990's simple but powerful exploration principle~\cite{Schmidhuber:06cs,Schmidhuber:10ieeetamd}, which inspired much later work, e.g.,~\cite{Singh:05nips,Oudeyer:12intrinsic,Schmidhuber:10ieeetamd}; compare~\cite{oudeyerkaplan07,pathak2017curiosity,burda2018curious}.
See also related work of 1993~\cite{slg1993,slg2019}.
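In modern notation, the core CM dynamics can be sketched in a few lines. The following illustration (ours; it uses feedforward nets, a toy differentiable environment, and Adam, none of which are part of the original recurrent formulation) shows M performing gradient descent on its prediction error while C performs gradient ascent on the very same quantity, with C's gradients obtained by backpropagation through M:
\begin{verbatim}
import torch
import torch.nn as nn

# Toy environment: action x triggers observation y = sin(3x); the agent
# treats this mapping as an unknown black box.
def env(x):
    return torch.sin(3.0 * x)

C = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))  # controller
M = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # world model
opt_C = torch.optim.Adam(C.parameters(), lr=1e-3)
opt_M = torch.optim.Adam(M.parameters(), lr=1e-3)

for step in range(5000):
    z = torch.randn(64, 4)            # C's source of randomness (cf. Sec. 2)
    x = C(z)                          # generated action / experiment
    y = env(x.detach())               # environmental effect of the action

    # M minimizes its prediction error (gradient descent on M only) ...
    opt_M.zero_grad()
    ((M(x.detach()) - y) ** 2).mean().backward()
    opt_M.step()

    # ... while C maximizes that same error: M's error is C's intrinsic
    # reward, and the gradient reaches C by backpropagation through M.
    opt_C.zero_grad()
    (-((M(x) - y) ** 2).mean()).backward()
    opt_C.step()
\end{verbatim}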
To summarize, unsupervised or self-supervised minimax-based neural networks of the previous millennium
(now often called CM systems~\cite{learningtothink2015}) were both {\em adversarial} and {\em generative} (using terminology of 2014~\cite{goodfellow2014generative}, Sec. \ref{special}), stochastically generating outputs yielding experimental data, not only for stationary patterns but also for pattern sequences, even for the general case of RL, and even for {\em recurrent} NN-based RL in partially observable environments~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}.
\newpage
\section{A Special Case of AC1990: Generative Adversarial Networks}
\label{special}
Let us now consider a special case of a curious CM system as in Sec. \ref{ac} above, where each sequence of interactions of the CM system with its environment (each trial) is limited to a {\em single} interaction, like in bandit problems~\cite{robbins1952bandit,Gittins:89,auer1995,audibert2009minimax}.
The environment contains a representation of a user-given training set $X$ of patterns $ \in \mathbb{R}^n$. $X$ is not directly visible to C and M, but its properties are probed by AC1990 through C's outputs or actions or experiments.
In the beginning of any given trial, the activations of all units in C are reset.
C is blind (there is no input from the environment).
Using its internal stochastic units~\cite{Schmidhuber:90diffenglish,Schmidhuber:91nips} (Sec. \ref{ac}), C then computes a single output $x \in \mathbb{R}^n$. In a pre-wired fraction of all cases, $x$ is replaced by a randomly selected ``real'' pattern $\in X$
(the simple default exploration policy of traditional RL chooses a random action in a fixed fraction of all cases~\cite{Kaelbling:96,Sutton:98,wiering2012}).
This ensures that M will see both ``fake'' and ``real'' patterns.
The environment will react to output $x$ and return as its effect a binary observation $y \in \mathbb{R}$, where $y=1$ if $x \in X$, and $y=0$ otherwise.
As always in AC1990-like systems, M now takes C's output $x$ as an input, and predicts its environmental effect $y$, in that case a single bit of information, 1 or 0. As always, M learns by {\em minimizing} its prediction errors. However, as always in absence of external reward, the adversarial C is learning to generate outputs that {\em maximize} the error {\em minimized} by M. M's loss is C's negative loss.
Since the stochastic C is trained to {\em maximize}
the objective function {\em minimized} by M, C is
motivated to produce a distribution over more and more realistic patterns, e.g., images.
Since 2014, this particular application of the AC principle (1990) has been called a {\em Generative Adversarial Network} (GAN)~\cite{goodfellow2014generative}. M was called
the discriminator,
C was called the generator.
GANs and related approaches
are now widely used and studied,
e.g.,~\cite{radford2015,denton2015,huszar2015not,nowozin2016f,wasserstein2017,ganin2016,makhzani2015,bousmalis2016,underthiner2017}.
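A minimal sketch of this special case (ours; 1-dimensional ``images,'' and the common non-saturating variant in which the generator is trained on inverted labels rather than the exact zero-sum loss) looks as follows:
\begin{verbatim}
import torch
import torch.nn as nn

# "Real" patterns X: samples from a 1-D Gaussian (mean 2, std 0.5).
def sample_real(b):
    return 2.0 + 0.5 * torch.randn(b, 1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # C
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Sigmoid())                 # M
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real, fake = sample_real(64), G(torch.randn(64, 8))
    # D (the predictor M) minimizes its error on the single bit y ...
    opt_D.zero_grad()
    (bce(D(real), torch.ones(64, 1))
     + bce(D(fake.detach()), torch.zeros(64, 1))).backward()
    opt_D.step()
    # ... while G (the generator C) maximizes it, again via backprop
    # through the predictor into the generator.
    opt_G.zero_grad()
    bce(D(fake), torch.ones(64, 1)).backward()
    opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # should approach the real mean
\end{verbatim}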
\subsection{Additional comments on AC1990 \& GANs \& Actor-Critic}
\label{comments}
The first variant of AC1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab,reddit2019gan}
generalized to the case of recurrent NNs a well-known way~\cite{Werbos:89identification,Werbos:87specifications,Munro:87,NguyenWidrow:89,JordanRumelhart:90,SchmidhuberHuber:91} of using a differentiable world model M to approximate gradients for C's parameters even when environmental rewards are {\em non-differentiable} functions of C's actions. In the simple differentiable GAN environment above, however, there are no such complications, since the rewards of C (the 1-dimensional errors of M) are differentiable functions of C's outputs. That is, standard backpropagation~\cite{Linnainmaa:1970} can directly compute the gradients of C's parameters with respect to C's rewards, like in Predictability Minimization (1991)~\cite{Schmidhuber:91predmin,Schmidhuber:92ncfactorial,schmidhuber1993,Schmidhuber:96ncedges,pm,Schmidhuber:99zif} (Sec. \ref{pm}).
Unlike the first variant of AC1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab}, most current GAN applications use more limited
feedforward NNs rather than recurrent NNs to implement M and C. The stochastic units of C are typically implemented by feeding noise sampled from a given probability distribution into C's inputs~\cite{goodfellow2014generative}.\footnote{In the GAN-like AC1990 setup of Sec. \ref{special},
real patterns (say, images) are produced in a pre-wired fraction of all cases. However,
one could easily give C the freedom to decide by itself to focus on particular {\em real} images $\in X$ that M finds still difficult to process. For example, one could employ the following procedure: once C has generated a fake image $\hat{x} \in \mathbb{R}^n$, and the activation of a special hidden unit of C is above a given threshold, say, 0.5, then $\hat{x}$ is replaced by the pattern in $X$ most similar to $\hat{x}$, according to some similarity measure. In this case, C is not only motivated to learn to generate almost realistic
{\em fake} images that are still hard to classify by M, but also to address and focus on those {\em real} images that are still hard on M. This may be useful as C sometimes may find it easier to fool M by sending it a particular real image, rather than a fake image. To my knowledge, however, this is rarely done with standard GANs.}
Actor-Critic methods~\cite{konda2000actor,Sutton:99} are less closely related to GANs as they do not embody explicit minimax games. Nevertheless, a GAN can be seen as a {\em modified} Actor-Critic with a blind C in a stateless MDP~\cite{pfau2016connect}. This in turn yields another connection between AC1990 and Actor-Critic (compare also Section
{\em "Augmenting the Algorithm by Temporal Difference Methods”}~\cite{Schmidhuber:90diffenglish}).
\newpage
\subsection{A closely related special case of AC1990: Conditional GANs (2010)}
\label{cgan}
Unlike AC1990~\cite{Schmidhuber:90diffenglish} and the GAN of 2014~\cite{goodfellow2014generative}, the GAN of 2010~\cite{olli2010}
(now known as a {\em conditional} GAN or cGAN~\cite{mirza2014conditional}) does {\em not} have an internal source of randomness. Instead, such cGANs depend on sufficiently diverse inputs from the environment.
cGANs are also special cases of the AC principle (1990):
cGAN-like additional environmental inputs just mean that the controller C of AC1990 is not blind any more
like in the example above with the GAN of 2014~\cite{goodfellow2014generative}.
Like the first version of AC1990~\cite{Schmidhuber:90diffenglish}, the cGAN of 2010~\cite{olli2010} minimaxed {\em Least Squares} errors. This was later called LSGAN~\cite{mao2017least}.
\subsection{AC1990 and StyleGANs (2019)}
\label{style}
The GAN of 2014~\cite{goodfellow2014generative} perceives noise vectors (typically sampled from a Gaussian) in its input layer and maps them to outputs. The more general StyleGAN~\cite{karras2019style}, however, allows for noise injection in deeper hidden layers as well, to implement all sorts of hierarchically structured probability distributions.
Note that this kind of additional probabilistic
expressiveness was already present in the mean and variance-generating Gaussian units of the recurrent generator network C of
AC1990~\cite{Schmidhuber:90diffenglish} (Sec. \ref{ac}).
\subsection{Summary: GANs and cGANs etc. are simple instances of AC1990}
\label{summary}
cGANs (2010) and GANs (2014) are quite different from certain
earlier adversarial machine learning settings~\cite{Samuel:59,hillis1990co} (1959-1990)
which
neither involved unsupervised neural networks nor
were about modeling data nor used gradient descent (see Sec. \ref{intro}).
However, GANs and cGANs are very closely related to AC1990.
GANs are essentially an application of the Adversarial Artificial Curiosity principle of 1990 (Sec. \ref{ac}) where the generator network C is blind and
the environment simply returns whether C's current output is in a given set.
As always, C maximizes the function minimized by M (Sec. \ref{special}).
Same for cGANS, except that in this case C is not blind any more (Sec. \ref{cgan}).
Similar for StyleGANs (Sec. \ref{style}).
\subsection{The generality of AC1990}
\label{general}
It should be emphasized though that AC1990 has much broader applicability~\cite{Singh:05nips,Oudeyer:12intrinsic,Schmidhuber:10ieeetamd,burda2018curious}
than the GAN-like special cases above. In particular, C may sequentially interact with the environment for a long time, producing a sequence of environment-manipulating outputs resulting in complex environmental constructs. For example, C may trigger actions that generate brush strokes on a canvas, incrementally refining a painting over
time, e.g.,~\cite{ha2017sketch,ganin2018synth,zheng2018stroke,huang2019paint,nakano2019paint}. Similarly, M may sequentially predict many other aspects of the environment besides the single bit of information in the GAN-like setup above.
General AC1990 is about unsupervised or self-supervised RL agents that actively shape their observation streams through their own actions, setting themselves their own goals through intrinsic rewards, exploring the world by inventing their own action sequences or experiments, to discover novel, previously unknown predictability in the data generated by the experiments.
Not only the 1990s but also recent years saw successful applications of this simple principle (and variants thereof) in sequential settings, e.g.,~\cite{pathak2017curiosity,burda2018curious}.
Since the GAN-like environment above is restricted to a teacher-given set $X$ of patterns and a procedure deciding whether a given pattern is in $X$, the teacher will find it rather easy to evaluate the quality of C's $X$-imitating behavior. In this sense the GAN setting is ``more'' supervised than certain other applications of AC1990, which may be ``highly'' unsupervised in the sense that C may have much more freedom when it comes to selecting environment-affecting actions.
\section{Improvements of AC1990}
\label{ac1991}
Numerous improvements of the original AC1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab} are summarized in more recent surveys~\cite{Schmidhuber:06cs,Schmidhuber:10ieeetamd}. Let us focus here on a first important improvement of 1991.
The errors of AC1990's M (to be {\em minimized}) are the rewards of its C (to be {\em maximized}, Sec. \ref{ac}).
This makes for a fine exploration strategy in many deterministic environments.
In stochastic environments, however, this might fail.
C might learn to focus on those
parts of the environment where M can always
get high prediction errors due to randomness,
or due to computational limitations of M.
For example, an agent controlled by C might get stuck in front of
a TV screen showing highly unpredictable white
noise, e.g.,~\cite{Schmidhuber:10ieeetamd} (see also~\cite{burda2018curious}).
Therefore, as pointed out in 1991,
in stochastic environments,
C's reward should not be the errors of M,
but (an approximation of) the {\em first derivative} of M's errors across subsequent training iterations,
that is, M's {\em improvements}~\cite{Schmidhuber:91singaporecur,Schmidhuber:07alt}.
As a consequence, despite M's high errors in front of
the noisy TV screen above,
C won't get rewarded for getting stuck there,
simply because M's errors won't improve.
Both the totally predictable and the fundamentally unpredictable will get boring.
This insight led to lots of follow-up work~\cite{Schmidhuber:10ieeetamd}.
For example,
one particular RL approach for AC in stochastic environments was published in 1995~\cite{Storck:95}.
A simple M learned to predict or estimate the probabilities of the
environment's possible responses, given C's actions.
After each interaction with the environment,
C's reward was the KL-Divergence~\cite{kullback1951}
between M's estimated probability distributions
before and after the resulting new experience (the information gain)~\cite{Storck:95}.
(This was later also called {\em Bayesian Surprise}~\cite{itti:05};
compare earlier work on information gain
and its maximization {\em without} NNs~\cite{Shannon:48,Fedorov:72}.)
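A sketch of this 1995 reward signal (illustrative numbers only):
\begin{verbatim}
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# M's estimated distribution over 4 possible environmental responses,
# before and after learning from a new experience:
before = np.array([0.25, 0.25, 0.25, 0.25])
after  = np.array([0.55, 0.15, 0.15, 0.15])    # informative experience
print(kl(after, before))   # intrinsic reward: information gain ~0.2 nats

boring = np.array([0.26, 0.25, 0.245, 0.245])  # almost nothing was learned
print(kl(boring, before))  # ~0: no reward for the already-predictable
\end{verbatim}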
AC1990's above-mentioned limitations in probabilistic environments,
however, are not an issue in
the simple GAN-like setup of Sec. \ref{special},
because there the environmental reactions are totally deterministic:
For each image-generating action of C,
there is a unique deterministic binary response from the environment
stating whether the generated image is in $X$ or not.
Hence it is not obvious that above-mentioned improvements of AC1990
hold promise also for GANs.
\section{AC1997: Adversarial Brains Bet on Outcomes of Probabilistic Programs}
\label{brains}
Of particular interest in the context of the present paper is
one more advanced adversarial approach to curious exploration of 1997~\cite{Schmidhuber:97interesting,Schmidhuber:99cec,Schmidhuber:02predictable}, referred to as AC1997.
AC1997 is about generating computational experiments in form of programs whose execution may change both an external environment and the RL agent's internal state. An experiment has a binary outcome: either a particular effect happens, or it doesn't. Experiments are collectively proposed by two reward-maximizing adversarial policies. Both can predict and bet on experimental outcomes before they happen. Once such an outcome is actually observed, the winner will get a positive reward proportional to the bet, and the loser a negative reward of equal magnitude. So each policy is motivated to create experiments whose yes/no outcomes surprise the other policy. The latter in turn is motivated to learn something about the world that it did not yet know, such that it is not outwitted again.
More precisely, a single RL agent has two dueling,
reward-maximizing {\em policies} called the
{\em left brain} and the {\em right brain}.
Each brain is a modifiable probability distribution over programs
running on a general purpose computer.
{\em Experiments} are programs sampled in a collaborative way that is influenced by both brains.
Each experiment specifies how to execute an instruction sequence (which may affect both the environment and the agent's internal state), and how to compute the outcome of the experiment through instructions implementing a computable function (possibly resulting in an internal binary yes/no classification) of the observation sequence triggered by the experiment.
The modifiable parameters of
both brains are instruction probabilities. They
can be accessed and manipulated through programs that include
subsequences of special {\em self-referential} policy-modifying instructions~\cite{Schmidhuber:94self,Schmidhuber:97ssa}.
Both brains may also
trigger the execution of certain {\em bet} instructions whose effect is to predict experimental outcomes before they are observed. If their predictions or hypotheses differ, they may agree to execute the experiment to determine which brain was right, and the surprised loser will pay an intrinsic reward (the real-valued bet, e.g., 1.0) to the winner in a zero sum game.
That is, each brain is intrinsically motivated to
outwit or surprise the other by proposing an experiment such
that the other {\em agrees} on the experimental
protocol but {\em disagrees} on the predicted outcome.
This outcome is typically an internal computable abstraction
of complex spatio-temporal events generated through
the execution of the self-invented experiment.
This motivates the unsupervised or self-supervised two brain system
to focus on "interesting" computational questions,
losing interest in "boring"
computations (potentially involving the environment)
whose outcomes are consistently predictable by {\em both} brains,
as well as computations whose outcomes are currently still hard to predict by {\em either} brain.
Again, in the absence of external reward,
each brain maximizes the value function minimized by the other.
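The zero-sum bookkeeping of such a bet can be sketched in a few lines (a hypothetical helper, not code from the AC1997 papers):
\begin{verbatim}
def settle_bet(pred_left, pred_right, outcome, bet=1.0):
    """Zero-sum surprise rewards for the two brains.
    Bets are settled only when the brains disagree on the binary outcome."""
    if pred_left == pred_right:
        return 0.0, 0.0          # agreement: no experiment is triggered
    if pred_left == outcome:
        return bet, -bet         # left brain wins; right brain is surprised
    return -bet, bet             # right brain wins; left brain is surprised

print(settle_bet(True, False, outcome=True))   # (1.0, -1.0)
\end{verbatim}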
Using the meta-learning {\em Success-Story RL algorithm}~\cite{Schmidhuber:94self,Schmidhuber:97ssa}, AC1997 learns when to learn and what to learn~\cite{Schmidhuber:97interesting,Schmidhuber:99cec,Schmidhuber:02predictable}. AC1997
will also minimize the computational cost of learning new skills,
provided both brains receive a small
negative reward for each computational step, which
introduces a bias towards {\em simple} yet still surprising experiments (reflecting {\em simple} yet still unsolved problems). This may facilitate hierarchical construction of more and more complex experiments, including those yielding {\em external} reward (if there is any). In fact, AC1997's artificial creativity may not only drive artificial scientists and artists, e.g.,~\cite{Schmidhuber:09multiple}, but can also accelerate the intake of external reward, e.g.,~\cite{Schmidhuber:97interesting,Schmidhuber:02predictable}, intuitively because a better understanding of the world can help to solve certain problems faster.
Other RL or evolutionary algorithms could also be applied to such
two-brain systems implemented as two interacting (possibly recurrent) RL NNs or other computers.
However, certain issues such as
catastrophic forgetting are presumably better addressed by the later
{\sc PowerPlay} framework (2011)~\cite{powerplay2011and13,Srivastava2013first},
which offers an {\em asymptotically optimal} way of finding the simplest yet unsolved problem
in a (potentially infinite) set of formalizable problems with computable solutions,
and adding its solution to the repertoire of a more and more general, curious problem solver.
Compare also the {\em One Big Net For Everything}~\cite{onebignet2018} which offers a simplified, less strict NN
version of {\sc PowerPlay}.
How does AC1997 relate to GANs? AC1997 is similar to standard GANs in the sense that both are unsupervised generative adversarial minimax players and focus on experiments with a binary outcome: {\em 1 or 0, yes or no, hypothesis true or false.} However, for GANs the experimental protocol is prewired and always the same: It simply tests whether a recently generated pattern is in a given set or not (Sec. \ref{special}). One can restrict AC1997 to such simple settings
by limiting its domain and the nature of the instructions in its programming language, such that possible bets of both brains are limited to binary yes/no outcomes of GAN-like experiments.
In general, however, the adversarial brains of AC1997 can invent essentially arbitrary computational questions or problems by themselves, generating programs that interact with the environment in any computable way that will yield binary results on which both brains can bet.
A bit like a pure scientist deriving internal joy signals from inventing experiments that yield discoveries of initially surprising but learnable and then reliably repeatable predictabilities.
\section{Predictability Minimization (PM)}
\label{pm}
An important NN task is to learn the statistics of given data such as images. To achieve this, the principles of gradient descent/ascent were used in {\em yet another type of unsupervised minimax game} where one NN minimizes the objective function maximized by another. This duel between two unsupervised adversarial NNs was introduced in the 1990s in a series of papers~\cite{Schmidhuber:91predmin,Schmidhuber:92ncfactorial,schmidhuber1993,Schmidhuber:96ncedges,pm,Schmidhuber:99zif}. It was called {\em Predictability Minimization (PM)}.
PM's goal is to achieve an important goal of unsupervised learning, namely,
an ideal, disentangled, {\em factorial} code~\cite{Barlow:89,Barlow:89review}
of given data, where the code components are statistically independent of each other.
That is, {\em the codes are distributed like the data, and the probability of a given data pattern is simply the product of the probabilities of its code components.}
Such codes may facilitate subsequent downstream learning~\cite{Schmidhuber:96ncedges,pm,Schmidhuber:99zif}.
PM requires an encoder network with initially random weights.
It maps data samples $x \in \mathbb{R}^n$ (such as images)
to codes $y \in [0,1]^m$
represented across $m$ so-called code units.
In what follows, integer indices $i,j$ range over $1,\ldots,m$.
The $i$-th component of $y$ is called $y_i \in [0,1]$.
A separate predictor network is trained by gradient descent to predict each $y_i$ from the remaining components $y_j (j \neq i)$.
The encoder, however, is trained to maximize the same objective function (e.g., mean squared error)
minimized by the predictor. Compare
the text near Equation 2 in the 1996 paper~\cite{Schmidhuber:96ncedges}: {\em``The clue is: the code units are trained (in our experiments by online backprop) to maximize essentially the same objective function the predictors try to minimize;"} or Equation 3 in Sec. 4.1 of the 1999 paper~\cite{Schmidhuber:99zif}: {\em ``But the code units try to maximize the same objective function the predictors try to minimize."}
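A minimal sketch of this duel, assuming PyTorch; the network sizes, optimizer, and synthetic data are placeholders rather than the original 1990s implementation (which used online backprop):
\begin{verbatim}
import torch
import torch.nn as nn

n, m = 64, 8                                   # data dimension, number of code units
enc = nn.Sequential(nn.Linear(n, m), nn.Sigmoid())
preds = nn.ModuleList([nn.Linear(m - 1, 1) for _ in range(m)])
opt_e = torch.optim.SGD(enc.parameters(), lr=0.1)
opt_p = torch.optim.SGD(preds.parameters(), lr=0.1)
data = torch.rand(512, n)                      # stand-in for the given data set

def pm_objective(y):
    # sum over code units of the error of predicting y_i from the other units
    return sum(((preds[i](torch.cat([y[:, :i], y[:, i + 1:]], dim=1)).squeeze(1)
                 - y[:, i]) ** 2).mean() for i in range(m))

for step in range(200):
    x = data[torch.randint(0, len(data), (64,))]
    # the predictors descend the prediction error (codes held fixed via detach)
    loss = pm_objective(enc(x).detach())
    opt_p.zero_grad(); loss.backward(); opt_p.step()
    # the encoder ascends the very same objective
    gain = pm_objective(enc(x))
    opt_e.zero_grad(); (-gain).backward(); opt_e.step()
\end{verbatim}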
Why should the end result of this fight between predictor and encoder be a
disentangled factorial code?
Using gradient descent, to maximize the prediction errors,
the code unit activations $y_j$ run away from their real-valued predictions in $[0,1]$, that is,
they are forced towards the corners of the unit interval, and tend to become binary, either 0 or 1.
And according to a proof of 1992~\cite{DayanZemel:92,schmidhuber1993},\footnote{It should be noted that the above-mentioned proof~\cite{DayanZemel:92,schmidhuber1993} is limited to binary factorial codes. There is no proof that PM is a universal method for approximating all kinds of non-binary distributions (most of which are incomputable anyway). Nevertheless, it is well-known that binary Bernoulli distributions can approximate at least Gaussians and other distributions, that is, with enough binary code units one should get at least arbitrarily close approximations of broad classes of distributions. In the PM papers of the 1990s, however, this was not studied in detail.}
the encoder's objective function is maximized when the $i$-th code unit maximizes its variance
(thus maximizing the information it conveys about the input data)
while simultaneously minimizing the deviation between its (unconditional) expected activations $E(y_i)$
and its predictor-modeled, {\em conditional} expected activations $E(y_i \mid \{y_j, j \neq i \})$, given the other code units.
See also conjecture 6.4.1 and Sec. 6.9.3 of the thesis~\cite{schmidhuber1993}.
That is, the code units are motivated to extract informative yet mutually independent binary features from the data.
PM's inherent class of probability distributions is the set
of {\em multivariate binomial distributions.}
In the ideal case, PM has indeed learned to create a binary factorial code of the data.
That is,
in response to some input pattern,
each $y_i$ is either 0 or 1,
and the predictor has learned the conditional expected value $E(y_i \mid \{y_j, j \neq i \})$.
Since the code is both binary and factorial,
this value is equal to the code unit's {\em unconditional} probability $P(y_i=1)$ of being on
(e.g.,~\cite{Schmidhuber:92ncfactorial}, the equation in Sec. 2).
E.g., if some code unit's prediction is 0.25,
then the probability of this code unit being on is 1/4.
The first toy experiments with PM~\cite{Schmidhuber:91predmin} were
conducted nearly three decades ago when compute was about a million times more expensive than today.
When it had become about 10 times cheaper 5 years later,
it was shown that simple semi-linear PM variants applied to images automatically generate
feature detectors well-known from neuroscience, such as
on-center-off-surround detectors,
off-center-on-surround detectors,
orientation-sensitive bar detectors, etc.~\cite{Schmidhuber:96ncedges,pm}.
\subsection{Is it true that PM is NOT a minimax game?}
\label{true}
The NIPS 2014 GAN paper~\cite{goodfellow2014generative}
states that PM differs from GANs in the sense that PM is NOT based on a minimax game with a value function that one agent seeks to maximize and the other seeks to minimize. It states that for GANs
{\em "the competition between the networks is the sole training criterion, and is sufficient on its own to train the network,"} while PM {\em "is only a regularizer that encourages the hidden units of a neural network to be statistically independent while they accomplish some other task; it is not a primary training criterion"}~\cite{goodfellow2014generative}.
But this claim is incorrect,
since PM is indeed a pure minimax game, too, e.g.,~\cite{Schmidhuber:96ncedges}, Equation 2.
There is no {\em "other task."} In particular, PM was also trained~\cite{Schmidhuber:91predmin,Schmidhuber:92ncfactorial,schmidhuber1993,Schmidhuber:96ncedges,pm,Schmidhuber:99zif} (also on images~\cite{Schmidhuber:96ncedges,pm}) such that {\em "the competition between the networks is the sole training criterion, and is sufficient on its own to train the network."}
\subsection{Learning generative models through PM variants}
\label{pmgen}
One of the variants in the first peer-reviewed PM paper (\cite{Schmidhuber:92ncfactorial}, e.g., Sec. 4.3, 4.4) had an optional decoder (called {\em reconstructor}) attached
to the code such that data can be reconstructed from its code.
Let's assume that PM has indeed found an ideal factorial code of the data.
Since the codes are distributed like the data,
with the decoder,
we could immediately use the system as a {\em generative model,}
by randomly activating each binary code unit according to its unconditional probability
(which for all training patterns is now equal to the activation of its predictor---see Sec. \ref{pm}),
and sampling output data through the decoder.\footnote{Note that even one-dimensional data may have a complex distribution whose binary factorial code (Sec. \ref{pm}) may require many dimensions. PM's goal is the discovery of such a code, with an a priori unknown number of components. For example, if there are 8 input patterns, each represented by a single real-valued number between 0 and 1, each occurring with probability 1/8, then there is an ideal binary factorial code across 3 binary code units, each active with probability 1/2. Through a decoder on top of the 3-dimensional code of the 1-dimensional data we could resample the original data distribution, by randomly activating each of the 3 binary code units with probability 50\% (these probabilities are actually directly visible as predictor activations).}
With an accurate decoder,
the sampled data must obey the statistics of the original distribution,
by definition of factorial codes.
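Under these idealized assumptions the sampling procedure is trivial; the following sketch uses stand-in on-probabilities and decoder weights:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p_on = np.array([0.5, 0.5, 0.5])   # unconditional on-probabilities, read off the predictors
W = rng.standard_normal((3, 1))    # stand-in for a trained decoder's weights

def decoder(code):                 # hypothetical decoder mapping codes to data space
    return code @ W

for _ in range(4):
    code = (rng.random(3) < p_on).astype(float)   # sample each binary unit independently
    print(code, decoder(code))     # samples follow the data statistics if the code is factorial
\end{verbatim}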
However, to my knowledge,
this straightforward application as a generative model was never explicitly mentioned in any
PM paper, and the decoder
(as well as additional, optional local variance maximization for the code units)
was actually omitted in several PM papers after
1993~\cite{Schmidhuber:96ncedges,pm,Schmidhuber:99zif}
which focused on unsupervised learning of disentangled internal representations,
to facilitate subsequent downstream learning~\cite{Schmidhuber:96ncedges,pm,Schmidhuber:99zif}.
Nevertheless, generative models producing data
through stochastic outputs of minimax-trained NNs
were described in 1990~\cite{Schmidhuber:90diffenglish,Schmidhuber:90sab} (see Sec. \ref{ac} on Adversarial Artificial Curiosity)
and
2014~\cite{goodfellow2014generative} (Sec. \ref{special}).
Compare also the concept of Adversarial Autoencoders~\cite{makhzani2015}.
\subsection{Learning factorial codes through GAN variants}
\label{ganfac}
PM variants could easily be used as GAN-like generative models (Sec. \ref{pmgen}). In turn, GAN variants could easily be used to learn factorial codes like PM.
If we take a GAN generator network trained on random input codes with independent components,
and attach a traditional encoder network to its output layer, and train this encoder to map the output patterns back to their original random codes,
then in the ideal case this encoder will become a factorial code generator that can also be applied to the original data. This was not done by the GANs of 2014~\cite{goodfellow2014generative}. However, compare InfoGANs~\cite{infogan2016} and related
work~\cite{makhzani2015,donahue2016adversarial,dumoulin2016adversarial}.
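A sketch of this inversion idea, again assuming PyTorch; the generator here is an untrained stand-in for a trained GAN generator, and the architecture is arbitrary:
\begin{verbatim}
import torch
import torch.nn as nn

# Stand-in generator; in practice G would be a trained GAN generator with frozen weights.
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))
enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

for step in range(1000):
    z = torch.rand(128, 8)                # codes with independent components
    x = G(z).detach()                     # generated patterns; G stays fixed
    loss = ((enc(x) - z) ** 2).mean()     # train the encoder to recover the codes
    opt.zero_grad(); loss.backward(); opt.step()
# In the ideal case, enc now maps generated patterns back to factorial codes
# and can also be applied to the original data.
\end{verbatim}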
\subsection{Relation between PM and GANs and their variants}
\label{pmgan}
Both PM and GANs are unsupervised learning techniques
that model the statistics of given data.
Both employ gradient-based adversarial nets that play a minimax game to achieve their goals.
While PM tries to make easily decoded, random-looking, factorial codes of the data,
GANs try to make decoded data directly from random codes.
In this sense,
the inputs of PM's encoders are like the outputs of GAN's decoders,
while the outputs of PM's encoders are like the inputs of GAN's decoders.
In another sense, the outputs of PM's encoders are like the outputs of GAN's decoders because both are shaped by the adversarial loss.
Effectively, GANs are trying to approximate the true data distribution through
some other distribution of a given type (e.g., Gaussian, binomial, etc.).
Likewise, PM is trying to approximate it through a multivariate factorial binomial distribution, whose nature is also given in advance (see Footnote 2).
While other post-PM methods such as
the Information Bottleneck Method~\cite{tishby2000bottle}
based on rate distortion theory~\cite{davisson1972,cover2012},
Variational Autoencoders~\cite{vae2013,vae2014},
Noise-Contrastive Estimation~\cite{nce2010},
and Self-Supervised Boosting~\cite{welling2003self}
also exhibit certain relationships to PM,
none of them employs gradient-based adversarial NNs in a PM-like minimax game.
GANs do.
A certain duality between PM variants with attached decoders (Sec. \ref{pmgen}) and GAN variants with
attached encoders (Sec. \ref{ganfac}) can be illustrated through the following work flow pipelines (view them as very similar 4-step cycles by identifying their beginnings and ends---see Fig. \ref{fig1}):
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\linewidth]{3.pdf}
\end{center}
\caption{{\em
Symmetric work flows of PM and GAN variants. Both PM and GANs model given data distributions in unsupervised fashion. PM uses gradient-based minimax or adversarial training to learn an {\bf en}coder of the data, such that the codes are distributed like the data, and the probability of a given pattern can be read off its code as the product of the predictor-modeled probabilities of the code components (Sec. \ref{pm}). GANs, however, use gradient-based minimax or adversarial training to directly learn a {\bf de}coder of given codes (Sec. \ref{special}). In turn, to decode its codes again, PM can learn a {\bf non}-adversarial traditional {\bf de}coder (omitted in most PM papers after 1992---see Sec. \ref{pmgen}). Similarly, to encode the data again, GAN variants can learn a {\bf non}-adversarial traditional {\bf en}coder (absent in the 2014 GAN paper but compare InfoGANs---see Sec. \ref{ganfac}).
While PM's minimax procedure starts from the data and learns a factorial code in form of a multivariate binomial distribution, GAN's minimax procedure starts from the codes (distributed according to {\em any} user-given distribution), and learns to make data distributed like the original data.
}}
\label{fig1}
\end{figure}
\begin{itemize}
\item
Pipeline of PM variants with standard decoders:\\
data $\rightarrow$ {\bf minimax-trained encoder} $\rightarrow$ code $\rightarrow$ {\bf traditional decoder} (often omitted) $\rightarrow$ data
\item
Pipeline of GAN variants with standard encoders (compare InfoGANs):\\
code $\rightarrow$ {\bf minimax-trained decoder} $\rightarrow$ data $\rightarrow$ {\bf traditional encoder} $\rightarrow$ code
\end{itemize}
It will be interesting to study experimentally whether the GAN pipeline above is easier to train than PM to make
factorial codes or useful approximations thereof.
\section{Convergence of Unsupervised Minimax}
\label{convergence}
The 2014 GAN paper~\cite{goodfellow2014generative} has a comment on convergence under
the greatly simplifying assumption that one can directly optimize the relevant functions implemented by the two adversaries, without depending on suboptimal local search techniques such as gradient descent. In practice, however, gradient descent is almost always the method of choice.
So what's really needed is an analysis of what happens when backpropagation~\cite{Linnainmaa:1970,Linnainmaa:1976,werbos1982sensitivity}
is used for both adversarial networks. Fortunately, there are some relevant results. Convergence can be shown for both GANs and PM through two-time scale stochastic approximation~\cite{Borkar:97,Konda:04,Karmakar:17}.
In fact, Hochreiter's group used this technique to demonstrate convergence for GANs~\cite{Heusel:17arxiv,Heusel:17}; the proof is directly transferable to the case of PM.
Of course, such proofs show only convergence to exponentially stable equilibria,
not necessarily to global optima. Compare, e.g.,~\cite{Mazumdar:19}.
\section{Conclusion}
\label{conclusion}
The notion of {\em Unsupervised Minimax} refers to unsupervised or self-supervised adaptive modules (typically neural networks or NNs) playing a zero sum game. The first NN somehow learns to generate data. The second NN learns to predict properties of the generated data, minimizing its error, typically by gradient descent. The first NN maximizes the objective function minimized by the second NN, trying to produce outputs that are difficult for the second NN to predict.
Examples are provided by Adversarial Artificial Curiosity (AC since 1990, Sec. \ref{ac}), Predictability Minimization (PM since 1991, Sec. \ref{pm}), Generative Adversarial Networks (GANs since 2014; conditional GANs since 2010, Sec. \ref{special}).
This is very different from certain
earlier adversarial machine learning settings
which neither involved unsupervised NNs nor
were about modeling data nor used gradient descent (see Sec. \ref{intro}, \ref{summary}).
GANs and cGANs are applications of the AC principle (1990)
where the environment simply returns whether the current output of the first NN is in a given set
(Sec. \ref{special}).
GANs are also closely
related to PM, because both GANs and PM model the statistics of given data distributions through
gradient-based adversarial nets that play a minimax game (Sec. \ref{pm}).
The present paper clarifies some of the previously published confusion surrounding these issues.
AC's generality (Sec. \ref{general}) extends GAN-like unsupervised minimax to sequential problems, not only for plain pattern generation and classification, but even for RL problems in partially observable environments. In turn, the large body of recent GAN-related insights might help to improve training procedures of
certain AC systems.
\section*{Acknowledgments} Thanks to Paulo Rauber, Joachim Buhmann, Sepp Hochreiter, Sjoerd van Steenkiste, David Ha, R\'{o}bert Csord\'{a}s, Louis Kirsch, and several anonymous reviewers, for useful comments on a draft of this paper. This work was partially funded by a European Research Council Advanced Grant (ERC no: 742870).
\newpage
\section{Introduction}
Graph structures can be represented with an adjacency matrix containing the edge weights between different nodes. Since the adjacency matrix is usually sparse, common estimation techniques involve a quadratic optimization with a LASSO penalty to impose this sparsity. While the coefficient of the $L_1$ regularization term is easy to choose when we have prior knowledge of the appropriate sparsity level, in most cases we do not and the parametrization becomes non-trivial. In this paper we focus on causal relationships between time series, following the causal graph process (CGP) approach \citep{Mei:2015ig, Mei:2017db} which assumes an autoregressive model, where the coefficients of each time lag is a polynomial function of the adjacency matrix. Using the adjacency matrix instead of the graph Laplacian allows this set-up to consider directed graphs with positive and negative edge weights. \citet{Mei:2017db} simplify the problem to a combination of quadratic minimisations with $L_1$ regularization terms which they then solve with a sparse gradient projection algorithm. They choose this algorithm for its use of sparse-matrix-computation libraries, which is efficient only for highly sparse graphs; its performance deteriorates significantly with more dense graphs.
The cyclical coordinate descent algorithm is a widely used optimisation method due to its speed. Its popularity increased after \citet{Tseng:2001bd} proved its convergence for functions that decompose into a convex part and a non-differentiable but separable term, which makes it perfectly suited for solving quadratic minimizations involving an $L_1$ regularization term. In \citet{Wu:2008jd} the efficiency of both a cyclical and a greedy coordinate descent algorithm to solve the LASSO was shown. Subsequently, \citet{Wang:2013bc} demonstrated that the cyclical version is more stable and applied it to solve the graphical LASSO problem. \citet{Meinshausen:2006kb} extended the LASSO to high-dimensional graphs by performing individual LASSO regularization on each node. This approach inspired \citet{Friedman:tb} to create the graphical-LASSO with the block coordinate descent algorithm. This algorithm computes a sparse estimate of the precision matrix assuming that the data follows a multivariate Normal distribution. The use of CCD to solve this graph-LASSO problem allows for efficiently computing the sparse precision matrix in large environments. However, this algorithm focuses on simultaneous connections not causal relationships. In contrast, \citet{Shojaie:2010ei, Shojaie:2010df} proposed different approaches to compute what they define as a graph-Granger causality effect, however their models assumed prior knowledge of the structure of the underlying adjacency matrix and therefore focused on estimating the weights.
In the literature there are many proposed methods for selecting the LASSO coefficient. \citet{Banerjee:2007uy, Meinshausen:2006kb, Shojaie:2010ei, Shojaie:2010df} use a function of the probability of error, which depends on selection of a probability. \citet{Wu:2008jd, Mei:2015ig, Mei:2017db} perform an expensive grid search and use the in- and out-of-sample error to select the coefficient. \citet{Friedman:tb, Wang:2013bc} avoid the problem by simply fixing the coefficient to a selected value. These strategies either require the selection of free parameters or rely on the prediction error to find the correct LASSO coefficient.
In this work we follow the idea of the graph-LASSO of \citet{Friedman:tb} and the individual LASSO regularization of \citet{Meinshausen:2006kb} to propose a new CCD algorithm that solves the causal graph process problem of \citet{Mei:2015ig, Mei:2017db}. The CCD approach allows us to leverage the knowledge of the specific structure of the adjacency matrix to optimise the computational steps. In addition, we propose a new metric that uses the prediction quality of each node separately to select the LASSO coefficient. Our solution does not require additional parameters and produces better results than relying solely on the prediction error. Thus, our algorithm computes the directed adjacency matrix with an appropriate number of edges, as well as the polynomial coefficients of the CGP. Furthermore, the quality of the results and speed are not affected by the sparsity level of the underlying problem. Indeed, while the algorithm proposed by \citet{Mei:2015ig, Mei:2017db} scales cubically with the number of nodes, the algorithm we propose scales quadratically while automatically selecting the LASSO coefficient.
We show the performance of our solution on simulated CGPs following a stochastic block model with different sizes, levels of sparsity and history lengths. We assess the quality of the estimated adjacency matrix by considering the difference in number of edges, the percentage of correct positives and the percentage of false positives. We highlight the performance of our approach and its limits. We then run the algorithm on a financial dataset and interpret the results. Section \ref{sec:CGP} introduces signal processing on graphs. We then introduce our novel algorithm based on a coordinate descent algorithm to efficiently estimate the adjacency matrix of the causal graph process in Section \ref{sec:CGP_CCD}. In Section \ref{sec:sel_L1} we present a new non-parametric metric to automatically select the LASSO sparsity coefficient. Finally, in Section \ref{sec:applications} we present the results on simulated and real datasets.
\section{Background to signal processing on graphs}
\label{sec:CGP}
There exist many approaches for modelling a graph; in this paper we use a directed adjacency matrix $A$. An element $A_{i,j} \in \mathbb{R}$ of the adjacency matrix corresponds to the weight of the edge from node $j$ to node $i$. We consider a time dependent graph with $N$ nodes evolving over $K$ time samples. Let $x(k) \in \mathbb{R}^N$ be the vector with the value of each node at time $k$. With this formulation, the graph signal over $K$ time samples is denoted by the matrix $X(K) = [x(0) \dots x(K - 1) ] \in \mathbb{R}^{N \times K}$. We assume the graphs to follow a causal process as defined in \citet{Mei:2015ig, Mei:2017db}. The causal graph process (CGP) assumes the graph at time $k$ to follow an autoregressive process over $M$ time lags. The current state of the graph, $x(k)$, is related to the lag $l$ through a graph filter $P_l(A)$. The graph filter is considered to be a polynomial function over the adjacency matrix with coefficients $C = \{c_{l,j}\}$ defined by: $P_l(A) = \sum_{j=0}^l c_{l,j} A^j$. Without loss of generality we can fix the coefficients of the first time lag to be $(c_{1,0},c_{1,1}) = (0, 1)$. Thus, the CGP at $k$ can be expressed as:
\begin{equation}
\label{eq:CGP_eq}
x(k) = w(k) + A x(k-1) + \dots + \left(c_{M,0}I + \dots + c_{M,M}A^M \right) x(k-M) \; ,
\end{equation}
where $w(k)$ denotes Gaussian noise. Hence, the problem of reconstructing the CGP from a group of time series consists of estimating the adjacency matrix $A$ and the polynomial coefficients $C$. \citet{Mei:2015ig, Mei:2017db} consider the optimisation problem:
\begin{equation}
\label{eq:min_Ac}
(A, c) = \min_{A, c} \frac{1}{2} \sum_{k=M}^{K-1} \left\|x(k) - \sum_{l=1}^M P_l(A) x(k-l) \right\|_2^2 + \lambda_1 \| A\|_1 + \lambda_1^c \| C \|_1 \; ,
\end{equation}
where they include a LASSO penalty for both the adjacency matrix and the polynomial coefficients to enforce sparsity. They decompose this optimisation into different steps: first estimate the coefficients $R_i = P_i(A)$, then from those coefficients retrieve the adjacency matrix $A$, which allows the polynomial coefficients $C$ to be obtained via another minimisation. Due to the $L_1$ penalty, the optimisation problems for $R_1$ and $C$ are convex but non-differentiable.
For the first step, they perform a block coordinate descent over the matrix coefficients $R_i$. Each step of the descent is quadratic except for $R_1$ which has the $L_1$ regularisation term. From the obtained matrix coefficients $R_i$ they perform an additional step in the block descent to obtain the adjacency matrix $A$. With the estimated adjacency matrix, we can reformulate the minimisation of Equation \ref{eq:min_Ac} in a function of the vector $C$ with an $L_1$ regularisation term. In their paper, they do not go into details on how they perform these minimisations and just specify that they use a sparse gradient projection algorithm. The authors argue in favour of this algorithm because it is particularly efficient, however only in the case of highly sparse problems; otherwise, for a dense graph this algorithm will scale as the cube of the number of nodes which renders it impractical. This motivates our interest in developing a novel cyclical coordinate descent (CCD) algorithm for this problem.
\section{Estimating the adjacency matrix with coordinate descent}
\label{sec:CGP_CCD}
The sparse gradient projection (SGP) algorithm \citet{Mei:2015ig, Mei:2017db} used to solve each step of the block coordinate descent algorithm does not take full advantage of the structure of the problem. We therefore propose an efficient CCD algorithm for estimating the adjacency matrix and the CGP coefficients, and since the $L_1$ regularisation constraint in the optimisation problem for the matrix coefficients only applies to $R_1$, we consider the two cases $i=1$ and $i>1$ separately. The detailed steps leading to the equations presented in this section are in Appendix \ref{sec:eq_CCD}.
\subsection{Update equation for $i>1$}
In the case of $i>1$, the minimisation of Equation \ref{eq:min_Ac} on the matrix coefficient $R_i$ simplifies to a quadratic problem, which is well suited for a CCD algorithm looping over the different lags $i$. Furthermore, since the minimisation is over a matrix, CCD allows us to avoid computing a gradient over a matrix; instead we compute the update equation for matrix $R_i$ directly. We reformulate the loss function to isolate $R_i$ with $S_k^i = x(k) - \sum_{l \neq i}^M R_l x(k-l)$, which gives:
\begin{equation}
\label{eq:lagrangian_Ri}
\mathcal{L}(R_i) = \frac{1}{2} \sum_{k=M}^{K-1} \left( S_k^i - R_i x(k-i) \right)^T \left( S_k^i - R_i x(k-i) \right) \;,
\end{equation}
where $S_k^i$ and $x(k)$ are vectors of size $N$. The derivative of $\mathcal{L}(R_i)$ with respect to $R_i$ is equal to zero if:
\begin{equation}
\label{eq:upd_Ri}
R_i = \left(\sum_{k=M}^{K-1} S_k^i x(k-i)^T \right) \left(\sum_{k=M}^{K-1} x(k-i) x(k-i)^T \right)^{-1} \; ,
\end{equation}
which gives us an update equation for the CCD. Since the matrix $\sum_{k=M}^{K-1} x(k-i) x(k-i)^T$ in Equation \ref{eq:upd_Ri} is not guaranteed to be non-singular, we can perform a regularisation step by adding noise to its diagonal before computing the inverse. While this inverse step is expensive, it does not depend on the matrices $R_i$; thus, it can be computed in advance, outside of the CCD loop. Hence, each update consists of a vector-vector multiplication followed by a matrix-matrix multiplication of size $N\times N$.
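A minimal NumPy sketch of this update, under an illustrative data layout in which $X$ is the $N \times K$ signal matrix and \texttt{R} maps each lag to its coefficient matrix:
\begin{verbatim}
import numpy as np

def update_Ri(X, R, i, M, eps=1e-8):
    """One CCD step for lag i > 1: the closed-form least-squares update above."""
    N, K = X.shape
    num = np.zeros((N, N))
    den = eps * np.eye(N)                # small ridge term guards against singularity
    for k in range(M, K):
        s = X[:, k] - sum(R[l] @ X[:, k - l] for l in range(1, M + 1) if l != i)
        num += np.outer(s, X[:, k - i])
        den += np.outer(X[:, k - i], X[:, k - i])
    # in the full algorithm, den and its inverse are precomputed outside the loop
    return num @ np.linalg.inv(den)
\end{verbatim}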
\subsection{CCD for $i=1$}
For $i=1$, the optimisation corresponds to Equation \ref{eq:min_Ac} with the $L_1$ regularisation term. We can follow the methodology of the previous section and derive a matrix update to compute the CCD step. However, in practice this solution produces matrices that are too dense. Hence, we instead constrain the sparsity on each node. This corresponds to running a CCD over the columns of the matrix $R_1$. Indeed, a column $j$ of the adjacency matrix corresponds to the weight of the edges going from node $j$ to the nodes it influences. To do so, we have to reformulate the loss function as a problem over the column $j$ of $R_1$. Let us denote by $R_1^{-j}$ the matrix $R_1$ without the column $j$, $R_1^j$ the column $j$ of the matrix $R_1$, $x^{-j}(k-1)$ the vector $x(k-1)$ without the term at index $j$, and $x^j(k-1)$ the value of the $j$-th term of $x(k-1)$. Thus, we can reformulate the error term to isolate the column $j$: $x(k) - \sum_{i=1}^M R_i x(k-i) = S_k^1 - R_1^{-j} x^{-j}(k-1) - R_1^j x^j(k-1)$. Therefore, the Lagrangian of the minimisation over the column $j$ of $R_1$ with a LASSO regularisation term follows as:
\begin{equation}
\label{eq:lagrangian_R1}
\mathcal{L}(R_1^j) = \frac{1}{2} \sum_{k=M}^{K-1} \left\| S_k^1 - R_1^{-j} x^{-j}(k-1) - R_1^j x^j(k-1) \right\|_2^2 + \lambda_1 |R_1^j | \;.
\end{equation}
Due to the non-differentiable $L_1$ term we need to use sub-gradients and thus introduce the soft-thresholding function to obtain the CCD updating equation. We define the soft-threshold function as $S(a,b) = \mathrm{sign}(a) ( |a| - b)_+$, where $\mathrm{sign}(a)$ is the sign of $a$ and $(y)_+ = \max(0,y)$. Then, the derivative of the Lagrangian of Equation \ref{eq:lagrangian_R1} is zero if:
\begin{equation}
\label{eq:R_1j}
R_1^j = \frac{S \left( \sum_{k=M}^{K-1}\left(S_k^1 - R_1^{-j} x^{-j}(k-1) \right) x^j(k-1), \lambda_1 \right) }{ \sum_{k=M}^{K-1} (x^j(k-1))^2} \;.
\end{equation}
The CCD algorithm for $R_1$ will loop by updating each column using Equation \ref{eq:R_1j}. As for the update of $R_i$, the denominator of Equation \ref{eq:R_1j} can be computed outside the loop. Thus the complete algorithm consists of a CCD for each lag $i$ with an inner CCD loop over the columns of $R_1$. Algorithm \ref{alg:ccd_Ri} shows the complete CCD algorithm to compute the matrix coefficients $R_i$ of the CGP process. We stop the descent when the first of four criteria is met: the maximum number of iterations is reached, the $L_1$ norm of the difference between the matrix coefficients $R$ and their previous values is below a threshold $\epsilon$, the absolute difference between the new and previous in-sample MSE is below $\epsilon$, or the in-sample MSE increases. We then obtain the adjacency matrix by running an extra step of the column CCD on $R_1$.
\begin{algorithm}
\caption{CCD algorithm to compute the matrix coefficients $R$}\label{alg:ccd_Ri}
\begin{algorithmic}[1]
\Procedure{Compute-R}{$x, M, N, K$}
\State Compute the denominators of Equations \ref{eq:R_1j} and \ref{eq:upd_Ri} outside the loop
\State $R = [ zeros(N, N) \;,\; \forall i \in [1,M] ]$ \Comment{Initialise coefficients at zero}
\While{Convergence criterion not met}
\State Run CCD over the columns of $R_1$ with Equation \ref{eq:R_1j}
\State Run CCD for each lag $i>1$ with matrix update Equation \ref{eq:upd_Ri}
\EndWhile
\State Run the CCD over the columns of $R_1$ to obtain the adjacency matrix $A$
\EndProcedure
\end{algorithmic}
\end{algorithm}
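A NumPy sketch of the column update of Equation \ref{eq:R_1j}, in the same illustrative layout as above:
\begin{verbatim}
import numpy as np

def soft_threshold(a, b):
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def update_R1_column(X, R, j, M, lam1):
    """CCD update for column j of R_1 via the soft-thresholded closed form."""
    N, K = X.shape
    num = np.zeros(N)
    den = 0.0                            # precomputable outside the CCD loop
    for k in range(M, K):
        s = X[:, k] - sum(R[l] @ X[:, k - l] for l in range(2, M + 1))
        # remove column j's own contribution from the residual
        r = s - R[1] @ X[:, k - 1] + R[1][:, j] * X[j, k - 1]
        num += r * X[j, k - 1]
        den += X[j, k - 1] ** 2
    return soft_threshold(num, lam1) / den
\end{verbatim}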
\subsection{Retrieving the polynomial coefficients $c$}
Once we have retrieved the adjacency matrix $A$, the next step of the block coordinate descent is to estimate the polynomial coefficients $C$. To obtain these coefficients we minimise the MSE over the training set with both $L_1$ and $L_2$ regularisation terms. Hence, we can derive the CCD update for each coefficient $C_{i,j}$. We denote by $\hat{C}$ and $\hat{A}$ the estimated values of $C$ and $A$ respectively. Then, let us define $y_k = x_k - \hat{A}x_{k-1}$ and $w_k = \sum_{i',j' \neq i,j} C_{i',j'} \hat{A}^{j'} x_{k-i'}$. The error term of Equation \ref{eq:min_Ac} as a function of $C_{i,j}$ becomes $\left\| y_k - w_k - C_{i,j} \hat{A}^j x_{k-i} \right\|_2^2$. Thus, taking into account the $L_1$ and $L_2$ regularisation terms on $C$, the derivative of the Lagrangian with respect to $C_{i,j}$ is zero if:
\begin{equation}
\label{eq:Cij}
C_{i,j} = \frac{S\left( \sum_{k=M}^{K} \left(\hat{A}^j x(k-i) \right)^T \left(y_k - w_k \right) , (K-M) \lambda_1^c \right)}{\sum_{k=M}^{K} \left(\hat{A}^j x(k-i) \right)^T \left(\hat{A}^j x(k-i)\right) + 2 (K - M ) \lambda_2^c } \;.
\end{equation}
Hence, the CCD algorithm for $C$ will loop over each coefficient $C_{i,j}$ and update it with Equation \ref{eq:Cij}. This step completes the block coordinate descent to obtain the CGP process from the observed time series $x$. However, this CCD algorithm has three parameters: $\lambda_1$ for the LASSO penalty used to obtain $R_1$, and $\lambda_1^c$ and $\lambda_2^c$ for the $L_1$ and $L_2$ regularisation terms used to compute the polynomial coefficients $C$. In this paper we focus on the estimation of the adjacency matrix $A$ and the choice of the LASSO parameter, while fixing the regularisation parameters of Equation \ref{eq:Cij} to $\lambda_1^c=0.05$ and $\lambda_2^c=10^3$, for which the algorithm appears to be reasonably robust, as is evident from the results.
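A sketch of this coefficient update in the same illustrative layout, with the coefficients stored as a dictionary of dictionaries indexed by lag and matrix power (a hypothetical convention; the defaults are the fixed values used in this paper):
\begin{verbatim}
import numpy as np

def update_Cij(X, A, C, i, j, M, lam1c=0.05, lam2c=1e3):
    """CCD update for polynomial coefficient C[i][j]."""
    N, K = X.shape
    Aj = np.linalg.matrix_power(A, j)
    num, den = 0.0, 0.0
    for k in range(M, K):
        y = X[:, k] - A @ X[:, k - 1]          # lag-1 coefficient is fixed to A
        w = sum(C[ip][jp] * np.linalg.matrix_power(A, jp) @ X[:, k - ip]
                for ip in range(2, M + 1) for jp in range(ip + 1)
                if (ip, jp) != (i, j))
        v = Aj @ X[:, k - i]
        num += v @ (y - w)
        den += v @ v
    T = K - M
    s = np.sign(num) * max(abs(num) - T * lam1c, 0.0)   # soft-threshold
    return s / (den + 2.0 * T * lam2c)
\end{verbatim}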
\section{Selecting the $L_1$ coefficient}
\label{sec:sel_L1}
We now introduce two new non-parametric metrics to efficiently and automatically select the LASSO coefficient, $\lambda_1$, which directly influences the sparsity of the adjacency matrix $A$. A classic approach for selecting this involves computing the cross-validation error. When applied to time series this corresponds to computing a prediction error over an out-of-sample time window, see for example \citet{Wu:2008jd, Mei:2015ig, Mei:2017db}, in which the authors use the estimated CGP variables $\hat{A}$ and $\hat{C}$ to make a prediction and use the MSE to assess its quality. This technique implies a direct relationship between the sparsity level of the adjacency matrix and the MSE of the prediction, which is questionable in practice. Figure \ref{fig:compMetrics} plots the evolution of the prediction MSE over increasing values of $\lambda_1$, alongside the percentage difference in the number of edges between the estimated adjacency matrix and the real one, i.e. the difference in number of edges divided by the total possible number of edges, computed for a simulated CGP following a Stochastic Block Model (SBM) graph.
It is clear that current techniques do not produce good results and we do not want to follow \citet{Friedman:tb, Wang:2013bc} in fixing different values of $\lambda_1$ producing a sparse and a dense result without knowing which one is correct. Thus, we need a metric that weights the improvement in prediction error against an increased number of edges. In statistical modelling the AIC and BIC criteria aim to avoid over-fitting by including a penalty on the number of input-variables, which works when the number of output-variables to be predicted is not related to the selected set of input-variables. However, in the case of an adjacency matrix the sparsity of each node also impacts the number of nodes predicted; we want sparsity in the adjacency matrix to encourage each node to more accurately predict a small subset of other nodes. Following this idea further we derive two new error metrics.
\subsection{Two new distance measures for selecting the $L_1$ coefficient}
We want a distance metric that has no parameters and a maximum around the exact number of edges of the adjacency matrix. An appropriately sparse adjacency matrix has few edges but enough to accurately reproduce the whole CGP process. What is important is not the prediction quality obtained by the complete adjacency matrix but rather the prediction quality of each node {\it independently}; the adjacency matrix should have few edges connecting nodes with each connection having a low in- and out-of-sample error. For each node we compute the errors of each connection with other nodes. We sum these errors over all edges and compute the average over time divided by the number of edges. From this error, we compute the sum over all nodes to obtain an error metric of the whole graph:
\begin{equation}
\label{eq:err_j}
err = \sum_{j=1}^N \frac{1}{\sum_{i=1}^N \mathbb{I}_{ \{A_j \neq 0 \} }(i) } \frac{1}{K-M}\sum_{k=M}^K \left\| x(k) \mathbb{I}_{\{A_j \neq 0 \}} - A x_j(k-1) \right\|_2^2 \;,
\end{equation}
where $\mathbb{I}_{\{A_j \neq 0\}}$ denotes a vector that is one at the rows where the column $A_j$ of the adjacency matrix $A$ is non-zero, and zero elsewhere. Compared to the in-sample MSE of node $j$, the error metric of Equation \ref{eq:err_j} focuses on the error in the nodes it is connected to. Since we divide by the number of edges, the error should increase as sparsity increases, but it should start decreasing once the gain in the prediction quality of each edge offsets the decrease in their number. Intuitively this peak should correspond to the underlying sparsity level of the graph under study; before the peak, the model has too many parameters with poor individual prediction quality, whereas after the peak, the model has too few edges with very low individual error.
The error metric of Equation \ref{eq:err_j} averages over the number of edges of each node $j$. Another approach would be to work with the degree of the graph instead, by which we mean the sum of the absolute values of the weights of the edges of node $j$. Thus we define the error with degree by:
\begin{equation}
\label{eq:err_jd}
err^d = \sum_{j=1}^N \frac{1}{\sum_{i=1}^N |A_{i,j}| } \frac{1}{K-M}\sum_{k=M}^K \left\| x(k) \mathbb{I}_{\{A_j \neq 0 \}} - A x_j(k-1) \right\|_2^2 \;.
\end{equation}
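A NumPy sketch of the first metric, under one plausible reading of the notation in Equation \ref{eq:err_j} (taking $A x_j(k-1)$ as column $j$ of $A$ scaled by node $j$'s previous value):
\begin{verbatim}
import numpy as np

def edge_error(X, A, M):
    """Per-node error: average one-step error per edge, summed over nodes."""
    N, K = X.shape
    total = 0.0
    for j in range(N):
        mask = A[:, j] != 0
        n_edges = int(mask.sum())
        if n_edges == 0:
            continue                      # an isolated column contributes nothing
        e = 0.0
        for k in range(M, K):
            pred = A[:, j] * X[j, k - 1]  # node j's contribution to the nodes it feeds
            e += np.sum((X[mask, k] - pred[mask]) ** 2)
        total += e / (n_edges * (K - M))
    return total
\end{verbatim}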
We simulate a CGP on a stochastic block model following the methodology and parameters in \citet{Mei:2015ig, Mei:2017db}, which we describe in Section \ref{sec:applications}. On this simulated graph we test our intuition by comparing the evolution of our error metrics as a function of the sparsity parameter $\lambda_1$. Figure \ref{fig:compMetrics} shows the evolution of these metrics as well as the commonly used in- and out-of-sample MSE and the AIC and BIC criteria. In this figure both metrics, $err$ and $err^d$, perform as expected while the others do not show any relationship to the number of edges in the adjacency matrix except for the BIC criterion. While this graph represents only one sample of a simulated scenario, this behaviour is consistent throughout the different simulations in Section \ref{sec:applications}. Interestingly, our two metrics complement each other; on average, for dense graphs $err$ often produces better results while for sparser ones $err^d$ is often better. In practice we can use the following pragmatic approach for robust results: if both metrics have a peak we take the mean value of the two resulting $\lambda_1$; if only one has a peak we take its value for $\lambda_1$. Table \ref{tab:perf_compM} in the Appendix compares the performance of these different metrics for different SBMs. It is interesting to observe that the performance of the two error metrics we propose is on par with the BIC criterion and actually complements it. Indeed, we observe the best results when averaging the selected $\lambda_1$ of $err$, $err^d$ and $BIC$. Since the BIC criterion is widely known, for the rest of this paper we will study the performance of the two new error distances $err$ and $err^d$ that we propose, knowing that the resulting adjacency matrix would be better if the BIC criterion were included.
\begin{figure}[h]
\vspace{.3in}
\centerline{\includegraphics[width = \figurewidth, height = \figureheight]{CompMetricsAll_N200Nc5M3K5_v2}}
\vspace{.3in}
\caption{\label{fig:compMetrics} Comparison of the evolution of different metrics as a function of the value of the LASSO coefficient $\lambda_1$ for a simulated CGP-SBM graph with $200$ nodes, $5$ clusters, $3$ lags and $1040$ time points. The left axis indicates the number of different edges between the estimate and the true adjacency matrix. The blue line shows the evolution of that difference as a function of the coefficient $\lambda_1$, while the red line highlights the zero mark. The different error metrics are rescaled to be between $0$ and $1$ on the right y-axis. We compare the two metrics proposed in this paper, $err$ and $err^d$, with the in- and out-of-sample error, $MSE_{in}$ and $MSE_{out}$, as well as the AIC and BIC criteria, $AIC$ and $BIC$.}
\end{figure}
\section{Applications}
\label{sec:applications}
We are interested in two performance metrics: the accuracy of the LASSO coefficient selection, assessed by measuring the difference in number of edges between the real and estimated adjacency matrix; and the quality of the CCD algorithm, assessed by measuring the percentage of true positives and false positives. For the computations we fix two parameters of our algorithm: the maximum number of iterations $maxIt=50$ and the convergence limit $\epsilon=0.1$. For the initialisation, we observe that the algorithm performs better when starting from zeros, i.e. $\hat{R}=[0]$, than from random matrices. When starting with matrices of zeros, due to the block coordinate structure of the algorithm, the first step corresponds to solving the mean squared error problem of a CGP with only one lag; then the second step considers a CGP with two lags, whereby we learn on the errors left by the first lag, and so on. Thus, each step of the descent complements the previous lags by iteratively refining previous predictions.
We use the same causal graph stochastic block model (CGP-SBM) structure to assess the performance of our algorithm as \citet{Mei:2017db}, and hence our results are directly comparable. However, they focus on minimising the MSE and choose the LASSO coefficient that produces the best results. Although our approach results in a higher MSE, we have shown in Section \ref{sec:sel_L1} that the MSE is not linked to the sparsity of the graph. Thus, our algorithm obtains a more accurate estimate of the adjacency matrix and its approximate sparsity.
The SBM consists of a graph with a set of clusters where the probability of a connection is higher within a cluster than between them. This structure is interesting because it appears in a wide variety of real applications. The parameters to simulate a CGP-SBM are the number of nodes, $N$, the number of clusters, $Nc$, the number of lags, $M$, and the number of time points, $K$. For each simulation we use a burn-in of $500$ points. For a visual assessment of our proposed algorithm's performance we show in Figure \ref{fig:compA} the absolute value of the adjacency matrix used to simulate the CGP and the estimate we obtain with our algorithm. We note that it is hard to visually detect the discrepancies. Hence, Figure \ref{fig:compAdiff} in the appendix shows the matrix of differences between the real and estimated adjacency matrices, and Figure \ref{fig:compAdiff_1} shows the non-zero elements of each matrix and of the difference matrix, with a black square at each non-zero edge.
\begin{figure}[h]
\vspace{.3in}
\centerline{\includegraphics[width = \figurewidth, height = \figureheight]{CompA_N100Nc5M3K6}}
\vspace{.3in}
\caption{\label{fig:compA} Estimated adjacency matrix $\hat{A}$, on the right; the true one $A$, on the left. Absolute values of the weights are shown for better visualisation, with blacker points representing bigger weights. The estimation was performed on a CGP-SBM graph with $N=100$, $Nc=5$, $M=3$ and $K=1560$.}
\end{figure}
We also wish to quantitatively assess the accuracy of the estimated adjacency matrix. Hence, we measure the quality of the results by considering different metrics: the difference in the number of edges between $\hat{A}$ and $A$, as an absolute value and as a percentage of the total number of possible edges, $N^2$; the percentage of true positives, i.e. the number of edges in $\hat{A}$ that are also edges in $A$ over the total number of edges in $A$; the percentage of false positives, i.e. the number of edges in $\hat{A}$ that are not in $A$ over the total number of edges in $\hat{A}$; the mean squared error $MSE=\|\hat{A} - A\|_2^2 / N^2$. The two metrics measuring the difference between the real and the selected number of edges assess the performance of the selection of the sparsity coefficient $\lambda_1$. The true and false positive rates assess the performance of the CGP-CCD Algorithm \ref{alg:ccd_Ri} in computing the adjacency matrix.
\begin{table}[h]
\begin{center}
\caption{\label{tab:perf} Differences between the adjacency matrix $A$ and its estimate $\hat{A}$ for different CGP-SBM environments. \textit{NBDE}: absolute difference in the number of edges between $\hat{A}$ and $A$, and as a percentage of the total number of possible edges $N^2$ with \textit{NBDE (\%)}. \textit{True positive}: number of edges in $\hat{A}$ that are edges in $A$ over the total number of edges in $A$. \textit{False positive}: number of edges in $\hat{A}$ that are not in $A$ over the total number of edges in $\hat{A}$. \textit{Mean squared error}: $MSE=\|\hat{A} - A\|_2^2 / N^2$.}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ c | c | c | c | c | c | c | c | c}
N & Nc & M & K & MSE & NBDE & NBDE (\%) & True positive & False positive \\
\hline
$100$ & $5$ & $3$ & $1040$ & $2.0\times 10^{-4}$ & $40.5$ & $0.41 \%$ $(0.28)$ & $72.4 \%$ $(7.6)$ & $20.8\%$ $(11.4)$ \\
$200$ & $5$ & $3$ & $1040$ & $2.5\times 10^{-4}$ & $ 115.0$ & $0.29 \%$ $(0.29)$ & $65.9 \%$ $(4.1)$ & $ 25.4\%$ $(9.3)$ \\
$200$ & $10$ & $3$ & $1040$ & $1.6\times 10^{-4}$ & $198.5$ & $0.50\%$ $(0.46)$ & $67.0 \%$ $(8.6)$ & $26.1 \%$ $(18.9)$ \\
$200$ & $5$ & $5$ & $1040$ & $2.3\times 10^{-4}$ & $208.0$ & $0.52\%$ $(0.27)$ & $63.9 \%$ $(4.7)$ & $26.2 \%$ $(13.7)$ \\
$200$ & $5$ & $3$ & $2080$ & $1.2\times 10^{-4}$ & $135.5$ & $0.34\%$ $(0.27)$ & $73.7 \%$ $(6.3)$ & $21.1 \%$ $(10.0)$\\
$500$ & $5$ & $3$ & $2080$ & $1.7\times 10^{-4}$ & $1722.5$ & $0.69 \%$ $(0.26)$ & $ 61.3\%$ $(4.6)$ & $17.5 \%$ $(4.9)$\\
$1000$ & $10$ & $3$ & $2080$ & $9.3\times 10^{-5}$ & $4835.5$ & $0.48\%$ $(0.26)$ & $56.8 \%$ $(6.7)$ & $17.0\%$ $(9.7)$ \\
$1000$ & $10$ & $3$ & $4160$ & $6.6\times 10^{-5}$ & $3989.5$ & $0.40\%$ $(0.27)$ & $66.7 \%$ $(8.5)$ & $15.1\%$ $(9.7)$\\
$5000$ & $50$ & $3$ & $5000$ & $1.3\times 10^{-5}$ & $87709.5$ & $0.35\%$ $(0.05)$ & $64.3 \%$ $(3.5)$ & $14.0\%$ $(8.6)$\\
\hline
\multicolumn{4}{c |}{Median} & $1.7\times 10^{-4}$ & $208.0$ & $0.41\%$ $(0.12)$ & $65.9 \%$ $(5.2)$ & $20.8\%$ $(4.7)$ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\end{table}
For assessing the consistency of the performance we simulate different environments and compute the median of the measures obtained over $10$ samples. In the simulated environments, the sparsity level, i.e. the number of non-zero elements in the adjacency matrix, varies between $1.4\%$ and $3.0\%$ with an average at $2.1\%$. Table \ref{tab:perf} shows that the results are consistent for different graph sizes, sparsity levels, lags and numbers of time points, with a median over all the environments of a percentage difference in number of edges of $0.41\%$, a true positive rate of $65.9\%$ and a false positive rate of $20.8\%$. Interestingly, even when the number of time points was too small to obtain accurate results, i.e. a high true positive rate, the percentage of differing edges stays small, below $0.5\%$. The results in Table \ref{tab:perf} assume we know the number of time lags of the underlying CGP we are looking for. Since this assumption is unlikely to hold on real datasets, we tested the robustness of the performance by using incorrect input parameters. We therefore simulated a graph with $M=5$ lags and ran the learning algorithm for $M=3$ lags and vice versa; in both scenarios the average results were approximately unchanged. Section \ref{sec:realData} further studies the performance of the algorithm on real datasets with unknown parameters.
\subsection{ Computation time complexity}
All the computations employed Python $2.7$ using the Numpy and Scipy-sparse libraries, which can use up to $4$ threads. The sparse gradient projection (SGP) method used by \citet{Mei:2015ig, Mei:2017db} is especially efficient for highly sparse environments, where it can leverage sparse matrix-vector computation. In our environment, when we have a graph with more than $1\%$ of non-zero weights, a matrix-matrix or matrix-vector multiplication using the sparse functions of Scipy-sparse is slower than using the dense functions of Numpy. Thus, for graphs with an adjacency matrix with more than $1\%$ of non-zero edges we use the dense library. For example, on a CGP-SBM graph with parameters $(N, Nc, M, K) = (200, 5, 3, 2080)$ with $2.4\%$ of non-zero edges, our CCD algorithm solves the optimisation of Equation \ref{eq:lagrangian_R1} to obtain the matrix $R_1$ faster than the SGP algorithm by more than $100$-fold. When the graph size increases to $N=500$, CCD is faster by more than $350$-fold.
We perform an empirical time complexity estimation of the complete block coordinate descent algorithm by measuring the evolution of the execution time as a function of each parameter $(N, Nc, M, K)$ individually. Increasing the number of time lags $M$ has a negligible effect on the execution time of the CCD to obtain the adjacency matrix, although the computation of the polynomial coefficients $C$ scales quadratically with the lags $M$. We observe that the execution time is not affected by the sparsity level $Nc$. However, it scales linearly as a function of the number of time points $K$ and quadratically as a function of the number of nodes $N$. This quadratic complexity in $N$ can, however, be mitigated in different ways. In the case of highly sparse graphs we can leverage sparse libraries, and an even faster solution for both dense and sparse graphs is to perform the computations on a GPU. Indeed, Algorithm \ref{alg:ccd_Ri} does not use much memory and can thus be computed entirely in GPU memory. With a GPU implementation using the library PyTorch, the algorithm has a $20$-fold speed-up compared to our CPU implementation with matrix computations parallelised over $4$ threads.
\subsection{An application to financial time series}
\label{sec:realData}
We now apply our algorithm to a real dataset of stock prices consisting of the $371$ stocks from the S\&P$500$ that have quotes between $2000/01/03$ and $2018/03/27$. Since we do not know the exact adjacency matrix of this environment, we test the accuracy of the obtained graph by studying how it changed following a known market shock. More specifically we compute two graphs, one before and one after the financial crisis of $2008/2009$. For both graphs to use the same number of time points, the first uses prices from $2004/11/15$ to $2009/01/01$ and the second from $2009/11/12$ to $2014/01/01$. We chose to build the graphs with a 4-year time window, $K=1040$, since in simulations with the same number of nodes it produces good results. The lag was fixed to one week, $M=5$, since there are documented trading patterns at a weekly frequency. We note that in the time windows studied, modifying the lags to $M=3$ or $M=10$ has a negligible impact on the results.
For both time windows the error metrics peaked at slightly different values, thus we took the mean of the two as the estimate of the LASSO coefficient. Interestingly, the algorithm selects a much sparser matrix after the crisis, with the sparsity level decreasing from $5.1\%$ to $2.8\%$. This points to a more inter-connected market leading up to the crisis. Since the crisis was due to sub-prime issues, one might expect real-estate and financial firms to have many edges in the graph and influence the market in the pre-crisis period, with the importance of these firms decreasing after the crash. Indeed, this aspect is reflected in the estimated adjacency matrix; before the crisis, financial firms represent more than $60\%$ of the top ten nodes with the highest number of connections, while this decreases to less than $40\%$ afterwards, including insurance firms. Furthermore, before the crisis the firm with the highest number of connections was GGP Inc., a real estate firm which went on to file for bankruptcy in $2009$. While financial and oil firms represented more than $70\%$ of the top $20$ most connected nodes before the crisis, the graph of $2014$ is much more diversified, with more sectors in the top $20$ and none representing more than $30\%$. Figure \ref{fig:US_A} in the appendix shows the evolution of the adjacency matrix before and after the crisis, and we can see the shift in importance and the increase in sparsity. Overall, the post-crisis market is sparser and less concentrated than before $2009$, with fewer edges linked to financial firms.
Since the CGP-CCD algorithm automatically selects the sparsity level, in addition to studying the individual connections we can also study the evolution of the sparsity level over time. Figure \ref{fig:US_RV} shows the evolution through time of the sparsity level of the adjacency matrix and of the log realised variance, $\log(RV)$, of the market. We computed the adjacency matrix every $6$ months and the corresponding $\log(RV)$ at the last date of that time window. We observe an interesting correlation between the increase in density of the causal graph and the increase in the realised variance.
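For concreteness, the two quantities plotted in Figure \ref{fig:US_RV} can be computed as in the following sketch; the function names are ours and purely illustrative.
\begin{verbatim}
# Sketch (illustrative): sparsity level of an estimated adjacency
# matrix and log realised variance of a return series.
import numpy as np

def sparsity_level(A, tol=1e-10):
    # fraction of non-zero off-diagonal entries
    N = A.shape[0]
    mask = ~np.eye(N, dtype=bool)
    return np.count_nonzero(np.abs(A[mask]) > tol) / mask.sum()

def log_realised_variance(returns):
    # returns: 1-d array of (log) returns over the window
    return np.log(np.sum(returns ** 2))
\end{verbatim}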
\section{Conclusion}
We have proposed a novel cyclical coordinate descent algorithm to efficiently infer the directed adjacency matrix and polynomial coefficients of a causal graph process. Compared to the previous state-of-the-art our solution has lower complexity and does not depend on the sparsity level of the graph for scalability. Furthermore, we propose two new error metrics to automatically select the coefficient of the LASSO constraint. Our solution is able to recover approximately the correct number of edges in the directed adjacency matrix of the CGP. The performance of our algorithm is consistent across the different simulated stochastic block model graphs we tested. In addition, we provided an example application to a real-world dataset consisting of stocks from the S\&P$500$, demonstrating results that are in line with economic theory.
\section{Introduction}
Gravitational waves from the inspiral and merger of binary neutron stars encode the extreme physics of these objects. From the inspiral phase one may extract the faint imprint of the tidal interaction while the violent merger involves the oscillations of the hot remnant that may eventually collapse to form a black hole. The first aspect has already been demonstrated in the case of the spectacular GW170817 signal \citep{TheLIGOScientific:2017qsa}. The contribution from the subsequent merger, which leaves a higher frequency signature, was however buried in the instrument noise in this instance \citep{Abbott:2017dke,Abbott:2018wiz}. Intuitively, there is no reason why these distinct phases of the gravitational-wave signal should be correlated. Yet, numerical simulations suggest we should expect the unexpected.
A typical neutron star merger leads to the formation of a massive, $3-4 M_{\odot}$, remnant (often referred to as a Hyper Massive Neutron Star, HMNS), which collapses to a black hole (on a timescale of 100~ms) after shedding the angular momentum that (temporarily) counteracts gravity \citep{Takami:2014tva}. Numerical simulations demonstrate that the dynamics of the HMNS have robust high-frequency features \citep{Bauswein:2015vxa,Rezzolla:2016nxn}, thought to be associated with the modes of oscillation of the remnant. This would be natural, although the problem is less straightforward than the corresponding one for a cold neutron star (see for example \citet{Cunha:2007wx} or \citet{Chirenti:2012wn}). In that case one would be considering perturbations with respect to a long-term stable background. Meanwhile, in the case of a HMNS one would have to explore perturbations relative to a background that evolves (and eventually collapses) on a relatively short timescale. The intuitive picture makes sense, but we do not (at least not yet) have precise mode-calculations to test simulations (and eventually observations) against.
Numerical simulations have demonstrated the existence of useful phenomenological relations linking the post-merger oscillation frequencies to the matter equation of state \citep{Bauswein:2012ya,Rezzolla:2016nxn}. This is important as it means that observations could eventually help us get a handle on problematic physics associated with hot high-density matter. This information would complement information gleaned from the inspiral phase (e.g. in terms of the tidal deformability, see \citet{Flanagan:2007ix} and \citet{Hinderer:2007mb}), which relates to cold supranuclear matter. However, it is not clear to what extent the information from inspiral and merger is (at the end of the day) independent. Formally, the underlying equation of state should (obviously) be the same (involving identical many-body interactions etcetera) but one might expect thermal and rotational effects to have a decisive impact on the HMNS. Given this, it is interesting to note that
the tidal deformability (usually encoded in a mass-weighted combination of the so-called Love numbers of the individual stars, $\kappa_2^t$ in the following \citep{Flanagan:2007ix}) appears to be linked to the dominant oscillation frequency ($f_2$) of the post-merger remnant, see in particular \citet{Bernuzzi:2015rla}. The relation appears to be robust, perhaps hinting at some underlying universality, and may provide a useful constraint on the inferred physics. Of course, before we make use of this information in either a data analysis algorithm or a parameter extraction effort we need to understand why the relation should hold. At first sight it seems peculiar. Why should the properties of the (cold, slowly spinning) inspiralling neutron stars be related to the oscillations of the (hot, differentially rotating) remnant? This is the question we (try to) address in the following.
\section{The implied universality}
In the last few years it has become clear that many neutron star properties are related through universal relations. In fact, since the late 1990s, this observation has provided a foundation for discussions of neutron star asteroseismology \citep{Andersson:1997rn}, which aims to use observed oscillation frequencies to infer mass and radius (and hence constrain the equation of state) for individual stars. More recently, the so-called I-Love-Q relations \citep{Yagi:2013bca} demonstrate a link between the moment of inertia, the Love number and the quadrupole moment that helps break degeneracies in gravitational-waveform modelling. Finally, one may link the f-mode frequency of a given star to the tidal deformability \citep{Chan:2014kua}. Since these relations have been demonstrated to be accurate to within a few percent (at least as long as the equation of state does not involve sharp phase transitions, see \citet{han}) it makes sense to take them as our starting point.
Labelling the binary partners $a$ and $b$, with the mass ratio $q = {M_b}/{M_a} \leq 1$, the effective tidal parameter used by, for example, \citet{Bernuzzi:2015rla} is given by
\begin{equation}
\kappa_2^t = 2\left[ q \left(\frac{X_a}{C_a}\right)^5 k_2^a+
\frac{1}{q} \left(\frac{X_b}{C_b}\right)^5k_2^b \right]
\end{equation}
where $C_a=M_a/R_a$ is the compactness of each star, $X_a= {M_a}/{(M_a+M_b)}$ while $k_2^a$ is the quadrupole Love number (and similarly for star $b$).
For simplicity, let us consider a binary system with two non-spinning equal-mass partners. Then we have
\begin{equation}
\kappa_2^t = \frac{1}{8} \frac{k_2}{C^5} = {3\over 16} \Lambda
\end{equation}
where $\Lambda$ is commonly used to quantify the tidal deformability.
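As a sanity check, the reduction of $\kappa_2^t$ to $(3/16)\Lambda$ in the equal-mass case is easily verified numerically; the sketch below uses geometric units, the standard definition $\Lambda = (2/3)k_2 C^{-5}$, and illustrative stellar parameters.
\begin{verbatim}
def kappa2t(Ma, Ra, k2a, Mb, Rb, k2b):
    # effective tidal parameter; geometric units, Mb <= Ma
    q = Mb / Ma
    Xa, Xb = Ma / (Ma + Mb), Mb / (Ma + Mb)
    Ca, Cb = Ma / Ra, Mb / Rb
    return 2.0 * (q * (Xa / Ca) ** 5 * k2a
                  + (1.0 / q) * (Xb / Cb) ** 5 * k2b)

M, R, k2 = 1.4, 6.0, 0.1      # geometric units, R/M ~ 4.3
Lam = (2.0 / 3.0) * k2 * (R / M) ** 5
assert abs(kappa2t(M, R, k2, M, R, k2) - 3.0 / 16.0 * Lam) < 1e-9
\end{verbatim}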
If, in addition, we note that $k_2$ is only weakly dependent on the compactness, we expect the scaling
\begin{equation}
\kappa_2^t \sim C^{-5}
\end{equation}
Meanwhile, we know that (for non-spinning stars) the fundamental mode frequency scales (roughly) as the average density (see for example \citet{Andersson:1997rn}). That is, we have
\begin{equation}
Mf_2 \sim M\bar{\rho}^{1/2} \sim C^{3/2}
\end{equation}
In essence, for a cold neutron star we expect to have
\begin{equation} \label{scaling}
Mf_2 \sim C^{3/2} \sim \left(\kappa_2^t\right)^{-3/10}
\end{equation}
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{chan_pl.pdf}}
\caption{Power-law fit to the $Mf_2-\kappa_2^t$ relation for cold neutron stars from \citet{Chan:2014kua}, demonstrating that we can reliably base the discussion on the simpler scaling from \eqref{f-mode_premerg}.}
\label{gchan}
\end{figure}
This is, of course, only a rough indication. A more precise relation between the f-mode frequency and the tidal deformability has been obtained by \citet{Chan:2014kua}, linking the dimensionless frequency (in geometric units) $\omega M$ to an expansion in $\ln \Lambda$. This relation is a little bit too complicated for our purposes, but it is easy to demonstrate that it can be replaced by the power-law
\begin{equation}\label{f-mode_premerg}
Mf_2 \approx 0.031 \left(\kappa_2^t\right)^{-0.218} \ .
\end{equation}
Moreover, as is evident from figure~\ref{gchan}, this is close to the $-3/10$ power law suggested by our simple argument. Basically, the origin of the scaling for cold neutron stars is well understood.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{Mf2rvkappa.pdf}}
\caption{The top panel shows the suggested $-3/10$ power-law fit to the post-merger f-modes inferred from simulations for a set of equations of state (solid line, data provided by S. Bernuzzi). Also indicated (as a dashed line) is the fit to the f-mode of the individual (cold) pre-merger neutron stars from \citet{Chan:2014kua}. The bottom panel illustrates the fractional increase in the f-mode frequency required to explain the scaling of the post-merger remnant oscillations (assuming that no mass is lost during the merger). This factor is seen to lie in the range $3-4$. }
\label{f2vk}
\end{figure}
This is not the scaling relation we are interested in, but it is easy to show that the oscillations of the remnant are well represented by a power law, as well (see \citet{Takami:2014tva} and \citet{Rezzolla:2016nxn}). Using data from a range of simulations (by different groups) we infer the scaling (see figure~\ref{f2vk})
\begin{equation}\label{f-mode_postmerg}
M_tf_2^h \approx 0.144 (\kappa_2^t)^{-0.278},
\end{equation}
where $M_t$ is the total mass of the system (assuming that we can neglect mass shed during the merger) and $f_2^h$ represents the hot post-merger f-mode. Notably, the post-merger f-modes follow almost exactly the same scaling law as the cold f-mode of the individual pre-merger stars. This is the behaviour we are trying to explain.
The problem breaks down into two questions. First of all, we need to understand why ``the same'' scaling with the tidal parameter should apply for the oscillations of cold neutron stars and hot, more massive and differentially rotating, merger remnants. Secondly, we note that the scalings \eqref{f-mode_premerg} and \eqref{f-mode_postmerg} imply that
\begin{equation}\label{beta}
\left(M_t f_2^h\right) \approx \beta \left(Mf_2\right)
\end{equation}
with $\beta \approx 3-4$ (see figure~\ref{f2vk}). We need (at least at the qualitative level) to understand how the merger physics impacts on this numerical factor.
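The value of $\beta$ follows directly from the ratio of the two fits. As the short computation below illustrates (the range of $\kappa_2^t$ is representative of the simulation data), the ratio varies only weakly with $\kappa_2^t$:
\begin{verbatim}
import numpy as np

kappa = np.linspace(50, 400, 8)        # representative range
beta = (0.144 * kappa ** -0.278) / (0.031 * kappa ** -0.218)
print(beta.round(2))                   # ~ 3.2 - 3.7, i.e. beta ~ 3-4
\end{verbatim}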
\section{Thermal Effects}
Just after the merger, the HMNS can reach temperatures as high as 85~MeV \citep{Hanauske:2016gia,Hanauske:2019qgs}, making the remnant hotter than the collapsing core of a supernova. Given this, one would expect thermal effects to come into play and we need to explore the implications for the f-mode oscillations. However, the problem is complicated by the fact that the system is evolving -- we are not dealing with a ``thermalized'' equilibrium background with respect to which we can define a mode perturbation. Still, we can make progress with a simple argument. As we have already pointed out, the f-modes are known to be determined by the average density of the body in question. For the HMNS, this is tricky, as $\bar{\rho}$ (or equivalently, as the mass is known, the compactness $C$) of the collapsing matter is explicitly dependent on time. However, as the suggested collapse timescale ($\thicksim 0.1$ s) is about two orders of magnitude larger than the f-mode oscillation timescale (typically $\thicksim 1$ ms) we can expect the HMNS density to be roughly constant on the timescale of a few oscillations. This is all we need to make progress, as it allows us to simplify the discussion by considering a single ``neutron star''. However, it is not quite enough to make the argument quantitative. After all, stable solutions can never reach masses of $3-4M_{\odot}$; such masses likely require both the support of thermal pressure and differential rotation (see later). Nevertheless, we may be able to gain some insight into the thermal effects from a simple ``surrogate'' model. Basically, we should be able to estimate the effect on the f-mode as the star becomes bloated due to thermal pressure, causing a relative change in the average density and the compactness. This effect has, in fact, already been studied for proto-neutron stars \citep{Ferrari:2002ut}. The main difference here is that we consider a model inspired by the actual temperature distribution inside a HMNS.
The thermal profile of a HMNS is not uniform. In essence, the hottest regions arise from heating due to shocks as the stars come into contact. The remnant is not able to reach thermal equilibrium on the timescale we are interested in. Instead, the heat is advected along with the matter, leading to a nonuniform temperature distribution, see for example the results of \citet{Hanauske:2016gia}. In general, the thermal distribution has no obvious symmetry. However, for simplicity we will assume that it has a simple radial profile. Motivated by Figure 6 of \citet{Hanauske:2016gia} we assume the temperature profile to be such that (a minimal numerical encoding of this profile is sketched after the list):
\begin{itemize}
\item[-] thermal effects can be ignored up to about a 5~km distance from centre, simply because the pressure of the cold high-density matter dominates in this region,
\item[-] the temperature reaches 50~MeV in the region between 5 and 10~km,
\item[-] the temperature drops to about 10~MeV in the region between 10 and 15~km,
\item[-] we (again) ignore thermal effects beyond a radius of 15~km as the temperature (indicated by simulations) is close to zero (and this matter is likely to be gravitationally unbound, anyway).
\end{itemize}
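For completeness, a minimal encoding of this assumed profile (radii in km, temperatures in MeV; purely illustrative) reads:
\begin{verbatim}
def temperature(r_km):
    # assumed radial temperature profile of the thermal surrogate
    if r_km < 5.0:
        return 0.0    # cold high-density core dominates
    elif r_km < 10.0:
        return 50.0   # shock-heated region
    elif r_km < 15.0:
        return 10.0
    return 0.0        # outer, likely unbound, matter
\end{verbatim}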
With this model as our starting point, we can easily calculate the relative compactness of hot and cold models. This, in turn, provides some insight into the thermal impact on the post-merger oscillations. In effect, the post-merger (hot) compactness can be related to the pre-merger (cold) compactness which then relates to the tidal deformability in terms of $\kappa_2^t$.
In order to estimate the magnitude of this effect, which obviously leads to different fluid configurations, we need some idea of how the compactness of the merger remnant differs from that of the original cold neutron star. This is not at all trivial, but we may try to encode the change in compactness in terms of a nearly constant factor, $F(\theta,\phi)$. Then we argue that we ought to have $F\leq 2$, following from taking the (equatorial) radius at the point when the stars first touch. Meanwhile, numerical simulations show that we should have $F \geq 1$. For simplicity, we ignore the $\theta,\phi$ dependence which relates to the deviation from spherical symmetry. If we now imagine a hypothetical postmerger compactness, $C_{hyp}$, we have
\begin{equation}\label{chyp1}
C_{hyp} = F(\theta,\phi) C
\end{equation}
However, we still need to quantify the thermal effects.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{hotvcold.pdf}}
\caption{Illustrating the relation between the ``hot'' compactness $C^h$ (for our simple thermal surrogate model) and the cold compactness $C$ for four different equations of state (EoS).}
\label{hotvcold}
\end{figure}
Results from such an exercise are shown in figure~\ref{hotvcold}, which shows the relation between the ``hot'' compactness and the corresponding cold model, based on the thermal surrogate and four different EoSs. It is notable that the relationship is close to linear for a range of parameter values. This is true for all models we have considered. Since the thermal surrogate ought to capture the relative behaviour, we therefore expect the postmerger compactness $C^h$ to scale linearly with $C_{hyp}$. That is, we have
\begin{equation}\label{chyp2}
C^h = \alpha C_{hyp}, \quad \alpha <1
\end{equation}
Combining \eqref{chyp1} and \eqref{chyp2}, we finally obtain
\begin{equation}\label{comp_eq}
C^h = (\alpha F) C ,
\end{equation}
where the pre-factors are expected to be roughly constant. This is an important conclusion. It is now apparent why the thermal pressure would not affect the power-law from \eqref{scaling}.
Moreover, we can estimate the quantitative effect on the f-mode frequency. The results in figure~\ref{hotvcold} suggest that $\alpha\approx 0.6-0.9$, leading to the estimated range $F\approx 1.5-1.7$. In essence, we expect the post-merger compactness to be $1 -1.5$ times the pre-merger value. The post-merger frequencies would then increase by a factor of $(1-1.5)^{3/2} \approx (1 - 2)$ because of the thermal effects, which takes us some way towards explaining the missing factor indicated by the results in figure~\ref{f2vk}. However, we still seem to be short a factor of 2 or so.
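The quoted range follows directly from combining the estimates for $\alpha$ and $F$, as the following short computation illustrates:
\begin{verbatim}
for alpha in (0.6, 0.9):
    for F in (1.5, 1.7):
        print(alpha, F, round((alpha * F) ** 1.5, 2))
# the frequency factor (alpha*F)^(3/2) spans roughly 0.85 - 1.9
\end{verbatim}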
Before we move on to consider whether rotation may provide the missing factor, let us consider a related question.
In a series of papers, see for example \citet{Bauswein:2011tp} and \citet{Bauswein:2015vxa}, it has been argued that the f-mode frequency of the HMNS scales with the radius of an isolated neutron star with mass $1.6M_{\odot}$. This scaling is interesting because it is not at all obvious why the f-mode of the hot, differentially rotating, star should depend on the radius of an isolated neutron star with a particular mass. Further, the scaling seems to be insensitive to the mass of the HMNS.
Motivated by this, we break down the argument for the observed scaling of $M_tf_2^h$ with the isolated neutron star mass into three steps. First of all, for a given remnant there must exist a variation of the remnant radius across different EoS which is unaffected by the mass of the remnant. If this is the case, we can calculate the mass of an isolated NS that reproduces this variation in radius.
Finally, since $M_tf_2^h \thicksim \left(C^h\right)^{3/2} \thicksim \left(R^h\right)^{-3/2}$ (at fixed mass), the f-mode should scale with the radius of the isolated neutron star for which the radius varies with EoS in the same way as $R^h$.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{hotradius_scaling.pdf}}
\caption{Variation of the ``hot'' neutron star radius for different EoS (for the thermal surrogate model) and three different masses. The fits show power laws that best capture the variation, having first of all ordered the equations of state in order of increasing radius (for stars of mass $1.5M_\odot$) and then compared the scaling (in terms of a fiducial equation of state parameter $x$) for different masses. It is notable that these power laws are similar ($\sim 1.7$) for masses in the range $1.5-1.7M_\odot$. }
\label{hotradius}
\end{figure}
Once again we turn to the thermal surrogate model for an answer. Figure~\ref{hotradius} shows the variation of the radius of the surrogate models for three different masses and different EoS. As before, we emphasize that the surrogate model will not return the ``actual'' value of the radius, but we expect that the variation with the EoS will be meaningfully captured. It is then clear from figure~\ref{hotradius} that the variation of radii across EoS closely follows a 1.7 power law. This completes the first item on our list and moves us on to the issue of determining a mass value for an isolated neutron star showing the same variation. The solution to this problem is provided in figure~\ref{coldradius}. The results suggest that the f-modes should scale with the radius of a $1.45 M_\odot$ neutron star, not too different from the $1.6 M_\odot$ scaling of \citet{Bauswein:2015vxa} (especially if we consider that we are using a fairly crude argument).
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{coldradius_scaling.pdf}}
\caption{Variation of radius with EoS for $1.45 M_{\odot}$ isolated neutron star models. The indicated power-law of $\approx 1.7$ is that required by the results in figure~\ref{hotradius}.}
\label{coldradius}
\end{figure}
\section{Rotational effects}
Let us turn to the role of rotation. As in the thermal case, it is easy to argue that the rapid rotation of the HMNS will have a decisive impact on the f-mode oscillation frequencies. In fact, in this case we have better quantitative evidence. We may, for example, draw on the results from \citet{Doneva:2013zqa}, which show how the f-mode of an isolated NS changes with the angular frequency $\Omega$. The key point is that there exist robust phenomenological relations drawn from a collection of EoS. These support the notion that the overall scaling with rotation is likely to be insensitive to the EoS, as required to explain the results in figure~\ref{f2vk}. However, we have to be a bit careful as the results assume uniform rotation, while we know that the HMNS will rotate differentially. At the same time, the typical rotation profile inferred from simulations \citep{uryu} has the high-density core rotating close to uniformly. The argument may be strengthened by mode calculations for the appropriate HMNS differential rotation law, but such work has not yet been carried out.
With this caveat in mind, let us piece together the argument from the results from \citet{Doneva:2013zqa}. In order to do this, we need some rough idea of the spin of the HMNS. This is relatively straightforward. Assuming that the individual stars in the binary are slowly rotating (which makes sense if the system is old enough that the stars have had time to spin down due to dipole emission, which is likely), the angular velocity of the HMNS should arise from the total angular momentum of the system at the innermost stable circular orbit. Conservation of angular momentum then allows us to estimate the HMNS rotation rate. This rough estimate agrees fairly well with the results inferred from simulations, which lead to a dimensionless rotation parameter
\begin{equation}
\frac{a}{M} = \frac{J}{M^2} = \tilde{I} \left(\frac{R}{M}\right)^2 (M\Omega) \thicksim (0.75-0.8)
\end{equation}
We can turn this into an estimate for the (here assumed uniform) rotation via the scaling for the dimensionless moment of inertia, $\tilde{I}$, from \citet{Lattimer:2004nj}
\begin{equation}
\tilde{I} = 0.237\left[ 1 + 4.2\frac{M/M_{\odot}}{R/\mathrm{km}} + 90 \left(\frac{M/M_{\odot}}{R/\mathrm{km}}\right)^4 \right]
\end{equation}
Working out the rotation rate from these relations for (say) a remnant mass, $2.6 M_{\odot}$ with 15~km radius, we arrive at $\Omega \approx 9.5\times 10^3\ \mathrm{s}^{-1}$. We need to compare this to the expected break-up frequency, $\Omega_K$, which is approximated as
\begin{equation}
{1\over 2\pi} \Omega_K\ [\mathrm{kHz}] \approx 1.716 \left( {M_0 \over 1.4M_\odot} \right)^{1/2} \left( {R_0\over 10\ \mathrm{km}}\right)^{-3/2} - 0.189
\end{equation}
for the models considered in \citet{Doneva:2013zqa} (with $M_0$ and $R_0$ the mass and radius of the corresponding non-rotating model, respectively). Naively using this estimate for our suggested HMNS parameters, we would have $\Omega_K \approx 2.6\times10^4\ \mathrm{s}^{-1}$. Taking these estimates at face value, the HMNS would rotate at just below half of the Kepler rate.
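For transparency, the spin estimate itself can be reproduced with a few lines (constants rounded; the value $a/M = 0.8$ is taken from the upper end of the quoted range):
\begin{verbatim}
M_SUN_KM = 1.4766          # G M_sun / c^2 in km
C_KM_S = 2.998e5           # speed of light in km/s

M, R = 2.6, 15.0           # remnant mass (M_sun) and radius (km)
x = M / R                  # (M/M_sun)/(R/km), as used in the fit
I_tilde = 0.237 * (1 + 4.2 * x + 90 * x ** 4)

M_km = M * M_SUN_KM        # mass in geometric (km) units
M_Omega = 0.8 / (I_tilde * (R / M_km) ** 2)
Omega = M_Omega / M_km * C_KM_S     # ~ 9.5e3 1/s
\end{verbatim}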
Armed with this (rough) estimate, let us turn to the f-mode frequency. Intuitively one would expect the f-mode that co-rotates with the orbit to be the one that is excited by the merger dynamics simply because this mode most closely resembles the configuration when the two stars come into contact. With the conventions from \citet{Doneva:2013zqa}, we are then considering the stable $l=-m=2$ mode and,
in a frame rotating with the star, we have
\begin{equation}\label{f-mode_comov_stable}
\sigma_r \approx \sigma_0 \left[ 1 - 0.235 \left( {\Omega \over \Omega_K}\right) -0.358 \left( {\Omega \over \Omega_K}\right)^2 \right]
\end{equation}
However, this needs to be translated into the inertial frame (where the gravitational-wave signal is measured). Using $\sigma_i = \sigma_r - m\Omega$ we obtain
\begin{equation}\label{f-mode_inertial_stable}
\sigma_i = \sigma_0 \left[ 1 - 0.235 \left( {\Omega \over \Omega_K}\right)-\frac{m\Omega}{\sigma_0} -0.358 \left( {\Omega\over \Omega_K}\right)^2 \right]
\end{equation}
where $\sigma_0$ is the f-mode frequency of a non-rotating star. This is estimated as
\begin{equation}
{1\over 2\pi} \sigma_0 [\mathrm{kHz}] \approx 1.562 + 1.151 \left( {M_0 \over 1.4M_\odot} \right)^{1/2} \left( {R_0\over 10\ \mathrm{km}}\right)^{-3/2}
\end{equation}
which for our fiducial parameters returns $\sigma_0 \approx \Omega_K$.
Taking $\Omega/\Omega_K\approx 0.5$ we then have
\begin{equation}
\sigma_i \approx 1.8 \sigma_0
\end{equation}
We thus arrive at a back-of-the-envelope idea of how much the f-mode changes due to rotation, compared to the corresponding non-rotating model. Basically, we estimate that rotation would take us another factor of almost 2 towards explaining the results in figure~\ref{f2vk}.
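This estimate is easily reproduced from \eqref{f-mode_inertial_stable}:
\begin{verbatim}
m = -2                     # stable, co-rotating l = -m = 2 mode
x = 0.5                    # Omega / Omega_K
Omega_over_sigma0 = 0.5    # since sigma_0 ~ Omega_K here
ratio = 1 - 0.235 * x - m * Omega_over_sigma0 - 0.358 * x ** 2
print(ratio)               # ~ 1.79, i.e. sigma_i ~ 1.8 sigma_0
\end{verbatim}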
\section{Discussion and Concluding Remarks}
Using simple approximations, we have tried to explain universal behaviour seen in simulations of neutron star mergers. The basic premise was that we wanted to demonstrate the intuitive association of the dominant oscillation seen in merger simulations with the fundamental oscillation mode of the HMNS. In order to argue the case, we compared the inferred f-modes for hot, differentially rotating post-merger HMNSs to robust scalings relevant for cold NSs. We also made use of phenomenological relations for the oscillations of rapidly (and uniformly) rotating neutron stars. Very roughly, our estimates suggest that thermal effects should increase the f-mode frequency by a factor of (up to) 2, compared to the cold neutron star f-mode. Similarly, rotation would introduce another factor of 2. Combining the effects, the post-merger frequencies should lie a factor of 3-4 above the frequency of the individual pre-merger stars. While admittedly simplistic, this estimate allows us to connect the observed frequency of the HMNS to the (much more easily calculated) f-modes of a cold, single neutron star. This would ``explain'' why the features seen in simulations are found to be robust.
The arguments we have provided are obviously at the level of back-of-the-envelope estimates. Nevertheless, the exercise opens the door for future possibilities. First, one should be able to extend the rotational estimates to the realistic case of differential rotation (see, for example, \cite{uryu} for the differential rotation profiles expected for post-merger remnants). This would be an important step as it should quantify the rotational effects. Moreover, given the available computational technology \citep{Doneva:2013zqa}, such results should be within reach. The issue of thermal effects (and potential phase transitions, \citet{han}) is more complex, as aspects related to heat (entropy) tend to be treated in a somewhat ad hoc manner in many numerical simulations. Ultimately, one would like to demonstrate that the observed relation between tidal deformability and post-merger dynamics is real and not an artefact due to (for example) a ``simplified'' treatment of the physics. There is scope for improvements in this direction, although it will require some effort as it involves paying detailed attention to the thermodynamics in nonlinear simulations.
\section*{Acknowledgements}
We would like to, first of all, thank Sebastiano Bernuzzi for providing the numerical simulation data used in figure~\ref{f2vk} and acknowledge the use of data from the CompOSE website for the thermal EoSs.
NA gratefully acknowledges support from STFC via grant ST/R00045X/1. Research of K.C. was supported in part by the International Centre for Theoretical Sciences (ICTS) during a visit for participating in the program Summer School on Gravitational-Wave Astronomy (Code: ICTS/Prog-gws/2017/07).
\bibliographystyle{mn2e}
\section{Introduction}
Phylogenetic trees and networks are leaf-labelled graphs that are used to visualise and study the evolutionary history of taxa like species, genes, or languages. While phylogenetic trees are used to model tree-like evolutionary histories, the more general phylogenetic networks can be used for taxa whose past includes reticulate events like hybridisation or horizontal gene transfer~\cite{SS03,HRS10,Ste16}. Such reticulate events arise in all domains of life~\cite{TN05,RW07,MMM17,WKH17}.
In some cases, it can be useful to distinguish between rooted and unrooted phylogenetic networks. In a rooted phylogenetic network, the edges are directed from a designated root towards the leaves. Hence, it models evolution along the passing of time. An unrooted phylogenetic network, on the other hand, has undirected edges and thus represents evolutionary relatedness of the taxa. In some cases, unrooted phylogenetic networks can be thought of as rooted phylogenetic networks in which the orientation of the edges has been disregarded.
Such unrooted phylogenetic networks are called proper~\cite{JJEvIS17,FHM18}.
Here we focus on unrooted, binary, proper phylogenetic networks, where binary means that all vertices except for the leaves have degree three. The set of phylogenetic networks on the same taxa can be partitioned into tiers that contain all networks of the same size.
A rearrangement operation transforms a phylogenetic tree into another tree by making a small graph theoretical change. An operation that works locally within the tree is the NNI (nearest neighbour interchange) operation, which changes the order of the four edges incident to an edge $e$. See for example the NNI from $T_1$ to $T_2$ in \cref{fig:utrees:rearrangementOps}.
Two further popular rearrangement operations are the SPR (subtree prune and regraft) operation, which as the name suggests prunes (cuts) an edge and then regrafts (attaches) the resulting half edge again, and the TBR (tree bisection and reconnection) operation, which first removes an edge and then adds a new one to reconnect the resulting two smaller trees. See, for example, the SPR from $T_2$ to $T_3$ and the TBR from $T_3$ to $T_4$ in \cref{fig:utrees:rearrangementOps}.
The set of phylogenetic trees on a fixed set of taxa together with a rearrangement operation yields a graph where the vertices are the trees and two trees are adjacent if they can be transformed into each other with the operation. We call this a space of phylogenetic trees. This construction also induces a metric on phylogenetic trees as the distance of two trees is then given as the distance in this space, that is, the minimum number of applications of the operation that are necessary to transform one tree into the other~\cite{SOW96}. However, computing the distance of two trees under \textup{NNI}\xspace, \textup{SPR}\xspace, and \textup{TBR}\xspace is NP-hard~\cite{DGHJLTZ97,HDRB08,AS01}. Nevertheless, both the space of phylogenetic trees and a metric on them are of importance for the many inference methods for phylogenetic trees that rely on local search strategies~\cite{Gus14,StJ17}.
\begin{figure}[htb]
\centering
\includegraphics{basicExampleTrees}
\caption{The three rearrangement operations on unrooted phylogenetic trees: The NNI from $T_1$ to $T_2$ changes the order of the four edges incident to $e$; the SPR from $T_2$ to $T_3$ prunes the edge $e'$, and then regrafts it again; and the TBR from $T_3$ to $T_4$ first removes the edge $e''$, and then reconnects the resulting two trees with a new edge. Note that every NNI is also an SPR and every SPR is also a TBR but not vice versa.}
\label{fig:utrees:rearrangementOps}
\end{figure}
Recently, these rearrangement operations have been generalised to phylogenetic networks, both for unrooted networks~\cite{HLMW16,HMW16,FHMW17} and for rooted networks~\cite{BLS17,FHMW17,GvIJLPS17,Kla19}.
For unrooted networks, Huber {et~al.}~\cite{HLMW16} first generalised NNI to level-1 networks, which are phylogenetic networks where all cycles are vertex disjoint. This generalisation includes a horizontal move that changes the topology of the network, like an NNI on a tree, and vertical moves that add or remove a triangle to change the size of the network. Among other results, they then showed that the space of level-1 networks and its tiers are connected under NNI~\cite[Theorem 2]{HLMW16}. Note that connectedness implies that the distance between any two networks in such a space is finite and that NNI thus induces a metric. This NNI operation was then extended by Huber {et~al.}~\cite{HMW16} to work for general unrooted phylogenetic networks. Again, connectedness of the space was proven. Later, Francis {et~al.}~\cite{FHMW17} gave lower and upper bounds on the diameter (the maximum distance) of the space of unrooted phylogenetic network of a fixed size under \textup{NNI}\xspace.
They also showed that \textup{SPR}\xspace and \textup{TBR}\xspace can straightforwardly be generalised to phylogenetic networks, that the connectedness under \textup{NNI}\xspace implies connectedness under \textup{SPR}\xspace and \textup{TBR}\xspace, and they gave bounds on the diameters. These bounds for \textup{SPR}\xspace were made asymptotically tight by Janssen {et~al.}~\cite{JJEvIS17}. Here, we improve these bounds on the diameter under \textup{TBR}\xspace.
There are several generalisations of \textup{SPR}\xspace on rooted phylogenetic trees to rooted phylogenetic networks for which connectedness and diameters have been obtained~\cite{BLS17,FHMW17,GvIJLPS17,JJEvIS17,Jan18}.
For example, Bordewich {et~al.}~\cite{BLS17} introduced SNPR (subnet prune and regraft), a generalisation of \textup{SPR}\xspace that includes vertical moves, which add or remove an edge. They then proved connectedness under \textup{SNPR}\xspace for the space of rooted phylogenetic networks and for special classes of phylogenetic networks including tree-based networks. Roughly speaking, these are networks that have a spanning tree that is the subdivision of a phylogenetic tree on the same taxa~\cite{FS15,FHM18}. Furthermore, Bordewich {et~al.}~\cite{BLS17} gave several bounds on the SNPR-distance of two phylogenetic networks. Further bounds and a characterisation of the SNPR-distance of a tree and a network were recently proven by Klawitter and Linz~\cite{KL19}. Here, we show that these bounds and characterisation on the SNPR-distance of rooted phylogenetic networks are analogous to the TBR-distance of two unrooted phylogenetic networks.
In this paper, we study spaces of unrooted phylogenetic networks under \textup{NNI}\xspace, \textup{PR}\xspace (prune and regraft), and \textup{TBR}\xspace. Here, the \textup{PR}\xspace and \textup{TBR}\xspace operations are the generalisations of \textup{SPR}\xspace and \textup{TBR}\xspace on trees, respectively, where vertical moves add or remove an edge like the vertical moves of the SNPR operation in the rooted case. After the preliminary section, we examine the relations of \textup{NNI}\xspace, \textup{PR}\xspace, and \textup{TBR}\xspace; in particular, how a sequence using one of these operations can be transformed into a sequence using another operation (\cref{sec:relations}). We then study properties of shortest paths under \textup{TBR}\xspace in \cref{sec:paths}.
This includes the translation of the results from Bordewich {et~al.}~\cite{BLS17} and Klawitter and Linz~\cite{KL19} on the \textup{SNPR}\xspace-distance of rooted phylogenetic networks to the \textup{TBR}\xspace-distance of unrooted phylogenetic networks.
Next, we consider the connectedness and diameters of spaces of phylogenetic networks for different classes of phylogenetic networks, including tree-based networks and level-$k$ networks (\cref{sec:connectedness}).
A subspace of phylogenetic networks (e.g., the space of tree-based networks) is an isometric subgraph of a larger space of phylogenetic networks if, roughly speaking, the distance of two networks is the same in the smaller and the larger space. In \cref{sec:isometric} we study such isometric relations and answer a question by Francis {et~al.}~\cite{FHMW17} by showing that the space of phylogenetic trees is an isometric subgraph of the space of phylogenetic networks under \textup{TBR}\xspace. We use this result in \cref{sec:complexity} to show that computing the \textup{TBR}\xspace-distance is NP-hard. In the same section, we also show that computing the \textup{PR}\xspace-distance is NP-hard.
\section{Preliminaries}
This section provides notation and terminology used in the remainder of the paper.
In particular, we define phylogenetic networks and special classes thereof, and rearrangement operations and how they induce distances. Throughout this paper, $X=\set{1, 2,\ldots, n}$ denotes a finite set of taxa.
\paragraph{Phylogenetic networks.}
An \emph{unrooted, binary phylogenetic network} $N$ on a set of \emph{taxa} $X$ is an undirected multigraph such that the leaves are bijectively labelled with $X$ and all non-leaf vertices have degree three. It is called \emph{proper} if every cut-edge separates two labelled leaves~\cite{FHM18}, and \emph{improper} otherwise. This property implies that every edge lies on a path that connects two leaves. More importantly, a network can be rooted at any leaf if and only if it is proper~\cite[Lemma 4.13]{JJEvIS17}. If not mentioned otherwise, we assume that a phylogenetic network is proper. Furthermore, note that our definition of a phylogenetic network permits the existence of parallel edges in $N$, i.e., we allow that two distinct edges join the same pair of vertices. An \emph{unrooted, binary phylogenetic tree} $T$ on $X$ is an unrooted, binary phylogenetic network on $X$ that is a tree.
Let $\ensuremath{u\mathcal{N}_n}$ denote the set of all unrooted, binary proper phylogenetic networks on $X$
and let $\ensuremath{u\mathcal{T}_n}$ denote the set of all unrooted, binary phylogenetic trees on $X$, where $X = \set{1, 2,\ldots, n}$.
To ease reading, we refer to an unrooted, binary proper phylogenetic network (resp. unrooted, binary phylogenetic tree) on $X$ simply as phylogenetic network or network (resp. phylogenetic tree or tree).
\Cref{fig:unets:treeAndNetwork} shows an example of a tree $T \in \utreesx[6]$, a network $N \in \unetsx[6]$,
and an improper network $M$.
\begin{figure}[htb]
\centering
\includegraphics{TreeNetworkExample}
\caption{An unrooted, binary phylogenetic tree $T \in \utreesx[6]$ and an unrooted, binary proper phylogenetic network $N \in \unetsx[6]$. The unrooted, binary phylogenetic network $M$ is improper since the cut-edge $e$ does not lie on a path that connects two leaves.}
\label{fig:unets:treeAndNetwork}
\end{figure}
An edge of a network $N$ is an \emph{external} edge if it is incident to a leaf, and an \emph{internal} edge otherwise.
A \emph{cherry} $\set{a, b}$ of $N$ is a pair of leaves $a$ and $b$ in $N$ that are adjacent to the same vertex.
For example, each network in \cref{fig:unets:treeAndNetwork} contains the cherry $\set{1, 5}$.
\paragraph{Tiers.}
We say a network $N = (V, E)$ has \emph{reticulation number\footnotemark} $r$ for $r = \abs{E} - (\abs{V} - 1)$,
that is, the number of edges that have to be deleted from $N$ to obtain a spanning tree of $N$.
For example, the network $N$ in \cref{fig:unets:treeAndNetwork} has reticulation number three.
Note that a phylogenetic tree is a phylogenetic network with reticulation number zero.
Let $\ensuremath{u\mathcal{N}_{n,r}}$ denote \emph{tier} $r$ of $\ensuremath{u\mathcal{N}_n}$, the set of networks in $\ensuremath{u\mathcal{N}_n}$ that have reticulation number $r$.
\footnotetext{In graph theory the value $\abs{E} - (\abs{V} - 1)$ of a connected graph is also called the cyclomatic number of the graph~\cite{Diestel}.}
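As a simple illustration, the reticulation number of a network stored as a multigraph can be computed directly from its definition; the following sketch uses the networkx Python library and a small example network with one pair of parallel edges.
\begin{verbatim}
import networkx as nx

def reticulation_number(N):
    # r = |E| - (|V| - 1) for a connected multigraph
    return N.number_of_edges() - (N.number_of_nodes() - 1)

# leaves 1, 2 joined to u, v; a pair of parallel edges between u, v
N = nx.MultiGraph([("1", "u"), ("2", "v"), ("u", "v"), ("u", "v")])
assert reticulation_number(N) == 1
\end{verbatim}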
\paragraph{Embedding.}
Let $G$ be an undirected graph.
\emph{Subdividing} an edge $\set{u, v}$ of $G$ consists of replacing $\set{u, v}$ by a path from $u$ to $v$ that contains at least one edge. A \emph{subdivision} $G^*$ of $G$ is a graph that can be obtained from $G$ by subdividing edges of $G$. If $G$ has no degree two vertices, there exists a canonical embedding of vertices of $G$ to vertices of $G^*$ and of edges of $G$ to paths of $G^*$.
Let $N \in \ensuremath{u\mathcal{N}_n}$. We say $G$ has an \emph{embedding} into $N$ if there exists a subdivision $G^*$ of $G$ that is a subgraph of $N$ such that the embedding maps each labelled vertex of $G^*$ to a labelled vertex of $N$ with the same label.
\paragraph{Displaying.}
Let $T \in \ensuremath{u\mathcal{T}_n}$ and $N \in \ensuremath{u\mathcal{N}_n}$.
We say $N$ \emph{displays} $T$ if $T$ has an embedding into $N$.
For example, in \cref{fig:unets:treeAndNetwork} the tree $T$ is displayed by both networks $N$ and $M$.
Let $D(N)$ be the set of trees in $\ensuremath{u\mathcal{T}_n}$ that are displayed by $N$.
This notion can be extended to trees with fewer leaves, and to networks.
For this, let $M$ be a phylogenetic network on $Y \subseteq X = \set{1, \ldots, n}$.
We say $N$ \emph{displays} $M$ if $M$ has an embedding into $N$.
Let $P = \set{M_1,\ldots, M_k}$ be a set of phylogenetic networks $M_i$ on $Y_i \subseteq X = \set{1, \ldots, n}$.
Then let $\ensuremath{u\mathcal{N}_n}(P)$ denote the subset of networks in $\ensuremath{u\mathcal{N}_n}$ that display each network in $P$.
\paragraph{Tree-based networks.}
A phylogenetic network $N \in \ensuremath{u\mathcal{N}_n}$ is a \emph{tree-based} network if there is a tree $T \in \ensuremath{u\mathcal{T}_n}$ that has an embedding into $N$ as a spanning tree. In other words, there exists a subdivision $T^*$ of $T$ that is a spanning tree of $N$.
The tree $T$ is then called a \emph{base tree} of $N$. Let $\ensuremath{u\mathcal{TB}_n}$ denote the set of tree-based networks in $\ensuremath{u\mathcal{N}_n}$.
For $T \in \ensuremath{u\mathcal{T}_n}$, let $\ensuremath{u\mathcal{TB}_n}(T)$ denote the set of tree-based networks in $\ensuremath{u\mathcal{TB}_n}$ with base tree $T$.
\paragraph{Level-$k$ networks.}
A blob $B$ of a network $N \in \ensuremath{u\mathcal{N}_n}$ is a nontrivial two-connected component of $N$. The \emph{level} of $B$ is the minimum number of edges that have to be removed from $B$ to make it acyclic. The \emph{level} of $N$ is the maximum level of all blobs of $N$. If the level of $N$ is at most $k$, then $N$ is called a \emph{level-$k$} network. Let $\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ denote the set of level-$k$ networks in $\ensuremath{u\mathcal{N}_n}$.
\paragraph{$r$-Burl.} An $r$-burl is a specific type of blob that we define recursively: a $1$-burl is the blob consisting of a pair of parallel edges; an $r$-burl is the blob obtained by placing a pair of parallel edges on one of the parallel edges of an $(r-1)$-burl for all $r>1$. See for example the network $M$ in \cref{fig:unets:handcuffed}.
\paragraph{$r$-Handcuffed trees and caterpillars.}
Let $T \in \ensuremath{u\mathcal{T}_n}$ and let $a$ and $b$ be two leaves of $T$. Let $e$ and $f$ be the edges incident to $a$ and $b$, respectively. Subdivide $e$ and $f$ with vertices $\set{u_1, \ldots, u_r}$ and $\set{v_1, \ldots, v_r}$, respectively, and add the edges $\set{u_1, v_1}, \ldots, \set{u_r, v_r}$. The resulting network is an \emph{$r$-handcuffed tree} $N \in \ensuremath{u\mathcal{N}_n}$ with base tree $T$ on the \emph{handcuffed} leaves $\set{a, b}$. Note that $N$ has reticulation number $r$. If the tree $T$ is a caterpillar and $a$ and $b$ form a cherry of $T$, then the resulting network $N$ is an \emph{$r$-handcuffed caterpillar}. Furthermore, we call an $r$-handcuffed caterpillar \emph{sorted} if it is handcuffed on the leaves 1 and 2 and the leaves from 3 to $n$ have a non-decreasing distance to leaf 1. See \cref{fig:unets:handcuffed} for an example.
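As an illustration, the handcuffing construction can be carried out mechanically on a multigraph; the following sketch (Python/networkx, with generated names for the subdivision vertices) is one possible encoding.
\begin{verbatim}
import networkx as nx

def handcuff(T, a, b, r):
    # subdivide the leaf edges of a and b with u_1..u_r and
    # v_1..v_r, then add the rungs {u_i, v_i}
    N = nx.MultiGraph(T)
    pa, pb = next(iter(N[a])), next(iter(N[b]))
    N.remove_edge(a, pa)
    N.remove_edge(b, pb)
    us = [f"u{i}" for i in range(1, r + 1)]
    vs = [f"v{i}" for i in range(1, r + 1)]
    nx.add_path(N, [pa] + us + [a])
    nx.add_path(N, [pb] + vs + [b])
    N.add_edges_from(zip(us, vs))
    return N
\end{verbatim}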
\begin{figure}[htb]
\centering
\includegraphics{HandcuffedTree}
\caption{A network $M$ with a $3$-burl and a sorted $3$-handcuffed caterpillar $N$.}
\label{fig:unets:handcuffed}
\end{figure}
\paragraph{Suboperations.}
To define rearrangement operations on phylogenetic networks, we first define several suboperations. Let $G$ be an undirected graph. A degree-two vertex $v$ of $G$ with adjacent vertices $u$ and $w$ gets \emph{suppressed} by deleting $v$ and its incident edges, and adding the edge $\set{u, w}$. The reverse of this suppression is the subdivision of $\set{u, w}$ with vertex $v$.
Let $N \in \ensuremath{u\mathcal{N}_n}$ be a network, and $\set{u, v}$ an edge of $N$. Then $\set{u, v}$ gets \emph{removed} by deleting $\set{u, v}$ from $N$ and suppressing any resulting degree-two vertices. We say $\set{u, v}$ gets \emph{pruned} at $u$ by transforming it into the half edge $\set{\cdot, v}$ and suppressing $u$ if it becomes a degree-two vertex. Note that otherwise $u$ is a leaf. In reverse, we say that a half edge $\set{\cdot, v}$ gets \emph{regrafted} to an edge $\set{x, y}$ by transforming it into the edge $\set{u, v}$ where $u$ is a new vertex subdividing $\set{x, y}$.
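The following sketch (Python, using networkx; names are purely illustrative) indicates how these suboperations act on a multigraph. It deliberately glosses over the degenerate cases discussed below, such as suppressions that would create loops.
\begin{verbatim}
import networkx as nx

def suppress(G, v):
    # suppress a degree-two vertex v (assumes no loop arises)
    u, w = (nbr for _, nbr in G.edges(v))
    G.remove_node(v)
    G.add_edge(u, w)

def remove(G, u, v):
    # remove the edge {u, v}, suppressing degree-two vertices
    G.remove_edge(u, v)
    for x in (u, v):
        if x in G and G.degree(x) == 2:
            suppress(G, x)

def regraft(G, v, x, y, new):
    # regraft the half edge {., v} to {x, y} via a new vertex
    G.remove_edge(x, y)
    G.add_edges_from([(x, new), (new, y), (new, v)])
\end{verbatim}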
\paragraph{TBR.}
A \textup{TBR}\xspace operation{\footnotemark} is the rearrangement operation that transforms a network $N\in\ensuremath{u\mathcal{N}_n}$ into another network $N' \in \ensuremath{u\mathcal{N}_n}$ in one of the following four ways:
\begin{itemize}[leftmargin=*,label=(TBR$^-$)]
\item[(\textup{TBR$^0$}\xspace)] Remove an internal edge $e$ of $N$, subdivide an edge of the resulting graph with a new vertex $u$, subdivide an edge of the resulting graph with a new vertex $v$, and add the edge $\set{u, v}$;
\item[ ] or, prune an external edge $e = \set{u, v}$ of $N$ that is incident to leaf $v$ at $u$, regraft $\set{\cdot, v}$ to an edge of the resulting graph.
\item[(\textup{TBR$^+$}\xspace)] Subdivide an edge of $N$ with a new vertex $u$, subdivide an edge of the resulting graph with a new vertex $v$, and add the edge $e = \set{u, v}$.
\item[(\textup{TBR$^-$}\xspace)] Remove an edge $e$ of $N$.
\end{itemize}
\footnotetext{The TBR operation is known on unrooted phylogenetic trees as \emph{tree bisection and reconnection}.
Since in general networks are not trees and a TBR on a network does not necessarily bisect it, we use TBR now as a word on its own. For the reader who would however like to have an expansion of TBR we suggest ``total branch relocation''. We welcome other suggestions.}
Note that a \textup{TBR$^0$}\xspace can also be seen as the operation that prunes the edge $e = \set{u, v}$ at both $u$ and $v$ and then regrafts both ends. Hence, we say that a \textup{TBR$^0$}\xspace \emph{moves} the edge $e$. Furthermore, we say that a \textup{TBR$^+$}\xspace \emph{adds} the edge $e$ and that a \textup{TBR$^-$}\xspace \emph{removes} the edge $e$. These operations are illustrated in \cref{fig:unets:TBR}. Note that a \textup{TBR$^0$}\xspace has an inverse \textup{TBR$^0$}\xspace and that a \textup{TBR$^+$}\xspace has an inverse \textup{TBR$^-$}\xspace, and that furthermore a \textup{TBR$^+$}\xspace increases the reticulation number by one and a \textup{TBR$^-$}\xspace decreases it by one.
Since a \textup{TBR}\xspace operation has to yield a phylogenetic network, there are some restrictions on the edges that can be moved or removed. Firstly, if removing an edge by a \textup{TBR$^0$}\xspace yields a disconnected graph, then in order to obtain a phylogenetic network an edge has to be added between the two connected components. Similarly, a \textup{TBR$^-$}\xspace cannot remove a cut-edge. Secondly, the suppression of a vertex when removing an edge may not yield a loop $\set{u, u}$. Thirdly, removing or moving an edge cannot create a cut-edge that does not separate two leaves. Otherwise the network would not be proper.
\begin{figure}[htb]
\centering
\includegraphics{TBRbasicExample}
\caption{Illustration of the TBR operation.
The network $N_2$ can be obtained from $N_1$ by a \textup{TBR$^0$}\xspace that moves the edge $\set{u, v}$ and the network $N_3$ can be obtained from $N_2$ by a \textup{TBR$^+$}\xspace that adds the edge $\set{u', v'}$. Each operation has its corresponding \textup{TBR$^0$}\xspace and \textup{TBR$^-$}\xspace operation, respectively, that reverses the rearrangement.}
\label{fig:unets:TBR}
\end{figure}
The \textup{TBR$^0$}\xspace operation equals the well known TBR (tree bisection and reconnection) operation on unrooted phylogenetic trees~\cite{AS01}. The TBR operation on trees has recently been generalised to \textup{TBR$^0$}\xspace on improper unrooted phylogenetic networks by Francis {et~al.}~\cite{FHMW17}.
\paragraph{PR.}
A \textup{PR}\xspace (\emph{prune and regraft}) operation is the rearrangement operation that transforms a network $N \in \ensuremath{u\mathcal{N}_n}$ into another network $N' \in \ensuremath{u\mathcal{N}_n}$ with a \textup{PR$^+$}\xspace $=$ \textup{TBR$^+$}\xspace, a \textup{PR$^-$}\xspace $=$ \textup{TBR$^-$}\xspace, or a \textup{PR$^0$}\xspace that prunes and regrafts an edge $e$ only at one endpoint, instead of at both like a \textup{TBR$^0$}\xspace. Like for TBR, we say that the PR$^{0/+/-}$ \emph{moves/adds/removes} the edge $e$ in $N$. The PR operation is a generalisation of the well-known SPR (subtree prune and regraft) operation on unrooted phylogenetic trees~\cite{AS01}. Like for TBR, the generalisation of SPR to \textup{PR$^0$}\xspace for networks has been introduced by Francis {et~al.}~\cite{FHMW17}.
\paragraph{NNI.}
An \textup{NNI}\xspace (\emph{nearest neighbour interchange}) operation is a rearrangement operation that transforms a network $N\in \ensuremath{u\mathcal{N}_n}$ into another network $N' \in \ensuremath{u\mathcal{N}_n}$ in one of the following three ways:
\begin{itemize}[leftmargin=*,label=(NNI$^-$)]
\item[(\textup{NNI$^0$}\xspace)] Let $e= \{u, v\}$ be an internal edge of $N$. Prune an edge $f$ ($f \neq e$) at $u$, and regraft it to an edge $f'$ ($f' \neq e$) that is incident to $v$.
\item[(\textup{NNI$^+$}\xspace)] Subdivide two adjacent edges with new vertices $u'$ and $v'$, respectively, and add the edge $\{u', v'\}$.
\item[(\textup{NNI$^-$}\xspace)] If $N$ contains a triangle, remove an edge of the triangle.
\end{itemize}
These operations are illustrated in \cref{fig:unets:NNI}. We say that an \textup{NNI$^0$}\xspace \emph{moves} the edge $f$. Alternatively, we call the edge $e$ of an \textup{NNI$^0$}\xspace the \emph{axis} of the operation, as the operation can also be defined as pruning $f$ at $u$, and $f''\neq f'$ at $v$, and regrafting $f$ at $v$ and $f''$ at $u$.
The NNI operation has been introduced on trees by Robinson~\cite{Rob71} and generalised to networks by Huber {et~al.}~\cite{HLMW16,HMW16}.
\begin{figure}[htb]
\centering
\includegraphics{NNIbasicExample}
\caption{Illustration of the NNI operation.
The network $N_2$ (resp. $N_3$) can be obtained from $N_1$ (resp. $N_2$) by an \textup{NNI$^0$}\xspace with the axis $\{u, v\}$; alternatively, $N_2$ can be obtained from $N_1$ using the \textup{NNI$^0$}\xspace of $\{1,u\}$ to the triangle, and $N_3$ from $N_2$ by moving $\{1,u\}$ to the bottom edge of the square. The labels are inherited naturally following the first interpretation of the \textup{NNI$^0$}\xspace moves.
The network $N_4$ can be obtained from $N_3$ by an \textup{NNI$^+$}\xspace that extends $x$ into a triangle.
Each operation has its corresponding \textup{NNI$^0$}\xspace and \textup{NNI$^-$}\xspace operation, respectively, that reverses the transformation.}
\label{fig:unets:NNI}
\end{figure}
\paragraph{Sequences and distances.}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ be two networks.
A \emph{\textup{TBR}\xspace-sequence} from $N$ to $N'$ is a sequence
$$\sigma = (N = N_0, N_1, N_2, \ldots, N_k = N') $$
of phylogenetic networks such that $N_i$ can be obtained from $N_{i-1}$ by a single TBR for each $i \in \set{1, 2, ..., k}$. The \emph{length} of $\sigma$ is $k$.
The \emph{\textup{TBR}\xspace-distance} $\dTBR(N, N')$ between $N$ and $N'$ is the length of a shortest TBR-sequence from $N$ to $N'$, or infinite if no such sequence exists.
Let $\ensuremath{\mathcal{C}_n}$ be a class of phylogenetic networks.
The TBR-distance on $\ensuremath{\mathcal{C}_n}$ is defined like on $\ensuremath{u\mathcal{N}_n}$ but with the restriction that every network in a shortest TBR-sequence has to be in $\ensuremath{\mathcal{C}_n}$.
The class $\ensuremath{\mathcal{C}_n}$ is \emph{connected} under TBR if, for all pairs $N, N' \in \ensuremath{\mathcal{C}_n}$, there exists a TBR-sequence $\sigma$ from $N$ to $N'$ such that each network in $\sigma$ is in $\ensuremath{\mathcal{C}_n}$.
Hence, for the \textup{TBR}\xspace-distance to be a metric on $\ensuremath{\mathcal{C}_n}$, the class has to be connected under \textup{TBR}\xspace and the \textup{TBR}\xspace operation has to be reversible. We already noted above that the latter holds for TBR (and NNI and PR).
For a connected class $\ensuremath{\mathcal{C}_n}$, the \emph{diameter} is the maximum distance between two of its networks under its metric.
The definition for NNI and PR are analogous.
Let $\ensuremath{\mathcal{C}_n}'$ be a subclass of $\ensuremath{\mathcal{C}_n}$.
Then $\ensuremath{\mathcal{C}_n}'$ is an \emph{isometric subgraph} of $\ensuremath{\mathcal{C}_n}$ under, say, \textup{TBR}\xspace if for every $N, N' \in \ensuremath{\mathcal{C}_n}'$ the \textup{TBR}\xspace-distance of $N$ and $N'$ in $\ensuremath{\mathcal{C}_n}'$ equals the \textup{TBR}\xspace-distance of $N$ and $N'$ in $\ensuremath{\mathcal{C}_n}$.
\section{Relations of rearrangement operations} \label{sec:relations}
On trees, it is well known that every \textup{NNI}\xspace is also an \textup{SPR}\xspace, which, in turn, is also a \textup{TBR}\xspace.
We observe that the same holds for the generalisations of these operations as defined above.
\begin{observation} \label{clm:NNIisPRisTBR}
Let $N \in \ensuremath{u\mathcal{N}_n}$. Then, on $N$, every \textup{NNI}\xspace is a \textup{PR}\xspace and every \textup{PR}\xspace is a \textup{TBR}\xspace.
\end{observation}
For the reverse direction, we first show that every \textup{TBR}\xspace can be mimicked by at most two \textup{PR}\xspace operations, as in $\ensuremath{u\mathcal{T}_n}$. Then we show how to substitute a \textup{PR}\xspace with an \textup{NNI}\xspace-sequence.
\begin{lemma} \label{clm:unets:TBRisTwoPR}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ such that $\dTBR(N, N') = 1$.
Then $1 \leq \dPR(N, N') \leq 2$, where a \textup{TBR$^0$}\xspace may be replaced by two \textup{PR$^0$}\xspace.
\begin{proof}
If $N'$ can be obtained from $N$ by a \textup{TBR$^+$}\xspace or \textup{TBR$^-$}\xspace, then by the definition of \textup{PR$^+$}\xspace and \textup{PR$^-$}\xspace it follows that $\dPR(N, N') = 1$. If $N'$ can be obtained from $N$ by a \textup{TBR$^0$}\xspace that is also a \textup{PR$^0$}\xspace, the statement follows.
Assume therefore that $N'$ can be obtained from $N$ by a \textup{TBR$^0$}\xspace that moves the edge $e = \set{u, v}$ of $N$ to $e' = \set{x, y}$ of $N'$. Let $G$ be the graph obtained from $N$ by removing $e$, or equivalently the graph obtained from $N'$ by removing $e'$. If $e$ is a cut-edge, then so is $e'$, and without loss of generality $u$ and $x$ as well as $v$ and $y$ subdivide an edge in the same connected components of $G$. Furthermore, if $u$ subdivides an edge of a pendant blob in $G$, then so does $x$. Otherwise $N'$ would not be proper. Therefore, the \textup{PR$^0$}\xspace that prunes $e$ at $u$ and regrafts it to obtain $x$ yields a phylogenetic network $N''$. The choices of $u$ and $x$ ensure that $N''$ is connected and proper. There is then a \textup{PR$^0$}\xspace from $N''$ to $N'$ that prunes $\set{x, v}$ at $v$ and regrafts it at $y$ to obtain $N'$. Hence, $\dPR(N, N') \leq 2$.
\end{proof}
\end{lemma}
\begin{corollary}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$.
Then $\dTBR(N, N') \leq \dPR(N, N') \leq 2 \dTBR(N, N')$.
\end{corollary}
\begin{lemma} \label{clm:PRZtoNNIZ}
Let $N, N' \in \ensuremath{u\mathcal{N}_{n,r}}$ such that there is a \textup{PR$^0$}\xspace that transforms $N$ into $N'$. Let $e$ be the edge of $N$ pruned by this \textup{PR$^0$}\xspace.\\
Then there exists an \textup{NNI$^0$}\xspace-sequence from $N$ to $N'$ that only moves $e$ and whose length is in $\ensuremath{\mathcal{O}}(n +r)$.
Moreover, if neither $N$ nor $N'$ contains parallel edges, then neither does any intermediate networks in the \textup{NNI}\xspace-sequence.
\begin{proof}
Assume that $N$ can be transformed into $N'$ by pruning the edge $e = \{u, v\}$ at $u$ and regrafting it to $f = \{x, y\}$.
Note that there is then a (shortest) path $P = (u = v_0, v_1, v_2, \ldots, v_k = x)$ from $u$ to $x$ in $N \setminus \{e\}$, since otherwise $N'$ would be disconnected.
Without loss of generality, assume that $P$ does not contain $y$. Furthermore, assume for now that $P$ does not contain $v$.
The idea is now to move $e$ along $P$ to $f$ with \textup{NNI$^0$}\xspace. In particular, we show how to construct a sequence $\sigma = (N = N_0, N_1, \ldots, N_{k} = N')$ such that either $N_{i+1}$ can be obtained from $N_{i}$ by an \textup{NNI$^0$}\xspace or $N_{i+1} = N_{i}$, and such that $N_i$ contains the edge $e_i = \{v_i, v\}$.
This process is illustrated in \cref{fig:unets:PRZtoNNIZ}.
Assume we have constructed the sequence up to $N_i$.
Let $g = \{v_{i+1}, w\}$ with $w \neq v$ be the edge incident to $v_{i+1}$ that is not on $P$.
Obtain $N_{i+1}$ from $N_i$ by swapping $e_i$ and $g$ with an \textup{NNI$^0$}\xspace on the axis $\{v_{i}, v_{i+1}\}$.
Note that this preserves the path $P$ and that $N_{i+1}$ may only contain a parallel edge if $N$ or $N'$ contains parallel edges. As a result, we get $N_k = N'$.
\begin{figure}[htb]
\centering
\includegraphics{NNIZforPRZ}
\caption{How to mimic the \textup{PR$^0$}\xspace that prunes the edge $\{u, v\}$ at $u$ and regrafts to $\{x, y\}$ with \textup{NNI$^0$}\xspace operations that move $u$ of $\{u, v\}$ along the path $P = (u = v_0, v_1, v_2 = x)$ (for the proof of \cref{clm:PRZtoNNIZ}). Labels follow the definition of \textup{NNI$^0$}\xspace along an axis.}
\label{fig:unets:PRZtoNNIZ}
\end{figure}
It remains to show that every network in $\sigma$ is proper. Assume otherwise and let $N_{i+1}$ be the first improper network in $\sigma$. Then $N_{i+1}$ contains a cut-edge $e_c$ that separates a blob $B$ from all leaves. We claim that $e_c$ is part of $P$. Indeed, the pruning of the \textup{NNI$^0$}\xspace from $N_i$ to $N_{i+1}$ has to create $B$ and the regrafting cannot be to $B$, so the moving edge has to pass along $e_c$ (\cref{fig:unets:PRZtoNNIZ:properness}). However, as $P$ is a path, the moving edge cannot pass $e_c$ again, so all networks $N_j$ for $j > i$ including $N'$ are improper; a contradiction.
Hence, all intermediate networks $N_i$ are proper and thus $\sigma$ is an \textup{NNI$^0$}\xspace-sequence from $N$ to $N'$.
\begin{figure}[htb]
\centering
\includegraphics{NNIZforPRZproperness}
\caption{How an \textup{NNI$^0$}\xspace in the proof of \cref{clm:PRZtoNNIZ} may result in an improper network where $e_c$ separates a blob $B$ from all leaves. The moving edge $\{v,v_i\}$ of $N_i$ becomes the moving edge $\{v,v_{i+1}\}$ of $N_{i+1}$. Labels follow the definition of \textup{NNI$^0$}\xspace along an axis.}
\label{fig:unets:PRZtoNNIZ:properness}
\end{figure}
Next, assume that $P$ contains $v_i = v$. Then first apply the process above to move $v$ of $\{u, v\}$ along $P' = (v = v_i, v_{i+1}, \ldots, v_k)$ to $v_k$.
In the resulting network, apply the process above to move $u$ of $\{u, v\} = \{u, v_k\}$ along $P'' = (u = v_0, v_1, \ldots, v_i)$ to $v_i$.
The process again avoids the creation of a network $N_j$ with parallel edges, if neither $N$ nor $N'$ contains parallel edges. Furthermore, from \cref{fig:unets:PRZtoNNIZ:properness} we get that if $\sigma$ contained an improper network, then $u$ would be contained in the blob $B$. However, then $\set{u, v}$ and $e_c$ would both be edges from $B$ to the rest of the network; again a contradiction.
Lastly, note that the length of $P$ is in $\ensuremath{\mathcal{O}}(n + r)$ since $N$ contains only $2n + 3r - 1$ edges. Hence, the length of $\sigma$ is also in $\ensuremath{\mathcal{O}}(n + r)$.
\end{proof}
\end{lemma}
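The sweep in the proof of \cref{clm:PRZtoNNIZ} is essentially an algorithm, which the following Python sketch makes explicit. Again this is for illustration only: we assume a simple \texttt{networkx} graph, the function names are ours, and the properness and parallel-edge bookkeeping of the proof is omitted.
\begin{verbatim}
def nni0_swap(G, vi, vj, a, b):
    # NNI^0 along the axis {vi, vj}: exchange the endpoint a
    # (attached at vi) with the endpoint b (attached at vj).
    G.remove_edge(vi, a)
    G.remove_edge(vj, b)
    G.add_edge(vj, a)
    G.add_edge(vi, b)

def move_endpoint_along_path(G, v, path):
    # Slide the endpoint of the moving edge {path[0], v} along
    # `path` to its last vertex, one NNI^0 per path edge, as in
    # the construction of the sequence sigma in the lemma.
    for i in range(len(path) - 1):
        vi, vj = path[i], path[i + 1]
        ahead = path[i + 2] if i + 2 < len(path) else None
        # g = {vj, w}: the edge at vj that lies neither on the
        # path nor on the moving edge (unique for internal vj).
        w = next(n for n in G.neighbors(vj)
                 if n not in (vi, v, ahead))
        nni0_swap(G, vi, vj, v, w)
\end{verbatim}
At the final vertex of the path, either of the two edges not on $P$ may play the role of $g$; which one is appropriate depends on where the edge is to be regrafted, and this choice is left open in the sketch.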
\begin{lemma}\label{clm:PRMtoNNIM}
Let $n \geq 3$. Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ such that there is a \textup{PR$^-$}\xspace that transforms $N$ into $N'$. Let $e$ be the edge of $N$ removed by this \textup{PR$^-$}\xspace. Let $N$ have reticulation number $r$.\\
Then, there is an \textup{NNI$^0$}\xspace-sequence followed by one \textup{NNI$^-$}\xspace that transforms $N$ into $N'$ by only moving and removing $e$ and whose length is in $\ensuremath{\mathcal{O}}(n + r)$.
Moreover, if neither $N$ nor $N'$ contains parallel edges, then neither do the intermediate networks in the \textup{NNI}\xspace-sequence.
\begin{proof}
Assume the \textup{PR$^-$}\xspace removes $e = \set{u, v}$ from $N$ to obtain $N'$. If $e$ is part of a triangle, the \textup{PR$^-$}\xspace move is an \textup{NNI$^-$}\xspace move.
If $e$ is a parallel edge, then move either $u$ or $v$ with an \textup{NNI$^0$}\xspace to obtain a network with a triangle that contains $e$. Then the previous case applies.
So assume otherwise, namely that $e$ is not part of a triangle or a pair of parallel edges.
Then move $u$ with an \textup{NNI$^0$}\xspace-sequence closer to $v$ to form a triangle as follows.
Because removing $e$ in $N$ yields the proper network $N'$, the graph $N \setminus \set{e}$ is connected; let $P$ be a shortest path from $u$ to $v$ in it.
Since $e$ is not part of a triangle, this path must contain at least two nodes other than $u$ and $v$. Let $\set{x, y}$ and $\set{y, v}$ be the last two edges on $P$.
Consider the \textup{PR$^0$}\xspace that prunes $\{u, v\}$ at $u$ and regrafts it to $\set{x, y}$. Note that this creates a triangle on the vertices $y$, $u$ and $v$.
By \cref{clm:PRZtoNNIZ} we can replace this \textup{PR$^0$}\xspace with an \textup{NNI$^0$}\xspace-sequence. Lastly, we can remove $\{u, v\}$ with an \textup{NNI$^-$}\xspace to obtain $N'$. The bound on the length of the \textup{NNI}\xspace-sequence as well as the second statement follow from \cref{clm:PRZtoNNIZ}.
\end{proof}
\end{lemma}
To conclude this section, we note that all previous results combined show that we can replace a \textup{TBR}\xspace-sequence with a \textup{PR}\xspace-sequence, which we can further replace with an \textup{NNI}\xspace-sequence. For several connectedness results in \cref{sec:connectedness} this allows us to focus on \textup{TBR}\xspace and then derive results for \textup{NNI}\xspace and \textup{PR}\xspace.
\section{Shortest paths} \label{sec:paths}
In this section, we focus on bounds on the distance between two specified networks. We restrict to the \textup{TBR}\xspace-distance in $\ensuremath{u\mathcal{N}_n}$ and in $\ensuremath{u\mathcal{N}_{n,r}}$, and study the structure of shortest sequences of moves. We make several observations about these sequences in general, and some about shortest sequences between two networks that have certain structure in common, e.g., common displayed networks. Hence, we get bounds on the \textup{TBR}\xspace-distance between two networks, and we uncover properties of the spaces of phylogenetic networks which allow for reductions of the search space. For example, if $N$ and $N'$ have reticulation number $r$, no shortest path from $N$ to $N'$ contains a network with a reticulation number less than $r$. The proof of this statement relies on the following observation about the order in which \textup{TBR$^0$}\xspace and \textup{TBR$^+$}\xspace operations can occur in a shortest path.
\begin{observation} \label{clm:unets:TBR:PMtoZ}
Let $N, N' \in \ensuremath{u\mathcal{N}_{n,r}}$ such that there exists a \textup{TBR}\xspace-sequence
$\sigma_0 = (N, M, N')$ that uses a \textup{TBR$^+$}\xspace and a \textup{TBR$^-$}\xspace. Then there is a \textup{TBR$^0$}\xspace that transforms $N$ into $N'$.
\end{observation}
Rephrasing \cref{clm:unets:TBR:PMtoZ}, a \textup{TBR$^+$}\xspace followed by a \textup{TBR$^-$}\xspace, or vice versa, can be replaced by a \textup{TBR$^0$}\xspace. Such a pair can thus not occur in a shortest \textup{TBR}\xspace-sequence.
Next, we look at a \textup{TBR$^0$}\xspace followed by a \textup{TBR$^+$}\xspace.
\begin{lemma} \label{clm:unets:TBR:ZPtoPZ}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ with reticulation numbers $r$ and $r+1$, respectively, such that there exists a shortest \textup{TBR}\xspace-sequence $\sigma_0 = (N, M, N')$ that starts with a \textup{TBR$^0$}\xspace.\\
Then there is a \textup{TBR}\xspace-sequence $\sigma_{+} = (N, M', N')$ that starts with a \textup{TBR$^+$}\xspace.
\begin{proof}
Note that the \textup{TBR$^0$}\xspace from $N$ to $M$ of $\sigma_{0}$ can be replaced with a sequence consisting of a \textup{TBR$^+$}\xspace followed by a \textup{TBR$^-$}\xspace. This \textup{TBR$^-$}\xspace and the \textup{TBR$^+$}\xspace from $M$ to $N'$ can now be combined to a \textup{TBR$^0$}\xspace, which gives us a sequence $\sigma_{+}$.
\end{proof}
\end{lemma}
Let $N, N' \in \ensuremath{u\mathcal{N}_{n,r}}$ and consider a shortest \textup{TBR}\xspace-sequence from $N$ to $N'$ that contains \textup{TBR$^+$}\xspace and \textup{TBR$^-$}\xspace operations. If the converse of \cref{clm:unets:TBR:ZPtoPZ} also held, then we could shuffle the sequence such that consecutive \textup{TBR$^+$}\xspace and \textup{TBR$^-$}\xspace operations can be replaced with a \textup{TBR$^0$}\xspace. This would imply that $\ensuremath{u\mathcal{N}_{n,r}}$ is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{TBR}\xspace. However, we now show that the converse of \cref{clm:unets:TBR:ZPtoPZ} does not hold in general, and, hence, adjacent operations of different types in a shortest \textup{TBR}\xspace-sequence cannot always be swapped.
\begin{lemma} \label{clm:unets:TBR:PZtoZP:notInNets}
Let $n \geq 4$ and $r \geq 2$.
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ with reticulation numbers $r$ and $r+1$, respectively, such that there exists a shortest \textup{TBR}\xspace-sequence
$\sigma_+ = (N, M', N')$ that starts with a \textup{TBR$^+$}\xspace.\\
Then it is not guaranteed that there is a \textup{TBR}\xspace-sequence $\sigma_{0} =(N, M, N')$ that starts with a \textup{TBR$^0$}\xspace.
\begin{proof}
We claim that the networks $N$ and $N'$ in \cref{fig:unets:PZbutnoZP} are a pair of networks for which no \textup{TBR}\xspace-sequence $\sigma_{0} =(N, M, N')$ exists that starts with a \textup{TBR$^0$}\xspace. The two networks $M_1$ and $M_2$ in \cref{fig:unets:PZbutnoZP} are the only two \textup{TBR$^-$}\xspace neighbours of $N'$. However, it is easy to check that the \textup{TBR$^0$}\xspace-distance of $N$ and $M_i$, $i \in \set{1, 2}$, is at least two. Hence, a shortest \textup{TBR}\xspace-sequence from $N$ to $N'$ that starts with a \textup{TBR$^0$}\xspace has length at least three, and so $\sigma_{0}$ cannot exist. Note that we can add an edge to each of the pairs of parallel edges to obtain an example without parallel edges. Moreover, the example can be extended to higher $n$ and $r$ by adding extra leaves between leaves 3 and 4, and replacing a pair of parallel edges by a chain of parallel edges in each network.
\end{proof}
\end{lemma}
\begin{figure}[htb]
\centering
\includegraphics{PZbutnoZP2}
\caption{Two networks $N, N' \in \ensuremath{u\mathcal{N}_n}$ with TBR-distance two such that there exists a shortest TBR-sequence from $N$ to $N'$ starting with a \textup{TBR$^+$}\xspace move (to $M'$). However, there is no shortest TBR-sequence starting with a \textup{TBR$^0$}\xspace, since the networks $M_1$ and $M_2$, which are the \textup{TBR$^-$}\xspace neighbours of $N'$, have \textup{TBR$^0$}\xspace-distance at least two to $N$.}
\label{fig:unets:PZbutnoZP}
\end{figure}
Note that the \textup{TBR$^0$}\xspace used in \cref{fig:unets:PZbutnoZP} to prove \cref{clm:unets:TBR:PZtoZP:notInNets} is a \textup{PR$^0$}\xspace. Hence, the statement of \cref{clm:unets:TBR:PZtoZP:notInNets} also holds for \textup{PR}\xspace. On the positive side, if one of the two networks is a tree, then we can swap the \textup{TBR$^+$}\xspace with the \textup{TBR$^0$}\xspace.
\begin{lemma}\label{clm:unets:TBR:PZtoZP:trees}
Let $T \in \ensuremath{u\mathcal{T}_n}$ and $N \in \ensuremath{u\mathcal{N}_n}$ with reticulation number one such that there exists a shortest \textup{TBR}\xspace-sequence $\sigma_{+} = (T, N', N)$ that starts with a \textup{TBR$^+$}\xspace.\\
Then there is a \textup{TBR}\xspace-sequence $\sigma_{0} =(T, T', N)$ that starts with a \textup{TBR$^0$}\xspace.
\begin{proof}
We show how to obtain $\sigma_{0}$ from $\sigma_{+}$. Suppose that $N'$ is obtained from $T$ by adding the edge $f$ and that $N$ is obtained from $N'$ by removing $e'$ and adding $e$. Note that $f$ is an edge of the cycle $C$ in $N'$. Furthermore, $e'$ and $f$ are distinct. Indeed, otherwise there would be a shorter \textup{TBR}\xspace-sequence from $T$ to $N$ that simply adds $e$ to $T$.
Assume for now that $e'$ is an edge of $C$ in $N'$. Then, $e'$ can be removed with a \textup{TBR$^-$}\xspace from $N'$ to obtain a tree $T'$. Hence, the \textup{TBR$^+$}\xspace from $T$ to $N'$ and the \textup{TBR$^-$}\xspace from $N'$ to $T'$ can be merged into a \textup{TBR$^0$}\xspace from $T$ to $T'$. Furthermore, the edge $e$ can then be added to $T'$ with a \textup{TBR$^+$}\xspace to obtain $N$. This yields the sequence $\sigma_{0}$.
Next, assume that $e'$ is not an edge of $C$ in $N'$. Then, $e'$ is a cut-edge in $N'$ and $e$ is a cut-edge in $N$. Let $\bar e$ be the edge of $T$ that equals $e'$, if it exists, or the edge that gets subdivided by $f$ into $e'$ and another edge. Let $\bar f$ be the edge of $N$ defined as follows: it equals $f$ if $f$ is not touched by the \textup{TBR$^0$}\xspace from $N'$ to $N$; it is the extension of $f$ if one of the endpoints of $f$ is suppressed by this move; or it is one of the two edges obtained by subdividing $f$ if this move subdivides $f$. Now let $T'$ be the tree obtained by removing $\bar f$ from $N$. Then, there is a \textup{TBR$^0$}\xspace from $T$ to $T'$ that moves $\bar e$ to the edge $\bar e'$ of $T'$ that corresponds to $e$, and furthermore a \textup{TBR$^+$}\xspace that adds $\bar f$ to $T'$ and yields $N$. We again obtain $\sigma_{0}$. An example is given in \cref{fig:unets:PZtoZP:trees}.
\end{proof}
\end{lemma}
\begin{figure}[htb]
\centering
\includegraphics{PZtoZPtrees}
\caption{There is a \textup{TBR}\xspace-sequence from $T$ to $N$ that first adds $f$ with a \textup{TBR$^+$}\xspace and then moves $e'$ to $e$ with a \textup{TBR$^0$}\xspace. From this, a \textup{TBR}\xspace-sequence can be derived that moves $\bar e$ to $\bar e'$ with a \textup{TBR$^0$}\xspace and then adds $\bar f$ with a \textup{TBR$^+$}\xspace.}
\label{fig:unets:PZtoZP:trees}
\end{figure}
Next, we look at shortest paths between a tree and a network. First, we show that if a network displays a tree, then there is a simple \textup{TBR$^-$}\xspace-sequence from the network to the tree. Recall that $D(N)$ is the set of trees in $\ensuremath{u\mathcal{T}_n}$ displayed by $N \in \ensuremath{u\mathcal{N}_n}$. This result is the unrooted analogue of Lemma 7.4 by Bordewich {et~al.}~\cite{BLS17} on rooted phylogenetic networks.
\begin{lemma} \label{clm:unets:TBR:pathDown}
Let $N \in \ensuremath{u\mathcal{N}_{n,r}}$ and $T \in \ensuremath{u\mathcal{T}_n}$. \\
Then $T \in D(N)$ if and only if $\dTBR(T, N) = r$, that is, if and only if there exists a \textup{TBR$^-$}\xspace-sequence of length $r$ from $N$ to $T$.
\begin{proof}
Note that $\dTBR(T, N) \geq r$, since a \textup{TBR}\xspace can reduce the reticulation number by at most one. Furthermore, if we apply a sequence of $r$ \textup{TBR$^-$}\xspace moves on $N$, we arrive at a tree that is displayed by $N$. Hence, if $T \not\in D(N)$, then $\dTBR(T,N) > r$.
We now use induction on $r$ to show that $\dTBR(T, N) \leq r$ if $T\in D(N)$. If $r = 0$, then $T = N$ and the inequality holds. Now suppose that $r > 0$ and that the statement holds whenever a network with a reticulation number less than $r$ displays $T$. Fix an embedding of $T$ into $N$ and colour all edges of $N$ not covered by this embedding green. Note that removing a green edge with a \textup{TBR$^-$}\xspace might result in an improper network or a loop. Therefore, we have to show that there is always at least one edge that can be removed such that the resulting graph is a phylogenetic network. For this, consider the subgraph $H$ of $N$ induced by the green edges. If $H$ contains a component consisting of a single green edge $e$, then removing $e$ from $N$ with a \textup{TBR$^-$}\xspace yields a network $N'$. If $H$ contains a tree component $S$, then it is easy to see that removing an external edge of $S$ from $N$ with a \textup{TBR$^-$}\xspace yields a network $N'$. Otherwise, as $N$ is proper, a component $S$ displays a tree $T_S$ whose external edges cover exactly the external edges of $S$.
We can then apply the same case distinction to the edges of $S$ not covered by $T_S$ and either directly find an edge to remove or find further trees that cover the smaller remaining components. Since $S$ is finite, we eventually find an edge to remove. The induction hypothesis then applies to $N'$. This concludes the proof.
\end{proof}
\end{lemma}
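Algorithmically, the induction in the proof of \cref{clm:unets:TBR:pathDown} amounts to a greedy procedure: repeatedly remove a green edge whose removal keeps the network proper. The following Python sketch illustrates this loop under simplifying assumptions that are ours, not part of the formal setting: a simple \texttt{networkx} graph, the leaf set given as a Python set, the embedding of $T$ given by its edge set, and no suppression of degree-two vertices (which does not affect the properness test). Termination is guaranteed by the lemma.
\begin{verbatim}
import networkx as nx

def is_proper(G, leaves):
    # Connected, and every cut-edge separates two parts that each
    # contain a leaf (the properness condition used in this paper).
    if not nx.is_connected(G):
        return False
    for u, v in nx.bridges(G):
        H = G.copy()
        H.remove_edge(u, v)
        side = nx.node_connected_component(H, u)
        if leaves <= side or leaves.isdisjoint(side):
            return False
    return True

def greedy_tbr_minus(G, tree_edges, leaves):
    # Remove the 'green' edges (those not used by the embedding of
    # T) one at a time, keeping the network proper; the lemma
    # guarantees that a removable green edge always exists.
    G = G.copy()
    green = {frozenset(e) for e in G.edges()} \
            - {frozenset(e) for e in tree_edges}
    while green:
        for e in green:
            H = G.copy()
            H.remove_edge(*e)
            if is_proper(H, leaves):
                green.remove(e)
                G = H
                break
        else:
            raise AssertionError("no removable green edge found")
    return G
\end{verbatim}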
Note that the proof of \cref{clm:unets:TBR:pathDown} also works if $T$ is a network displayed by $N$. Hence, we get the following corollary.
\begin{corollary} \label{clm:unets:TBR:pathDown:nets}
Let $N \in \ensuremath{u\mathcal{N}_{n,r}}$ and let $N' \in \unetsxx[n,r']$ such that $N'$ is displayed by $N$.\\
Then $\dTBR(N', N) = r - r'$, that is, there exists a \textup{TBR$^-$}\xspace-sequence of length $r-r'$ from $N$ to $N'$.
\end{corollary}
\Cref{clm:unets:TBR:pathDown} and \cref{clm:unets:TBR:pathDown:nets} now allow us to construct \textup{TBR}\xspace-sequences between networks that go down tiers and then come up again. In fact, for rooted networks this can sometimes be necessary as Klawitter and Linz have shown~\cite[Lemma 13]{KL19}. However, we now show that this is never necessary for \textup{TBR}\xspace on unrooted networks.
\begin{lemma} \label{clm:unets:TBR:noNeedToGoDown}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$.\\
Then in no shortest \textup{TBR}\xspace-sequence from $N$ to $N'$ does a \textup{TBR$^-$}\xspace precede a \textup{TBR$^+$}\xspace.
\begin{proof}
Consider a minimal counterexample with $N, N' \in \ensuremath{u\mathcal{N}_n}$ such that there exists a shortest \textup{TBR}\xspace-sequence $\sigma$ from $N$ to $N'$ that uses exactly one \textup{TBR$^-$}\xspace and exactly one \textup{TBR$^+$}\xspace, and that starts with this \textup{TBR$^-$}\xspace. If $\sigma$ uses \textup{TBR$^0$}\xspace operations between the \textup{TBR$^-$}\xspace and the \textup{TBR$^+$}\xspace, then, by \cref{clm:unets:TBR:ZPtoPZ}, we can swap the \textup{TBR$^+$}\xspace forward until it directly follows the \textup{TBR$^-$}\xspace. However, then we can obtain a \textup{TBR}\xspace-sequence shorter than $\sigma$ by combining the \textup{TBR$^-$}\xspace and the \textup{TBR$^+$}\xspace into a \textup{TBR$^0$}\xspace by \cref{clm:unets:TBR:PMtoZ}; a contradiction.
\end{proof}
\end{lemma}
Combining \cref{clm:unets:TBR:pathDown,clm:unets:TBR:pathDown:nets,clm:unets:TBR:ZPtoPZ}, we easily derive the following two corollaries about short sequences that do not go down tiers before going back up again.
\begin{corollary} \label{clm:unets:TBR:distanceViaDisplayedTrees}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ with reticulation numbers $r$ and $r'$, respectively, where $r \geq r'$. Then
$$\dTBR(N, N') \leq \min\set{\dTBR(T, T') \colon T \in D(N), T' \in D(N')} + r \text{.}$$
\end{corollary}
\begin{corollary} \label{clm:unets:TBR:distanceSharedDisplayedTrees}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ with reticulation numbers $r$ and $r'$, respectively, where $r \geq r'$. Let $T \in \ensuremath{u\mathcal{T}_n}$ such that $T \in D(N) \cap D(N')$. Then
$$\dTBR(N, N') \leq r \text{.}$$
\end{corollary}
Both \cref{clm:unets:TBR:distanceViaDisplayedTrees,clm:unets:TBR:distanceSharedDisplayedTrees} can easily be proven by first finding a sequence that goes down to tier 0 and back up to tier $r$, and then combining the $r'$ \textup{TBR$^-$}\xspace with $r'$ \textup{TBR$^+$}\xspace into $r'$ \textup{TBR$^0$}\xspace using \cref{clm:unets:TBR:ZPtoPZ}.
The following lemma is the unrooted analogue of Proposition 7.7 by Bordewich {et~al.}~\cite{BLS17}. We closely follow their proof.
\begin{lemma} \label{clm:unets:TBR:existingCloseDisplayedTree}
Let $N, N' \in \ensuremath{u\mathcal{N}_n}$ such that $\dTBR(N, N') = k$. Let $T \in D(N)$. \\
Then there exists a $T' \in D(N')$ such that
$$\dTBR(T, T') \leq k \text{.}$$
\begin{proof}
The proof is by induction on $k$. If $k = 0$, then the statement trivially holds. Suppose that $k = 1$. If $T \in D(N')$, then set $T' = T$, and we have $\dTBR(T, T') = 0 \leq 1$. So assume otherwise, namely that $T \not \in D(N')$. Note that if $N'$ has been obtained from $N$ by a \textup{TBR$^+$}\xspace, then $N'$ displays $T$. Therefore, it suffices to distinguish whether $N'$ has been obtained from $N$ by a \textup{TBR$^0$}\xspace or by a \textup{TBR$^-$}\xspace.
Suppose that $N'$ has been obtained from $N$ by a \textup{TBR$^0$}\xspace that moves the edge $e = \set{u, v}$ of $N$. Fix an embedding $S$ of $T$ into $N$. Since $N'$ does not display $T$, the edge $e$ is covered by $S$. Let $\bar e$ be the edge of $T$ that gets mapped to the path of $S$ that covers $e$. Let $S_1$ and $S_2$ be the two components of $S \setminus \set{e}$. Note that $S_1, S_2$ have embeddings into $N$ and $N'$. Now, if in $N$ there exists a path $P$ from the embedding of $S_1$ to the embedding of $S_2$ that avoids $e$, then the graph consisting of $P$, $S_1$, and $S_2$ is a tree $T'$ displayed by $N'$. Otherwise $e$ is a cut-edge of $N$ and the \textup{TBR$^0$}\xspace moves $e$ to an edge $e'$ connecting the two components of $N \setminus \set{e}$. Then in $N'$ there is a path $P$ from the embedding of $S_1$ to the embedding of $S_2$. Together they form an embedding of a tree $T'$ displayed by $N'$. In both cases $T'$ can also be obtained from $T$ by moving $\bar e$ to where $P$ attaches to $S_1$ and $S_2$. If $N'$ is obtained from $N$ by a \textup{TBR$^-$}\xspace, then the first case has to apply.
Now suppose that $k \geq 2$ and that the hypothesis holds for any two networks with \textup{TBR}\xspace-distance at most $k-1$. Let $N'' \in \ensuremath{u\mathcal{N}_n}$ such that $\dTBR(N, N'') = k-1$ and $\dTBR(N'', N') = 1$. Thus by induction there are trees $T''$ and $T'$ such that $T'' \in D(N'')$ with $\dTBR(T, T'') \leq k-1$ and $T' \in D(N')$ with $\dTBR(T'', T') \leq 1$. It follows that $\dTBR(T, T') \leq k$, thereby completing the proof of the lemma.
\end{proof}
\end{lemma}
By setting one of the two networks in the previous lemma to be a phylogenetic tree and noting that the roles of $N$ and $N'$ are interchangeable, the next two corollaries are immediate consequences of \cref{clm:unets:TBR:pathDown,clm:unets:TBR:existingCloseDisplayedTree}.
\begin{corollary} \label{clm:unets:TBR:distanceToDisplayedTree}
Let $T \in \ensuremath{u\mathcal{T}_n}$, $N \in \ensuremath{u\mathcal{N}_{n,r}}$ such that $\dTBR(T, N) = k$.
Then for every $T' \in D(N)$
$$\dTBR(T, T') \leq k \text{.}$$
\end{corollary}
\begin{corollary} \label{clm:unets:TBR:distanceOfDisplayedTrees}
Let $N \in \ensuremath{u\mathcal{N}_{n,r}}$ and let $T, T' \in D(N)$.
Then
$$\dTBR(T, T') \leq r \text{.}$$
\end{corollary}
The following theorem is the unrooted analogue of Theorem 7 by Klawitter and Linz~\cite{KL19}; their proof applies straightforwardly after replacing SNPR and rooted networks with TBR and unrooted networks, respectively, and after using \cref{clm:unets:TBR:pathDown,clm:unets:TBR:existingCloseDisplayedTree} and \cref{clm:unets:TBR:treesIsometric}.
\begin{theorem} \label{clm:unets:TBR:distanceTreeNetwork}
Let $T \in \ensuremath{u\mathcal{T}_n}$ and let $N \in \ensuremath{u\mathcal{N}_{n,r}}$. Then
$$\dTBR(T, N) = \min\limits_{T' \in D(N)} \dTBR(T, T') + r \text{.}$$
\end{theorem}
\section{Connectedness and diameters} \label{sec:connectedness}
Whereas in the previous section we studied the distance between two given networks, here, we focus on global connectivity properties of several classes of phylogenetic networks under NNI, PR, and TBR. These results imply that these operations induce metrics on these spaces. For each connected metric space, we can ask about its diameter. Since a class of phylogenetic networks that contains networks with unbounded reticulation number naturally has an unbounded diameter, this question is mainly of interest for the tiers of a class. First, we recall some known results on unrooted phylogenetic trees.
\begin{theorem}[Li {et~al.} \cite{LTZ96}, Ding {et~al.} \cite{DGH11}] \label{clm:utrees:diameter}
The space $\ensuremath{u\mathcal{T}_n}$ is connected under
\begin{itemize}
\item \textup{NNI$^0$}\xspace with the diameter in $\Theta(n \log n)$,
\item \textup{PR$^0$}\xspace with the diameter in $n - \Theta(\sqrt{n})$, and
\item \textup{TBR$^0$}\xspace with the diameter in $n - \Theta(\sqrt{n})$.
\end{itemize}
\end{theorem}
\subsection{Network space}
Huber {et~al.}~\cite[Theorem 5]{HMW16} proved that the space of phylogenetic networks that includes improper networks is connected under \textup{NNI}\xspace. We reprove this for our definition of $\ensuremath{u\mathcal{N}_n}$, but first look at the tiers of this space.
\begin{theorem} \label{clm:unets:NNIconnected:tier}
Let $n \geq 0$, $r \geq 0$, and $m = n + r$.\\
Then $\ensuremath{u\mathcal{N}_{n,r}}$ is connected under \textup{NNI}\xspace with the diameter in $\Theta(m \log m)$.
\begin{proof}
Let $N \in \ensuremath{u\mathcal{N}_{n,r}}$ and let $T \in \ensuremath{u\mathcal{T}_n}$ be a tree displayed by $N$.
We show that $N$ can be transformed into a sorted $r$-handcuffed caterpillar $N^*$ with $\ensuremath{\mathcal{O}}(m \log m)$ \textup{NNI}\xspace. Our process is as follows, and is illustrated in \cref{fig:unets:NNIdiam:process}.
\begin{description}
\item[Step 1.] Transform $N$ into a network $N_T$ that is tree-based on $T$.
\item[Step 2.] Transform $N_T$ into a handcuffed tree $N_H$ on the leaves 1 and 2.
\item[Step 3.] Transform $N_H$ into a sorted handcuffed caterpillar $N^*$.
\end{description}
\begin{figure}[htb]
\centering
\includegraphics{NNIlowerBoundProcess}
\caption{The process used in the proof of \cref{clm:unets:NNIconnected:tier}.
We transform a network $N$ into a tree-based network $N_T$, then into a handcuffed tree $N_H$, and finally into a sorted handcuffed caterpillar $N^*$.}
\label{fig:unets:NNIdiam:process}
\end{figure}
We now describe this process in detail.
For \textbf{Step 1}, we show how to construct an \textup{NNI$^0$}\xspace-sequence $\sigma$ from $N$ to $N_T$, and we give a bound on the length of $\sigma$.
Let $S$ be an embedding of $T$ into $N$, that is, $S$ is a subdivision of $T$ and a subgraph of $N$. Colour all edges of $N$ used by $S$ black and all other edges green.
Note that this yields green, connected subgraphs $G_1, \ldots, G_l$ of $N$; more precisely, the $G_i$ are the connected components of the graph induced by the green edges of $N$.
Note that each $G_i$ has at least two vertices in $S$, since otherwise $N$ would not be proper.
Furthermore, if each $G_i$ consists of a single edge, then $N$ is tree-based on $T$.
Assuming otherwise, we show how to break the $G_i$ apart.
First, if there is a triangle on vertices $v_1, u, v_2$ where $v_1$ and $v_2$ are
adjacent vertices in $S$ and $u$ is their neighbour in $G_i$, then change the embedding of $S$ (and $T$) so that it takes the path $v_1, u, v_2$ instead of $v_1, v_2$ (see \cref{fig:unets:NNIdiam:tbased}a).
Otherwise, there is an edge $\set{v, u}$ where $v$ is in $S$ and the other vertices adjacent to $u$ are not adjacent to $v$. Let $\set{u, w_1}$ and $\set{u, w_2}$ be the other edges incident to $u$. Apply an \textup{NNI$^0$}\xspace to move $\set{u, w_1}$ to $S$ as in \cref{fig:unets:NNIdiam:tbased}b.
Note that each such \textup{NNI$^0$}\xspace decreases the number of vertices in green subgraphs and increases the number of vertices in $S$. Furthermore, the resulting network is clearly proper.
Therefore, repeat these cases until all $G_i$ consist of single edges.
Let the resulting graph be $N_T$.
Since there are at most $2(r-1)$ vertices in all green subgraphs that are not in $S$, the number of required \textup{NNI$^0$}\xspace for Step 1 is at most
\begin{equation} \label{eq:unets:NNI:diam1}
2(r-1)\text{.}
\end{equation}
\begin{figure}[htb]
\centering
\includegraphics{NNIlowerBoundProcessTbased}
\caption{Transformation and \textup{NNI$^0$}\xspace used in Step 1 to obtain a tree-based network $N_T$.}
\label{fig:unets:NNIdiam:tbased}
\end{figure}
In \textbf{Step 2} we transform $N_T$ into a handcuffed tree $N_H$ on the leaves 1 and 2.
Let $M = \set{\set{u_1, v_1}, \set{u_2, v_2}, \ldots, \set{u_r, v_r}}$ be the set of green edges in $N_T$, that is, the edges that are not in the embedding $S$ of $T$ into $N_T$.
Without loss of generality, assume that for $i \in \set{1, \ldots, r}$ the distance between $u_i$ and leaf $1$ in $S$ is at most the distance of $v_i$ to leaf $1$ in $S$.
The idea is to sweep along the edges of $S$ to move the $u_i$ towards leaf $1$ and then do the same for the $v_i$ towards leaf $2$.
For an edge $e$ of $T$, let $P_e$ be the path of $S$ corresponding to $e$.
Let $e_1$ be the edge of $T$ incident to leaf $1$.
Impose directions on the edges of $T$ towards leaf $1$. Do the same for the edges of $S$ accordingly.
This gives a partial order $\preceq$ on the edges of $T$ with $e_1$ as maximum.
Let $\prec$ be a linear extension of $\preceq$ on the edges of $T$.
Let $e = (x, y)$ be the minimum of $\prec$.
Let $P_e = (x, \ldots, y)$ be the corresponding path in $S$.
From $x$ to $y$ along $P_e$, proceed as follows.
\begin{enumerate}[label=(\roman*)]
\item If there is an edge $(u_i, v_l)$ in $P_e$, then swap $u_i$ and $v_l$ with an \textup{NNI$^0$}\xspace.
\item If there is an edge $(u_i, u_j)$ in $P_e$ then move the $u_j$ endpoint of the green edge incident to $u_j$ onto the green edge incident to $u_i$ with an \textup{NNI$^0$}\xspace.
\item Otherwise, if there is an edge $(u_i, y)$ in $P_e$, then move $u_i$ beyond $y$.
\end{enumerate}
This is illustrated in \cref{fig:unets:NNIdiam:sweep}. Informally speaking, we stack $u_j$ onto $u_i$ so they can move together towards $e_1$.
Repeat this process for each edge in the order given by $\prec$.
For the last edge $e_1$, ignore case (iii).
Next, ``unpack'' the stacked $u_i$'s on $e_1$.
We now count the number of \textup{NNI$^0$}\xspace needed.
Firstly, each $v_l$ is swapped at most once with a $u_i$.
Secondly, each $u_j$ is moving to and from a green edge at most once.
Furthermore, each vertex of $S$ corresponding to a vertex of $T$ is swapped at most twice.
Hence, the total number of \textup{NNI$^0$}\xspace required is at most
\begin{equation} \label{eq:unets:NNI:diam2}
3r + 2n \text{.}
\end{equation}
\begin{figure}[htb]
\centering
\includegraphics{NNIlowerBoundProcessSweep2}
\caption{\textup{NNI$^0$}\xspace used in Step 2
to obtain a handcuffed tree $N_H$. The label of the moving endpoint follows this endpoint to its regrafting point.}
\label{fig:unets:NNIdiam:sweep}
\end{figure}
Repeat this process for the $v_i$ towards leaf $2$. Since the $v_i$ do not have to be swapped with $u_j$, the total number of \textup{NNI$^0$}\xspace required for this is at most
\begin{equation} \label{eq:unets:NNI:diam3}
2r + 2n \text{.}
\end{equation}
Note that the resulting network may not yet be a handcuffed tree as the order of the $u_i$ and $v_j$ may be different.
Hence, lastly in Step 2, to obtain $N_H$ sort the edges with the mergesort-like algorithm by Li {et~al.}~\cite[Lemma 2]{LTZ96}. They show that the required number of \textup{NNI$^0$}\xspace for this is at most
\begin{equation} \label{eq:unets:NNI:diam4}
r (1 + \log r) \text{.}
\end{equation}
For \textbf{Step 3}, consider the path $P$ in $S$ from leaf $1$ to $2$.
If $P$ contains only one pendant subtree, then $N_H$ is handcuffed on the cherry $\set{1, 2}$.
Otherwise, use \textup{NNI$^0$}\xspace to reduce it to one pendant subtree. This takes at most $n$ \textup{NNI$^0$}\xspace.
Next, transform the pendant subtree of $P$ into a caterpillar to obtain a handcuffed caterpillar, again with at most $n$ \textup{NNI$^0$}\xspace.
Lastly, sort the leaves with the algorithm from Li {et~al.}~\cite[Lemma 2]{LTZ96} to obtain the sorted handcuffed caterpillar $N^*$.
The required number of \textup{NNI$^0$}\xspace to get from $N_H$ to $N^*$ is at most
\begin{equation} \label{eq:unets:NNI:diam5}
2n + n \log n \text{.}
\end{equation}
Since we can transform any network $N\in\ensuremath{u\mathcal{N}_{n,r}}$ into $N^*$, it follows that $\ensuremath{u\mathcal{N}_{n,r}}$ is connected under $\textup{NNI}\xspace$.
Furthermore, summing \crefrange{eq:unets:NNI:diam1}{eq:unets:NNI:diam5} and multiplying the result by two shows that the diameter of $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{NNI$^0$}\xspace is at most
\begin{equation} \label{eq:unets:NNI:diam6}
2(6n + 8r + n \log n + r \log r) \in \ensuremath{\mathcal{O}}((n + r) \log (n + r)) \text{.}
\end{equation}
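For concreteness, the individual bounds sum to
$$2(r-1) + (3r + 2n) + (2r + 2n) + r(1 + \log r) + (2n + n \log n) = 6n + 8r + n \log n + r \log r - 2 \text{,}$$
so twice the sum is indeed within the bound stated in \cref{eq:unets:NNI:diam6}.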
Francis {et~al.}~\cite[Theorem 2]{FHMW17} gave the lower bound $\Omega(m \log m)$ on the diameter of tier $r$ of the space that allows improper networks under \textup{NNI$^0_{\text{improper}}$}\xspace (\textup{NNI$^0$}\xspace without the properness condition).
Their proof consists of two parts: a lower bound on the total number $\abs{\ensuremath{u\mathcal{N}_{n,r}}}$ of networks in a tier, and upper bounds on the number of networks that can be reached from one network with each fixed number of \textup{NNI$^0_{\text{improper}}$}\xspace.
The diameter of $\ensuremath{u\mathcal{N}_{n,r}}$ is then at least the smallest number of moves for which the aforementioned upper bound exceeds the lower bound on $\abs{\ensuremath{u\mathcal{N}_{n,r}}}$.
Our version of \textup{NNI$^0$}\xspace is stricter than theirs as we do not allow improper networks. Hence, the number of networks that can be reached with a fixed number of \textup{NNI$^0$}\xspace is at most the number of networks that can be reached with the same number of \textup{NNI$^0_{\text{improper}}$}\xspace. Furthermore, their lower bound on $\abs{\ensuremath{u\mathcal{N}_{n,r}}}$ is found by counting the number of \emph{Echidna} networks, a class of networks only containing proper networks. Combining these two observations, we see that their lower bound for the diameter of $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{NNI$^0_{\text{improper}}$}\xspace is also a lower bound for $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{NNI$^0$}\xspace.
\end{proof}
\end{theorem}
From \cref{clm:unets:NNIconnected:tier} we get the following corollary.
\begin{corollary} \label{clm:unets:NNI:connected}
The space $\ensuremath{u\mathcal{N}_n}$ is connected under \textup{NNI}\xspace with unbounded diameter.
\end{corollary}
Since, by \cref{clm:NNIisPRisTBR}, every \textup{NNI}\xspace is also a \textup{PR}\xspace and \textup{TBR}\xspace, the statements in \cref{clm:unets:NNIconnected:tier} and \cref{clm:unets:NNI:connected} also hold for \textup{PR}\xspace and \textup{TBR}\xspace. This observation has been made before by Francis {et~al.} \cite{FHMW17} for tiers of the space of networks that allow improper networks.
\begin{corollary} \label{clm:unets:TBR:connected}
The spaces $\ensuremath{u\mathcal{N}_n}$ and $\ensuremath{u\mathcal{N}_{n,r}}$ are connected under the \textup{PR}\xspace and \textup{TBR}\xspace operations.
\end{corollary}
We now look at the diameters of $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{PR}\xspace and \textup{TBR}\xspace.
\begin{theorem} \label{clm:unets:PR:tierConnected}
Let $n \geq 0$, $r \geq 0$.\\
Then the diameter of $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{PR$^0$}\xspace is in $\Theta(n + r)$ with the upper bound $n + 2r$.
\begin{proof}
The asymptotic lower bound was proven by Francis {et~al.}~\cite[Proposition 4]{FHMW17}.
Concerning an upper bound, Janssen {et~al.}~\cite[Theorem 4.22]{JJEvIS17} showed that the distance of two improper networks $M$ and $M'$ under \textup{PR}\xspace is at most $n + \frac{8}{3}r$, of which $\frac{2}{3}r$ \textup{PR$^0$}\xspace moves are used to transform $M$ and $M'$ into proper networks $N$ and $N'$. Since $N$ and $N'$ are already proper, these moves can be omitted, and hence the \textup{PR}\xspace-distance of $N$ and $N'$ is at most $n + \frac{8}{3}r - \frac{2}{3}r = n + 2r$.
\end{proof}
\end{theorem}
\begin{theorem} \label{clm:unets:TBR:tierConnected}
Let $n \geq 0$, $r \geq 0$.\\
Then the diameter of $\ensuremath{u\mathcal{N}_{n,r}}$ under \textup{TBR}\xspace is in $\Theta(n + r)$ with the upper bound $$n - 3 - \floor{\frac{\sqrt{n - 2} - 1}{2}} + r\text{.}$$
\begin{proof}
As for \textup{PR}\xspace, the lower bound was proven by Francis {et~al.}~\cite[Proposition 4]{FHMW17}. In \cref{clm:unets:TBR:distanceViaDisplayedTrees} we showed that the \textup{TBR}\xspace-distance of two networks $N, N' \in \ensuremath{u\mathcal{N}_{n,r}}$ that display trees $T$ and $T' \in \ensuremath{u\mathcal{T}_n}$, respectively, is at most $\dTBR(T, T') + r$. Since $\dTBR(T, T') \leq n - 3 - \floor{\frac{\sqrt{n - 2} - 1}{2}}$ by Theorem 1.1 of Ding {et~al.}~\cite{DGH11}, it follows that $\dTBR(N, N') \leq n - 3 - \floor{\frac{\sqrt{n - 2} - 1}{2}} + r$.
\end{proof}
\end{theorem}
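For reference, the explicit upper bounds of the two previous theorems are easily evaluated; the following Python helper (an illustration only, with function names of our choosing, and requiring $n \geq 2$ for the square root) computes them.
\begin{verbatim}
import math

def pr_tier_diameter_ub(n, r):
    # Upper bound on the PR diameter of tier r.
    return n + 2 * r

def tbr_tier_diameter_ub(n, r):
    # Upper bound on the TBR diameter of tier r:
    # n - 3 - floor((sqrt(n - 2) - 1) / 2) + r, for n >= 2.
    return n - 3 - math.floor((math.sqrt(n - 2) - 1) / 2) + r

print(tbr_tier_diameter_ub(11, 2))  # 11 - 3 - 1 + 2 = 9
\end{verbatim}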
\subsection{Networks displaying networks}
Bordewich~\cite[Proposition 2.9]{Bor03} and Mark {et~al.}~\cite{MMS16} showed that the space of rooted phylogenetic trees that display a set of triplets (trees on three leaves) is connected under \textup{NNI}\xspace.
Furthermore, Bordewich {et~al.}~\cite{BLS17} showed that the space of rooted phylogenetic networks that display a set of rooted phylogenetic trees is connected.
We give a general result for unrooted phylogenetic networks that display a set of networks.
For this, we will use \cref{clm:unets:TBR:pathDown}, which, as we recall, guarantees that if a network $N \in \ensuremath{u\mathcal{N}_{n,r}}$ displays a tree $T \in \ensuremath{u\mathcal{T}_n}$, then there is a sequence of $r$ \textup{TBR$^-$}\xspace from $N$ to $T$.
\begin{proposition} \label{clm:unets:displayingSet:connectivity}
Let $P = \set{P_1, ..., P_k}$ be a set of $k$ phylogenetic networks $P_i$ on $Y_i \subseteq X = \set{1, \ldots, n}$.\\
Then $\ensuremath{u\mathcal{N}_n}(P)$ is connected under \textup{NNI}\xspace, \textup{PR}\xspace, and \textup{TBR}\xspace.
\begin{proof}
Define the network $N_P \in \ensuremath{u\mathcal{N}_n}(P)$ as follows. Let $P_0 \in \ensuremath{u\mathcal{T}_n}$ be the caterpillar where the leaves are ordered from $1$ to $n$; that is, $P_0$ contains a path $(v_2, v_3, \ldots, v_{n-1})$ such that leaf $i$ is incident to $v_i$, leaf $1$ is incident to $v_2$, and leaf $n$ is incident to $v_{n-1}$. Let $e_i$ be the edge incident to leaf $i$ in $P_0$.
Subdivide $e_i$ with $k$ vertices $u_i^1, \ldots, u_i^k$.
Now, for $P_j \in P$, $j \in \set{1, \ldots, k}$, identify leaf $i$ of $P_j$ with $u_i^j$ of $P_0$ and remove its label $i$.
Finally, in the resulting network suppress any degree two vertex. This is necessary if one or more of the $P_j$ have fewer than $n$ leaves.
The resulting network $N_P$ now displays all networks in $P$. An example is given in \cref{fig:unets:CanonicalDisplayingNetwork}.
\begin{figure}[htb]
\begin{center}
\includegraphics{CanonicalDisplayingNetwork}
\caption{The canonical network $N_P \in \unetsx[5]$ that displays the set of phylogenetic networks $P = (P_1, P_2)$ with the underlying caterpillar $P_0$.}
\label{fig:unets:CanonicalDisplayingNetwork}
\end{center}
\end{figure}
Let $N \in \ensuremath{u\mathcal{N}_n}(P)$. Construct a \textup{TBR}\xspace-sequence from $N$ to $N_P$ by, roughly speaking, building a copy of $N_P$ attached to $N$, and then removing the original parts of $N$. First, add $P_0$ to $N$ by adding an edge $e = \set{v_1, v_2}$ from the edge incident to leaf 1 to the edge incident to leaf 2 with a \textup{TBR$^+$}\xspace. Then add another edge from $e$ to the edge incident to leaf 3, and so on up to leaf $n$. Colour all newly added edges and the edges incident to the leaves blue, and all other edges red. Note that the blue edges now give an embedding of $P_0$ into the current network. Now, ignoring all red edges, it is straightforward to add the $P_j$, $j \in \{1, \ldots, k\}$, one after the other with \textup{TBR$^+$}\xspace such that the resulting network displays $N_P$. For example, one could start by adding a tree displayed by $P_j$ and then adding any other edges. The first part works similarly to the construction of $P_0$, and the second part is possible by \cref{clm:unets:TBR:pathDown}. Lastly, remove all red edges with \textup{TBR$^-$}\xspace such that every intermediate network is proper. This is again possible by \cref{clm:unets:TBR:pathDown} and yields the network $N_P$. Note that in the first two stages the red edges (plus external edges) display $P$ and in the last phase the non-red edges display $P$.
Since we only used \textup{TBR$^+$}\xspace and \textup{TBR$^-$}\xspace operations, the statement also holds for \textup{PR}\xspace. For \textup{NNI}\xspace, by \cref{clm:PRMtoNNIM} we can replace each of these operations that add or remove an edge $e$ by \textup{NNI}\xspace-sequences that only move and remove or add the edge $e$. Hence, the statement also holds for \textup{NNI}\xspace.
\end{proof}
\end{proposition}
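The construction of the canonical network $N_P$ in the proof above is easy to mechanise. The following Python sketch (an illustration only; it uses \texttt{networkx}, assumes $n \geq 3$, assumes that the leaves of the input networks are exactly their integer-labelled nodes, and ignores parallel edges) builds $P_0$, subdivides its pendant edges, and glues the $P_j$ onto the subdivision vertices.
\begin{verbatim}
import networkx as nx

def caterpillar(n):
    # The caterpillar P0 on leaves 1..n (n >= 3), with spine
    # v_2, ..., v_{n-1} and leaf i attached to v_i as in the proof.
    G = nx.Graph()
    nx.add_path(G, [("v", i) for i in range(2, n)])
    G.add_edge(("leaf", 1), ("v", 2))
    G.add_edge(("leaf", n), ("v", n - 1))
    for i in range(2, n):
        G.add_edge(("leaf", i), ("v", i))
    return G

def canonical_display_network(networks, n):
    # Build N_P: subdivide each pendant edge of P0 with one vertex
    # per input network, identify that network's leaves with the
    # subdivision vertices, and suppress leftover degree-2 vertices
    # (from inputs that miss some leaves).
    G = caterpillar(n)
    k = len(networks)
    for i in range(1, n + 1):
        (nbr,) = G.neighbors(("leaf", i))
        G.remove_edge(("leaf", i), nbr)
        chain = [nbr] + [("u", i, j) for j in range(1, k + 1)]
        nx.add_path(G, chain + [("leaf", i)])
    for j, P in enumerate(networks, start=1):
        relabel = {x: ("u", x, j) if isinstance(x, int) else ("w", j, x)
                   for x in P.nodes}
        G = nx.compose(G, nx.relabel_nodes(P, relabel))
    for w in [x for x in G.nodes if G.degree(x) == 2]:
        a, b = G.neighbors(w)
        G.remove_node(w)
        G.add_edge(a, b)
    return G
\end{verbatim}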
For the following corollary, note that a quartet is an unrooted binary tree on four leaves and a quarnet is an unrooted binary, level-1 network on four leaves \cite{HMSW18}.
\begin{corollary}
Let $X = \set{1, ..., n}$.
Let $P$ be a set of phylogenetic trees on $X$, a set of quartets on $X$, or a set of quarnets on $X$.
Then $\ensuremath{u\mathcal{N}_n}(P)$ is connected under \textup{NNI}\xspace, \textup{PR}\xspace, and \textup{TBR}\xspace.
\end{corollary}
\subsection{Tree-based networks}
A related but more restrictive concept than displaying a tree is that of being tree-based. So, next, we consider the class of tree-based networks. We start with the tiers of $\ensuremath{u\mathcal{TB}_n}(T)$, which is the set of tree-based networks that have the tree $T$ as base tree.
\begin{theorem} \label{clm:unets:tbased:connectedness}
Let $T \in \ensuremath{u\mathcal{T}_n}$.
Then the space $\ensuremath{u\mathcal{TB}_{n,r}}(T)$ is connected under
\begin{itemize}
\item \textup{TBR}\xspace with the diameter being between $\ceil{\frac{r}{3}}$ and $r$,
\item \textup{PR}\xspace with the diameter being between $\ceil{\frac{r}{2}}$ and $2r$, and
\item \textup{NNI}\xspace with the diameter being in $\ensuremath{\mathcal{O}}(r(n + r))$.
\end{itemize}
\begin{proof}
We start with the proof for TBR. Let $N, N' \in \ensuremath{u\mathcal{TB}_{n,r}}(T)$. Fix embeddings of $T$ into $N$ and $N'$, and let $S = \set{e_1, \ldots, e_r}$ and $S' = \set{e_1', \ldots, e_r'}$ be the sets of edges of $N$ and $N'$, respectively, that are not covered by these embeddings. Since $N$ is tree-based, $S$ and $S'$ consist of vertex-disjoint edges. Following the embeddings of $T$ into $N$ and $N'$, it is straightforward to move each edge $e_i$ with a \textup{TBR$^0$}\xspace from its position in $N$ to the position of $e_i'$ in $N'$. In total, this requires at most $r$ \textup{TBR$^0$}\xspace. Since every intermediate network is clearly in $\ensuremath{u\mathcal{TB}_{n,r}}(T)$, this gives connectedness of $\ensuremath{u\mathcal{TB}_{n,r}}(T)$ and an upper bound of $r$ on the diameter. For the lower bound, consider a network $M$ with $r$ pairs of parallel edges and a network $M'$ without any. Observe that a \textup{TBR$^0$}\xspace can break at most three pairs of parallel edges, and then only if a pair of parallel edges is removed and attached to two other pairs of parallel edges. Hence, for these particular $M$ and $M'$ we have that $\dTBR(M, M') \geq \ceil{\frac{r}{3}}$.
The constructed \textup{TBR$^0$}\xspace-sequence for $N$ to $N'$ above can be converted straightforwardly into a \textup{PR$^0$}\xspace-sequence from $N$ to $N'$ of length at most $2r$. For the lower bound, let $M$ and $M'$ be as above and note that a \textup{PR}\xspace can break at most two pairs of parallel edges. Hence, $\dPR(M, M') \geq \ceil{\frac{r}{2}}$.
By \cref{clm:PRZtoNNIZ}, the \textup{PR}\xspace-sequence can be used to construct an \textup{NNI}\xspace-sequence from $N$ to $N'$ that only moves the edges $e_i$ along paths of the embedding of $T$. Since the \textup{PR}\xspace-sequence has length at most $2r$ and each \textup{PR}\xspace can be replaced by an \textup{NNI}\xspace sequence of length at most $\ensuremath{\mathcal{O}}(n + r)$, this gives the upper bound of $\ensuremath{\mathcal{O}}(r(n + r))$ on the diameter of $\ensuremath{u\mathcal{TB}_{n,r}}(T)$ under \textup{NNI}\xspace.
\end{proof}
\end{theorem}
We use \cref{clm:unets:tbased:connectedness} to prove connectedness of other spaces of tree-based networks.
\begin{theorem} \label{clm:unets:tbased:connectednessOther}
Let $T \in \ensuremath{u\mathcal{T}_n}$.\\
Then the spaces $\ensuremath{u\mathcal{TB}_n}(T)$, $\ensuremath{u\mathcal{TB}_{n,r}}$, and $\ensuremath{u\mathcal{TB}_n}$ are each connected under \textup{TBR}\xspace, \textup{PR}\xspace, and \textup{NNI}\xspace.
Moreover, the diameter of $\ensuremath{u\mathcal{TB}_{n,r}}$ is in $\Theta(n + r)$ under \textup{TBR}\xspace and \textup{PR}\xspace and in $\ensuremath{\mathcal{O}}(n \log n + r(n + r))$ under \textup{NNI}\xspace.
\begin{proof}
Assume without loss of generality that $T$ has the cherry $\set{1, 2}$.
First, let $N$ and $N'$ be in tiers $r$ and $r'$ of $\ensuremath{u\mathcal{TB}_n}(T)$, respectively, such that they are $r$- and $r'$-handcuffed on the cherry $\set{1, 2}$. Then $\dNNI(N, N') = \abs{r' - r}$, as we can decrease the number of handcuffs with \textup{NNI$^-$}\xspace.
Since, by \cref{clm:unets:tbased:connectedness}, the tiers of $\ensuremath{u\mathcal{TB}_{n,r}}(T)$ are connected, the connectedness of $\ensuremath{u\mathcal{TB}_n}(T)$ follows.
Second, let $N, N' \in \ensuremath{u\mathcal{TB}_{n,r}}$ be tree-based networks on $T$ and $T'$ respectively, and with an $r$-burl on the edge incident to leaf $1$.
Ignoring the burls, by \cref{clm:utrees:diameter}, $N$ can be transformed into $N'$ by transforming $T$ into $T'$ with $\ensuremath{\mathcal{O}}(n \log n)$ \textup{NNI$^0$}\xspace or with $\ensuremath{\mathcal{O}}(n)$ \textup{PR$^0$}\xspace or \textup{TBR$^0$}\xspace. With \cref{clm:unets:tbased:connectedness}, the connectedness of $\ensuremath{u\mathcal{TB}_{n,r}}$ and the upper bounds on the diameter follow.
The lower bound on the diameter under \textup{PR}\xspace and \textup{TBR}\xspace also follows from \cref{clm:utrees:diameter} and \cref{clm:unets:tbased:connectedness}.
Lastly, the connectedness of $\ensuremath{u\mathcal{TB}_n}$ follows similarly from the connectedness of $\ensuremath{u\mathcal{T}_n}$ and $\ensuremath{u\mathcal{TB}_{n,r}}$.
\end{proof}
\end{theorem}
\subsection{Level-$k$ networks}
To conclude this section, we prove the connectedness of the space of level-$k$ networks.
\begin{theorem}\label{clm:unets:lvlk:connectedness:TBR}
Let $n \geq 2$ and $k \geq 1$.\\
Then, the space $\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ is connected under \textup{TBR}\xspace and \textup{PR}\xspace with unbounded diameter.
\begin{proof}
Let $N \in \ensuremath{u\mathcal{LV}\text{-}k_{n}}$ and $T \in \ensuremath{u\mathcal{T}_n}$.
We show that $N$ can be transformed into the network $M \in \ensuremath{u\mathcal{LV}\text{-}k_{n}}$ that can be obtained from $T$ by adding a $k$-burl to the edge incident to leaf $1$.
First, create a $k$-burl in $N$ on the edge incident to leaf $1$. This can be done using $k$ \textup{PR$^+$}\xspace.
Next, using \cref{clm:unets:TBR:pathDown} remove all other blobs. This gives a network $M'$ which consists of a tree $T'$ with a $k$-burl at leaf $1$. There is a \textup{PR$^0$}\xspace-sequence from $T'$ to $T$, which is easily converted into a sequence from $M'$ to $M$. This proves the connectedness of $\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ under \textup{PR}\xspace and also \textup{TBR}\xspace.
Lastly, note that the diameter is unbounded because the number of possible reticulations in a level-$k$ network is unbounded.
\end{proof}
\end{theorem}
Note that an \textup{NNI$^+$}\xspace cannot directly create a pair of parallel edges.
We may instead add a triangle with an \textup{NNI$^+$}\xspace and then use an \textup{NNI$^0$}\xspace to transform it into a pair of parallel edges.
However, if the triangle is added within a level-$k$ blob of a level-$k$ network, then this would increase the level. Therefore, to prove connectedness of level-$k$ networks under \textup{NNI}\xspace, we use the same idea as for \textup{PR}\xspace but are more careful not to increase the level.
\begin{theorem}\label{clm:unets:lvlk:connectedness:NNI}
Let $n \geq 3$ and $k \geq 1$.\\
Then, the space $\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ is connected under \textup{NNI}\xspace with unbounded diameter.
\begin{proof}
Let $N \in \ensuremath{u\mathcal{LV}\text{-}k_{n}}$ and let $T \in \ensuremath{u\mathcal{T}_n}$.
Like in the proof of \cref{clm:unets:lvlk:connectedness:TBR}, we want to transform $N$ into a network $M$ obtained from $T$ by adding a $k$-burl to the edge incident to leaf $1$.
Let $B$ be a level-$k$ blob of $N$. Assume that $N$ contains another blob $B'$.
By \cref{clm:unets:TBR:pathDown} there is a \textup{PR$^-$}\xspace-sequence that removes $B'$.
Use \cref{clm:PRMtoNNIM} to substitute this sequence with an \textup{NNI}\xspace-sequence that reduces $B'$ to a level-1 blob.
Note that this can be done locally within blob $B'$ and its incident edges.
Therefore, this process does not increase the level of a network along this sequence.
If $B'$ is now a cycle of size at least three, then we can shrink it to a triangle, if necessary, and remove it with an \textup{NNI$^-$}\xspace.
If $B'$ is a pair of parallel edges and one of its vertices is adjacent to a degree three vertex $v$ that is not part of a level-$k$ blob, then use an \textup{NNI$^0$}\xspace either to enlarge $B'$ into a triangle by including $v$ or to merge it with the blob containing $v$.
Next, either remove the resulting triangle, or repeat the process above to remove the new blob.
Otherwise, ignore $B'$ for now and continue with another blob of the current network that is neither $B'$ nor $B$.
When this process terminates, we arrive at a network that has only blob $B$, and, potentially, pairs of parallel edges that are incident to both $B$ and a leaf. That is the case since a pair of parallel edges incident to a degree three vertex not in $B$ could be removed with an \textup{NNI$^0$}\xspace and an \textup{NNI$^-$}\xspace.
If the edge incident to leaf $1$ contains a pair of parallel edges or is incident to a degree three vertex not in $B$, then use $k-1$ \textup{NNI$^+$}\xspace and \textup{NNI$^0$}\xspace (or $k$ in the latter case) to create a $k$-burl next to leaf $1$.
Otherwise, if $B$ is incident to three or more cut-edges, then one of them is not incident to leaf $1$ and can be moved to the edge incident to leaf $1$ with an \textup{NNI$^0$}\xspace-sequence. If $B$ is incident to two or fewer cut-edges, there is a vertex incident to three cut edges (since $n \geq 3$) and one of them can be moved to the edge incident to leaf $1$ with an \textup{NNI$^0$}\xspace-sequence. Then apply the first case again to create a $k$-burl. Finally, remove $B$ and any remaining pair of parallel edges.
This gives a network $M'$ which consists of a tree $T'$ with a $k$-burl at leaf $1$. There is an \textup{NNI$^0$}\xspace-sequence from $T'$ to $T$, which is easily converted into a sequence from $M'$ to $M$. Lastly, note that the diameter is unbounded because for each $r\geq 0$, there is a level-$k$ network with $r$ reticulations.
\end{proof}
\end{theorem}
\section{Isometric relations between spaces} \label{sec:isometric}
Recall that a space $\ensuremath{\mathcal{C}_n}$ is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under a rearrangement operation, say TBR, if the TBR-distance of two networks in $\ensuremath{\mathcal{C}_n}$ is the same as their TBR-distance in $\ensuremath{u\mathcal{N}_n}$. In this section, we investigate this question for $\ensuremath{u\mathcal{T}_n}$ under \textup{TBR}\xspace, and for tree-based networks and level-$k$ networks under \textup{TBR}\xspace and \textup{PR}\xspace.
We start with $\ensuremath{u\mathcal{T}_n}$. The proof of the following theorem follows the proof by Bordewich {et~al.}~\cite[Proposition 7.1]{BLS17} for their equivalent statement for SNPR on rooted phylogenetic trees and networks closely.
\begin{theorem} \label{clm:unets:TBR:treesIsometric}
The space $\ensuremath{u\mathcal{T}_n}$ is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{TBR}\xspace. Moreover, every shortest \textup{TBR}\xspace-sequence from $T \in \ensuremath{u\mathcal{T}_n}$ to $T' \in \ensuremath{u\mathcal{T}_n}$ only uses \textup{TBR$^0$}\xspace.
\begin{proof}
Let $\ensuremath{\dist_\mathcal{T}}$ and $\ensuremath{\dist_\mathcal{N}}$ be the \textup{TBR}\xspace-distance in $\ensuremath{u\mathcal{T}_n}$ and $\ensuremath{u\mathcal{N}_n}$ respectively. To prove the statement, it suffices to show that $\ensuremath{\dist_\mathcal{T}}(T, T') = \ensuremath{\dist_\mathcal{N}}(T, T')$ for every pair $T, T' \in \ensuremath{u\mathcal{T}_n}$. Note that $\ensuremath{\dist_\mathcal{T}}(T, T') \geq \ensuremath{\dist_\mathcal{N}}(T, T')$ holds by definition. To prove the converse, let $\sigma = (T = N_0, N_1, \ldots, N_k = T')$ be a shortest \textup{TBR}\xspace-sequence from $T$ to $T'$. Consider the following colouring of the edges of each $N_i$, for $i \in \set{0, \ldots, k}$. Colour all edges of $T = N_0$ blue. For $i \in \set{1, \ldots, k}$ preserve the colouring of $N_{i-1}$ to a colouring of $N_i$ for all edges except those affected by the \textup{TBR}\xspace. In particular, an edge that gets added or moved is coloured red, an edge resulting from a vertex suppression is coloured blue if the two merged edges were blue and red otherwise, and the edges resulting from an edge subdivision are coloured like the subdivided edge.
Let $F_i$ be the graph obtained from $N_i$ by removing all red edges. We claim that $F_i$ is a forest with at most $k + 1$ components. Since $F_0 = T$, the statement holds for $i = 0$. If $N_i$ is obtained from $N_{i-1}$ by a \textup{TBR$^+$}\xspace, then $F_i = F_{i-1}$. If $N_i$ is obtained from $N_{i-1}$ by a \textup{TBR$^0$}\xspace or \textup{TBR$^-$}\xspace, then at most one component gets split. Note that $F_k$ is a so-called agreement forest for $T$ and $T'$ and thus $\ensuremath{\dist_\mathcal{T}}(T, T') \leq k = \ensuremath{\dist_\mathcal{N}}(T, T')$ by Theorem~2.13 by Allen and Steel~\cite{AS01}. Furthermore, if $\sigma$ used a \textup{TBR$^+$}\xspace, then the forest $F_k$ would contain at most $k$ components. However, then $\ensuremath{\dist_\mathcal{T}}(T, T') < k$; a contradiction.
\end{proof}
\end{theorem}
Francis {et~al.}~\cite{FHMW17} gave the example in \cref{fig:unets:NNI:tierNonIsometric} to show that the tiers $\ensuremath{u\mathcal{N}_{n,r}}$ for $n \geq 5$ and $r > 0$ are not isometric subgraphs of $\ensuremath{u\mathcal{N}_n}$ under \textup{NNI}\xspace. Their question of whether tier zero, $\ensuremath{u\mathcal{T}_n}$, is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{NNI}\xspace remains open.
\begin{lemma} \label{clm:unets:NNI:tierNonIsometric}
Let $n \geq 5$ and $r \geq 0$. Then the space $\ensuremath{u\mathcal{N}_{n,r}}$ is not an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{NNI}\xspace.
\end{lemma}
\begin{figure}[htb]
\centering
\includegraphics{NNInonIsometric}
\caption{An \textup{NNI}\xspace-sequence from $N$ to $N'$ using an \textup{NNI$^+$}\xspace that adds $f$, an \textup{NNI$^0$}\xspace that moves $e$, and an \textup{NNI$^-$}\xspace that removes $e'$. A shortest \textup{NNI$^0$}\xspace-sequence from $N$ to $N'$ has length three.}
\label{fig:unets:NNI:tierNonIsometric}
\end{figure}
\begin{lemma} \label{clm:unets:PR:tierNonIsometric}
For $n=4$ and $r=13$ the space $\ensuremath{u\mathcal{N}_{n,r}}$ is not an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{PR}\xspace.
\begin{proof}
For the networks $N$ and $N'$ in $\ensuremath{u\mathcal{N}_{n,r}}$ shown in \cref{fig:unets:PR:nonIsometric} there is a length three \textup{PR}\xspace-sequence that traverses tier $r+1$, for example the depicted sequence $\sigma = (N = N_0, N_1, N_2, N_3 = N')$. To prove the statement, we show that every \textup{PR$^0$}\xspace-sequence from $N$ to $N'$ has length at least four.
The networks $N$ and $N'$ contain the highlighted (sub)blobs $B_1$, $B_2$, (resp. $B_1'$ and $B_2'$), $B_3$, and $B_4$. Observe that the edges between $B_1$ and $B_2$ and between $B_3$ and $B_4$ may only be pruned from a blob by a \textup{PR$^0$}\xspace if they get regrafted to the same blob again. Otherwise the resulting network is improper. Note that to derive $B_1'$ from $B_1$ an edge has to be regrafted to the ``top'' of $B_1$ and the edge to $B_2$ has to be pruned. By the first observation, combining these into one \textup{PR$^0$}\xspace cannot build the connection to $B_3$. The same applies for the transformation of $B_2$ into $B_2'$ and its connection to $B_4$. Therefore, we either need four \textup{PR$^0$}\xspace to derive $B_1'$ and $B_2'$ or two \textup{PR$^0$}\xspace plus two \textup{PR$^0$}\xspace to build the connections to $B_3$ and $B_4$. In conclusion, at least four \textup{PR$^0$}\xspace are required to transform $N$ into $N'$, which concludes this proof.
\end{proof}
\end{lemma}
By replacing a leaf with a tree, and by adding more pairs of parallel edges to the edge leading to leaf $4$, this example can be made to work for all $n \geq 4$ and $r \geq 13$.
\begin{figure}[htb]
\centering
\includegraphics{PRnonIsometricBlobs}
\caption{A length three \textup{PR}\xspace-sequence from $N$ to $N'$ that uses a \textup{PR$^+$}\xspace, which adds $f$, a \textup{PR$^0$}\xspace, which moves $e$, and a \textup{PR$^-$}\xspace, which removes $e'$.
A \textup{PR$^0$}\xspace-sequence from $N$ to $N'$ has length at least four.}
\label{fig:unets:PR:nonIsometric}
\end{figure}
\begin{theorem} \label{clm:unets:tbased:nonIsometric}
For $n \geq 6$ the space $\ensuremath{u\mathcal{TB}_n}$ is not an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{TBR}\xspace and \textup{PR}\xspace.
\begin{proof}
Let $N$ be the network in \cref{fig:unets:tbased:nonIsometric}. Let $N'$ be the network derived from $N$ by swapping the labels $1$ and $2$.
Note that $\dTBR(N, N') = \dPR(N, N') = 2$, since, from $N$ to $N'$, we can move leaf 2 next to leaf 1 and then move leaf 1 to where leaf 2 was.
However, then the network in the middle is not tree-based, since the blob derived from the Petersen graph has no Hamiltonian path if the two pendent edges of the blob are next to each other~\cite{FHM18}.
We claim that there is no other length two \textup{TBR}\xspace-sequence from $N$ to $N'$.
For this proof we call a blob derived from the Petersen graph a Petersen blob.
\begin{figure}[htb]
\centering
\includegraphics{tbasedNonIsometric}
\caption{A tree-based network on the left and a Hamiltonian path through a blob derived from the Petersen graph on the right.}
\label{fig:unets:tbased:nonIsometric}
\end{figure}
First, note that the \textup{TBR}\xspace-distance of $N$ and $N'$ is at least two, so there is no \textup{TBR}\xspace-sequence that consists of a \textup{TBR$^-$}\xspace and a \textup{TBR$^+$}\xspace; otherwise, these two operations could be merged into a single \textup{TBR$^0$}\xspace by \cref{clm:unets:TBR:PMtoZ}. Note that leaf 1 or 2 can only be moved by pruning an incident edge if doing so neither affects the split of 1 versus 2 and 3 nor breaks the tree-based property. Therefore, they either have to be swapped using edges of the Petersen blobs, or the $(4, 5, 6)$-chain has to be reversed and leaf 3 moved to the other Petersen blob.
However, it is straightforward to check that neither can be done with two \textup{TBR$^0$}\xspace. In particular, we can look at which edge the first \textup{TBR$^0$}\xspace might move and then check whether a second \textup{TBR$^0$}\xspace can arrive at $N'$. If the first \textup{TBR$^0$}\xspace breaks a Petersen blob, the problem is that the second \textup{TBR$^0$}\xspace has to restore it; we then find that this does not allow us to make the initially planned changes to arrive at $N'$. On the other hand, if we avoid breaking the Petersen blob and reverse the $(4, 5, 6)$-chain, then leaf 3 is still on the wrong side; and if we move leaf 3 to the other Petersen blob, then not enough \textup{TBR$^0$}\xspace moves remain to reverse the chain.
Since there is no other length two \textup{TBR$^0$}\xspace-sequence, there is also no other length two \textup{PR}\xspace-sequence.
\end{proof}
\end{theorem}
\begin{theorem} \label{clm:unets:lvlk:nonIsometric}
For $n\geq 5$ and large enough $k$, the space $\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ is not an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{TBR}\xspace and \textup{PR}\xspace.
\begin{proof}
For even $k$, the networks $N$ and $N'$ in \cref{fig:unets:lvlk:nonIsometric} have \textup{TBR}\xspace- and \textup{PR}\xspace-distance two via the network $M$. However, note that in $M$ the blobs of size $\frac{k}{2} + 1$ and $\frac{k}{2}$ are merged into a blob of size $k + 1$. Therefore, $M$ is not a level-$k$ network.
We claim that there is no \textup{TBR}\xspace- or \textup{PR}\xspace-sequence of length two that does not go through a level-$(k+1)$ network like $M$. An example for odd $k$ can be derived from this.
\begin{figure}[htb]
\centering
\includegraphics{lvlkNonIsometric}
\caption{For even $k$, a \textup{PR$^0$}\xspace-sequence from a level-$k$ network $N$ to a level-$k$ network $N'$ (hidden reticulations of the blob-parts are given inside; at least two leaves, one in $B_1$ and one in $B_3$, are omitted). However, the network $M$ in the middle is a level-$(k+1)$ but not a level-$k$ network.}
\label{fig:unets:lvlk:nonIsometric}
\end{figure}
It is easy to see that the \textup{TBR}\xspace-distance of $N$ and $N'$ is at least two, and there is thus no \textup{TBR}\xspace-sequence that consists of a \textup{TBR$^-$}\xspace and a \textup{TBR$^+$}\xspace; otherwise, these two operations could be merged into a single \textup{TBR$^0$}\xspace by \cref{clm:unets:TBR:PMtoZ}. We thus have to prove that there is no length two \textup{TBR$^0$}\xspace-sequence from $N$ to $N'$ that avoids a level-$(k+1)$ network. Note that it requires two \textup{TBR$^0$}\xspace (or \textup{PR$^0$}\xspace) to connect $B_2$ and $B_3$ into $B_2'$. Similarly, it requires either two prunings from the upper five-cycle of $B_2$ to obtain the triangle $B_3'$, or one pruning within that cycle; however, the latter option would not contribute to connecting $B_2$ and $B_3$, and hence overall at least three operations would be needed. Therefore, we have to combine the two operations necessary to create $B_2'$ with those that create $B_3'$, which however gives us a sequence like the one shown in \cref{fig:unets:lvlk:nonIsometric}.
\end{proof}
\end{theorem}
Note that the results of this section showing that the spaces of tree-based networks and level-$k$ networks are not isometric subgraphs of the space of all networks also hold if we restrict these spaces to a particular tier $r$ (for large enough $r$).
\section{Computational complexity} \label{sec:complexity}
In this section, we consider the computational complexity of computing the \textup{TBR}\xspace-distance and the \textup{PR}\xspace-distance. First, we recall the known results on phylogenetic trees.
\begin{theorem}[\cite{DGHJLTZ97,HDRB08,AS01}]
\label{clm:unets:distancesNPhard}
Computing the distance of two trees in $\ensuremath{u\mathcal{T}_n}$ is NP-hard for the \textup{NNI}\xspace-distance, the \textup{SPR}\xspace-distance, and the \textup{TBR}\xspace-distance.
\end{theorem}
In \cref{clm:unets:TBR:treesIsometric}, we have shown that $\ensuremath{u\mathcal{T}_n}$ is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$ under \textup{TBR}\xspace. Hence, with \cref{clm:unets:distancesNPhard}, we get the following corollary.
\begin{corollary} \label{clm:unets:TBR:NP}
Computing the \textup{TBR}\xspace-distance of two arbitrary networks in $\ensuremath{u\mathcal{N}_n}$ is NP-hard.
\end{corollary}
We can use the same two theorems to prove that computing the \textup{TBR}\xspace-distance in tiers is also hard.
\begin{theorem} \label{clm:unets:TBR:tierNP}
Computing the \textup{TBR}\xspace-distance of two arbitrary networks in $\ensuremath{u\mathcal{N}_{n,r}}$ is NP-hard.
\begin{proof}
We give a linear-time reduction from the NP-hard problem of computing the \textup{TBR}\xspace-distance of two trees in $\ensuremath{u\mathcal{T}_n}$ to computing the \textup{TBR}\xspace-distance of two networks in $\unetsxx[n+1,r]$. For this, let $T, T' \in \ensuremath{u\mathcal{T}_n}$. Let $e$ be the edge incident to leaf $n$ of $T$. Obtain $S$ from $T$ by subdividing $e$ with a new vertex $u$ and adding the edge $\{u, v\}$, where $v$ is a new vertex labelled $n+1$. Next, add $r$ handcuffs to the cherry $\set{n, n+1}$ to obtain the network $N \in \unetsxx[n+1,r]$. Analogously obtain $N'$ from $T'$.
The equality $\dTBR(T, T') = \dTBR(N, N')$ follows from \cref{clm:unets:TBR:existingCloseDisplayedTree}, and the fact that networks handcuffed at a cherry display exactly one tree. More precisely, a \textup{TBR}\xspace-sequence between $T$ and $T'$ induces a \textup{TBR}\xspace-sequence of the same length between $N$ and $N'$, hence $\dTBR(T, T') \geq \dTBR(N, N')$.
Conversely, by \cref{clm:unets:TBR:existingCloseDisplayedTree} and the fact that $D(N)=\{T\}$ and $D(N')=\{T'\}$, it follows that $\dTBR(T, T') \leq \dTBR(N, N')$. Since computing the TBR-distance in $\ensuremath{u\mathcal{T}_n}$ is NP-hard, the statement follows.
\end{proof}
\end{theorem}
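To make the construction concrete, the following Python sketch (using a networkx multigraph) builds the network $N$ from a tree $T$. The function names, the internal-vertex numbering, and the modelling of each handcuff as a single parallel-edge pair are our assumptions for illustration, not necessarily the exact gadget of the reduction.
\begin{verbatim}
import itertools
import networkx as nx

_fresh = itertools.count(10**6)  # internal vertex ids, assumed disjoint
                                 # from the leaf labels 1..n+1

def subdivide(N, a, b):
    """Replace one copy of the edge {a, b} by a path a - w - b."""
    w = next(_fresh)
    N.remove_edge(a, b)
    N.add_edge(a, w)
    N.add_edge(w, b)
    return w

def handcuffed_network(T, n, r):
    """Build N from T: attach leaf n+1 next to leaf n, then stack r
    parallel-edge pairs above the cherry {n, n+1}.  Each pair adds two
    degree-3 vertices and three edges net, so the reticulation number
    |E| - |V| + 1 grows by exactly one per iteration."""
    N = nx.MultiGraph(T)
    (p,) = N.neighbors(n)        # leaf n has a unique neighbour
    u = subdivide(N, p, n)       # subdivide the pendant edge of leaf n
    N.add_edge(u, n + 1)         # new leaf n+1 forms the cherry {n, n+1}
    a, b = p, u
    for _ in range(r):
        x = subdivide(N, a, b)
        y = subdivide(N, x, b)
        N.add_edge(x, y)         # second, parallel copy of {x, y}
        a = y                    # continue on the fresh edge {y, b}
    return N
\end{verbatim}
Since \texttt{remove\_edge} on a multigraph deletes an arbitrary parallel copy, the sketch only ever subdivides edges of multiplicity one.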
To prove that computing the \textup{PR}\xspace-distance is hard, we use a different reduction. Van Iersel et al. prove that deciding whether a tree is displayed by a (not necessarily proper) phylogenetic network (Unrooted Tree Containment; UTC) is NP-hard \cite{vIKSSB18}. Combining this with \cref{clm:unets:TBR:pathDown}, we arrive at our result.
\begin{theorem}
Computing the \textup{PR}\xspace-distance of two arbitrary networks in $\ensuremath{u\mathcal{N}_n}$ is NP-hard.
\begin{proof}
We reduce from UTC to the problem of computing the \textup{PR}\xspace-distance of two networks in $\ensuremath{u\mathcal{N}_n}$. Let $(N,T)$, with $N$ a (not necessarily proper) network and $T\in\ensuremath{u\mathcal{T}_n}$, be an arbitrary instance of UTC. We obtain an instance $(N',T',r')$ of the \textup{PR}\xspace-distance decision problem as follows: remove all cut-edges of $N$ that do not separate two labelled leaves, and let $N''$ be the connected component containing all the leaves; now, let $N'$ be the proper network obtained from $N''$ by suppressing all degree two nodes. The instance of the \textup{PR}\xspace-distance decision problem consists of $N'$, $T'=T$, and the reticulation number $r'$ of $N'$. As we can compute in polynomial time whether a cut-edge separates two labelled leaves, the reduction takes polynomial time. Because a displayed tree uses only cut-edges that separate two labelled leaves, $T$ is displayed by $N$ if and only if it is displayed by $N'$. By \cref{clm:unets:TBR:pathDown}, $T$ is a displayed tree of $N'$ if and only if $\dPR(N',T')\leq r'$, which concludes the proof.
\end{proof}
\end{theorem}
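The preprocessing in this reduction is straightforward to implement. The sketch below, again networkx-based with hypothetical function names, removes the cut-edges that fail to separate labelled leaves, keeps the leaf component, and suppresses degree-2 vertices; it is a sketch under these assumptions, not a reference implementation.
\begin{verbatim}
import networkx as nx

def separates_leaves(G, u, v, leaves):
    """Does the cut-edge {u, v} separate two labelled leaves?"""
    H = nx.Graph(G)
    H.remove_edge(u, v)
    side = nx.node_connected_component(H, u)
    k = sum(1 for l in leaves if l in side)
    return 0 < k < len(leaves)

def utc_to_pr_instance(N, leaves):
    """Drop cut-edges not separating labelled leaves, keep the leaf
    component, suppress degree-2 vertices.  Parallel edges are never
    cut-edges, so it suffices to test bridges of the underlying simple
    graph that have multiplicity one."""
    G = nx.MultiGraph(N)
    for u, v in list(nx.bridges(nx.Graph(G))):
        if (G.number_of_edges(u, v) == 1
                and not separates_leaves(G, u, v, leaves)):
            G.remove_edge(u, v)
    keep = nx.node_connected_component(G, next(iter(leaves)))
    G = G.subgraph(keep).copy()
    changed = True
    while changed:                       # suppress degree-2 vertices
        changed = False
        for w in list(G.nodes):
            if w in leaves or G.degree(w) != 2:
                continue
            a, b = (v for _, v in G.edges(w))
            if a == w or b == w:         # degree 2 via a loop: skip
                continue
            G.remove_node(w)
            G.add_edge(a, b)
            changed = True
    return G
\end{verbatim}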
Unlike for the hardness proof of \textup{TBR}\xspace-distance, we cannot readily adapt this proof to the \textup{PR}\xspace-distance in $\ensuremath{u\mathcal{N}_{n,r}}$. For this purpose, we need to learn more about the structure of \textup{PR}\xspace-space.
\section{Concluding remarks}
In this paper, we investigated basic properties of spaces of unrooted phylogenetic networks and their metrics under the rearrangement operations \textup{NNI}\xspace, \textup{PR}\xspace, and \textup{TBR}\xspace.
We have proven connectedness and bounds on diameters for different classes of phylogenetic networks, including networks that display a particular set of trees, tree-based networks, and level-$k$ networks.
Although these parameters have been studied before for classes of rooted phylogenetic networks~\cite{BLS17}, this is the first paper that studies these properties for classes of unrooted phylogenetic networks besides the space of all networks. A summary of our results is shown in \cref{tbl:unets:connectedness}.
To see the improvements in diameter bounds, we compare our results to previously found bounds: For the space of phylogenetic trees $\ensuremath{u\mathcal{T}_n}$ it was known that the diameter is asymptotically linearithmic and linear in the size of the trees under \textup{NNI}\xspace and \textup{SPR}\xspace/\textup{TBR}\xspace~\cite{LTZ96,DGH11}, respectively. Here, we have shown that the diameter under \textup{NNI}\xspace is also asymptotically linearithmic for higher tiers of phylogenetic networks. Whether this also holds in the rooted case is still open. We have further (re)proven the asymptotic linear diameter for \textup{PR}\xspace and \textup{TBR}\xspace of these tiers and, in particular, improved the upper bound on the diameter under \textup{TBR}\xspace to $n - 3 - \floor{\frac{\sqrt{n - 2} - 1}{2}} + r$ from the previously best bound $n + 2r$~\cite{JJEvIS17}.
\begin{table}[htb]
\centering
\begin{tabular}{c|c|c|c}
class & \textup{NNI}\xspace & \textup{PR}\xspace & \textup{TBR}\xspace \\ \hline
$\ensuremath{u\mathcal{T}_n}$ & $\Theta(n \log n)$ \cite{LTZ96} & $\Theta(n)$ \cite{DGH11} & $\Theta(n)$ \cite{DGH11} \\
$\ensuremath{u\mathcal{N}_{n,r}}$ & $\Theta(m \log m)$ T.~\ref{clm:unets:NNIconnected:tier}
& $\Theta(m)$ \cite{FHM18,JJEvIS17} & $\Theta(m)$ T.~\ref{clm:unets:TBR:tierConnected} \\
$\ensuremath{u\mathcal{N}_n}$ & \checkmark \cref{clm:unets:NNI:connected} & \checkmark \cref{clm:unets:TBR:connected} & \checkmark \cref{clm:unets:TBR:connected} \\
$\ensuremath{u\mathcal{N}_n}(P)$ & \checkmark \cref{clm:unets:displayingSet:connectivity} & \checkmark \cref{clm:unets:displayingSet:connectivity} & \checkmark \cref{clm:unets:displayingSet:connectivity}\\
$\ensuremath{u\mathcal{TB}_{n,r}}(T)$ & $\ensuremath{\mathcal{O}}(rm)$ \cref{clm:unets:tbased:connectedness} & $\Theta(r)$ \cref{clm:unets:tbased:connectedness} & $\Theta(r)$ \cref{clm:unets:tbased:connectedness} \\
$\ensuremath{u\mathcal{TB}_{n,r}}$ & $\ensuremath{\mathcal{O}}(rm + n \log n)$ T.~\ref{clm:unets:tbased:connectednessOther} & $\Theta(m)$ \cref{clm:unets:tbased:connectednessOther} & $\Theta(m)$ T.~\ref{clm:unets:tbased:connectednessOther} \\
$\ensuremath{u\mathcal{TB}_n}(T)$ & \checkmark \cref{clm:unets:tbased:connectednessOther} & \checkmark \cref{clm:unets:tbased:connectednessOther} & \checkmark \cref{clm:unets:tbased:connectednessOther} \\
$\ensuremath{u\mathcal{TB}_n}$ & \checkmark \cref{clm:unets:tbased:connectednessOther} & \checkmark \cref{clm:unets:tbased:connectednessOther} & \checkmark \cref{clm:unets:tbased:connectednessOther} \\
$\ensuremath{u\mathcal{LV}\text{-}k_{n}}$ & \checkmark \cref{clm:unets:lvlk:connectedness:NNI} & \checkmark \cref{clm:unets:lvlk:connectedness:TBR} & \checkmark \cref{clm:unets:lvlk:connectedness:TBR} \\
\end{tabular}
\caption{Connectedness and diameters, if bounded, for the various classes and rearrangement operations. Here $m = n + r$, $P$ is a set of phylogenetic networks, and $T \in \ensuremath{u\mathcal{T}_n}$.}
\label{tbl:unets:connectedness}
\end{table}
To uncover local structures of network spaces, we looked at properties of shortest sequences of moves between two networks. Here we found that shortest \textup{TBR}\xspace-sequences between networks in the same tier never traverse lower tiers, and shortest \textup{TBR}\xspace-sequences between trees also never traverse higher tiers. This implies that $\ensuremath{u\mathcal{T}_n}$ is an isometric subgraph of $\ensuremath{u\mathcal{N}_n}$, and that computing the \textup{TBR}\xspace-distance between two networks in $\ensuremath{u\mathcal{N}_n}$ is NP-hard. This answers a question by Francis {et~al.}~\cite{FHMW17}.
We have attempted to prove similar results for other subspaces and rearrangement moves. However, for higher tiers, we have not been able to prove that shortest \textup{TBR}\xspace-sequences never traverse higher tiers. To answer this question we may need to utilise agreement graphs such as those frequently used for phylogenetic trees~\cite{AS01,BS05} and, more recently, also for rooted phylogenetic networks~\cite{KL19,Kla19}.
Concerning \textup{NNI}\xspace and \textup{PR}\xspace, we gave counterexamples to prove that higher tiers are not isometric subgraphs of $\ensuremath{u\mathcal{N}_n}$. The question whether $\ensuremath{u\mathcal{T}_n}$ is isometrically embedded in $\ensuremath{u\mathcal{N}_n}$ under \textup{PR}\xspace and \textup{NNI}\xspace remains open. Answering it positively would also provide an answer to the question whether computing the \textup{NNI}\xspace-distance between two networks is NP-hard, and clues toward proving whether computing the \textup{PR}\xspace-distance between two networks in the same tier is NP-hard. Further negative results that we have shown are that the spaces of tree-based networks and level-$k$ networks are not isometric subgraphs of the space of all phylogenetic networks.
Throughout this paper, we have restricted our attention to proper networks. We could also have chosen to use unrooted networks without the properness condition. This definition, which is mathematically more elegant, is used in most other papers, so it seems to be the obvious choice. However, it is not natural to have cut-edges that do not separate leaves: such networks carry no biological meaning. It is desirable that networks are rootable and thus have an evolutionary interpretation. Unrooted phylogenetic networks are rootable if they have at most one blob with one cut-edge. While using this in the definition of an unrooted phylogenetic network could therefore be sufficient, we go one step further, and ask that there is no such blob. This makes a network rootable at any leaf (i.e., with any taxon as out-group), which gives a stronger biological interpretation and usability.
The fact that our definition of unrooted phylogenetic networks is mathematically more restrictive means that any positive result we have proven is likely also true under a less restrictive definition. That is, connectedness for those definitions follows easily by finding sequences to proper networks, as done by Jansen {et~al.}~\cite{JJEvIS17}.
As we may be able to find short sequences for this purpose, the diameter results will likely also still hold. This means that whatever definitions may be used in practice, with minor additional arguments, our results provide the theoretical background necessary to justify local search operations.
\pdfbookmark[1]{Acknowledgments}{Acknowledgments}
\subsection*{Acknowledgements}
The first author was supported by the Netherlands Organization for Scientific Research (NWO) Vidi grant 639.072.602.
The second author thanks the New Zealand Marsden Fund for their financial support.
\section{Introduction}
Various cosmological observations have provided strong evidence for the
existence of dark matter.
However, if dark matter is to be an elementary particle, it is a
yet-unknown particle beyond the Standard Model.
The axion-like particle (ALP) is a hypothetical pseudo-scalar particle beyond the Standard Model; its prototype, the axion, arises from the Peccei--Quinn mechanism introduced to conserve CP symmetry in the strong interaction \citep{Peccei1977, Weinberg1978}.
ALPs are attractive because they act like cold dark matter (CDM) in the
formation of cosmic structure.
It is also possible that ALPs are created by the decay of other
CDM-candidate particles.
If the ALP mass is very low, a direct detection experiment with present
techniques is unlikely to find it.
A possible channel is instead to observe photons, which are created from
ALPs via the inverse Primakoff process in an electromagnetic field.
As we show in detail in Section 2, the ALP-photon conversion probability
$P_{a \rightarrow \gamma}$ is approximately proportional to the squared
product of the magnetic field strength orthogonal to the ALP momentum
direction, $B_\perp$, and the path length, $L$, i.e.,
$P_{a \rightarrow \gamma}$ $\propto$ $\left( B_\perp L \right)^2$.
There have been many attempts to detect ALP signals in terrestrial
experiments and astronomical observations. One candidate signal is in
the direction of galaxies or galaxy clusters, proposed to be due to
ALP interactions with inter-stellar or galactic magnetic fields
\cite{Cicoli2014, Conlon2014, Conlon2015a}, although the results are
still under discussion.
If ALPs are CDM itself, or are produced from the decay of CDM at
cosmological distances, then the distribution of ALPs, or of ALP-induced
photons produced via interactions with magnetic fields in cosmic
structures, should appear isotropic in the sky to zeroth order, unless
high-sensitivity, high-angular-resolution data can resolve the
distribution tracing the inhomogeneous cosmic structures of the
universe.
In this paper, we propose a novel method to search the satellite X-ray
data for ALP-induced photons arising from the Primakoff interaction of
ALPs with the Earth's magnetic field.
The Earth's magnetic field has a dipole structure oriented north-south,
and its strength and configuration are well known from various
observations.
Therefore, we expect ALP-induced photons in X-ray wavelengths, if
produced, to vary with the Earth's magnetic field strength integrated
along the line-of-sight direction of each observation, even if the ALPs
arriving at the Earth are isotropically distributed in the sky.
To search for such ALP-origin X-ray radiation, we focus on the
{\it Suzaku} X-ray data in four deep fields, collected over eight years.
These fields were observed frequently by {\it Suzaku}, but the magnetic
field strength varied between observations depending on the satellite's
position in orbit.
Under the null hypothesis of no ALP-induced photons, the diffuse X-ray
background brightness estimated from the same field should {\it not}
depend on the integrated magnetic field.
This is the signal we search for in this paper.
{\it Suzaku} data are suitable for our purpose because the satellite,
compared to
{\it XMM-Newton}\footnote{\url{http://sci.esa.int/xmm-newton/}} or
{\it Chandra}\footnote{\url{https://chandra.harvard.edu/}}, has a lower
background owing to its low-altitude orbit, which prevents cosmic rays
from penetrating the X-ray detectors \cite{Mitsuda2007}.
Our study is somewhat similar to Fraser et al. (2014) \cite{Fraser2014},
which claimed a detection of seasonal variations in the
{\it XMM-Newton} X-ray data.
That work claimed that an X-ray flux modulation at a level of
4.6 $\times$ 10$^{-12}$ ergs s$^{-1}$ cm$^{-2}$ deg$^{-2}$ in 2--6 keV
might be due to the conversion of solar axions by their interaction
with the Earth's magnetic field
\citep[also see][]{Davoudiasl2006, Davoudiasl2008}.
However, Roncadelli and Tavecchio (2015) \cite{Roncadelli2015} argued
that the {\it XMM-Newton} satellite, which never points toward the Sun,
cannot observe such ALP-induced photons originating from solar axions,
owing to momentum conservation.
The structure of this paper is as follows. In
Section~\ref{sec:basics}, we briefly review the basics of the inverse
Primakoff effect and how photons can be induced from ALPs created by
dark matter decay in the expanding universe. In
Section~\ref{165855_4May19} we present the main results of this paper
using the {\it Suzaku} data, combined with the Earth's magnetic field
evaluated along the {\it Suzaku} orbit at each observation.
Section~\ref{sec:discussion} contains the discussion and conclusions.
\section{Process of photon emission from ALPs}
\label{sec:basics}
In this section we describe the mechanism of photon emission from ALPs
via their interaction with magnetic fields.
To do this, we consider a model in which the dark matter that fills the
universe preferentially decays into ALPs; this is an example of a
moduli dark matter model in a string-theory-inspired scenario.
When a dark matter particle decays into two ALPs, i.e.
DM $\rightarrow$ 2ALPs, each ALP has a monochromatic energy
$E_a = m_\phi /2$, where $m_\phi$ is the mass of the dark matter
particle.
The emissivity of the DM $\rightarrow$ 2ALPs decay process is given in
terms of the energy density of dark matter, $\rho_\phi \left( r\right)$,
the decay rate, $\Gamma_{\phi \rightarrow 2a}$, and $m_\phi$ as
\begin{equation}
\epsilon_a = \frac{2 \rho_\phi \left( r\right)
\Gamma_{\phi \rightarrow 2a} }{m_\phi}.
\end{equation}
Considering the spatial distribution of dark matter along the
line-of-sight direction, the ALP intensity,
$I_{a,{\rm line}}$ [counts s$^{-1}$ cm$^{-2}$ sr$^{-1}$],
is given as
\begin{equation}
I_{a,{\rm line}} = \int_{\rm l.o.s.} \frac{2 \Gamma_{\phi \rightarrow
2a}} {4 \pi m_\phi} \rho_\phi \left( r\right)~dr
= \frac{S_\phi \Gamma_{\phi \rightarrow 2a}}{2\pi m_\phi},
\label{eq:line}
\end{equation}
at $E_a=m_\phi/2$, where $S_\phi$ is the column density of dark matter
along the line-of-sight direction \citep{Sekiya2016}, defined as
\begin{equation}
S_\phi = \int_{\rm l.o.s.} \rho_\phi (r)~dr.
\end{equation}
In this case, the converted photon spectrum is a line emission.
If dark matter is uniformly distributed in the universe, we would
observe a continuum spectrum of the ALP intensity, because
free-streaming ALPs undergo cosmological redshift in the expanding
universe.
Assuming light, i.e. relativistic, ALPs produced by dark matter decay,
the superposition of line spectra over different redshifts leads to a
continuum ALP spectrum \cite{Kawasaki1997,Asaka1998}:
\begin{eqnarray}
\frac{dN}{dE_a} &=& \int_{\rm l.o.s.} \mathrm{d}r~
\frac{\Gamma_{\phi \rightarrow 2a}}{4 \pi m_\phi}
\rho_\phi \left( r\right) \times 2
\delta_D\! \left( E_a \left( 1+z \right) - m_\phi /2 \right) \\
&=&
\frac{\sqrt{2} c \Gamma_{\phi \rightarrow 2a} \rho_{\phi_0}}{\pi H_0}
~m_\phi^{-\frac{5}{2}} ~E_a^{\frac{1}{2}}
~f\left( \frac{m_\phi}{2E_a} \right)
\label{eq:con}
\end{eqnarray}
where $\delta_D(x)$ is the Dirac delta function, and the function
$f(x)$, with $x = m_\phi/(2E_a) = 1+z \geq 1$, is defined as
\begin{equation}
 f(x) \equiv \left\{ \Omega_{m0} +
	      \left( 1-\Omega_{m0} -\Omega_{\Lambda 0}
	      \right)/x + \Omega_{\Lambda 0}/x^3 \right\}^{-\frac{1}{2}},
  \label{124611_31May18}
\end{equation}
so that $f(1) = 1$, i.e. $H(z{=}0) = H_0$.
In the above equations, $z$ is the redshift at decay, $\rho_{\phi_0}$ is
the present energy density, $H_0$ is the present Hubble constant, and
$\Omega_{m0}$ and $\Omega_{\Lambda 0}$ are the density parameters of
non-relativistic matter and the cosmological constant, respectively.
The spectral shape of the ALPs is thus a simple power law with a
positive index of $+1/2$; consequently, the converted photon spectrum is
also expected to be a power law with a photon index of $+1/2$.
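As a quick numerical illustration, the spectral shape of Equation \eqref{eq:con} can be evaluated as follows; the cosmological parameters and the dark matter mass below are assumed placeholder values, not fitted ones.
\begin{verbatim}
import numpy as np

Om0, OL0 = 0.308, 0.692       # assumed flat-LCDM density parameters

def f(x):
    """f(x) defined above, with x = m_phi / (2 E_a) = 1 + z >= 1."""
    return (Om0 + (1.0 - Om0 - OL0) / x + OL0 / x**3) ** -0.5

assert abs(f(1.0) - 1.0) < 1e-12      # z = 0: H(z) = H0, so f(1) = 1

m_phi = 10.0                          # assumed dark matter mass [keV]
E = np.linspace(0.5, m_phi / 2, 200)  # observed ALP energies [keV]
shape = E**0.5 * f(m_phi / (2.0 * E)) # dN/dE up to a constant prefactor
# For E << m_phi/2 (high redshift), f -> Om0**-0.5 and the spectrum
# approaches the pure E^{+1/2} power law quoted in the text.
\end{verbatim}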
The ALP-photon conversion probability in a vacuum with a magnetic field
via the inverse Primakoff effect is given in Ref.~\cite{VanBibber1989}
as
P_{a \rightarrow \gamma} \left(x\right)
= \left| \frac{g_{a \gamma \gamma}}{2} \int_0^{x} B_\perp
\left( x'\right)
\exp \left( -i \frac{m_a^2}{2E_a} x' \right) dx' \right|^2,
\label{193602_11Jun16}
\end{equation}
with
\begin{equation}
B_\perp \left(x'\right) \equiv \left|
\vec{B} \left(x'\right) \times \vec{e}_a
\right|.
\end{equation}
Here, $g_{a \gamma \gamma}$ is the ALP-photon coupling constant, $m_{a}$
and $E_{a}$ are the mass and energy of the ALP, and $B_\perp(x)$ is the
component of the magnetic field perpendicular to the ALP momentum
direction, denoted $\vec{e}_a$.
The ALP-photon momentum transfer $q$ is defined as
\begin{equation}
q = \frac{m_a^2}{2E_a}. \label{181820_20Jun16}
\end{equation}
Assuming that $B_\perp (x')$ is uniform over the range $0<x'<L$,
we can write Equation \eqref{193602_11Jun16} as
\begin{equation}
P_{a \rightarrow \gamma} = \left( \frac{g_{a\gamma \gamma} B_\perp}{2}
\right)^2~2L^2~
\frac{1- \cos \left( qL \right) }{(qL)^2}.
\label{163650_19Feb16}
\end{equation}
In the limit of light ALP masses compared to the photon energy scale,
satisfying $qL \ll 1$, we have
$1- \cos \left( qL \right) \simeq \left( qL \right)^2 /2$, and
the conversion probability is simply given by
\begin{equation}
P_{a \rightarrow \gamma} = \left( \frac{g_{a \gamma \gamma} B_\perp
L}{2} \right)^2.
\label{145821_20Feb16}
\end{equation}
under the coherence condition
\begin{equation}
 qL < \pi ~ \rightarrow ~ m_a < \sqrt{\frac{2\pi E_a}{L}}.
 \label{qcon}
\end{equation}
The following analysis uses Equation~\eqref{163650_19Feb16} to constrain
the ALP-photon coupling constant.
As shown above, the probability of ALPs converting to photons is
proportional to $(B_\perp L)^2$ in the light-mass limit.
Plugging in typical values of the strength and coherence length of the
Earth's magnetic field, Equation~\eqref{145821_20Feb16} gives
\begin{eqnarray}
P_{a \rightarrow \gamma} &\simeq & 2.45 \times 10^{-21}~
\left( \frac{g_{a \gamma \gamma} }{10^{-10}~{\rm GeV^{-1}}} \right)^2
\left( \frac{B_\perp L}{{\rm T~m}} \right)^2
\label{con_g10}
\end{eqnarray}
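The numerical prefactor above can be verified with the standard natural-unit conversions 1 T $\simeq 195.35$ eV$^2$ and 1 m $\simeq 5.0677 \times 10^6$ eV$^{-1}$; the short sketch below also evaluates the coherence bound of Equation \eqref{qcon} for assumed values $E_a = 1$ keV and $L = 6 R_E$.
\begin{verbatim}
import numpy as np

TESLA_TO_EV2 = 195.35        # 1 T in eV^2 (natural units)
METER_TO_INV_EV = 5.0677e6   # 1 m in eV^-1

def conversion_probability(g_GeV, BL_Tm):
    """P = (g B_perp L / 2)^2 in the coherent (light-mass) limit."""
    BL_eV = BL_Tm * TESLA_TO_EV2 * METER_TO_INV_EV  # ~0.99 GeV per T m
    g_eV = g_GeV * 1e-9                             # GeV^-1 -> eV^-1
    return (g_eV * BL_eV / 2.0) ** 2

print(conversion_probability(1e-10, 1.0))  # ~2.45e-21, as quoted above

E_a = 1.0e3                                # assumed ALP energy: 1 keV in eV
L = 6 * 6.371e6 * METER_TO_INV_EV          # assumed path: 6 R_E, in eV^-1
print(np.sqrt(2 * np.pi * E_a / L))        # ~5.7e-6 eV: micro-eV scale
\end{verbatim}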
\section{Analysis and results: A search for a correlation between the
residual {\it Suzaku} background radiation and the Earth's magnetic
field strength}
\label{165855_4May19}
\subsection{Selection of blank sky observations from {\it Suzaku}
archival data}
\label{subsec:field_selection}
To locate ALP-induced photons, we use the {\it Suzaku} X-ray data and
search for photons in the detector's field of view (FoV) whose intensity
depends on the integrated magnetic field strength along the
line-of-sight direction, $\left(B_\perp L\right)^2$.
Because most X-ray data contain photons from targeted or unresolved
sources, we need to study the X-ray diffuse background (XDB) in blank
fields and search for a residual signal in the background that is
correlated with the magnetic field strength following the scaling of
$(B_\perp L)^2$. The X-ray satellite {\it Suzaku} is suitable for this
study because of its low instrumental background noise and its low
cosmic-ray background (compared to other X-ray satellites), owing to its
low-altitude Earth orbit: an altitude of $\sim$ 570 km and an
inclination of $31^\circ$ from the Earth's equatorial plane, where the
Earth's magnetic field prevents cosmic rays from penetrating the
satellite's detectors \cite{Mitsuda2007}.
Figure \ref{112957_31May18} is a schematic illustration of the
{\it Suzaku} orbit and the Earth's magnetic field configuration. Even if
the satellite observes the same field, i.e. the same angular direction
(the black dotted line), the integrated strength of the perpendicular
magnetic field component along the line of sight varies with the
satellite position.
The {\it Suzaku} satellite orbits the Earth with a period of
approximately 96 minutes, which modulates the integrated magnetic
strength $(B_\perp L)^2$ over the orbit, i.e. with the time at which the
target field is observed.
Thus, we expect variations in the ALP-induced photon flux, if it exists,
with the strength $(B_\perp L)^2$.
We calculated the Earth's magnetic field every 60 seconds for each
line-of-sight direction of a given target field using the
{\it International Geomagnetic Reference Field}, 12th generation
(IGRF-12 \cite{Thebault2015}), out to 6 times the Earth's radius
($R_E$), where typically $B$ $\sim$ $10^{-7} {~\rm T}$.
The right panel of Figure \ref{112957_31May18} shows a typical case of
$(B_\perp L)^2$ as a function of the satellite position, or equivalently
the observation time.
A typical value of $\left( B_\perp L\right)^2$ is of order
$10^{4}$--$10^{5}$ T$^{2}$m$^{2}$, which is greater than that of
terrestrial experiments such as the CAST
experiment\footnote{\url{http://cast.web.cern.ch/CAST/CAST.php}}.
If we apply the non-oscillation condition $qL \ll 1$
(Equation~\eqref{qcon}), the corresponding ALP mass is limited to
$m_a \lesssim \mu$eV if we assume that the converted photons are in
X-ray wavelengths.
Note that we considered the oscillation regime $qL \sim 1$ to obtain
constraints on the ALP-photon coupling constant.
\begin{figure}[htbp]
\centering
\includegraphics[width=.48\textwidth,bb=0 0 483 480]{BL_suzaku_orbit.png}
\hfill
\includegraphics[width=.48\textwidth,bb=0 0 640 480]{time_dependenve_BL.pdf}
\caption{Left: Schematic view of the position and
observation direction of {\it Suzaku} satellite relative to the Earth's
magnetosphere. Right: Time dependence of $\left( B_\perp L\right)^2$ in
a Lockman hole observation.
Gray hatched regions show periods of Earth occultation, i.e. when the
Earth lies between the target and {\it Suzaku}.}
\label{112957_31May18}
\end{figure}
To estimate the XDB spectrum, we consider blank sky data from four deep
fields selected from the {\it Suzaku} archives, as tabulated in Table
\ref{130850_22Jun16}.
The selection criteria are as follows.
\begin{enumerate}
\item No bright sources in the FoV of the {\it Suzaku} X-ray Imaging
      Spectrometer (XIS) \cite{Koyama2007}; compact sources in the FoV
      are already identified and can be masked in our analysis.
\item Galactic latitudes of $|b| > 20^\circ$ to avoid X-ray emission
      originating from sources in the Galactic disk \citep{Masui2009}.
\item Sufficiently distant from regions of high diffuse X-ray emission
      such as the North Polar Spur.
\item Exposure time obtained by standard processing should be more than
      200~ksec.
\end{enumerate}
The above criteria are met by the following four fields, also shown in
Table~\ref{130850_22Jun16}. First, we use the multiple observations of
the Lockman hole field, a well-known region of minimum neutral hydrogen
column density that was observed annually with {\it Suzaku} for
calibration.
We also use the data in the South Ecliptic Pole (SEP) and North Ecliptic
Pole (NEP) fields.
Finally, we use the data in a high-latitude neutral hydrogen cloud
field, the so-called MBM16 field.
\begin{table}[htbp!]
\begin{center}
\begin{threeparttable}
\caption{Long-exposure background observations of blank fields with the
{\it Suzaku} satellite}
\label{130850_22Jun16}
\begin{tabular}{l c c c c c c} \hline \hline
Field name & $\left( \alpha_{2000}, \delta_{2000}\right)$ & Num. of &
Total & Num. of & Exposure used & Obs. Year\\
~ & ~ & Obs. & exposure${}^\ast$ & events${}^\dagger$ &
in this analysis${}^\dagger$ & ~ \\
~ & ~ & ~ & [ksec] & [counts] & [ksec] & ~ \\ \hline
Lockman hole & (162.9, 57.3) & 11 & 542.5 & 5595 & 210.7 & 2006--2014 \\
MBM16 & (49.8, 11.7) & 6 & 446.9 & 10755 & 231.8 & 2012--2015 \\
NEP & (279.1, 66.6) & 4 & 205.0 & 7666 & 221.9 & 2009\\
SEP & (90.0, -66.6) & 4 & 204.2 & 6102 & 180.2 & 2009\\ \hline
\end{tabular}
\begin{tablenotes}[flushleft]
\begin{footnotesize}
\item[$\ast$] Exposure time of each XIS after the standard data
processing pipeline.
\item[$\dagger$] The sum of the three XIS exposure times after extra
data reduction and the $(B_\perp L)^2$ selection; these are the values
used in this paper.
\end{footnotesize}
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
We use the standard data reduction pipelines, the {\it Ftools} in
HEAsoft version 6.16 and XSPEC version 12.8.2, to analyze the X-ray data
in the four fields of Table~\ref{130850_22Jun16}, collected from the
archive of the {\it Suzaku} XIS.
To avoid possible contamination from a high X-ray background, we removed
data taken during South Atlantic Anomaly passages, Earth occultations,
low elevation angles from the Earth's rim, and low cut-off-rigidity
(COR; $<$ 8 GV/$c$) regions.
We stacked the X-ray images in the 0.5--7 keV band for each of the four
fields and removed point sources with fluxes above 1 $\times$ $10^{-14}$
ergs s$^{-1}$ cm$^{-2}$, using an exclusion radius of 1.5 arcminutes,
corresponding to a 90\% encircled power fraction of the {\it Suzaku}
mirror.
We then calculated $\left( B_\perp L \right)^2$ every 60 seconds for
each observation as a function of the satellite position in orbit and
the observing line-of-sight direction.
Figure \ref{153849_24Jun16} shows the distribution of
$\left( B_\perp L \right)^2$ in each of the four fields.
We subdivided the data into four to six bins of the
$\left(B_\perp L \right)^2$ values, with the binning chosen so that each
bin has almost the same photon statistics, as denoted by the different
colored histograms in the figure.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.35]{./all_hist2_BLbin.pdf}
\caption{Histograms of $\left( B_\perp L \right)^2$ during the
observations of the four fields. The exposure-time binning, chosen to
obtain an almost equal number of photons in each class of
$\left(B_\perp L \right)^2$, is shown.
Note that only data with $\left(B_\perp L \right)^2$ $\geq$
2$\times10^{4}$ T$^{2}$ m$^{2}$ are used for the spectral analysis, as
described in Section 3.2.}
\label{153849_24Jun16}
\end{center}
\end{figure}
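The equal-statistics binning can be emulated with a simple quantile cut, as in the following sketch; the synthetic sample stands in for the actual per-interval $(B_\perp L)^2$ values, which are not reproduced here.
\begin{verbatim}
import numpy as np

def equal_count_bins(bl2_per_photon, nbins):
    """Bin edges in (B_perp L)^2 such that each class holds roughly the
    same number of photons (quantiles of the per-photon values)."""
    qs = np.linspace(0.0, 1.0, nbins + 1)
    return np.quantile(np.asarray(bl2_per_photon), qs)

rng = np.random.default_rng(0)              # synthetic stand-in values
sample = 10 ** rng.uniform(4.3, 5.3, 5000)  # ~2e4--2e5 T^2 m^2
print(equal_count_bins(sample, 3))          # edges of 3 equal-count classes
\end{verbatim}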
\subsection{An assessment of non-X-ray background contamination}
\label{sec:non_xray_background}
Before presenting the main results, we need to assess the level of
non-X-ray background (NXB) contamination in the data.
Although the NXB of {\it Suzaku} is usually low, residual NXB
contamination can amount to 16--50\% of the observed CXB in the
2--6~keV range.
Part of the NXB is due to fluorescence lines from materials around the
detector, such as Si, Al, Au, and Ni.
These lines are distinguishable when identified at their characteristic
energies in each XIS spectrum, as in Ref. \cite{Yamaguchi2006}.
Not only X-rays but also charged particles can produce pseudo-events in
the CCDs.
Pseudo-events produced by cosmic-ray particles in orbit have been
studied with GEANT4 Monte-Carlo simulations; their continuum spectra are
reproducible to an accuracy of 20\% in amplitude in each energy bin
\cite{Murakami2006}.
The production processes of pseudo-events are well understood, but the
input cosmic-ray flux varies with the time and position of the
satellite.
The reproduction of the events was studied by using the event database
collected during the periods when the FoV was blocked by the night side of
the Earth (NTE) \cite{Tawa2008}.
It was found that the intensity of the background could be estimated as
a function of COR, and that the spectra were similar.
They proposed a background estimation method to make a spectrum from
$\pm$ 150 days of stacked night-Earth data weighted to reproduce the
distribution of COR.
They also found that the fluctuation of the background is larger than the
simple Poisson statistics.
The uncertainty of the reproducibility for a typical 50 ksec exposure
was reported to be 3.4\%, although the expected statistical error from
Poisson statistics alone is 1/10 of that \cite{Tawa2008}.
This procedure is used as a standard background estimation for
{\it Suzaku} and adapted as a HEASoft tool.
In our analysis, we needed to sort the data by
$\left( B_\perp L \right)^2$, which correlates with the COR. If the
orbital position or $\left( B_\perp L \right)^2$ is a control parameter
of the NXB, it will affect our determination of the
$\left( B_\perp L\right)^2$-modulated signal.
The COR parameters used in the {\it Suzaku} analysis (defined as COR2 in
the calibration database) are defined on projected geographic
coordinates and calculated with the geomagnetic model of April 2006.
The actual COR changes gradually with time, and the cosmic-ray flux is
affected by solar activity.
We therefore carried out a further NXB analysis of {\it Suzaku} to
evaluate the possible range of background fluctuations and to define
further data reduction methods where needed.
We evaluated the fluctuation of the input cosmic-ray flux using the
12--15 keV event rate of XIS1. Since the effective area of the X-ray
mirror drops rapidly to below 1\% above the Au L-edge, the event rate
above 12 keV is an indicator of the cosmic-ray flux.
Owing to its back-illuminated structure, the background rate of XIS1 is
higher than that of the front-illuminated CCDs, XIS0 and XIS3, making it
more sensitive to this flux.
The fluctuation of the background count rate clearly exceeds Poisson
statistics.
In \cite{Tawa2008}, the intrinsic fluctuation is evaluated as
$\sqrt{\sigma_{\rm calc}^{2}-\sigma_{\rm Poisson}^{2}}$.
We evaluate the intrinsic fluctuation as follows:
\begin{enumerate}
\item Count the number of events in 12--15 keV during each 60 sec
      interval for each COR range.
\item Calculate the mean of the counts in the 60 sec bins, denoted
      $\mu$, from the distribution of count rates shown in the histogram
      in Figure \ref{calc_sample_short_term}.
\item Assume a certain value of $\sigma$, and simulate $P_{\rm NXB}$ by
      a Monte-Carlo method according to Equation \eqref{122042_23Jan17}
      (curves in Figure \ref{calc_sample_short_term}):
      \begin{equation}
       P_{\rm NXB}\left( X=k \right) =
	\int_0^\infty \frac{\lambda^k e^{-\lambda}}{k!}\,
	\frac{1}{\sqrt{2\pi \sigma^2}}
	\exp \left(-\frac{\left( \lambda-\mu \right)^2}{ 2\sigma^2} \right)
	d\lambda,
       \label{122042_23Jan17}
      \end{equation}
      where $k$ is the observed number of events per interval, $\lambda$
      is the mean number of events per interval, $\mu$ is the average of
      the observed NXB events, and $\sigma$ is the estimated systematic
      error describing the variation of $\lambda$.
\item Compare the observed and simulated histograms by Pearson's
      chi-squared test and obtain the 95\% confidence range for
      $\sigma$.
\end{enumerate}
This procedure assumes that the variations of the mean count rate follow
a Gaussian distribution and that the detected counts follow a Poisson
distribution; the observed count rate is thus expressed by a convolution
of these two functions.
We use Pearson's chi-squared test to set a quantitative upper limit on
the short-term variability. A sample of these tests is shown in
Figure \ref{calc_sample_short_term}.
The 95\% confidence range for the standard deviation of the Gaussian is
obtained as 22--39\% of the mean value.
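A minimal Monte-Carlo version of this test could look as follows; the function names and the bin-validity cut are our assumptions, not the actual analysis code.
\begin{verbatim}
import numpy as np
from scipy import stats

def simulated_pmf(mu, sigma, kmax, ndraw=200_000, seed=0):
    """Gaussian-mean Poisson mixture described above: draw
    lambda ~ N(mu, sigma) truncated at 0, then k ~ Poisson(lambda)."""
    rng = np.random.default_rng(seed)
    lam = np.clip(rng.normal(mu, sigma, ndraw), 0.0, None)
    k = rng.poisson(lam)
    return np.bincount(k, minlength=kmax + 1)[: kmax + 1] / ndraw

def pearson_test(observed, mu, sigma):
    """Pearson chi-squared of the observed 60 s count histogram against
    the simulated mixture; scanning sigma brackets the allowed range."""
    kmax = len(observed) - 1
    expected = observed.sum() * simulated_pmf(mu, sigma, kmax)
    ok = expected > 5                  # usual validity cut for the test
    chi2 = ((observed[ok] - expected[ok]) ** 2 / expected[ok]).sum()
    return chi2, stats.chi2.sf(chi2, ok.sum() - 1)
\end{verbatim}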
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.5,bb=0 0 640 480]{./short_term_calc_sample_xis1.pdf}
\caption{A sample histogram of observed events in 12--15 keV of XIS1 in
the COR range of 9 to 10 GV\slash $c$, and the probability distributions
for several values of $\sigma \slash \mu$ estimated from Equation
\eqref{122042_23Jan17}.}
\label{calc_sample_short_term}
\end{center}
\end{figure}
We checked the background rate for anomalies with geographical position
as follows.
We divided the orbital position projected onto the Earth's surface into
cells of $10^\circ$ in longitude and $5^\circ$ in latitude, each
labelled by a {\tt Loc\_ID}, and sorted them into 4 COR classes.
For each observation, the NTE count rate in 2--5.6 keV within
$\pm$150 days, as used by the standard background estimation, was
accumulated for every {\tt Loc\_ID}. If the count rate at a
{\tt Loc\_ID} was higher than the average over the same COR class by
3$\sigma$, the events that occurred at that {\tt Loc\_ID} were discarded
from the spectral analysis.
After this data reduction, the count rate in 2--6 keV before and after
the standard background subtraction was plotted as a function of
$\left(B_\perp L\right)^2$. We found a negative correlation between the
count rate and $\left(B_\perp L\right)^2$, contrary to the prediction
for an ALP-origin signal.
We checked the count rate of the upper discriminator (PIN-UD) of the
Hard X-ray Detector (HXD) onboard {\it Suzaku}, which corresponds to
energy deposits of protons of approximately $>100$ MeV
\cite{Kokubun2007}, and found the same trend.
The PIN-UD is affected by the radio-activation of the HXD itself and
cannot be used to estimate the XIS background.
We evaluated the correlation with a linear function fit and decided that
only data satisfying $\left(B_\perp L\right)^2$ $\geq$ 2 $\times$
$10^4$ T$^2$ m$^2$ would be used for the analysis.
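Schematically, this screening amounts to a first-order fit followed by the quoted threshold cut; the following sketch assumes per-interval arrays of $(B_\perp L)^2$ and background-subtracted count rates.
\begin{verbatim}
import numpy as np

def screen_by_bl2(bl2, rate, threshold=2.0e4):
    """Fit rate versus (B_perp L)^2 with a straight line, then keep
    only intervals above the quoted threshold of 2e4 T^2 m^2."""
    slope, intercept = np.polyfit(bl2, rate, 1)
    keep = np.asarray(bl2) >= threshold
    return slope, intercept, keep
\end{verbatim}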
\subsection{Spectral analysis for $\left( B_\perp L \right)^2$ sorted data}
In the spectral analysis, we assumed that celestial diffuse emission of
each blank field is expressed by the sum of Cosmic X-ray Background
(CXB), Milky Way Halo (MWH) emission, Solar Wind Charge eXchange (SWCX),
Local Hot Bubble (LHB), and unknown
High Temperature Component (HTC) as studied by previous works
\cite{Yoshino2009,Sekiya2016,Nakashima2018}.
These components are collectively called the XDB.
The surface brightness and spectral parameters of the celestial emission
are allowed to vary with the FoV within a reasonable range.
The ALP signal has a power-law spectral shape with a photon index of
$+1/2$, and with intensities proportional to $\left( B_\perp L \right)^2$.
The NXB for each observation can be estimated by the standard background
estimation method
\cite{Tawa2008}, but the intensities can also be varied within the
fluctuation studied in the previous subsection.
The steps of the spectral analysis for one observing direction are as
follows:
\begin{enumerate}
\addtocounter{enumi}{-1}
\item Apply the standard data reduction for XIS 0, 1, and 3 of each
      observation ID (a unit of archival data: events from continuous
      pointing at the same observation direction), point-source removal,
      the {\tt Loc\_ID} selection, and the $\left(B_\perp L\right)^2$
      cut. Response matrices \cite{Ishisaki2007} and template NXB
      spectra from the standard method \cite{Tawa2008} are also
      prepared.
      \label{163543_28May18}
\item Accumulate the energy spectra in the 0.7--7.0 keV range for XIS 0
      and 3 and in the 0.7--5.0 keV range for XIS 1, subtract the
      standard NXB, fit them simultaneously with an empirical X-ray
      background model, obtain the best-fit values and errors with
      $\chi^2$ statistics and $C$-statistics \cite{Cash1979} in
      {\it Xspec}, and evaluate the validity of the parameters.
      \label{162401_28May18}
\item Divide the energy spectra by $\left(B_\perp L\right)^2$ class and
      fit them again simultaneously with $C$-statistics, because of the
      low photon statistics in each class. Check consistency with the
      spectral parameters obtained in Step \ref{162401_28May18}.
      \label{164512_28May18}
\item Add the ALP emission model as a power-law function with a photon
      index of $+1/2$ and with a surface brightness proportional to
      $\left( B_\perp L\right)^2$, and treat the background as a
      spectral model whose intensities can be tuned (see the sketch
      after this list).
      \label{164529_28May18}
\end{enumerate}
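As referenced in Step \ref{164529_28May18}, the essential computation is a joint minimisation of the $C$-statistic over spectra that share a single ALP normalization. The sketch below is a simplified stand-in for the actual {\it Xspec} fitting: the function names, data layout, and the bounded scalar search are assumptions for illustration only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def cash_C(model, data):
    """Cash (1979) statistic 2*sum(m - d + d*ln(d/m)); the d*ln(d/m)
    term is dropped for empty bins."""
    m = np.clip(model, 1e-12, None)
    term = m - data + np.where(data > 0, data * np.log(data / m), 0.0)
    return 2.0 * term.sum()

def fit_alp_norm(E, spectra, nxb_models, bl2_values, conv):
    """Fit one shared ALP normalisation S (per 1e4 T^2 m^2) on top of
    fixed per-spectrum backgrounds; conv holds effective-area times
    exposure factors.  A sketch, not the actual fitting pipeline."""
    def total_C(S):
        return sum(cash_C(b + S * (w / 1e4) * E**0.5 * c, d)
                   for d, b, w, c in zip(spectra, nxb_models,
                                         bl2_values, conv))
    res = minimize_scalar(total_C, bounds=(-1.0, 1.0), method="bounded")
    return res.x
\end{verbatim}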
The fitting model describing the diffuse X-ray emission is similar to that
used in \cite{Sekiya2016}; it is shown by
\begin{displaymath}
apec_{\rm SWCX+LHB}+phabs(apec_{\rm MWH}+power{\mathchar`-}law_{\rm
CXB}+apec_{\rm HTC}).
\end{displaymath}
The APEC (Astrophysical Plasma Emission Code)
\cite{Smith2001}\footnote{The latest version is available at
\url{http://www.atomdb.org}.} is an emission model for optically thin
plasma in collisional equilibrium, implemented in {\it Xspec}, and is
applied here to estimate the SWCX and LHB blend, the MWH, and the HTC.
The temperature of {\it apec} in the SWCX and LHB blend was fixed to
$kT =$ 0.1 keV \cite{Yoshitake2013}.
The typical temperature of the MWH is $kT =$ 0.15--0.35 keV
\cite{Yoshino2009,Sekiya2016},
a part of the blank sky spectra requires a HTC with
$kT=0.6\mathchar`-0.9$ keV
to describe emission of approximately 0.9 keV \cite{Sekiya2016}.
The CXB was represented by a power-law emission model with a photon
index of $\sim 1.4$.
The solar abundance table of {\it apec} model was given by \cite{Anders1989}.
{\it Phabs} describes the absorption by the Galactic interstellar
medium, whose column density is fixed from the LAB
(Leiden/Argentine/Bonn) survey database \cite{Kalberla2005}.
Steps \ref{163543_28May18}--\ref{162401_28May18} constitute the standard
spectral fitting procedure for {\it Suzaku}; the parameters obtained in
Step \ref{162401_28May18} were consistent with each other within the
90\% errors and with previous works such as Sekiya et al. (2016)
\cite{Sekiya2016}.
In Step \ref{164512_28May18}, we divided the data by
$\left( B_\perp L \right)^2$. For example, in the case of the Lockman
hole, there were 11 observations, sorted by
$\left( B_\perp L \right)^2$ into 3 classes, with 3 CCDs each; thus 99
spectra were fitted simultaneously with the same emission parameters.
The numbers of energy spectra for the Lockman hole, MBM 16, SEP, and
NEP are 99, 36, 24, and 36, respectively.
The number of degrees of freedom in the spectral fit increases
accordingly, while the number of photons in each energy bin decreases.
We therefore applied $C$-statistics, which assumes that the data follow
a Poisson distribution and minimizes a likelihood ratio, and confirmed
that the obtained parameters are consistent with Step
\ref{162401_28May18}.
Because we divide the spectra in the later analysis, complex structures
(mainly oxygen lines below 0.7 keV) are not well resolved; we therefore
only used the data in the 0.7--7.0 keV range.
Components whose intensities were consistent with zero were ignored by
setting their intensities to 0.
In Step \ref{164529_28May18}, we added the ALP component, whose surface
brightness is proportional to $\left( B_\perp L \right)^2$. In the usual
spectral fitting of Steps
\ref{162401_28May18}--\ref{164512_28May18}, we used the spectra after
subtraction of the estimated background.
Here, we instead treated the NXB as one of the input models, with a
normalization factor that can vary with each observation ID, CCD, and
$\left( B_\perp L \right)^2$ class.
In contrast, the parameters of the celestial emission and the
normalization of the ALP component at a fixed
$\left( B_\perp L \right)^2=10^{4}$ T$^{2}$m$^{2}$ are common to the
same FoV.
The final fitting results are summarized in Table \ref{fit_result}.
We allowed the ALP flux to take both negative and positive values in the
fit in order to evaluate proper error ranges, as shown in Figure
\ref{155247_29May18}.
\begin{table}[htbp!]
\begin{center}
\begin{threeparttable}
\caption{Summary of the best-fit parameters of the XDB + ALP + NXB model
from the spectral fitting of the Lockman hole, MBM16, SEP, and NEP
observations, sorted by $\left( B_\perp L \right)^2$.}
\label{fit_result}
\begin{tabular}{l l c c c c} \hline \hline
Model & Parameter & Lockman hole & MBM16 & SEP & NEP \\ \hline
Num of Obs.ID & ~ & 11 & 6 & 4 & 4 \\
\multicolumn{2}{l}{Num of $\left( B_\perp L \right)^2$
classification$^{\ast}$} & 3 & 2 & 2 & 3 \\
Absorption & $N_{\rm H} ~ [10^{20}~{\rm cm^{-2}}]$
& 0.58(fix) & 16.90(fix) & 4.72(fix) & 3.92(fix) \\
LHB+SWCX & $kT$ [keV] & - & - & 0.1(fix) & - \\
& ${\rm Norm^{\dagger}}$ & 0(fix)& 0(fix) & $33.4^{+88.9}_{-33.4}$ &
0(fix) \\
MWH & ${kT}_1$ [keV] & $0.14^{+0.08}_{-0.09}$ &
$0.32^{+0.24}_{-0.23}$ & - & $0.21^{+0.18}_{-0.13}$ \\
~ & ${\rm Norm}_1^{\dagger}$ & $28^{+622}_{-20}$ & $2.1^{+23.0}_{-1.3}$ &
0(fix) & $4.1^{+13.5}_{-3.9}$ \\
HTC & ${kT}_2$ [keV] & $0.61^{\P}$ & - & $0.66^{+0.08}_{-0.06}$ & $0.69$ \\
~ & ${\rm Norm}_2^{\dagger}$ & $0.5^{+0.4}_{-0.5} {}^{\parallel}$ & 0(fix) &
$2.1^{+0.5}_{-0.6}$ & $0.3^{+0.7}_{-0.3}$ \\
CXB & $\Gamma_{\rm CXB}$ & $-1.42^{+0.13}_{-0.13}$ & $-1.33^{+0.16}_{-0.16}$ &
$-1.50^{+0.23}_{-0.25}$ & $-1.53^{+0.18}_{-0.18}$ \\
~ & $S_{\rm CXB}^{\ddagger}$ & $7.7^{+0.6}_{-0.6}$ & $6.5^{+1.1}_{-1.0}$ &
$5.6^{+0.9}_{-0.8}$ & $6.9^{+0.7}_{-0.8}$ \\
ALP & $\Gamma_{\rm ALP}$ & +0.5(fix) & +0.5(fix) & +0.5(fix) & +0.5(fix) \\
~ & $S_{\rm ALP}^{\S}$ & $0.012^{+0.015}_{-0.016}$ &
$0.005^{+0.022}_{-0.024}$ & $0.011^{+0.027}_{-0.030}$ &
$0.012^{+0.020}_{-0.022}$ \\
$C/{\rm dof}$(dof) & ~ & 1.11(2865) & 1.03(1505)
& 0.99(1000) & 1.10(1503) \\ \hline
\end{tabular}
\begin{tablenotes}[flushleft]
\begin{footnotesize}
\item All errors indicate the 90\% confidence level.
\item[$\ast$] See classification shown in Figure \ref{153849_24Jun16}.
\item[$\dagger$] The emission measure of the CIE plasma integrated over
the line-of-sight for SWCX+LHB and MWH
(the normalization of the {\it apec} model):
$(1/4\pi)\int n_{\rm e} n_{\rm H}ds$ in units of
$10^{14}~{\rm cm^{-5}~sr^{-1}}$.
\item[$\ddagger$] The surface brightness of the CXB
(the normalization of a power-law model): in units of photons
cm$^{-2}$ sec$^{-1}$ keV$^{-1}$ sr$^{-1}$ at 1 keV.
\item[$\S$] The surface brightness of the ALP component
(the normalization of a power-law model): in units of photons
cm$^{-2}$ sec$^{-1}$ keV$^{-1}$ sr$^{-1}$ at 1 keV and
$10^4$ T$^2$ m$^2$.
\item[$\P$] Parameter pegged at fitting limit: 0.
\item[$\parallel$] Because the normalization of {\it apec} allows 0 within the
error range, the temperature is not determined.
\end{footnotesize}
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
In all four observational directions, the surface brightness for the ALP
components is consistent with 0 within a 90\% confidence level.
We also checked that the normalizations of the NXB model were within
$\pm$40\%, consistent with the fluctuation range studied in Section 3.2.
We made contour plots of the 2--6 keV surface brightness of the ALP and
CXB components, as shown in Figure \ref{155247_29May18}.
Because the surface brightness depends on both the index and the
normalization of the assumed power-law components, the contours are not
smooth, owing to the steps in the parameter search.
The limit obtained from the MBM16 observation
is the lowest among these four fields and gives the tightest upper limit
on the ALP flux: $1.6 \times10^{-9}$ ergs s$^{-1}$cm$^{-2}$sr$^{-1}$
normalized at $10^{4}$ T$^{2}$m$^{2}$.
The accumulated spectrum of all fitted data is shown in Figure
\ref{180039_14Oct19}, together with the averaged XDB and NXB models and
the obtained upper limit for the ALP component.
In Table \ref{upper_limit}, we also tabulate the central values of the
CXB surface brightness and the upper limits on the fraction of ALP
emission hidden in the CXB.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.35,bb=0 0 1024 768]{./CXBvsALP.pdf}
\caption{Confidence contours of the surface brightness of the CXB versus
that of the ALP component, calculated from the photon indices
$\Gamma_{\rm CXB}$, $\Gamma_{\rm ALP}$ and normalizations
$S_{\rm CXB}$, $S_{\rm ALP}$ of Table \ref{fit_result}, obtained for the
Lockman hole, MBM16, SEP, and NEP observations, with the NXB
normalization parameters allowed to vary.
Three confidence levels are shown: 68\% (black), 90\% (red), and
99\% (green).
Dashed line: 99\% upper limit on the ALP surface brightness.}
\label{155247_29May18}
\end{center}
\end{figure}
To show the degeneracy between the ALP and NXB normalizations, we made a
contour plot for the Lockman hole observation at one observation ID, one
$(B_\perp L)^2$ class, and one XIS, as shown in Figure
\ref{152346_30Nov19}.
For the Lockman hole, the NXB normalizations were determined
independently for all 11 observation IDs, 3 $(B_\perp L)^2$ classes, and
3 XISs.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.30,bb=0 0 1024 768]{./NXBvsALP_lockmanhole.pdf}
\caption{Confidence contours of the ratio of the NXB normalization
versus the ALP surface brightness, calculated from the photon index
$\Gamma_{\rm ALP}$ and normalization $S_{\rm ALP}$ of Table
\ref{fit_result}, obtained for the Lockman hole at one observation ID,
one $(B_\perp L)^2$ class, and one XIS.
Three confidence levels are shown: 68\% (black), 90\% (red), and
99\% (green).}
\label{152346_30Nov19}
\end{center}
\end{figure}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.40,bb=0 0 640 480]{./data_bestfit_MBM16.pdf}
\caption{Accumulated spectrum used in the spectral fit for the MBM16
direction.
The spectrum is the sum over all observation IDs, $(B_\perp L)^2$
classes, and XISs. The response is weighted by the number of photons,
and the NXB model is weighted by the exposure time after applying the
normalization constant. Note that the actual fitting was done with a set
of energy spectra simultaneously, so no residuals are shown.}
\label{180039_14Oct19}
\end{center}
\end{figure}
\begin{table}[htbp!]
\begin{center}
\begin{threeparttable}
\caption{Summary of the 99\% confidence level upper limits on the
surface brightness of ALP-origin emission, as shown by the dashed lines
in Figure \ref{155247_29May18}.}
\label{upper_limit}
\begin{tabular}{l c c c} \hline \hline
Field Name & 99\% UL for ALP & best-fit CXB & 99\% UL for CXB
ratio \\
~ & surface brightness${}^{\ast}$ & surface brightness${}^{\dagger}$ &
[\%] \\ \hline
Lockman hole & 2.0 & 28.3 & 7.1\\
MBM16 & 1.6 & 26.9 & 5.9 \\
SEP & 2.8 & 18.6 & 15.1 \\
NEP & 1.9 & 22.0 & 8.6 \\
\hline
\end{tabular}
\begin{tablenotes}[flushleft]
\begin{footnotesize}
\item[$\ast$] In unit of $10^{-9}~{\rm ergs~s^{-1}~cm^{-2}~sr^{-1}}$ at
$10^4~{\rm T^2~m^2}$ in 2--6 keV band.
\item[$\dagger$] In unit of $10^{-9}~{\rm ergs~s^{-1}~cm^{-2}~sr^{-1}}$
in 2--6 keV band.
\end{footnotesize}
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
\section{Discussion and Conclusions}
\label{sec:discussion}
We assumed that cosmologically distributed ALPs would produce power-law
emission with a photon index of $+0.5$ ($dN/dE$ $\propto$ $E^{+0.5}$)
through conversion in the Earth's magnetosphere, in proportion to the
integrated $\left( B_\perp L \right)^2$ in the FoV, and analyzed the
{\it Suzaku} data for four different directions.
We did not detect the possible continuous emission from ALPs reported by
previous similar studies \cite{Fraser2014}.
We obtained a 99\% upper limit on the X-ray surface brightness
originating from ALPs in the 2.0--6.0 keV energy range of
1.6 $\times$ $10^{-9}$ ergs s$^{-1}$ cm$^{-2}$ sr$^{-1}$ at
$\left( B_\perp L \right)^2$ = $10^4$ T$^{2}$ m$^{2}$, as shown in Table
\ref{upper_limit}.
This corresponds to 6--15\% of the apparent CXB surface brightness in
the 2--6 keV band and is consistent with the finding that 80--90\% of
the CXB in the 2--8 keV band is resolved into point sources
\cite{Cappelluitti2017, Luo2017}.
In other words, it cannot be excluded that 10--20\% of the unresolved
CXB originates from ALPs converted to X-rays in the Earth's
magnetosphere at the {\it Suzaku} orbit.
If we assume the dark matter density and decay rate, we can limit the
ALP-photon coupling constant. By combining Equations \eqref{eq:con},
\eqref{124611_31May18} and \eqref{con_g10},
the ALP-photon coupling constant, $g_{a \gamma \gamma}$, was
constrained in the ALP mass range of
$m_{a}$ $<$ $\sqrt{2\pi E_a \slash L}$ $\sim$ 3.3 $\times$ 10$^{-6}$ eV
to be
\begin{eqnarray}
g_{a \gamma \gamma} < 3.3 \times 10^{-7}~{\rm GeV^{-1}} ~
\left( \frac{m_{\phi}}{10~{\rm keV}} \right)^{5/4}
\left( \frac{\tau_{\phi}}{4.32\times10^{17}~{\rm ~s}} \right)^{1/2}
\left( \frac{B_\perp L}{100~{\rm T~m}} \right)^{-1} \nonumber \\
\left( \frac{\rho_{\phi}}{1.25~{\rm keV~cm^{-3}}} \right)^{-1/2}
\left( \frac{H_0}{67.8~{\rm ~km~s^{-1}~Mpc^{-1}}} \right)^{-1/2}
\left( \frac{f}{1.92} \right)^{-1/2},
\end{eqnarray}
as shown in Figure \ref{122724_31May18}.
Here, we assume a standard cosmological model with dark matter mass density
$\rho_\phi$ and Hubble constant $H_0$.
The decay rate of the dark matter into ALPs satisfies
$\Gamma_{\phi \rightarrow 2a}$ $=$ $1/\tau_\phi$ $<$ $1/t_0$,
where $t_0$ is the Hubble time.
The factor $f$ is defined in Equation \eqref{124611_31May18}.
\ryrv{Note that we neglect the reduction and anisotropy of the ALP flux due to
interstellar and intergalactic magnetic fields.}
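As a quick numerical illustration (ours, not part of the analysis pipeline),
the scaling relation above can be transcribed directly into Python; all
argument names below are placeholders for the reference values quoted in the
equation:
\begin{verbatim}
# Hedged transcription of the scaling relation above; each bracket
# equals unity at the quoted reference values, so the function returns
# the quoted bound g < 3.3e-7 GeV^-1 there.
def g_limit(m_phi_keV=10.0, tau_s=4.32e17, BL_Tm=100.0,
            rho_keV_cm3=1.25, H0=67.8, f=1.92):
    return (3.3e-7 * (m_phi_keV / 10.0)**1.25
            * (tau_s / 4.32e17)**0.5 * (BL_Tm / 100.0)**-1
            * (rho_keV_cm3 / 1.25)**-0.5 * (H0 / 67.8)**-0.5
            * (f / 1.92)**-0.5)

print(g_limit())  # 3.3e-7 at the reference values
\end{verbatim}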
For the line emission search in the X-ray band, Sekiya et al. (2016)
collected the longest exposure of 12 Ms from 10
years of {\it Suzaku} archival data, and obtained a 3$\sigma$ upper
limit for a narrow line emission between 1 and 7 keV of 0.021 photons
s$^{-1}$ cm$^{-2}$ sr$^{-1}$ \cite{Sekiya2016}.
The ALP-photon conversion rate, $P_{a \rightarrow \gamma}$ $\propto$
$\left( B_\perp L\right)^2$, was also computed using the IGRF-12
model every 60 seconds, and the averaged value was obtained to be
$\left( B_\perp L\right)$ = 140 T m.
This is larger than the value of 84 T m for CAST \cite{Andriamonje2007}.
This value gives the upper limits of
\begin{equation}
I_{a,{\rm line}} \cdot \left(\frac{g_{a\gamma\gamma}}{10^{-10}{\rm
GeV}^{-1}}\right)^2 < 4.4 \times 10^{14}
~{\rm axions ~s^{-1} ~cm^{-2} ~sr^{-1}} \quad
{\rm in~the~1.0\mbox{--}7.0~keV~band.}
\label{175448_10Nov17}
\end{equation}
The coupling constant $g_{a \gamma \gamma}$ can also be constrained as
\begin{equation}
g_{a \gamma \gamma} < 8.4 \times 10^{-8}~{\rm GeV^{-1}}
\left( \frac{B_{\perp} L}{140 {\rm ~T~m}} \right)^{-1}
\left( \frac{\tau_{\phi}}{4.32\times10^{17} {\rm ~s}} \right)^{1/2}
\left( \frac{S_{\phi}}{50 {\rm ~M_{\odot}pc^{-2}}} \right)^{-1/2},
\end{equation}
when the ALP density is related to the dark matter density around our Galaxy.
This limit is also shown in Figure \ref{122724_31May18} as the Galactic
monochromatic ALP.
In the plot, we take into account the oscillation effect through Equation
\eqref{163650_19Feb16}.
These constraints on the physical parameters of ALPs
are less stringent than those of other experiments (e.g. CAST, ADMX),
which assume different axion and ALP models from this work.
Nevertheless, it is important to note that
we obtained these constraints using a new, independent method based on X-ray
observations.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[scale=0.6,bb=0 0 640 480]{./gma.pdf}
\caption{Constraints on the ALP parameters obtained in this paper for the
universal continuous ALP (cyan) and the Galactic monochromatic ALP (yellow);
see text for details.
Limits from other experiments are taken from \cite{Carosi2013,
Anastassopoulos2017}.}
\label{122724_31May18}
\end{center}
\end{figure}
\acknowledgments
This work was partially supported by JSPS KAKENHI Grant Numbers
26220703 and 14J11023.
We thank Prof. M. Kawasaki and Prof. M. Teshima for valuable comments,
and Dr. N. Sekiya for allowing us to use his data and for his suggestions.
We would like to thank Editage (www.editage.com) for English language
editing.
\section{Introduction}
Understanding the many-body instabilities and symmetry breaking
of strongly interacting fermions in two dimensions (2D) holds the key to
several long-standing
problems in condensed matter physics.
One example is the precise mechanism by which unconventional
superconductivity with various pairing symmetries emerges from repulsive
interactions,
in materials ranging from cuprate \cite{RevModPhys.78.17} and ruthenate
\cite{ruthenates} to pnictide \cite{iron}
superconductors.
These and other correlated quantum materials typically display
intertwined vestigial orders, e.g. in the so-called pseudogap
region where charge density waves, pairing, and other fluctuations compete.
Recently, ultracold Fermi
gases \cite{RevModPhys.80.1215,ketterle2008making} of atoms and molecules have become a promising experimental platform to
tackle some of these open
problems by realizing
Hamiltonians such as the Fermi-Hubbard model \cite{hart2015observation,mazurenko2017cold,brown2019bad}
with tunable interactions \cite{RevModPhys.80.885}.
This offers an opportunity to deepen our understanding of the ``pairing glue'' in repulsively
interacting systems, and shed light on the complex interplay of quantum fluctuations
in distinct channels for simple and highly controlled
Hamiltonians.
In this paper, we show theoretically that a Rydberg-dressed Fermi gas of alkali
atoms with tunable long-range interactions gives rise
not only to $p$-wave topological superfluids for attractive bare interactions,
but also to an $f$-wave superfluid with a high transition temperature stemming from repulsive bare interactions.
Rydberg atoms and Rydberg-dressed atoms have long been recognized for their
potential in quantum simulation and quantum information \cite{Weimer2010,
PhysRevLett.87.037901,Saffman2010,Browaeys2016,Karpiuk_2015}.
Recent experiments have successfully demonstrated a panoply of two-body
interactions in cold gases of Rydberg-dressed alkali atoms
\cite{schauss2015crystallization,zeiher2016many,holletith2018,
PhysRevX.7.041063,Jau2015,PhysRevX.8.021069,borish2019transversefield}.
In Rydberg dressing, the
ground state atom (say $n_0S$) is weakly coupled to a Rydberg state (say $nS$
or $nD$) with large principal number $n$ by off-resonant light with Rabi
frequency $\Omega$ and detuning $\Delta$. The coupling can be achieved for
example via a two-photon process involving an intermediate state $n_1 P$ to
yield longer coherence times \cite{Henkel2010}. The huge
dipole moments of the Rydberg states lead to strong interactions that
exceed the natural van der Waals interaction by a factor that scales with
powers
of $n$ \cite{Saffman2010,Browaeys2016}.
The
interaction between two Rydberg-dressed atoms takes the following form
\cite{Henkel2010}:
\begin{equation}
V(\mathbf{r}) = \frac{u_0}{r^6+r_c^6}.
\label{eq:rydberg_interaction_real}
\end{equation}
Here $r=|\mathbf{r}|$ is the inter-particle distance,
$u_0=(\Omega/2\Delta)^4C_6$ is the interaction strength, $C_6$ is the
van der Waals coefficient, and $r_c=|C_6/2\hbar\Delta|^{1/6}$ is
the soft-core radius and the characteristic scale for the interaction
range. As shown in Fig.~\ref{fig:rydberg_interaction}, $V({\bf r})$ has a
step-like soft-core for $r\lesssim r_c$ before decaying to a van der Waals
tail at long distances.
Both $u_0$ and $r_c$ can be tuned experimentally
via $\Omega$ and $\Delta$ \cite{Henkel2010}. Moreover, by choosing proper
Rydberg states (e.g. $nS$ versus $nD$ for $^6$Li with $n>30$ \cite{Xiong2014})
$C_6$ and $u_0$ can be made either repulsive or attractive.
By choosing proper $n$, $\Delta$ and $\Omega$, atom loss can be reduced to achieve a
sufficiently long lifetime to observe many-body phenomena
\cite{Henkel2010, PhysRevX.7.041063, PhysRevX.8.021069,Li2016a}.
Previous theoretical studies have explored the novel many-body phenomena
associated with interaction Eq.~\eqref{eq:rydberg_interaction_real} in
bosonic
\cite{Henkel2010,Maucher2011,Glaetzle2014,tanatar2019,Pupillo2010,PhysRevLett.108.265301,PhysRevLett.123.045301,PhysRevLett.119.215301}
and fermionic gases \cite{Li2016a} including the prediction of topological
superfluids \cite{Xiong2014} and topological density waves \cite{Li2015a}.
Here we consider single-component Rydberg Fermi gases confined in 2D
\cite{Khasseh2017}, where mean-field and random phase approximation (RPA)
become unreliable due to enhanced quantum fluctuations. Our goal is to set
up a theory to systematically describe the competing many-body phases of 2D
Rydberg-dressed Fermi gas by treating them on equal footing beyond the
weak-coupling regime and RPA.
We achieve this by solving the functional renormalization group flow equations
for the fermionic interaction vertices. The resulting phase diagram
(Fig.~\ref{fig:phase_diagram})
is much richer than the RPA prediction \cite{Khasseh2017} and reveals
an unexpected $f$-wave phase.
The paper is organized as follows. In Sec.~\ref{sec:mean-field} we introduce
many-body phases of Rydberg-dressed Fermi gas within mean-field from the
standard Cooper
instability analysis and Random Phase Approximation. In Sec.~\ref{sec:frg} we
present the numerical implementation of Functional Renormalization
Group to this problem and in Sec.~\ref{sec:results} we show many-body phases
beyond mean field calculation which manifest intertwined quantum
fluctuations in pairing and density-wave channels. In
Sec.~\ref{sec:conclusions}, we summarize our study and implications of our
findings for
future experimental developments in ultracold gases.
\section{Rydberg-dressed Fermi gas}
\label{sec:mean-field}
We first highlight the unique properties of
Rydberg-dressed Fermi gas by comparing it with other well-known Fermi systems
with long-range interactions such as the electron gas and dipolar Fermi gas.
Correlations in electron liquid are characterized by a single
dimensionless parameter $r_s$, the ratio of Coulomb interaction energy to
kinetic energy. In the high density limit $r_s\ll 1$, the system
is weakly interacting, while in the low density limit $r_s\gg 1$ a Wigner
crystal is formed. The intermediate correlated regime with $r_s\sim 1$ can
only be described by various approximations \cite{nozieres-pines}. Similarly,
dipolar Fermi gas also has a power-law interaction that lacks a scale, so a
parameter analogous to $r_s$ can be introduced which varies monotonically with
the density \cite{Baranov2012a}.
The situation is different in Rydberg-dressed Fermi gas with interaction given
by Eq.~\eqref{eq:rydberg_interaction_real}. From the inter-particle spacing
$1/\sqrt{2\pi n}$ and the Fermi energy $\ef=2\pi n/m$ (we put $\hbar=1$ and
$k_B=1$) in terms of areal density $n$, one finds that the ratio of interaction
energy to kinetic energy scales as $n^2/[1+(2\pi r_c^2)^3 n^3]$, which,
unlike in the electron liquid, varies non-monotonically with $n$ owing to the
scale $r_c$ (Fig.~\ref{fig:rydberg_interaction}a inset).
A distinctive feature of the interaction $V(\mathbf{r})$ is revealed by its
Fourier transform in 2D \cite{Khasseh2017},
\begin{equation}
V(\qb) = g
G\left(
{q^6r_c^6}/{6^6}
\right),\;\;\; g=\pi u_0/3r_c^4,
\label{eq:rydberg_interaction_momentum}
\end{equation}
where $\mathbf{q}$ is the momentum, $q=|\mathbf{q}|$, $g$ is the coupling strength
and
$G$ is the Meijer G-function \footnote{
In Mathematica, this Meijer G-function is called with
\texttt{
MeijerG\big[\{\{\},\{\}\},\{\{0,1/3,2/3,2/3\},\{0,1/3\}\},$z^6/6^6$\big]
} where $z=qr_c$.
}.
The function $V(\qb)$, plotted in Fig.~\ref{fig:rydberg_interaction}b,
develops a negative minimum at $q=q_c\sim 4.82/r_c$. This is the momentum
space manifestation of the step-like interaction potential
Eq.~\eqref{eq:rydberg_interaction_real}. These unique behaviors
are the main culprits of its rich
phase diagram.
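As an aside, $V(\qb)$ in Eq.~\eqref{eq:rydberg_interaction_momentum} is easy
to evaluate numerically. The following Python sketch is our illustration, not
code from any cited reference; it assumes the Mathematica index sets quoted in
the footnote carry over one-to-one to \texttt{mpmath.meijerg}, and it perturbs
the duplicated index $2/3$ by a tiny \texttt{eps}, a standard workaround for
the coincident-index (logarithmic) case:
\begin{verbatim}
# Sketch: evaluate V(q) via the Meijer G-function with mpmath and
# locate its negative minimum, expected near q_c ~ 4.82/r_c.
from mpmath import mp, meijerg, mpf

mp.dps = 40                      # extra digits absorb the eps-perturbation
eps = mpf('1e-12')

def V_of_q(q, r_c=1.0, g=1.0):
    z = (mpf(q) * mpf(r_c))**6 / mpf(6)**6
    b1 = [0, mpf(1)/3, mpf(2)/3, mpf(2)/3 + eps]   # duplicated 2/3 shifted
    b2 = [0, mpf(1)/3]
    return g * meijerg([[], []], [b1, b2], z)

qs = [0.05 * i for i in range(1, 201)]             # q in units of 1/r_c
vals = [V_of_q(q) for q in qs]
print(qs[min(range(len(vals)), key=lambda i: vals[i])])  # ~4.8
\end{verbatim}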
\begin{figure}
\centering
\includegraphics[scale=.5]{rydberg_int.pdf}
\includegraphics[scale=.5]{rydberg_int_2.pdf}
\includegraphics[scale=.5]{gamma_uv_m.pdf}
\includegraphics[scale=.5]{rpa_phase_diagram.pdf}
\caption{Single-component Fermi gas with Rydberg-dressed interactions in 2D.
(a) The interaction potential Eq. \eqref{eq:rydberg_interaction_real}
shows a step-like soft core of radius $r_c$ and a long-range tail.
(Inset) Ratio of the interaction to kinetic energy
varies non-monotonically with density. (b) The Rydberg-dressed
interaction Eq. \eqref{eq:rydberg_interaction_momentum} in momentum space
attains a negative minimum at $q_c\sim 4.82/r_c$. (c) For attractive
interactions, the critical temperatures in different angular momentum
$\ell$ channels (in arbitrary units) from the solution of the Cooper
problem. The leading instability is $p$-wave, $\ell=\pm 1$. Maximum $T_c$
is around $p_Fr_c\approx 2$. (d) For repulsive interactions, random phase
approximation points to a density-wave order. False color (shading) shows the
ordering wave vector of density modulations.}
\label{fig:rydberg_interaction}
\end{figure}
Starting from the free Fermi gas,
increasing the interaction $g$ may lead to a diverging susceptibility and
drive the Fermi liquid into a symmetry-broken
phase.
We first give a qualitative discussion of potential ordered phases
using standard methods to orient our numerical FRG results later.
For attractive interactions, $u_0<0$, an arbitrarily small $g$ is sufficient
to drive the Cooper instability. By decomposing $V(\qb=\pb_F-\pb_F')$ into
angular momentum channels, $V(2p_F\sin\frac{\theta}{2})=\sum_\ell V_\ell
e^{i\ell \theta}$ where $\theta$ is the angle between $\pb_{F}$ and $\pb_F'$,
one finds different channels decouple and the critical
temperature of the $\ell$-th channel $T_c(\ell) \sim e^{-1/N_0V_\ell}$
\cite{mineev1999} with
$N_0=m/2\pi$ being the density of states.
Thus the
leading instability comes from the channel with the largest $V_\ell$ (hence
the largest $T_c$). Fig.~\ref{fig:rydberg_interaction}c illustrates
$T_c(\ell)$ as a function of $r_c$ for fixed $p_F$. It is apparent that the
dominant instability is in the $\ell=\pm 1$ channel, i.e., $p$-wave pairing.
Its $T_c$ develops a dome structure and reaches maximum around $p_Fr_c\approx
2$. For large $r_c$, higher angular momentum channels start to compete with
the $\ell=\pm 1$ channel.
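For concreteness, the channel couplings can be obtained by a simple
quadrature. The sketch below is our illustration (the grid size is a
placeholder and \texttt{V\_of\_q} is any pointwise evaluator of $V(q)$, such
as the one above); only the cosine harmonics are kept because $V$ is even in
$\theta$:
\begin{verbatim}
# Sketch: V_l = (1/2pi) int_0^{2pi} V(2 p_F sin(theta/2)) cos(l theta)
# dtheta, via the midpoint rule (also avoids evaluating V at q = 0).
# For attractive u_0 the most negative V_l sets the leading channel
# through T_c(l) ~ exp(-1/(N_0 |V_l|)).
import numpy as np

def channel_couplings(V_of_q, p_F, r_c, l_max=5, n_theta=720):
    theta = (np.arange(n_theta) + 0.5) * 2*np.pi / n_theta
    Vq = np.array([float(V_of_q(2*p_F*np.sin(t/2), r_c=r_c))
                   for t in theta])
    return {l: (Vq * np.cos(l*theta)).mean() for l in range(l_max + 1)}
\end{verbatim}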
For repulsive bare interactions, $u_0>0$, a sufficiently strong interaction
$g$ can induce an instability toward the formation of (charge) density waves.
This has been shown recently \cite{Khasseh2017} for 2D Rydberg-dressed Fermi
gas using random phase approximation (RPA) which sums over a geometric series
of ``bubble diagrams" to yield the static dielectric function,
$\epsilon(\qb)=1-V(\qb)\chi_0(\qb)$ where the Linhard function
$\chi_0(\qb)=-N_0[ 1- \Theta(q-2k_F) \sqrt{q^2-4k_F^2}/q]$. The onset of
density wave instability is signaled by $\epsilon(\qb)=0$ at some wave vector
$q=q_{ins}$, i.e. the softening of particle-hole excitations. Within RPA,
$q_{ins}$ always coincides with $q_c$, and the resulting phase diagram is
shown in Fig.~\ref{fig:rydberg_interaction}d.
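The RPA criterion is equally easy to scan numerically. In the sketch below
(ours; \texttt{V} is any vectorized version of $V(\qb)$), a density-wave
instability is flagged when $\min_q \epsilon(q)$ crosses zero, which within
RPA first happens at $q=q_c$:
\begin{verbatim}
# Sketch of the RPA dielectric function epsilon(q) = 1 - V(q) chi_0(q)
# with the 2D zero-temperature Lindhard function quoted in the text.
import numpy as np

def chi0(q, kF, N0):
    q = np.asarray(q, dtype=float)        # 1D array of momenta
    out = -N0 * np.ones_like(q)
    hi = q > 2*kF
    out[hi] = -N0 * (1.0 - np.sqrt(q[hi]**2 - 4*kF**2) / q[hi])
    return out

def epsilon_min(V, kF, N0, q_max, n=2000):
    q = np.linspace(1e-4, q_max, n)
    eps = 1.0 - V(q) * chi0(q, kF, N0)
    i = int(np.argmin(eps))
    return eps[i], q[i]                   # instability if eps[i] <= 0
\end{verbatim}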
While these standard considerations capture the $p$-wave pairing and density
wave order, they fail to describe the physics of
intertwined scattering between particle-particle and particle-hole channels.
We show below that this missing ingredient has significant effects,
leading to the emergence of a robust $f$-wave superfluid in the repulsive
regime.
For a detailed comparison between RPA and FRG see Ref.
\cite{PhysRevA.94.033616}.
\section{Numerical Implementation of Functional Renormalization Group}
\label{sec:frg}
Functional renormalization group (FRG) is a powerful
technique that can accurately predict the many-body instabilities of strongly
interacting fermions \cite{RevModPhys.84.299}. It implements Wilson's
renormalization group for interacting fermions in a formally exact manner by
flowing the generating functional of the many-body system $\Gamma$ as a sliding
momentum scale $\Lambda$ is varied. Starting from the bare
interaction $V(\qb)$ at a chosen ultraviolet scale $\Lambda_{UV}$, higher
energy fluctuations are successively integrated out to yield the self-energy
$\Sigma$ and effective interaction vertex $\Gamma$ at a lower scale
$\Lambda<\Lambda_{UV}$. As $\Lambda$ is lowered toward a very small value
$\Lambda_{IR}$, divergences in the channel coupling matrices and
susceptibilities point to the development of long-range order. Its advantage
is that all ordering tendencies are treated unbiasedly with full momentum
resolution. The main drawback is its numerical complexity: at each RG step,
millions of running couplings have to be retained.
FRG has been applied to dipolar Fermi gas
\cite{PhysRevA.94.033616,PhysRevLett.108.145301} and extensively benchmarked
against different techniques
\cite{PhysRevA.84.063633, PhysRevLett.108.145304, PhysRevB.84.235124,
PhysRevA.84.063633, PhysRevB.91.224504}.
For more details about the formalism, see reviews \cite{RevModPhys.84.299} and
\cite{peterkopietz2010}. Note that our system is a continuum Fermi gas, not
one of the lattice systems extensively studied and reviewed in \cite{RevModPhys.84.299}.
The central task of FRG is to solve the coupled flow equations for self-energy
$\Sigma_{1',1}$ and two-particle vertex $\Gamma_{1',2';1,2}$
\cite{RevModPhys.84.299}:
\begin{align}
\partial_\Lambda \Sigma_{1',1} &= -\sum_{2} S_{2} \Gamma_{1',2;1,2},
\nonumber\\
\partial_\Lambda\Gamma_{1',2';1,2} &= \sum_{3,4} \Pi_{3,4}
\big[
\frac{1}{2} \Gamma_{1',2';3,4} \Gamma_{3,4;1,2}
-\Gamma_{1',4;1,3}\Gamma_{3,2';4,2}
\nonumber\\
&+\Gamma_{2',4;1,3}\Gamma_{3,1';4,2}
\big].
\label{eq:flow}
\end{align}
Here the short-hand notation $1\equiv(\omega_1,\pb_1)$ is used; $1,2$ ($1',2'$)
label the incoming (outgoing) legs of the four-fermion vertex $\Gamma$, and
the sum stands for integration over frequency
and momentum, $\sum\rightarrow \int d\omega\, d^2\pb/(2\pi)^3$.
Diagrammatically, the first term in Eq. \eqref{eq:flow} is the BCS diagram in
the particle-particle channel, and the second and third terms are known as the
ZS and ZS' diagram in the particle-hole channel \cite{RevModPhys.66.129}.
The polarization bubble $\Pi_{3,4} = G_{3} S_{4} + S_{3} G_{4}$
contains the product of two scale-dependent Green functions defined by
\begin{align}
G_{\omega,\pb} =
\frac{\Theta(|\xi_\pb|-\Lambda) }{i\omega-\xi_\pb-\Sigma_{\omega,\pb} } ,\quad
\quad
S_{\omega,\pb} =
\frac{\delta(|\xi_\pb|-\Lambda) }{i\omega-\xi_\pb-\Sigma_{\omega,\pb} }.
\label{eq:propagators}
\end{align}
Note that $G$, $S$, $\Sigma$ and $\Gamma$ all depend on the sliding scale
$\Lambda$; we suppress their $\Lambda$-dependence in the equations above for
brevity.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{phase_diagram.pdf}
\includegraphics[width=0.4\textwidth]{point-P1.pdf}
\includegraphics[width=0.4\textwidth]{point-P2.pdf}
\includegraphics[width=0.4\textwidth]{point-P3.pdf}
\caption{Phase diagram of Rydberg-dressed spinless Fermi gas in 2D based on
FRG. Tuning the interaction range $r_c$ and interaction strength $g$
yields Fermi Liquid (FL), $p$-wave superfluid (p-SF), $f$-wave superfluid
(f-SF), and density-wave (DW). False color (shading) indicates the
critical scale
$\Lambda_c$ of the instability where brighter (darker) regions have
higher (lower) $T_c$. Panels labelled with $\mathcal{P}_1$,
$\mathcal{P}_2$ and $\mathcal{P}_3$ show the details of renormalization
flow and vertex function for points marked with white diamonds on the
phase diagram. The leading eigenvalues for a few channels (see legends)
are shown on the left. The maps of vertex function
$\Gamma(\pb_{F1}',\pb_{F2}',\pb_{F1})$ are shown on the right for fixed
$\pb_{F1}=(-p_F,0)$. Superfluid (density wave) order displays diagonal
(horizontal and vertical) correlations.}
\label{fig:phase_diagram}
\end{figure}
Several well-justified approximations are used to make the flow equations
computationally tractable. To identify leading
instabilities, the self-energy can be safely dropped, and the frequency
dependence of $\Gamma$ can be neglected \cite{RevModPhys.84.299}. As a
result, the frequency integral of the fermion loops in Eq. \eqref{eq:flow} can
be performed analytically. Furthermore, we retain the most relevant dependence of
$\Gamma$ on $\pb$ by projecting all off-shell
momenta radially onto the Fermi surface \cite{RevModPhys.84.299}.
Then, $\Gamma$ is
reduced to
$\Gamma_{1',2';1,2}\rightarrow\Gamma(\pb_{F1}',\pb_{F2}',\pb_{F1})$ where
the last momentum variable is dropped because it is fixed by
conservation, and the
subscript in $\pb_F$ indicates radial projection onto the Fermi surface.
The initial condition for $\Gamma$ at the ultraviolet scale $\Lambda_{UV}$
is given by the antisymmetrized bare interaction $V(\qb)$,
\begin{equation}
\Gamma(\pb_{F1}',\pb_{F2}',\pb_{F1})\big|_{\Lambda_{UV}}
\equiv \frac{1}{2}
[V(\pb_{F1}'-\pb_{F1})-V(\pb_{F2}'-\pb_{F1})].
\end{equation}
We solve the flow equation by the Euler method on a logarithmic grid of
$\Lambda$ consisting of $10^3$ RG steps going from $\Lambda_{UV}=0.99E_F$
down to $\Lambda_{IR}=10^{-3}E_F$. Each $\pb_{F}$ is discretized on an angular
grid with up to hundreds of patches on the Fermi surface \footnote{To speed up
the calculation, the FRG algorithm is adapted to run parallel on Graphic
Processing Units.}.
We monitor the flow of $\Gamma(\pb_{F1}',\pb_{F2}',\pb_{F1})$
which contains hundreds of millions of running coupling constants.
When the absolute value of a running coupling constant in $\Gamma$ exceeds a
threshold, e.g. $50E_F$, signaling an imminent divergence, we terminate the
flow, record the critical scale $\Lambda_c$, and analyze the vertex to
diagnose the instability.
If the flow continues smoothly down to
$\Lambda_{IR}$, we conclude the Fermi liquid is stable down to
exponentially small temperatures.
Scanning the parameter space $(g,r_c)$ gives the phase diagram, whereas
$\Lambda_c$ provides a rough estimate of the $T_c$ of each ordered phase.
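To illustrate the bookkeeping (and only that), the following deliberately
stripped-down sketch flows $N$ Fermi-surface patches with the
particle-particle diagram of Eq.~\eqref{eq:flow} alone, with a sharp cutoff
and no self-energy. All names and the loop normalization are our
simplifications, and dropping the ZS/ZS$'$ diagrams removes precisely the
interference responsible for the $f$-wave physics:
\begin{verbatim}
# Minimal patch-FRG sketch: BCS diagram only, logarithmic Lambda grid.
# Gamma0 is the (N, N) antisymmetrized pairing vertex on the patches.
import numpy as np

def run_bcs_flow(Gamma0, N0, L_uv=0.99, L_ir=1e-3, n_steps=1000,
                 threshold=50.0):
    Gamma = Gamma0.copy()
    N = Gamma.shape[0]
    grid = np.geomspace(L_uv, L_ir, n_steps)
    for L, L_next in zip(grid[:-1], grid[1:]):
        dlog = np.log(L / L_next)
        # One-loop pp contribution, angular average over the shell:
        Gamma = Gamma - dlog * N0 * (Gamma @ Gamma) / N
        if np.abs(Gamma).max() > threshold:   # imminent divergence
            return L_next, Gamma              # critical scale Lambda_c
    return None, Gamma                        # stable Fermi liquid
\end{verbatim}
For a constant attractive vertex this recursion reproduces the one-channel
estimate $\Lambda_c \approx \Lambda_{UV}\, e^{-1/(N_0|V|)}$ quoted earlier,
while a constant repulsive vertex flows marginally to zero.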
Two complementary methods are employed to identify the leading
instability from the large, complex data set of $\Gamma$.
First, we plot $\Gamma(\pb_{F1}',\pb_{F2}',\pb_{F1})$ at $\Lambda_c$ against
the angular directions of $\pb_{F1}'$ and $\pb_{F2}'$ for fixed
$\pb_{F1}=(-p_F,0)$ \footnote{This is done without loss of generality due to
the rotational invariance.} to reveal the dominant correlations between
particles on the Fermi surface. The color map (Fig.~\ref{fig:phase_diagram},
lower right columns) shows
diagonal structures ($\pb_{F1}'=-\pb_{F2}'$) for pairing instability, and
horizontal-vertical structures (scattering $\pb_{F1}\rightarrow \pb_{F1}'$
with momentum transfer close to $0$ or $2p_F$) for density waves
\cite{PhysRevB.63.035109,peterkopietz2010}. This method directly exposes the
pairing symmetry through the number of nodes along the diagonal structures: a
$p$-wave phase has one node, and an $f$-wave phase has three nodes, etc.
In the second method, we construct the channel matrices from $\Gamma$, e.g.
$V_{BCS}(\pb',\pb)=\Gamma(\pb',-\pb',\pb)$ for the pairing channel, and
$V^\qb_{DW}(\pb',\pb)=\Gamma(\pb+\qb/2,\pb'-\qb/2,\pb-\qb/2)$ for the density
wave channel. Different values of $\qb$, e.g. $\qb_i=(q_i,0)$ with
$q_i\in\{0.05p_F,0.5p_F,p_F,2p_F\}$ for $i\in\{1,...,4\}$ respectively, are
compared (see DW$_i$ in Fig.~\ref{fig:phase_diagram},
left column).
The channel matrices are then diagonalized and their most negative
eigenvalues are monitored. This method provides a clear picture of the
competition among the channels. The eigenvector of the
leading divergence exposes the orbital symmetry, e.g. $p$- or $f$-wave, of the
incipient order.
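In code, this second diagnosis step amounts to an eigendecomposition. The
sketch below is ours; it assumes the channel matrix has been symmetrized on
an angular grid \texttt{theta}, and it reads off $|\ell|$ from the dominant
Fourier harmonic of the leading eigenvector:
\begin{verbatim}
# Sketch: leading pairing mode and its orbital symmetry from V_BCS.
import numpy as np

def leading_pairing_mode(V_bcs, theta):
    w, U = np.linalg.eigh(V_bcs)       # eigenvalues in ascending order
    vec = U[:, 0]                      # most negative eigenvalue
    amps = np.abs(np.fft.fft(vec))     # harmonic content of eigenvector
    l = 1 + int(np.argmax(amps[1:len(theta)//2]))
    return w[0], l                     # l = 1: p-wave, l = 3: f-wave
\end{verbatim}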
\section{Phase diagram from FRG}
\label{sec:results}
The resulting phase diagram is summarized in
the top panel of Fig.~\ref{fig:phase_diagram}. In addition to the Fermi
liquid, three ordered phases are clearly identified.
Here the filled circles mark the phase boundary, the color
indicates the critical scale $\Lambda_c$, which is proportional to $T_c$ \cite{RevModPhys.84.299},
and the dashed lines are guides for the eye that roughly enclose the regions where
$\Lambda_c$ is higher than the numerical IR scale $\Lambda_{IR}$.
For attractive interactions $g<0$, e.g. at the point $\mathcal{P}_1$, the
leading eigenvalues are from $V_{BCS}$ and doubly degenerate with $p$-wave
symmetry.
The vertex map also reveals
diagonal structures with a single node (Fig.~\ref{fig:phase_diagram}),
confirming a $p$-wave
superfluid phase. While the FRG here cannot directly access the wavefunction
of the broken-symmetry phase, a mean-field argument favors a $p_x+ip_y$ ground
state because it is fully gapped and has the largest condensation energy. Thus
Rydberg-dressed Fermi gas is a promising system to realize the $p_x+ip_y$
topological superfluid. Our analysis suggests that the optimal $T_c$ is
around $p_Fr_c\sim 2$ and $T_c$ increases with $|u_0|$.
For repulsive interactions $g>0$, which channel gives the leading instability
depends intricately on the competition between $p_F$ and $r_c$.
{\bf (a)} First, FRG reveals a density wave phase for $p_Fr_c \gtrsim 4$, in
broad agreement with RPA. For example, at point $\mathcal{P}_3$, the most
diverging eigenvalue comes from $V_{DW}$, and the vertex map shows clear
horizontal-vertical structures (Fig.~\ref{fig:phase_diagram}). Note the
separations between the
horizontal/vertical lines, and relatedly the ordering wave vector, depend on
$r_c$.
{\bf (b)} For $p_Fr_c\lesssim 4$, however, the dominant instability comes from
the BCS channel even though the bare interaction is purely repulsive in real
space. In particular, for small $p_Fr_c\lesssim2$, such as the point
$\mathcal{P}_2$ in Fig.~\ref{fig:phase_diagram}, the pairing symmetry can be
unambiguously
identified to be $f$-wave: the vertex map has three nodes, the most diverging
eigenvalues of $V_{BCS}$ are doubly degenerate, and their eigenvectors follow
the form $e^{\pm i 3\theta}$.
This $f$-wave superfluid is the most striking
result from FRG. {\bf (c)}
For $p_Fr_c$ roughly between 2 and 4, sandwiched
between the density wave and $f$-wave superfluid, lies a region
where the superfluid pairing channel strongly intertwines with the density wave channel.
While the leading divergence is still superfluid, it is no longer pure
$f$-wave, and it becomes increasingly degenerate with a subleading density wave order.
This hints at a coexistence of superfluid and density wave.
To determine the phase boundary, we trace the evolution of $\Lambda_c$ along a few vertical cuts in
the phase diagram, and use the kinks in $\Lambda_c$ as indications for the
transition between the density wave and superfluid phase, or a change in
pairing symmetry within the superfluid (see inset, top panel of
Fig.~\ref{fig:phase_diagram}). We
have checked the phase boundary (filled circles) determined this way is
consistent with the eigenvalue flow and vertex map.
Cooper pairing can occur in repulsive Fermi liquids via the Kohn-Luttinger
(KL) mechanism through the
renormalization of the fermion vertex by particle-hole fluctuations. Even for featureless bare interactions $V(\qb)=U>0$, the
effective interaction $V_{\ell}$ in angular momentum channel $\ell$ can become
attractive due to over-screening by the remaining fermions
\cite{PhysRevLett.15.524}.
In 2D,
the KL mechanism becomes effective at
higher orders of perturbation theory, e.g. $U^3$, and the leading pairing
channel is believed to be $p$-wave \cite{PhysRevB.48.1097}. Here, the
effective interaction is also
strongly renormalized from the bare interaction by
particle-hole fluctuations.
We have checked that turning off the ZS and ZS' channels eliminates
superfluid order on the repulsive side.
However, our system exhibits $f$-wave pairing with a significant critical
temperature in contrast to usual KL mechanism with exponentially small $T_c$.
This is because the Rydberg-dressed
interaction already contains a ``pairing seed": $V(\qb)$ develops
a negative minimum in momentum space for $q=q_c$ unlike the featureless
interaction $U$.
Among all the scattering processes $(\pb_{F},-\pb_{F})\rightarrow
(\pb'_F,-\pb'_F)$, those with $q=|\pb'_F-\pb_F|\sim q_c$ favor pairing. It
follows that pairing on the repulsive side occurs most likely when
the Fermi surface has a proper size, roughly $2p_F\sim q_c$, in broad
agreement with the FRG phase diagram. These considerations based on the bare
interaction and BCS approach, however, are insufficient to explain the
$f$-wave superfluid revealed only by FRG, which accurately accounts the
interference between the particle-particle and particle-hole channels.
The pairing seed and over-screening conspire to give
rise to a robust $f$-wave superfluid with a significant $T_c$.
\section{Conclusion}
\label{sec:conclusions}
We developed an unbiased numerical
technique based on FRG to obtain the phase diagram of the new system of Rydberg-dressed Fermi gases, with the aim of guiding future experiments.
We found an $f$-wave superfluid with unexpectedly high $T_c$ driven by repulsive interactions beyond the conventional Kohn-Luttinger paradigm. The physical mechanism behind the $T_c$ enhancement is traced back to the negative minimum in the bare interaction, as well as the renormalization
of the effective interaction by particle-hole fluctuations. These results contribute to our understanding of unconventional pairing from repulsive interactions, and more generally, competing many-body instabilities of fermions with long-range interactions.
Our analysis may be used for optimizing $T_c$ by engineering effective interactions using schemes similar to Rydberg dressing.
Our FRG approach can also be applied to illuminate the rich interplay of competing density wave and pairing fluctuations in solid state
correlated quantum materials. Note that $f$-wave
pairing has been previously discussed in the context of fermions on the
$p$-orbital bands \cite{PhysRevA.82.053611, PhysRevB.83.144506}.
\begin{acknowledgments}
This work is supported by
NSF Grant No. PHY-1707484,
AFOSR Grant No. FA9550-16-1-0006 (A.K. and E.Z.),
ARO Grant No. W911NF-11-1-0230, and
MURI-ARO Grant No. W911NF-17-1-0323 (A.K.).
X.L. acknowledges support by National Program on Key Basic Research Project of China under Grant No. 2017YFA0304204 and National Natural Science Foundation of China under Grants No. 11774067.
\end{acknowledgments}
\section{Introduction}
The \textit{proceedings} are the records of a conference.\footnote{This
is a footnote} ACM seeks
to give these conference by-products a uniform, high-quality
appearance. To do this, ACM has some rigid requirements for the
format of the proceedings documents: there is a specified format
(balanced double columns), a specified set of fonts (Arial or
Helvetica and Times Roman) in certain specified sizes, a specified
live area, centered on the page, specified size of margins, specified
column width and gutter size.
\section{The Body of The Paper}
Typically, the body of a paper is organized into a hierarchical
structure, with numbered or unnumbered headings for sections,
subsections, sub-subsections, and even smaller sections. The command
\texttt{{\char'134}section} that precedes this paragraph is part of
such a hierarchy.\footnote{This is a footnote.} \LaTeX\ handles the
numbering and placement of these headings for you, when you use the
appropriate heading commands around the titles of the headings. If
you want a sub-subsection or smaller part to be unnumbered in your
output, simply append an asterisk to the command name. Examples of
both numbered and unnumbered headings will appear throughout the
balance of this sample document.
Because the entire article is contained in the \textbf{document}
environment, you can indicate the start of a new paragraph with a
blank line in your input file; that is why this sentence forms a
separate paragraph.
\subsection{Type Changes and {\itshape Special} Characters}
We have already seen several typeface changes in this sample. You can
indicate italicized words or phrases in your text with the command
\texttt{{\char'134}textit}; emboldening with the command
\texttt{{\char'134}textbf} and typewriter-style (for instance, for
computer code) with \texttt{{\char'134}texttt}. But remember, you do
not have to indicate typestyle changes when such changes are part of
the \textit{structural} elements of your article; for instance, the
heading of this subsection will be in a sans serif\footnote{Another
footnote here. Let's make this a rather long one to see how it
looks.} typeface, but that is handled by the document class file.
Take care with the use of\footnote{Another footnote.} the
curly braces in typeface changes; they mark the beginning and end of
the text that is to be in the different typeface.
You can use whatever symbols, accented characters, or non-English
characters you need anywhere in your document; you can find a complete
list of what is available in the \textit{\LaTeX\ User's Guide}
\cite{Lamport:LaTeX}.
\subsection{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of
the three are discussed in the next sections.
\subsubsection{Inline (In-text) Equations}
A formula that appears in the running text is called an
inline or in-text formula. It is produced by the
\textbf{math} environment, which can be
invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end}
construction or with the short form \texttt{\$\,\ldots\$}. You
can use any of the symbols and structures,
from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a
few examples of in-text equations in context. Notice how
this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsubsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols
and structures available in \LaTeX\@; this section will just
give a couple of examples of display equations in context.
First, consider the equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\subsection{Citations}
Citations to articles~\cite{bowman:reasoning,
clark:pct, braams:babel, herlihy:methodology},
conference proceedings~\cite{clark:pct} or maybe
books \cite{Lamport:LaTeX, salas:calculus} listed
in the Bibliography section of your
article will occur throughout the text of your article.
You should use BibTeX to automatically produce this bibliography;
you simply need to insert one of several citation commands with
a key of the item cited in the proper location in
the \texttt{.tex} file~\cite{Lamport:LaTeX}.
The key is a short reference you invent to uniquely
identify each work; in this sample document, the key is
the first author's surname and a
word from the title. This identifying key is included
with each item in the \texttt{.bib} file for your article.
The details of the construction of the \texttt{.bib} file
are beyond the scope of this sample document, but more
information can be found in the \textit{Author's Guide},
and exhaustive details in the \textit{\LaTeX\ User's
Guide} by Lamport~\shortcite{Lamport:LaTeX}.
This article shows only the plainest form
of the citation command, using \texttt{{\char'134}cite}.
Some examples. A paginated journal article \cite{Abril07}, an enumerated
journal article \cite{Cohen07}, a reference to an entire issue \cite{JCohen96},
a monograph (whole book) \cite{Kosiur01}, a monograph/whole book in a series (see 2a in spec. document)
\cite{Harel79}, a divisible-book such as an anthology or compilation \cite{Editor00}
followed by the same example, however we only output the series if the volume number is given
\cite{Editor00a} (so Editor00a's series should NOT be present since it has no vol. no.),
a chapter in a divisible book \cite{Spector90}, a chapter in a divisible book
in a series \cite{Douglass98}, a multi-volume work as book \cite{Knuth97},
an article in a proceedings (of a conference, symposium, workshop for example)
(paginated proceedings article) \cite{Andler79}, a proceedings article
with all possible elements \cite{Smith10}, an example of an enumerated
proceedings article \cite{VanGundy07},
an informally published work \cite{Harel78}, a doctoral dissertation \cite{Clarkson85},
a master's thesis: \cite{anisi03}, an online document / world wide web
resource \cite{Thornburg01, Ablamowicz07, Poker06}, a video game (Case 1) \cite{Obama08} and (Case 2) \cite{Novak03}
and \cite{Lee05} and (Case 3) a patent \cite{JoeScientist001},
work accepted for publication \cite{rous08}, 'YYYYb'-test for prolific author
\cite{SaeediMEJ10} and \cite{SaeediJETC10}. Other cites might contain
'duplicate' DOI and URLs (some SIAM articles) \cite{Kirschmer:2010:AEI:1958016.1958018}.
Boris / Barbara Beeton: multi-volume works as books
\cite{MR781536} and \cite{MR781537}.
A couple of citations with DOIs: \cite{2004:ITE:1009386.1010128,
Kirschmer:2010:AEI:1958016.1958018}.
Online citations: \cite{TUGInstmem, Thornburg01, CTANacmart}.
\subsection{Tables}
Because tables cannot be split across pages, the best
placement for them is typically the top of the page
nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and
the table caption. The contents of the table itself must go
in the \textbf{tabular} environment, to
be aligned properly in rows and columns, with the desired
horizontal and vertical rules. Again, detailed instructions
on \textbf{tabular} material
are found in the \textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed
output of this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more desirable.
Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
It is strongly recommended to use the package booktabs~\cite{Fear05}
and follow its main principles of typography with respect to tables:
\begin{enumerate}
\item Never, ever use vertical rules.
\item Never use double rules.
\end{enumerate}
It is also a good idea not to overuse horizontal rules.
\subsection{Figures}
Like tables, figures cannot be split across pages; the best placement
for them is typically the top or the bottom of the page nearest their
initial cite. To ensure this proper ``floating'' placement of
figures, use the environment \textbf{figure} to enclose the figure and
its caption.
This sample document contains examples of \texttt{.eps} files to be
displayable with \LaTeX. If you work with pdf\LaTeX, use files in the
\texttt{.pdf} format. Note that most modern \TeX\ systems will convert
\texttt{.eps} to \texttt{.pdf} for you on the fly. More details on
each of these are found in the \textit{Author's Guide}.
\begin{figure}
\includegraphics{fly}
\caption{A sample black and white graphic.}
\end{figure}
\begin{figure}
\includegraphics[height=1in, width=1in]{fly}
\caption{A sample black and white graphic
that has been resized with the \texttt{includegraphics} command.}
\end{figure}
As was the case with tables, you may want a figure that spans two
columns. To do this, and still to ensure proper ``floating''
placement of tables, use the environment \textbf{figure*} to enclose
the figure and its caption. And don't forget to end the environment
with \textbf{figure*}, not \textbf{figure}!
\begin{figure*}
\includegraphics{flies}
\caption{A sample black and white graphic
that needs to span two columns of text.}
\end{figure*}
\begin{figure}
\includegraphics[height=1in, width=1in]{rosette}
\caption{A sample black and white graphic that has
been resized with the \texttt{includegraphics} command.}
\end{figure}
\subsection{Theorem-like Constructs}
Other common constructs that may occur in your article are the forms
for logical constructs like theorems, axioms, corollaries and proofs.
ACM uses two types of these constructs: theorem-like and
definition-like.
Here is a theorem:
\begin{theorem}
Let $f$ be continuous on $[a,b]$. If $G$ is
an antiderivative for $f$ on $[a,b]$, then
\begin{displaymath}
\int^b_af(t)\,dt = G(b) - G(a).
\end{displaymath}
\end{theorem}
Here is a definition:
\begin{definition}
If $z$ is irrational, then by $e^z$ we mean the
unique number that has
logarithm $z$:
\begin{displaymath}
\log e^z = z.
\end{displaymath}
\end{definition}
The pre-defined theorem-like constructs are \textbf{theorem},
\textbf{conjecture}, \textbf{proposition}, \textbf{lemma} and
\textbf{corollary}. The pre-defined de\-fi\-ni\-ti\-on-like constructs are
\textbf{example} and \textbf{definition}. You can add your own
constructs using the \textsl{amsthm} interface~\cite{Amsthm15}. The
styles used in the \verb|\theoremstyle| command are \textbf{acmplain}
and \textbf{acmdefinition}.
Another construct is \textbf{proof}, for example,
\begin{proof}
Suppose on the contrary there exists a real number $L$ such that
\begin{displaymath}
\lim_{x\rightarrow\infty} \frac{f(x)}{g(x)} = L.
\end{displaymath}
Then
\begin{displaymath}
l=\lim_{x\rightarrow c} f(x)
= \lim_{x\rightarrow c}
\left[ g{x} \cdot \frac{f(x)}{g(x)} \right ]
= \lim_{x\rightarrow c} g(x) \cdot \lim_{x\rightarrow c}
\frac{f(x)}{g(x)} = 0\cdot L = 0,
\end{displaymath}
which contradicts our assumption that $l\neq 0$.
\end{proof}
\section{Conclusions}
This paragraph will end the body of this sample document.
Remember that you might still have Acknowledgments or
Appendices; brief samples of these
follow. There is still the Bibliography to deal with; and
we will make a disclaimer about that here: with the exception
of the reference to the \LaTeX\ book, the citations in
this paper are to articles which have nothing to
do with the present subject and are used as
examples only.
\section{Introduction} \label{introduction}
Link prediction is usually understood as the problem of predicting missing edges in partially observed networks or predicting edges which will appear in the near future of evolving networks~\cite{menon2011link}. A prediction is based on the currently observed edges and takes into account the topological structure of the network. Also, there may be some side information or meta-data such as node and edge attributes.
The importance of the link prediction problem follows naturally from the variety of its practical applications. For example, popular online social networks such as Facebook and LinkedIn suggest a list of people you may know. Many e-commerce websites have personalized recommendations which can be interpreted as predictions of links in bipartite graphs \cite{schafer1999recommender}. Link prediction can also help in the reconstruction of partially studied biological networks by allowing researchers to focus on the most probable connections~\cite{lu2011link}.
More formally, in order to evaluate the performance of the proposed link prediction method we consider the following problem formulation. The input is a partially observed graph and our aim is to predict the status (existence or non-existence) of edges for unobserved pairs of nodes. This definition is sometimes called the structural link prediction problem~\cite{menon2011link}. Another possible definition suggests predicting future edges based on past edges, but it is limited to time-evolving networks which have several snapshots.
An extensive survey of link prediction methods can be found in~\cite{liben2007link,lu2011link,cukierski2011graph}.
Here we describe some of the most popular approaches that are usually used as baselines for evaluation~\cite{menon2011link,sherkat2015structural}.
The simplest framework of link prediction methods is the similarity-based algorithm, where a score is assigned to each pair of nodes based on topological properties of the graph~\cite{lu2011link}. This score should measure the similarity (also called proximity) of any two chosen nodes. For example, one such score is the number of common neighbours that two nodes share, because nodes which have many common neighbours usually tend to be connected with each other and to belong to one cluster. Other popular scores are Shortest Distance, Preferential Attachment~\cite{barabasi1999emergence}, Jaccard~\cite{small1973co} and Adamic-Adar score~\cite{adamic2003friends}.
Another important class of link prediction methods are latent feature models \cite{miller2009nonparametric,menon2010log,menon2011link,dunlavy2011temporal,acar2009link}. The basic idea is to assign each node a vector of latent features in such a way that connected nodes will have similar latent features. Many approaches from this class are based on the matrix factorization technique which gained popularity through its successful application to the Netflix Prize problem \cite{koren2009matrix}.
The basic idea is to factor the adjacency matrix of a network into the product of two matrices. The rows and columns of these matrices can be interpreted as latent features of the nodes. Latent features can also be the result of a graph embedding \cite{goyal2017graph}. In particular, there are recent attempts to apply neural networks for this purpose \cite{perozzi2014deepwalk,grover2016node2vec}.
In this paper, we propose to use spring-electrical models to address the link prediction problem. These models have been successfully used for network visualization \cite{fruchterman1991graph,walshaw2000multilevel,hu2005efficient}. A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other \cite{noack2009modularity}. Therefore, we assumed that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them. Thus, our idea is to use this distance as a prediction score. We evaluate the proposed method against several popular baselines and demonstrate its flexibility by applying it to undirected, directed and bipartite networks.
The rest of the paper is organized as follows. First, we formalize the considered problem and present standard metrics for performance evaluation in Section~\ref{Problem Statement} and review related approaches which we used as baselines in Section~\ref{Baselines}. Next, we discuss spring-electrical models and introduce our method for link prediction in Section~\ref{Spring-Electrical-Models}.
We start a comparison of methods with a case study discussed in Section~\ref{Case-study}. Section~\ref{Results for Undirected Networks} presents experiments with undirected networks. We modify the basic model to apply it to bipartite and directed networks in Section~\ref{Model Modifications}, followed by conclusion in Section~\ref{Conclusion}.
\section{Problem Statement} \label{Problem Statement}
We focus on the structural definition of the link prediction problem~\cite{menon2011link}. The network with some missing edges is given and the aim is to predict these missing edges. This definition allows us to work with networks having only a single time snapshot. We also assume that there is no side information such as node or edge attributes, thus, we focus on the link prediction methods based solely on the currently observed link structure.
More formally, suppose that we have a network ${G = \langle V, E \rangle}$ without multiple edges and loops, where $V$ is the set of nodes and $E$ is the set of edges.
We assume that $G$ is a connected graph; otherwise we replace it with its largest connected component. The set of all pairs of nodes from $V$ is denoted by $U$.
Given the network $G$, we actually do not know its missing edges. Thus, we hide a random subset of edges $E_{pos} \subset E$, while keeping the network connected. The remaining edges are denoted by $E_{train}$. Also we randomly sample unconnected pairs of nodes $E_{neg}$ from $U \backslash E$. In this way, we form ${E_{test} = E_{pos} \cup E_{neg}}$ such that ${|E_{neg}| = |E_{pos}|}$ and ${E_{test} \cap E_{train} = \emptyset}$. We train models on the network ${G' = \langle V, E_{train} \rangle}$ and try to find missing edges $E_{pos}$ in $E_{test}$.
We assume that each algorithm provides a list of scores for all pairs of nodes $(u,v) \in E_{test}$. The $score(u,v)$ characterizes the similarity of nodes $u$ and $v$: the higher the $score(u,v)$, the higher the probability that these nodes are connected.
To measure the quality of the algorithms we use the area under the receiver operating characteristic curve (AUC) \cite{hanley1982meaning}. From a probabilistic point of view, AUC is the probability that a randomly selected pair of nodes from $E_{pos}$ has a higher score than a randomly selected pair of nodes from $E_{neg}$:
\begin{equation*}
\sum_{(u_p, v_p) \in E_{pos}}
\sum_{(u_n, v_n) \in E_{neg}}
\frac{ I \left[{score(u_p,v_p) > score(u_n,v_n)}\right] }{ | E_{pos} | \cdot | E_{neg} |},
\end{equation*}
where $I[\cdot]$ denotes an indicator function.
We repeat the evaluation several times in order to compute the mean AUC as well as the standard deviation of AUC.
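In practice this computation can be delegated to a library. The sketch below
(ours) uses \texttt{sklearn.metrics.roc\_auc\_score}, which additionally
counts ties as one half, whereas the strict-inequality formula above ignores
them:
\begin{verbatim}
# Sketch: AUC of a scoring function over the hidden test pairs.
from sklearn.metrics import roc_auc_score

def auc(score, E_pos, E_neg):
    y = [1] * len(E_pos) + [0] * len(E_neg)
    s = [score(u, v) for (u, v) in E_pos + E_neg]
    return roc_auc_score(y, s)
\end{verbatim}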
\section{Related work} \label{Baselines}
In this section we describe some popular approaches to link prediction problem. The mentioned methods will be used as baselines during our experiments.
\subsection{Local Similarity Indices}
Local similarity-based methods calculate $score(u,v)$ by analyzing direct neighbours of $u$ and $v$ based on different assumptions about link formation behavior. We use $\delta(u)$ to denote the set of neighbours of $u$.
The assumption of \textit{Common Neighbours index} is that a pair of nodes has a higher probability to be connected if they share many common neighbours $$CN(u, v) := |\delta(u) \cap \delta(v)|.$$
\textit{Adamic-Adar index} is a weighted version of Common Neighbours index
$$AA(u, v) := \sum_{z \in \delta(u) \cap \delta(v)} \frac{1}{|\delta(z)|}.$$
The weight of a common neighbour is inversely proportional to its degree.
\textit{Preferential Attachment index} is motivated by Barab\'asi--Albert model~\cite{barabasi1999emergence} which assumes that the ability of a node to obtain new connections correlates with its current degree,
$$PA(u, v) := |\delta(u)| \cdot |\delta(v)|.$$
Our choice of these three local similarity indices is based on the methods comparison conducted in~\cite{liben2007link, menon2011link}.
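For reference, the three indices take only a few lines each. Our sketch below
operates on a \texttt{networkx} graph; note that we follow the Adamic--Adar
weight exactly as written above, while the original paper
\cite{adamic2003friends} uses $1/\log|\delta(z)|$:
\begin{verbatim}
# Sketch: local similarity scores, with delta(u) = set(G[u]).
def cn(G, u, v):
    return len(set(G[u]) & set(G[v]))

def aa(G, u, v):   # weight 1/|delta(z)| as defined in the text
    return sum(1.0 / len(G[z]) for z in set(G[u]) & set(G[v]))

def pa(G, u, v):
    return len(G[u]) * len(G[v])
\end{verbatim}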
\subsection{Matrix Factorization}
Matrix factorization approach is extensively used for link prediction problem \cite{miller2009nonparametric,menon2010log,menon2011link,dunlavy2011temporal,acar2009link}. The adjacency matrix of the network is approximately factorized into the product of two matrices with smaller ranks. Rows and columns of these matrices can be interpreted as latent features of the nodes and the predicted score for a pair of nodes is a dot-product of corresponding latent vectors.
A truncated singular value decomposition (Truncated SVD) of matrix $A \in R^{m \times n}$ is a factorization of the form $A_r = U_r \Sigma_r V^T_r$, where $U_r \in R^{m \times r}$ has orthonormal columns, $\Sigma_r = diag(\sigma_1, ... , \sigma_r) \in R^{r \times r}$ is diagonal matrix with $\sigma_i \geq 0$ and $V_r \in R^{n \times r}$ also has orthonormal columns \cite{klema1980singular}. Actually it solves the following optimization problem:
\begin{equation*}\label{eq:svd}
\begin{aligned}
\underset{A_r: rank(A_r) \leq r}{\text{min}} \|A - U_r \Sigma_r V^T_r \|_F = \sqrt[]{\sigma_{r+1}^2 + ... + \sigma_{n}^2},
\end{aligned}
\\
\end{equation*}
where $\sigma_1, ... , \sigma_n$ are singular values of the matrix $A$.
To cope with sparse matrices we use \texttt{scipy.sparse.linalg.svds}\footnote{https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.sparse.linalg.svds.html} implementation of Truncated SVD based on the implicitly restarted Arnoldi method \cite{van1996matrix}.
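A minimal scoring routine on top of this implementation could look as follows
(our sketch; the rank \texttt{r} is a placeholder hyperparameter):
\begin{verbatim}
# Sketch: rank-r truncated SVD of the sparse training adjacency matrix;
# score(u, v) is the (u, v) entry of the rank-r reconstruction.
from scipy.sparse.linalg import svds

def svd_scores(A_train, r=32):
    U, s, Vt = svds(A_train.asfptype(), k=r)
    return lambda u, v: float((U[u] * s) @ Vt[:, v])
\end{verbatim}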
Another popular approach for training latent features is a non-negative matrix factorization (NMF). NMF with $r$ components is a group of algorithms where a matrix $A \in R^{n \times m}$ is factorized into two matrices $W_r \in R^{n \times r}$ and $H_r \in R^{m \times r}$ with the property that all three matrices have non-negative elements \cite{lin2007projected}:
\begin{equation*}\label{eq:nmf}
\begin{aligned}
\underset{W_r, H_r: W_r \geq 0, H_r \geq 0}{\text{min}} \|A - W_r H_r^T \|_F.
\end{aligned}
\\
\end{equation*}
These conditions are consistent with the non-negativity of the adjacency matrix in our problem. We take as a baseline the alternating non-negative least squares method with the coordinate descent optimization approach \cite{cichocki2009fast} from \texttt{sklearn.decomposition.NMF}\footnote{http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html}.
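An analogous scoring routine for NMF (our sketch; \texttt{init} and
\texttt{max\_iter} are illustrative settings) is:
\begin{verbatim}
# Sketch: non-negative latent features from sklearn's
# coordinate-descent NMF; accepts a sparse adjacency matrix.
from sklearn.decomposition import NMF

def nmf_scores(A_train, r=32):
    model = NMF(n_components=r, init='nndsvd', max_iter=400)
    W = model.fit_transform(A_train)    # (n, r) non-negative features
    H = model.components_               # (r, m)
    return lambda u, v: float(W[u] @ H[:, v])
\end{verbatim}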
\subsection{Neural Embedding}
Several attempts to apply neural networks for graph embedding, such as DeepWalk \cite{perozzi2014deepwalk} and node2vec \cite{grover2016node2vec}, were motivated by word2vec, a widely used algorithm for extracting vector representations of words \cite{mikolov2013distributed}.
The general idea of adopting word2vec for graph embedding is to treat nodes as ``words'' and generate ``sentences'' using random walks.
The objective is to maximize likelihood of observed nodes co-occurrences in random walks. Probability of nodes $u$ and $v$ with latent vectors $x_u$ and $x_v$ to co-occur in a random walk is estimated using a softmax:
\begin{equation*}\label{soft-max}
P(u | v) = \frac{\exp(x_u \cdot x_v)}{\sum_{w \in V} \exp(x_w \cdot x_v)}.
\end{equation*}
In practice a direct computation of the softmax is infeasible, thus, some approximations, such as a ``negative sampling'' or a ``hierarchical softmax'', are used \cite{mikolov2013efficient}.
In this paper we consider node2vec which has shown a good performance in the link prediction \cite{grover2016node2vec}.
This method generates $2^{\text{nd}}$ order random walks $c_1, \ldots, c_n$ with a transition probability defined by the following relation:
$$
P\left(c_i = x | c_{i-1} = v, c_{i-2} = t\right) \propto \begin{cases}
0,~~\text{if}~~(v,x) \notin E\\
\frac{1}{p},~~\text{else if}~~d_{tx} = 0\\
1,~~\text{else if}~~d_{tx} = 1\\
\frac{1}{q},~~\text{else if}~~d_{tx} = 2
\end{cases}
$$
where $d_{tx}$ is the graph distance between nodes $t$ and $x$. The parameters $p$ and $q$ allow one to interpolate between walks that are more akin to breadth-first or depth-first search. The generated random walks are given as input to word2vec. Finally, each node $u$ is assigned a vector $x_u$.
In order to estimate $score(u, v)$ we compute the dot-product of the corresponding latent vectors:
$$
node2vec(u, v) := x_u \cdot x_v.
$$
We have used a reference implementation of node2vec\footnote{https://github.com/aditya-grover/node2vec} with default parameters unless stated otherwise.
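To make the walk definition concrete, one transition of the second-order walk
can be sketched as follows (ours; a production implementation such as the
reference one precomputes alias tables instead of rescoring neighbours, and
the first step of a walk has no predecessor $t$):
\begin{verbatim}
# Sketch: one transition of the (p, q)-biased 2nd-order random walk.
import random

def next_node(G, t, v, p, q, rng=random):
    nbrs = list(G[v])
    def w(x):
        if x == t:              # d_tx = 0: return to previous node
            return 1.0 / p
        if G.has_edge(t, x):    # d_tx = 1
            return 1.0
        return 1.0 / q          # d_tx = 2
    return rng.choices(nbrs, weights=[w(x) for x in nbrs])[0]
\end{verbatim}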
\section{Spring-Electrical Models for Link Prediction} \label{Spring-Electrical-Models}
Currently the main application of spring-electrical models to graph analysis is graph visualization. The basic idea is to represent a graph as a mechanical system of like charges connected by springs~\cite{fruchterman1991graph}. In this system repulsive forces act between each pair of nodes and attractive forces act between adjacent nodes. In an equilibrium state of the system, the edges tend to have uniform length (because of the spring forces), and nodes that are not connected tend to be drawn further apart (because of the electrical repulsion).
Actually, in practice edge attraction and vertex repulsion forces may be defined using functions that are not precisely based on the Hooke's and Coulomb's laws. For instance, in ~\cite{fruchterman1991graph}, the pioneering work of Fruchterman and Reingold, repulsive forces are inversely proportional to the distance and attractive forces are proportional to the square of the distance. In~\cite{walshaw2000multilevel} and~\cite{hu2005efficient} spring-electrical models were further studied and the repulsive force got new parameters $C$, $K$ and $p$, which we will discuss later. In our research, we will also use their modification with the following forces:
\begin{align}
\label{eqn:sfdp_forces}
\begin{split}
f_r(u, v) &= -C K^{1+p}/||x_u - x_v||^p, \,\,\,\,\,\, p > 0, u \neq v; u, v \in V ,
\\
f_a(u, v) &= ||x_u - x_v||^2/K , \,\,\,\,\,\, (u, v) \in E; u, v \in V.
\end{split}
\end{align}
Here we denote by $\|x_u-x_v\|$ the Euclidean distance between coordinate vectors of the nodes $u$ and $v$ in a layout, and by $f_r(u, v)$ and $f_a(u, v)$ we denote values of repulsive and attractive forces, respectively.
Figure~\ref{fig:spring_electrical} illustrates the forces acting on one of the nodes in a simple star graph.
\begin{figure}[H]
\centering
\includegraphics[width=0.27\textwidth]{spring_eletrical.pdf}
\caption{Spring-electrical model}%
\label{fig:spring_electrical}%
\end{figure}
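A direct numerical transcription of the forces in equation (\ref{eqn:sfdp_forces}) might look as follows; the direction conventions (forces acting on $u$ along the line towards $v$) and the default parameter values are assumptions of this sketch.
\begin{verbatim}
import numpy as np

def repulsive_force(xu, xv, C=0.2, K=1.0, p=2.0):
    # f_r(u, v) = -C K^(1+p) / ||xu - xv||^p, pushing u away from v.
    d = np.linalg.norm(xu - xv)
    return -(C * K ** (1 + p) / d ** p) * (xv - xu) / d

def attractive_force(xu, xv, K=1.0):
    # f_a(u, v) = ||xu - xv||^2 / K, pulling u towards v.
    d = np.linalg.norm(xu - xv)
    return (d ** 2 / K) * (xv - xu) / d
\end{verbatim}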
\begin{figure*}[ht]%
\centering
\subfloat[SFDP]{{\includegraphics[width=0.27\linewidth,trim={3.5cm 0 2.5cm 0},clip]{sfdp_sphere.png} }}%
\qquad
\subfloat[SVD]{{\includegraphics[width=0.27\linewidth,trim={3.5cm 0 2.5cm 0},clip]{svd_sphere.png} }}%
\qquad
\subfloat[node2vec]{{\includegraphics[width=0.27\linewidth,trim={3.5cm 0 2.5cm 0},clip]{node2vec_sphere_walk80.png} }}%
\caption{Triangulation of 3d-sphere (3d latent features)}%
\label{fig:sphere_viz}%
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.43\textwidth]{triangulated_3d-sphere_dims1.pdf}
\caption{Triangulation of 3d-sphere (AUC scores)}%
\label{fig:sphere_auc}%
\end{figure}
By exploiting the fact that force is the negative gradient of energy, the force model can be transformed into an energy model, such that force equilibria correspond to (local) energy minima \cite{noack2009modularity}. An optimal layout is achieved in the equilibrium state of the system with the minimum value of the system energy. Thus, finding an optimal layout amounts to the following optimization problem:
\begin{equation}
\label{eqn:sfdp_energy}
\min_{\{x_w, w \in V \}} \left( \sum_{(u,v) \in E} \frac{||x_u - x_v||^3}{3K} + \sum_{\substack{u, v \in V\\ u \neq v}} \frac{\frac{1}{p-1} C K^{1+p} }{||x_u - x_v||^{p-1} } \right).
\end{equation}
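Indeed, writing $r = \|x_u - x_v\|$ and differentiating each summand of (\ref{eqn:sfdp_energy}) with respect to $r$ recovers, up to sign, the force magnitudes of equation (\ref{eqn:sfdp_forces}):
\begin{gather*}
\frac{d}{dr}\left(\frac{r^3}{3K}\right) = \frac{r^2}{K}, \qquad
\frac{d}{dr}\left(\frac{C K^{1+p}}{(p-1)\, r^{p-1}}\right) = -\frac{C K^{1+p}}{r^{p}}.
\end{gather*}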
Many algorithms have been proposed to find an optimal layout. All of them face two main challenges:
(i) the computational complexity of evaluating the repulsive forces,
(ii) slow convergence and trapping in local minima.
Note that local minima of the energy might lead to layouts with poor visualization characteristics.
We use the Scalable Force-Directed Placement (SFDP) algorithm described in~\cite{hu2005efficient}, which is able to overcome both challenges.
The computational complexity challenge is addressed in SFDP by the Barnes--Hut optimization~\cite{barnes1986hierarchical}.
As a result, the total complexity of computing all repulsive forces reduces to $O(|V|\log |V|)$, compared to the straightforward method with complexity $O(|V|^2)$.
The second challenge is addressed by a multilevel approach in combination with an adaptive cooling scheme~\cite{walshaw2000multilevel,hu2005efficient}.
The idea is to iteratively coarsen the network until its size falls below some threshold. Once the initial layout for the coarsest network is found, it is successively refined and extended to all levels, starting with the coarsest network and ending with the original one.
Let us now discuss how the model parameters influence the equilibrium states of the system. According to Theorem~1~in~\cite{hu2005efficient}, the parameters $K$ and $C$ do not change the possible equilibria and only scale them; however, they do affect the speed of convergence to an equilibrium.
As follows from equation (\ref{eqn:sfdp_forces}), the parameter $p$ controls the strength of the repulsive forces. For small values of $p$, nodes on the periphery of a layout are strongly affected by the repulsive forces of central nodes. This leads to the so-called ``peripheral effect'': edges at the periphery are longer than edges at the center~\cite{hu2005efficient}. On the other hand, although larger values of $p$ reduce the peripheral effect, too weak repulsive forces might lead to cluster collapse \cite{noack2009modularity}. We study the influence of the repulsive force exponent $p$ on the performance of our method in Section~\ref{Results for Undirected Networks}.
A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other~\cite{noack2009modularity}.
Therefore, we assume that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them.
Thus, to address the link prediction problem, we first find a network layout using SFDP and then use the distance between nodes as a prediction score (smaller distances correspond to more probable links):
\begin{equation*}
\text{SFDP}(u, v) = \| x_u - x_v \|.
\end{equation*}
Our method can be interpreted as a latent feature approach to link prediction.
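A minimal end-to-end sketch of this procedure, assuming the \texttt{graph\_tool} implementation of SFDP and one of its bundled example graphs, might look as follows.
\begin{verbatim}
import numpy as np
import graph_tool.all as gt

g = gt.collection.data["polbooks"]      # small undirected example graph
pos = gt.sfdp_layout(g, p=2.0)          # repulsive force exponent p
X = np.array([pos[v] for v in g.vertices()])

def sfdp_score(u, v):
    # Euclidean distance between layout positions of nodes u and v.
    return np.linalg.norm(X[u] - X[v])
\end{verbatim}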
\section{Case study} \label{Case-study}
\begin{figure*}[h]
\centering
\includegraphics[width=0.32\textwidth]{powergrid_dims1.pdf}
\includegraphics[width=0.32\textwidth]{euroroad_dims1.pdf}
\includegraphics[width=0.32\textwidth]{airport_dims1.pdf}
\medskip
\includegraphics[width=0.32\textwidth]{facebook_dims1.pdf}
\includegraphics[width=0.32\textwidth]{reactome_dims1.pdf}
\includegraphics[width=0.32\textwidth]{ca-helpth_dims1.pdf}
\caption{Influence of dimensionality}%
\label{fig:dimensions_1}%
\end{figure*}
Before discussing experiments with real-world networks, we consider a graph obtained by a triangulation of a three-dimensional sphere. This case study reveals an important difference between SFDP and the other baselines that use a latent feature space.
First, we have trained three-dimensional latent vectors and visualized them (see Figure~\ref{fig:sphere_viz}). SFDP arranges latent vectors on the surface of a sphere, as one might expect. SVD's latent vectors form three mutually perpendicular rays, and node2vec places latent vectors on the surface of a cone. The reason for such different behavior is that SFDP uses the Euclidean distance in its loss function (see equation~(\ref{eqn:sfdp_energy})), while node2vec and SVD rely on the dot-product. One can see that the dot-product based methods fail to express the fact that all nodes in the considered graph are structurally equivalent.
The difference between the dot-product and the Euclidean distance lies in the way they deal with latent vectors corresponding to completely unrelated nodes. The dot-product tends to make such vectors perpendicular, while the Euclidean distance simply places them ``far away''. The number of available mutually perpendicular directions is determined by the dimensionality of the latent feature space. As a result, dimensionality becomes restrictive for dot-product based methods.
In order to further support this observation, we have evaluated the AUC score of the discussed methods depending on the dimensionality of the latent feature space. The results are presented in Figure~\ref{fig:sphere_auc}. SFDP achieves good quality starting from very low dimensions. Node2vec achieves reasonable quality in a ten-dimensional latent feature space, while SVD and NMF need about 100 dimensions.
Our experiments with real-world networks, described in the following sections, confirm that SFDP can achieve competitive link prediction quality even in very low dimensions. This advantage might lead to practical applications, as many problems related to vector embeddings are much easier in low dimensions, e.g., nearest neighbor search.
\section{Experiments with Undirected Networks} \label{Results for Undirected Networks}
First, we have chosen several undirected networks in which geographical closeness correlates with the probability of a connection; the ability to infer a distance feature can thus be tested on them.
\begin{itemize}
\item ``PowerGrid''~\cite{konect} is an undirected and unweighted network representing the US electric power grid. There are $4,941$ nodes and $6,594$ power supply lines in the system. It is a sparse network with average degree $2.7$.
\item ``Euroroad''~\cite{konect} is a road network located mostly in Europe. Nodes represent cities and an edge between two nodes denotes that they are connected by a road. This network consists of $1,174$ vertices (cities) and $1,417$ edges (roads).
\item ``Airport''~\cite{konect} contains information about $28,236$ flights between $1,574$ US airports in $2010$. The airport network has hubs, i.e., several busiest airports. Thus, connections occur not only because of geographical closeness, but also based on airport sizes.
\end{itemize}
We have also chosen several undirected networks of other types.
\begin{itemize}
\item ``Facebook''~\cite{konect} is a Facebook friendship network consisting of $63,731$ users and $817,035$ friendships. This network is a subset of the full Facebook friendship graph.
\item ``Reactome''~\cite{konect} has information about $147,547$ interactions between $6,327$ proteins.
\item ``Ca-HepTh''~\cite{snapnets} is a collaboration network from the arXiv High Energy Physics - Theory section from January 1993 to April 2003. The network has $9,877$ authors and $25,998$ collaborations.
\end{itemize}
All the datasets and our source code are available in our GitHub repository\footnote{https://github.com/KashinYana/link-prediction}.
In our experiments we have used two implementations of SFDP, from \texttt{graphviz}\footnote{\url{http://www.graphviz.org}} and \texttt{graph\_tool}\footnote{\url{https://graph-tool.skewed.de/}} libraries, with the default parameters unless stated otherwise.
{\small
\begin{table}[h]
\centering
\caption{Comparison with latent features models}\label{tab:vs_latent}
\begin{tabular}{|c||c|c|c|c|}
\hline
Dataset & SFDP & SVD & NMF & node2vec \\ \hline \hline
PowerGrid & \makecell{2d \\ \maxff{bx}{0.978}$\pm$0.005} & \makecell{30d \\ 0.848$\pm$0.007} & \makecell{40d \\ 0.913$\pm$0.009} & \makecell{50d \\ \maxff{sb}{0.931}$\pm$0.011} \\ \hline
Euroroad & \makecell{2d \\ \maxff{bx}{0.941}$\pm$0.012} & \makecell{7d \\ 0.785$\pm$0.023} & \makecell{6d \\ 0.829 $\pm$ 0.037} & \makecell{75d \\ 0.871$\pm$0.021} \\ \hline
Airport & \makecell{3d \\ \maxff{sb}{0.953}$\pm$0.000} & \makecell{5d \\ \maxff{sb}{0.957}$\pm$0.005} & \makecell{6d \\ \maxff{bx}{0.966}$\pm$ 0.003} & \makecell{2d \\ 0.804$\pm$0.026} \\ \hline \hline
Facebook & \makecell{3d \\ \maxf{0.951}$\pm$0.000} & \makecell{20d \\ 0.922$\pm$0.000} & \makecell{500d \\ \maxf{0.959}$\pm$0.001} & \makecell{150d \\ 0.935$\pm$0.000} \\ \hline
Reactome & \makecell{3d \\ \maxff{sb}{0.986}$\pm$0.000} & \makecell{100d \\ \maxff{sb}{0.987} $\pm$0.001} & \makecell{125d \\ \maxff{bx}{0.993} $\pm$0.000} & \makecell{100d \\ 0.954$\pm$0.000} \\ \hline
Ca-HepTh & \makecell{3d \\ \maxff{bx}{0.931}$\pm$0.004} & \makecell{100d \\ 0.856$\pm$ 0.005} & \makecell{150d \\ \maxff{sb}{0.921}$\pm$0.007} & \makecell{125d \\ 0.884$\pm$0.006} \\ \hline
\end{tabular}
\end{table}
}
Following the discussion in Section~\ref{Case-study}, we have first analyzed the behavior of latent feature methods in low dimensions. On the sparse datasets ``PowerGrid'' and ``Euroroad'' we hide $10\%$ of edges in order to keep the training network connected. On the other datasets, $30\%$ of edges were included in the test set. We repeat this process several times and report mean AUC scores as well as 95\% confidence intervals; a sketch of the protocol is shown below. The results are presented in Figure~\ref{fig:dimensions_1}.
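The following sketch of this protocol assumes a \texttt{networkx}-style graph and a scoring callback; for distance-based methods such as SFDP, scores should be negated first, since AUC treats larger scores as more probable links.
\begin{verbatim}
import random
from sklearn.metrics import roc_auc_score

def evaluate_auc(G, score, test_frac=0.3, seed=0):
    rng = random.Random(seed)
    edges = list(G.edges())
    test = rng.sample(edges, int(test_frac * len(edges)))
    train = G.copy()
    train.remove_edges_from(test)
    nodes = list(G.nodes())
    neg = []
    while len(neg) < len(test):          # sample non-edges as negatives
        u, v = rng.sample(nodes, 2)
        if not G.has_edge(u, v):
            neg.append((u, v))
    pairs = test + neg
    labels = [1] * len(test) + [0] * len(neg)
    # score(train, u, v): larger value = more probable link.
    scores = [score(train, u, v) for u, v in pairs]
    return roc_auc_score(labels, scores)
\end{verbatim}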
As expected, the dot-product based methods show a clear growing trend on most of the networks, with lower performance in low dimensions. In contrast, SFDP achieves good quality starting from two dimensions, usually with a slight increase at dimensionality three and a slowly decreasing trend after that point.
We have also studied higher dimensions (up to 500 dimensions).
In Table~\ref{tab:vs_latent}, for each method and dataset one can find the optimal dimension with the corresponding AUC score; standard deviation values smaller than $0.0005$ are shown as zero. Surprisingly, SFDP demonstrates competitive quality even in comparison with high-dimensional dot-product based methods. This observation suggests that real networks might have a lower inherent dimensionality than one might expect.
\begin{figure}[h]\center{
\includegraphics[width=0.9\linewidth]{parameter_p.pdf}}
\caption{Influence of the repulsive force exponent}
\label{ris:p}
\end{figure}
As the influence of dimensionality on the performance of SFDP is not very significant, we further focus on two-dimensional SFDP. As for the other parameters, according to Section~\ref{Spring-Electrical-Models}, $C$ and $K$ do not change SFDP performance on link prediction, since they only scale the optimal layout. Thus, we varied the parameter $p$. Based on Figure~\ref{ris:p}, we have decided to keep the default value of the repulsive force exponent, $p = 2$.
{\small
\begin{table}[h]
\centering
\caption{Comparison with local similarity indices}\label{tab:2d}
\begin{tabular}{|c||c|c|c|c|}
\hline
Dataset & SFDP 2d & PA & CN & AA \\ \hline \hline
PowerGrid & \makecell{\maxff{bx}{0.978}}$\pm$0.005 & 0.576$\pm$0.005 & 0.625$\pm$0.006 & 0.625$\pm$0.006 \\ \hline
Euroroad & \makecell{\maxff{bx}{0.941}$\pm$0.012} & 0.432$\pm$0.015 & 0.535$\pm$0.011 & 0.534$\pm$0.011 \\ \hline
Airport & 0.938$\pm$0.001 & 0.949 $\pm$0.000 & \makecell{\maxff{sb}{0.959}$\pm$0.000} & \makecell{\maxff{bx}{0.962}$\pm$0.001} \\ \hline \hline
Facebook & \makecell{\maxff{bx}{0.943}$\pm$0.000} & 0.887$\pm$0.000 & \makecell{\maxff{sb}{0.915}$\pm$0.000} & \makecell{\maxff{sb}{0.915}$\pm$0.000} \\ \hline
Reactome & \makecell{\maxff{sb}{0.981}$\pm$0.000} & 0.899$\pm$0.001 & \maxff{bx}{0.988}$\pm$0.000 & \makecell{\maxff{bx}{0.989}$\pm$0.000} \\ \hline
Ca-HepTh & \makecell{\maxff{bx}{0.905}$\pm$0.001} & 0.787$\pm$0.000 & 0.867$\pm$0.002 & 0.867$\pm$0.002 \\ \hline
\end{tabular}
\end{table}
}
Finally, we have compared SFDP with the local similarity indices. The results can be found in Table~\ref{tab:2d}. SFDP shows the largest advantage on the geographical networks ``PowerGrid'' and ``Euroroad''. This observation supports our hypothesis that SFDP can infer geographical distance.
The result of SFDP on the ``Airport'' network is not as good, and we attribute this to the presence of distinct hubs; we will return to this network in Section~\ref{sec:DiSFDP}. On the other datasets, the spring-electrical approach shows superior or competitive quality.
\section{Model Modifications} \label{Model Modifications}
The basic spring-electrical model can be adapted to different network types. In this section we present possible modifications for bipartite and directed networks.
\subsection{Bipartite Networks}\label{BiSFDP}
{
\begin{table*}[!ht]
\caption{AUC scores for bipartite datasets, |$E_{pos}$|/|$E$| = $0.3$}\label{tab:bi_SFDP}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
Dataset & Bi-SFDP 2d & SFDP 2d & PA & NMF 100d & SVD 100d & node2vec 100d \\ \hline \hline
Movielens & 0.758 $\pm$0.001 & 0.755$\pm$0.005 &
0.773$\pm$0.000 &
\makecell{\maxff{sb}{0.870}$\pm$0.002} & \makecell{\maxff{bx}{0.876}$\pm$0.000} &
0.725 $\pm$0.000 \\ \hline
Condmat & 0.682$\pm$0.007 & \makecell{\maxff{bx}{0.910}$\pm$0.002} & 0.617$\pm$0.001 &
0.819$\pm$0.005 & 0.787$\pm$0.004 & \makecell{\maxff{bx}{0.912} $\pm$0.001} \\ \hline
Frwiki &
\makecell{\maxff{bx}{0.828}$\pm$0.005} &
0.745$\pm$0.002 & \makecell{\maxff{sb}{0.800}$\pm$0.000} &
0.544$\pm$0.027 & 0.571$\pm$0.005 & 0.508 $\pm$ 0.003 \\ \hline
\end{tabular}
\end{table*}
}
A bipartite network is a network whose nodes can be divided into two disjoint sets such that edges connect nodes only from different sets. It is interesting to study this special case because link prediction in bipartite networks is close to the collaborative filtering problem.
We use the following bipartite datasets in our experiments:
\begin{itemize}
\item ``Movielens''~\cite{konect} contains information on how users rated movies on the website \url{http://movielens.umn.edu/}. The version of the dataset we used has $9,746$ users, $6,040$ movies and $1$ million ratings. Since we are not interested in the rating scores, we set all edge weights to one.
\item ``Frwiki''~\cite{konect} is a network of $201,727$ edit events done by $30,997$ users in $2,884$ articles of the French Wikipedia.
\item ``Condmat''~\cite{konect} is an authorship network from the arXiv condensed matter section (cond-mat) from 1995 to 1999. It contains $58,595$ edges which connect $16,726$ publications and $38,741$ authors.
\end{itemize}
Let us consider the ``Movielens'' dataset. This network has two types of nodes: users and movies. When applying the SFDP model to this network, we expect movies on the same topic to be placed nearby. Similarly, we expect users who may rate the same movie to be located close to each other. In the basic spring-electrical model, repulsive forces are assigned between all pairs of nodes. This works well for visualization purposes, but it hinders the formation of clusters of users and movies. Therefore, we removed the repulsive forces between nodes of the same type.
Consider a bipartite network $G= \langle V, E \rangle$ whose nodes are partitioned into two subsets $V = L \sqcup R$ such that $E \subset L \times R$. In our modification of the SFDP model for bipartite networks, denoted Bi-SFDP, the following forces are assigned between nodes.
\begin{align}
\label{eqn:bi_sfdp_forces}
\begin{split}
f_{r}(u, v) &= -C K^{(1+p)} / ||x_u - x_v||^p, \,\,\,\,\,\, p > 0, u \in L, v \in R ,
\\
f_a(u, v) &= ||x_u - x_v||^2 / K , \,\,\,\,\,\, (u, v) \in E; u \in L, v \in R.
\end{split}
\end{align}
To carry out experiments with the Bi-SFDP model, we have written a patch for the \texttt{graph\_tool} library.
Figure~\ref{fig:compare} demonstrates how this modification affects the optimal layout. We consider a small user--movie network of ten users and three movies. Note that in Figure~\ref{fig:compare}~(b) some of the yellow nodes have collapsed. The reason is that if we remove repulsive forces between nodes of the same type, users which link to the same movies will share the same positions. Thus, the Bi-SFDP model can assign close positions to users with similar interests.
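We do not reproduce the patch itself here, but its essence can be sketched by masking the repulsion between nodes of the same part, reusing the force functions sketched in Section~\ref{Spring-Electrical-Models}.
\begin{verbatim}
import numpy as np

def bi_repulsive_force(xu, xv, part_u, part_v, C=0.2, K=1.0, p=2.0):
    # Bi-SFDP: repulsion acts only between nodes of different parts.
    if part_u == part_v:
        return np.zeros_like(xu)
    return repulsive_force(xu, xv, C, K, p)
\end{verbatim}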
During our preliminary experiments with bipartite networks, we found that PA demonstrates suspiciously high results. The reason is that the preferential attachment mechanism is very strong in bipartite networks. In order to focus on subtler effects governing link formation, we changed the way the set of negative pairs of nodes $E_{neg}$ is sampled: half of the pairs in $E_{neg}$ were sampled with probability proportional to the product of their degrees, as such pairs of nodes are counterexamples for the preferential attachment mechanism.
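A sketch of this sampling scheme, again assuming a \texttt{networkx}-style graph, is given below; sampling $u$ and $v$ independently with probability proportional to their degrees makes the pair probability proportional to the product of the degrees.
\begin{verbatim}
import random

def sample_negative_pairs(G, n_pairs, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes())
    degrees = [G.degree(v) for v in nodes]
    neg = set()
    while len(neg) < n_pairs:
        if len(neg) < n_pairs // 2:
            u, v = rng.sample(nodes, 2)               # uniform half
        else:
            # Degree-biased half: P(u, v) ~ deg(u) * deg(v).
            u = rng.choices(nodes, weights=degrees)[0]
            v = rng.choices(nodes, weights=degrees)[0]
        if u != v and not G.has_edge(u, v):
            neg.add((u, v))
    return list(neg)
\end{verbatim}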
The results of the experiments are summarized in Table~\ref{tab:bi_SFDP}. The baselines CN and AA are not included in the table because their scores are always equal to zero on bipartite networks: candidate pairs connect nodes from different parts, which cannot share common neighbors.
\begin{figure*}[h]%
\centering
\subfloat[SFDP]{{\includegraphics[width=0.35\linewidth]{bi_sfdp_1_fixed.png}}} \qquad
\subfloat[Bi-SFDP]{{\includegraphics[width=0.37\linewidth]{bi_sfdp_2_fixed2.png} }}%
\caption{The visualization of a bipartite network by SFDP and Bi-SFDP}%
\label{fig:compare}%
\end{figure*}
The Bi-SFDP modification has shown an increase in quality compared with the basic SFDP model on the ``Movielens'' and ``Frwiki'' datasets. This means that our assumption works for these networks. In contrast, on ``Condmat'' the standard SFDP and node2vec outperform all other baselines. In general, the results for bipartite graphs are very dataset-dependent.
\subsection{Directed Networks}\label{sec:DiSFDP}
Spring-electrical models are not directly suitable for predicting links in directed networks because of the symmetry of the forces and of the distance function. Therefore, we first propose to transform the original directed network.
Given a directed network $G= \langle V, E \rangle$, we obtain an undirected bipartite network $G'= \langle V', E' \rangle$, $V' = L \sqcup R$, $E' \subset L \times R$ by the following process. Each node $u \in V$ corresponds to two nodes $u_{out} \in L$ and $u_{in} \in R$; one of the nodes is responsible for outgoing connections, the other for incoming connections. Thus, for each directed edge $(u, v) \in E$, an edge $(u_{out}, v_{in})$ is added to $E'$. Figure~\ref{fig_disfdp} illustrates the described transformation. As a result, $G'$ encodes all directed edges of $G$.
Then the Bi-SFDP model can be applied to find a layout of the network $G'$. Finally, prediction scores for pairs of nodes of the network $G$ are inherited from the layout of $G'$: the score of an ordered pair $(u, v)$ is the distance $\| x_{u_{out}} - x_{v_{in}} \|$.
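A sketch of this transformation for a \texttt{networkx} directed graph is given below; the node naming is illustrative.
\begin{verbatim}
import networkx as nx

def directed_to_bipartite(G):
    # Each node u splits into (u, 'out') and (u, 'in'); a directed edge
    # (u, v) becomes the undirected edge ((u, 'out'), (v, 'in')).
    H = nx.Graph()
    for u in G.nodes():
        H.add_node((u, "out"))
        H.add_node((u, "in"))
    for u, v in G.edges():
        H.add_edge((u, "out"), (v, "in"))
    return H
\end{verbatim}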
We call this approach Di-SFDP and have tested it on the following datasets.
\begin{itemize}
\item ``Twitter''~\cite{konect} is a user-user network, where directed edges represent the fact that one user follows the other user. The network contains $23,370$ users and $33,101$ follows.
\item ``Google+''~\cite{konect} is also a user-user network. Directed links indicate that one user has the other user in his circles. There are $23,628$ users and $39,242$ friendships in the network.
\item ``Cit-HepTh''~\cite{konect} has information about $352,807$ citations among $27,770$ publications in the arXiv High Energy Physics Theory (hep-th) section.
\end{itemize}
All pairs of nodes $(u, v)$ such that $(v, u) \in E$ but $(u, v)\not \in E$ we call \textit{difficult pairs}. They cannot be correctly scored by the basic SFDP model, whose score is symmetric. It is especially interesting to validate models on such pairs of nodes. Therefore, in our experiments half of the pairs of nodes in $E_\text{neg}$ are difficult pairs.
The experiment results are shown in Table~\ref{tab:di_SFDP}. The baselines PA, CN and AA can also be calculated on $G'$, but their quality is close to that of a random predictor.
One can see that Di-SFDP outperforms the other baselines on two of the datasets and has competitive quality on the last one. Note that out-of-the-box node2vec cannot correctly score difficult pairs of nodes, as it infers only one latent vector for each node, while the other methods have two latent vectors per node, one responsible for outgoing connections and the other for incoming connections.
Di-SFDP has also helped us to improve quality on the ``Airport'' dataset. Although ``Airport'' is an undirected network, due to the presence of hubs it has a natural orientation of edges. Thus, our idea was to first orient the edges from nodes of low degree to nodes of high degree and then apply Di-SFDP. This trick allowed us to improve the mean AUC from 0.938 to 0.972.
{\small
\begin{table}[H]
\caption{AUC scores for directed datasets, |$E_{pos}$|/|$E$| = $0.3$}\label{tab:di_SFDP}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
Dataset & Di-SFDP 2d & NMF 100d & SVD 100d & node2vec 100d \\ \hline \hline
Twitter & \makecell{\maxff{bx}{0.952} $\pm$0.002} & 0.783$\pm$0.010 & 0.694$\pm$0.014
& 0.550 $\pm$ 0.001
\\ \hline
Google+ & \makecell{\maxff{bx}{0.998} $\pm$0.000} & \makecell{\maxff{sb}{0.936}$\pm$0.007} & 0.466$\pm$0.033
& 0.449 $\pm$ 0.002
\\ \hline
Cit-HepTh & \makecell{\maxff{sb}{0.836} $\pm$0.003} & \makecell{\maxff{bx}{0.842}$\pm$0.002} & \makecell{\maxff{sb}{0.838}$\pm$0.001}
& 0.679$\pm$0.000
\\ \hline
\end{tabular}
\newline
\vspace*{2mm}
\newline
\begin{tabular}{|c||c|c|}
\hline
Dataset & Di-SFDP 2d & SFDP 2d \\ \hline \hline
Airport & \makecell{\maxff{bx}{0.972} $\pm$0.001} & 0.938$\pm$0.001 \\ \hline
\end{tabular}
\end{table}
}
\begin{figure}[h]
\centering
\subfloat[][Graph $G$]{
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
\node[main node] (1) {$A$};
\node[main node] (2) [below left of=1] {$B$};
\node[main node] (3) [below right of=1] {$C$};
\path[]
(1) edge [] node[left] {} (2)
(2) edge [] node[left] {} (1)
(2) edge [] node[left] {} (3)
(3) edge [] node[right] {} (1);
\end{tikzpicture}
}
\hspace{1cm}
\subfloat[][Graph $G'$]{
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,
thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
\node[main node] (1) {$A_{out}$};
\node[main node] (2) [right of=1] {$B_{out}$};
\node[main node] (3) [right of=2] {$C_{out}$};
\node[main node] (4) [below of=1] {$A_{in}$};
\node[main node] (5) [below of=2] {$B_{in}$};
\node[main node] (6) [below of=3] {$C_{in}$};
\path[]
(1) edge [] node[left] {} (5)
(2) edge [] node[left] {} (4)
(2) edge [] node[left] {} (6)
(3) edge [] node[left] {} (4);
\end{tikzpicture}
}
\caption{Di-SFDP graph transformation}%
\label{fig_disfdp}%
\end{figure}
\section{Conclusion} \label{Conclusion}
In this paper we proposed to use spring-electrical models to address the link prediction problem. We first applied the basic SFDP model to link prediction in undirected networks and then adapted it to bipartite and directed networks by introducing two novel methods, Bi-SFDP and Di-SFDP. The considered models demonstrate superior or competitive performance compared to several popular baselines.
A distinctive feature of the proposed method in comparison with other latent feature models is its good performance even in very low dimensions. This advantage might lead to practical applications, as many problems related to vector embeddings are much easier in low dimensions, e.g., nearest neighbor search.
On the other hand, this observation suggests that real networks might have a lower inherent dimensionality than one might expect.
We consider this work a good motivation for a new set of research directions.
Future research can focus on choosing an optimal distance measure for latent feature models and on a deeper analysis of the inherent dimensionality of networks.
\section{Introduction}
Let $G$ be a connected semisimple algebraic group of adjoint type defined over an algebraically closed field $K$ of positive characteristic $p > 0$. In case the characteristic of $K$ is very good for $G$, which, broadly speaking, implies that $G$ is not of type $A$ and $p > 5$, it is known that the dual nilpotent cone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a normal variety, and it admits a desingularisation $\mu: T^*\mathcal{B} \to \mathcal{N}$ from the cotangent bundle of the flag variety $\mathcal{B}$ of $G$; the so-called $\emph{Springer resolution}$ of $\mathcal{N}$. \\
When $p$ is small, the classical proofs of these results break down. The goal of this paper is to investigate in which bad characteristics the dual nilcone $\mathcal{N}^*$ remains a normal variety and the Springer map is a resolution of singularities. \\
In case $G$ is of type $A_n$, the picture is a little different. Here, the classical proofs are valid when $p$ does not divide $n+1$. We have the following main theorems: \\
\begin{MainThm}\label{thm a}
Let $G = PGL_n$ and suppose $p|n$. Then the dual nilpotent cone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a normal variety.
\end{MainThm} \vspace{4.4 mm}
\begin{MainThm}\label{thm b}
Suppose $G$ is of type $G_2$ and $p = 2$. Then the dual nilpotent cone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a normal variety.
\end{MainThm}
As an application, let $p$ be a prime, $G$ be a semisimple compact $p$-adic Lie group and let $K$ be a finite extension of $\mathbb{Q}_p$. Ardakov and Wadsley studied the coadmissible representations of $G$, which are finitely generated modules over the completed group ring $KG$ with coefficients in $K$, in \cite{AW}. These completed group rings may be realised as Iwasawa algebras, which are important objects in noncommutative Iwasawa theory. \\
One of the central results in \cite{AW} is an estimate for the canonical dimension of a coadmissible representation of a semisimple $p$-adic Lie group in a $p$-adic Banach space. When $p$ is very good for $G$, Ardakov and Wadsley showed that this canonical dimension is either zero or at least half the dimension of a nonzero coadjoint orbit. We extend their results to the case where $G = PGL_n$, $p|n$, and $n > 2$. The main result of this section is as follows: \\
\begin{MainThm}\label{thm c}
Let $G$ be a compact $p$-adic analytic group whose Lie algebra is semisimple. Suppose that $G = PGL_n$, $p|n$, and $n > 2$. Let $G_{\mathbb{C}}$ be a complex semisimple algebraic group with the same root system as $G$, and let $r$ be half the smallest possible dimension of a nonzero coadjoint $G_{\mathbb{C}}$-orbit. Then any coadmissible $KG$-module $M$ that is infinite-dimensional over $K$ satisfies $d(M) \geq r$.
\end{MainThm}
$\textbf{Acknowledgments.}$ I would like to thank Konstantin Ardakov for suggesting this research project, and Kevin McGerty for his interest in my work and his helpful contributions.
\section{The nilpotent cone and the Springer resolution}\label{nilcone chapter}
\subsection{Characteristic}
In this section, we study the geometric structure of the nilpotent cone $\mathcal{N}$ of the Lie algebra $\mathfrak{g}$ of a reductive algebraic group $G$ in arbitrary characteristic. We begin with a discussion of the ordinary nilpotent cone, defined as a subvariety of $\mathfrak{g}$, and then give a characterisation of the dual nilpotent cone $\mathcal{N}^*$.
Our treatment of the material on $\mathcal{N}$ is based on that of Jantzen in \cite{JN}. We generalise some of his arguments which are dependent on certain restrictions on the characteristic. Later, we will specialise further to the case $G = PGL_n$ and $p|n$ at certain points of the argument. The last subsection of the section discusses analogues of the results presented here when we consider a more general algebraic group $G$. \\
Let $\textbf{G}$ be a split reductive algebraic group scheme, defined over $\mathbb{Z}$, and $K$ an algebraically closed field of characteristic $p > 0$. Let $G := \textbf{G}(K)$. Let $\mathfrak{g}$ denote the Lie algebra of $G$ and $W(G)$ the Weyl group of $G$. When $G$ is clear from context, we will abbreviate $W(G)$ to $W$. Since $G$ is a linear algebraic group, we fix an embedding $G \subseteq GL(V)$ for some $n$-dimensional $K$-vector space $V$. \\
\begin{defn}
Let $\alpha_i$ be the simple roots of the root system $R$ of $G$, and let $\beta$ be the highest root. Writing $\beta = \sum_i m_i \alpha_i$, the prime $p$ is $\textit{bad}$ for $G$ if $p$ divides $m_i$ for some $i$; $p$ is $\emph{good}$ if $p$ is not bad. \\
The prime $p$ is $\emph{very good}$ if one of the following conditions hold: \\
(a) $G$ is not of type $A$ and $p$ is good, \\
(b) $G$ is of type $A_n$ and $p$ does not divide $n+1$. \\
\end{defn}
In practice, we have the following classification. In types $B, C$ and $D$, the only bad prime is 2. For the exceptional Lie algebras, the bad primes are 2 and 3 for types $E_6, E_7, F_4$ and $G_2$, and 2,3 and 5 for type $E_8$. In type $A$, there are no bad primes. For more details of this classification, see \cite[I.4.3]{SS}. \\
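For example, if $G$ is of type $G_2$, then the highest root is $\beta = 3\alpha_1 + 2\alpha_2$, so the bad primes are precisely $2$ and $3$, in agreement with the classification above. \\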
\begin{defn}\label{nonspecial defn}
A prime $p$ is $\emph{special}$ for $G$ if the pair (Dynkin diagram of $G$, $p$) lies in the following list:
(a) ($B$, 2), \\
(b) ($C$, 2), \\
(c) ($F_4$, 2), \\
(d) ($G_2$, 3).
A prime $p$ is $\emph{nonspecial}$ for $G$ if it is not special.
\end{defn}
This definition, and material on the importance of nonspecial primes, can be found in \cite[Section 5.6]{PS}.
\subsection{The $W$-invariants of $S(\mathfrak{h})$}
Let $G = PGL_n$ and suppose $p|n$. This short section investigates the structure of the invariants of the Weyl group action on the symmetric algebra $S(\mathfrak{h})$. \\
Let $\mathfrak{g}^*$ be the dual vector space of $\mathfrak{g}$. Since $G$ is of type $A$ and the prime $p$ is always good for $G$, there is a $G$-equivariant isomorphism $\kappa: \mathfrak{g} \to \mathfrak{g}^*$ by the argument in \cite[Section 6.5]{JN}. Since $\mathfrak{g}$ is a finite-dimensional vector space, we naturally identify the symmetric algebra $S(\mathfrak{g})$ and the algebra of polynomial functions $K[\mathfrak{g}^*]$. \\
Let $\mathfrak{h}$ be a fixed Cartan subalgebra of $\mathfrak{g}$. The Weyl group $W$ has a natural action on $\mathfrak{h}$, which can be extended linearly to an action of $W$ on the symmetric algebra $S(\mathfrak{h})$. The identification $S(\mathfrak{h}) \cong K[\mathfrak{h}^*]$ is compatible with the $W$-action. We begin this section by studying the $W$-invariants under this action. \\
\begin{thm}\label{polynomial theorem}
Suppose $G = PGL_n$ and $p|n$. Then $S(\mathfrak{h})^W$ is a polynomial ring.
\end{thm}
\begin{proof}
Recall the Weyl group $W$ is isomorphic to $S_n$, and let $\mathfrak{t}$ be the image of the diagonal matrices in $\mathfrak{g} = \mathfrak{pgl}_n$. Then $\mathfrak{t}$ is the quotient of the natural $S_n$-module $V$ with basis $\lbrace e_1, \cdots, e_n \rbrace$, permuted by $S_n$, by the trivial submodule $U := K(\sum_{i=1}^n e_i)$. Let $X = V/U$. The quotient map $V \to X$ induces a surjective map $S(V) \to S(X)$. \\
Suppose $p = n = 2$ and let $\lbrace \overline{e_1}, \overline{e_2} \rbrace$ be the images of the vector space basis $\lbrace e_1, e_2 \rbrace$ of $V$ inside $X$. Let $\sigma$ denote the non-identity element of $S_2$. Then $\sigma \cdot \overline{e_1} = \overline{e_2}$ and $\sigma \cdot \overline{e_2} = \overline{e_1}$. Since $\overline{e_1} + \overline{e_2} = 0$, it follows that $\overline{e_1} = \overline{e_2}$. Hence $S(X)^{S_2} = S(X)$, which is a polynomial ring. \\
Now suppose $n > 2$ and $p|n$. We claim that the $S_n$-action on $V$ and on $X$ is faithful. The $S_n$-action on $V$ is by permutation and therefore is faithful. To see the claim for the $S_n$-action on $X$, let $N := \lbrace g \in S_n \mid g \cdot \overline{x} = \overline{x} \text{ for all } \overline{x} \in X \rbrace$ denote the kernel of the $S_n$-action on $X$. \\
Suppose $g$ is some non-identity element of $N$. Then, relabelling the elements $\overline{e_i}$ if necessary, $g \cdot \overline{e_1} = \overline{e_2}$. Hence it suffices to show that $\overline{e_1} \neq \overline{e_2}$. If $\overline{e_1} = \overline{e_2}$, then since $\sum_{i=1}^n \overline{e_i} = 0$, we have $\sum_{i=2}^n \overline{e_i} = -\overline{e_1}$, and hence $\overline{e_1} + \sum_{i=3}^n \overline{e_i} = -\overline{e_1}$. Rearranging, and using that $-2 = p-2$ in $K$, we obtain $\sum_{i=3}^n \overline{e_i} = (p-2)\overline{e_1}$. Hence the set $\lbrace \overline{e_1}, \overline{e_3}, \cdots, \overline{e_{n-1}} \rbrace$ spans $X$, but $X$ is an $(n-1)$-dimensional vector space, a contradiction. It follows that the $S_n$-action on $X$ is faithful. \\
The ring of invariants $S(V)^{S_n}$ is generated by the elementary symmetric polynomials $s_1(e_1, \cdots, e_n), \cdots, s_n(e_1, \cdots, e_n)$, which are algebraically independent by \cite[Section 6, Theorem 1]{Bo3}. Applying \cite[Proposition 4.1]{Na}, we see that $S(X)^{S_n}$ is also a polynomial ring. The proof of \cite[Proposition 5.1]{KM} also demonstrates that $S(X)^{S_n}$ is generated by the images of $s_2(e_1, \cdots, e_n), \cdots, s_n(e_1, \cdots, e_n)$ under the map $S(V) \to S(X)$. \\
To finish the proof, it suffices to note that we may identify $\mathfrak{t} \cong \mathfrak{h}$ and that $\mathfrak{h} \cong V/U = X$.
\end{proof}
We state a version of Kostant's freeness theorem that will be useful for our applications. \\
\begin{thm}\label{freeness polynomial}
$S(\mathfrak{h})$ is a free $S(\mathfrak{h})^W$-module if and only if $S(\mathfrak{h})^W$ is a polynomial ring.
\end{thm}
\begin{proof}
See \cite[Corollary 6.7.13]{Sm}.
\end{proof}
\subsection{Properties of the nilpotent cone}
We now outline some general preliminaries on the structure theory of groups acting on varieties. At first, we do not impose any restriction on the characteristic. \\
Let $M$ be a variety which admits an algebraic group action by $G$, and let $x \in M$. The closure $\overline{Gx}$ of the orbit $Gx$ of $x$ is a closed subvariety of $M$. By \cite[Proposition 8.3]{Hu2}, $Gx$ is open in $\overline{Gx}$ and so $Gx$ has the structure of an algebraic variety. \\
The orbit map $\pi_x: G \to Gx$, $\pi_x(g) = gx$, is a surjective morphism of varieties. The stabiliser $G_x := \lbrace g \in G \mid gx = x \rbrace$ is a closed subgroup of $G$, and $\pi_x$ induces a bijective morphism:
\begin{gather*}
\overline{\pi_x}: G/G_x \to Gx
\end{gather*}
by \cite[Section 12]{Hu2}. \\
We now specialise to the case where $M = \mathfrak{g}$ and $G$ acts on $\mathfrak{g}$ via the adjoint action. Let $X \in \mathfrak{g}$ and let $GX$ denote the $G$-orbit of $X$ under the adjoint action $\text{Ad}: G \to \text{Ad}(\mathfrak{g})$. \\
Recall that an element $g \in \mathfrak{g}$ is $\emph{nilpotent}$ if the operator $\text{ad}_g: \mathfrak{g} \to \mathfrak{g}$ is nilpotent. The set of nilpotent elements is denoted $\mathcal{N}$.
Since $G$ is a linear algebraic group, fix an embedding $G \subseteq GL(V)$ for some $n$-dimensional $K$-vector space $V$. Then $\mathcal{N} = \mathfrak{g} \cap \mathcal{N}(\mathfrak{gl}(V))$, where $\mathcal{N}(\mathfrak{gl}(V))$ denotes the set of nilpotent elements of the Lie algebra of $GL(V)$. It follows that $\mathcal{N}$ is closed in $\mathfrak{g}$, and hence $\mathcal{N}$ has the structure of a subvariety of the algebraic variety $\mathfrak{g}$. \\
Let:
\begin{gather*}
P_X(t) := \text{det}(tI - X)
\end{gather*}
denote the characteristic polynomial of $X$ in the variable $t$. Then:
\begin{gather*}
P_X(t) := t^n + \sum_{i=1}^n (-1)^i s_i(X) t^{n-i}
\end{gather*}
where each $s_i$ is a homogeneous polynomial of degree $i$ in the entries of $X$. If $a_1, \cdots, a_n$ are the eigenvalues of $X$, counted with algebraic multiplicity, then, since $K$ is algebraically closed, $P_X(t) = \prod_{i=1}^n (t-a_i)$, and so $s_i(X)$ can be identified with the $i$th elementary symmetric function in the $a_j$. It follows that $X$ is nilpotent if and only if $P_X(t) = t^n$ if and only if $s_i(X) = 0$ for each $i$:
\begin{gather*}
\mathcal{N}(\mathfrak{gl}(V)) = \lbrace X \in \mathfrak{gl}(V) \mid s_i(X) = 0 \text{ for all } i \rbrace.
\end{gather*}
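For instance, the matrix $X = \left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right) \in \mathfrak{gl}_2$ satisfies $s_1(X) = \text{tr}(X) = 0$ and $s_2(X) = \text{det}(X) = 0$, so $P_X(t) = t^2$ and $X$ is nilpotent. \\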
Let $S(V)$ denote the algebra of polynomial functions on $V$. This has a natural grading by degree, with $S(V) = \bigoplus_{i \geq 0} S^i(V)$. Set $S^+(V) := \bigoplus_{i \geq 1} S^i(V)$. \\
Now the restrictions of the $s_i$ to $\mathfrak{g}$ are $G$-invariant polynomial functions on $\mathfrak{g}$, and so $s_{i|\mathfrak{g}} \in S^i(\mathfrak{g}^*)^G$. It follows that there exist $f_1, \cdots, f_n \in S^+(\mathfrak{g}^*)^G$ such that:
\begin{gather*}
\mathcal{N} = \lbrace X \in \mathfrak{g} \mid f_i(X) = 0 \text{ for all } i \rbrace.
\end{gather*}
\begin{prop}\label{affine nilpotent cone}
The nilpotent cone $\mathcal{N}$ may be realised as:
\begin{gather*}
\mathcal{N} = \lbrace X \in \mathfrak{g} \mid f(X) = 0 \text{ for all } f \in S^+(\mathfrak{g}^*)^G \rbrace.
\end{gather*}
Hence $\mathcal{N} = V(S^+(\mathfrak{g}^*)^G)$ is an affine variety.
\end{prop}
\begin{proof}
It is clear that $\lbrace X \in \mathfrak{g} \mid f(X) = 0 \text{ for all } f \in S^+(\mathfrak{g}^*)^G \rbrace \subseteq \mathcal{N}$ by the above discussion. Conversely, let $X \in \mathcal{N}$ and $f \in S^+(\mathfrak{g}^*)^G$. Then $f(0) = 0$ and $f$ is constant on the closure of each orbit under the adjoint action, so $f$ is constant on $\overline{GX}$. Since $0 \in \overline{GX}$ by \cite[Proposition 2.11(1)]{JN}, we conclude that $f(X) = f(0) = 0$.
\end{proof}
\begin{lemma}\label{borel subgp variety}
Let $\mathcal{B}$ be the set of all Borel subalgebras of $\mathfrak{g}$. Then there is a bijection $G/B \leftrightarrow \mathcal{B}$.
\end{lemma}
\begin{proof}
$\mathcal{B}$ is the closed subvariety of the Grassmannian of $\text{dim } \mathfrak{b}$-dimensional subspaces in $\mathfrak{g}$ formed by solvable Lie algebras. Hence $\mathcal{B}$ is a projective variety. All Borel subalgebras are conjugate under the adjoint action of $G$, and the stabiliser subgroup $G_{\mathfrak{b}}$ of $\mathfrak{b}$ in $G$ is equal to $B$ by \cite[Theorem 11.16]{Bo}. Hence the claimed bijection follows via the assignment $g \mapsto g \cdot \mathfrak{b} \cdot g^{-1}$.
\end{proof}
\begin{defn}\label{enhanced nilpotent cone}
Set $\widetilde{\mathfrak{g}} := \lbrace (x, \mathfrak{b}) \in \mathfrak{g} \times \mathcal{B} \mid x \in \mathfrak{b} \rbrace$, and let $\mu: \widetilde{\mathfrak{g}} \to \mathfrak{g}$ be the projection onto the first coordinate. The $\emph{enhanced nilpotent cone}$ is the preimage of $\mathcal{N}$ under the map $\mu$:
\begin{gather*}
\widetilde{\mathcal{N}} := \mu^{-1}(\mathcal{N}) = \lbrace (x, \mathfrak{b}) \in \mathcal{N} \times \mathcal{B} \mid x \in \mathfrak{b} \rbrace.
\end{gather*}
\end{defn}
\begin{lemma}\label{enhanced nilpotent cone smooth}
$\widetilde{\mathcal{N}}$ is a smooth irreducible variety.
\end{lemma}
\begin{proof}
Let $\mathfrak{b} \in \mathcal{B}$ be a fixed Borel subalgebra. The fibre over $\mathfrak{b}$ of the second projection $\pi: \widetilde{\mathcal{N}} \to \mathcal{B}$ is the set of nilpotent elements of $\mathfrak{b}$. Decomposing $\mathfrak{b} = \mathfrak{h} \oplus \mathfrak{n}$, where $\mathfrak{n}:= [\mathfrak{b}, \mathfrak{b}]$ is the nilradical of $\mathfrak{b}$, an element $x \in \mathfrak{b}$ is nilpotent if and only if it has no component in the Cartan subalgebra $\mathfrak{h}$. Hence $\pi$ makes $\widetilde{\mathcal{N}}$ a vector bundle over $\mathcal{B}$ with fibre $\mathfrak{n}$. \\
The canonical map $G \to G/B$ is locally trivial by \cite[II.1.10(2)]{Ja2}, so the set of $B$-orbits on $G \times \mathfrak{n}$ has a natural structure of a variety, denoted $G \times_B \mathfrak{n}$. The above construction yields a $G$-equivariant vector bundle isomorphism:
\begin{gather*}
\widetilde{\mathcal{N}} \cong G \times_B \mathfrak{n},
\end{gather*}
where $B$ is the Borel subgroup of $G$ corresponding to $\mathfrak{b}$. It follows that we may view $\widetilde{\mathcal{N}}$ as a vector bundle over the smooth variety $G/B$, and so $\widetilde{\mathcal{N}}$ is smooth. \\
Using Lemma \ref{borel subgp variety}, identify $\mathcal{B}$ with $G/B$ and consider the morphism $f: \mathfrak{g} \times G \to \mathfrak{g} \times \mathcal{B}$ defined by $f(x,g) = (x,gB)$. The inverse image:
\begin{gather*}
f^{-1}(\widetilde{\mathcal{N}}) = \lbrace (x,g) \in \mathcal{N} \times G \mid \text{Ad}(g^{-1})(x) \in \mathfrak{n} \rbrace
\end{gather*}
is closed in $\mathfrak{g} \times G$ since it is the inverse image of $\mathfrak{n}$ under the natural map $\mathfrak{g} \times G \to \mathfrak{g}$, $(x,g) \mapsto \text{Ad}(g^{-1})(x)$. Since $f$ is an open map and $f^{-1}(\widetilde{\mathcal{N}})$ is closed, $\widetilde{\mathcal{N}}$ is a closed subvariety of $\mathcal{N} \times \mathcal{B}$. \\
The morphism $\mathfrak{n} \times G \to \widetilde{\mathcal{N}}, (x,g) \mapsto (\text{Ad}(g)(x), gB)$ is surjective by definition. Hence $\widetilde{\mathcal{N}}$, as the image of an irreducible variety, is irreducible.
\end{proof}
By \cite[Theorem 2.8(1)]{JN}, there are only finitely many orbits for the $G$-action in the nilpotent cone $\mathcal{N}$. Let $X_1, \cdots, X_r$ be representatives for these orbits. Then:
\begin{gather*}
\mathcal{N} = \bigcup_{i=1}^r \overline{\mathcal{O}_{X_i}}
\end{gather*}
Since $\mathcal{N}$ is irreducible, being the image under $\mu$ of the irreducible variety $\widetilde{\mathcal{N}}$ of Lemma \ref{enhanced nilpotent cone smooth}, one of these closed subsets must be all of $\mathcal{N}$: say $\overline{GZ} = \mathcal{N}$. Then, by \cite[1.13, Corollary 1]{St3}, this orbit is open in $\mathcal{N}$ and $\text{dim}(GZ) = \text{dim}(\mathcal{N})$, while $\text{dim}(GY) < \text{dim}(GZ)$ for any orbit $GY \neq GZ$. Hence $GZ$ is unique with respect to this property. \\
\begin{defn}\label{regular element}
An element $X \in \mathfrak{g}$ is $\emph{regular}$ if it lies in $GZ$, the unique open dense $G$-orbit of $\mathcal{N}$.
\end{defn}
We now specialise to the case where $G = PGL_n$ and $p|n$. \\
\begin{lemma}\label{enhanced nilpotent cone cotangent}
There is a natural $G$-equivariant vector bundle isomorphism:
\begin{gather*}
\widetilde{\mathcal{N}} \cong T^*\mathcal{B}.
\end{gather*}
\end{lemma}
\begin{proof}
This follows from \cite[Section 6.5]{JN}.
\end{proof}
\begin{defn}
The map $\mu: T^*\mathcal{B} \to \mathcal{N}$ is the $\emph{Springer resolution}$ for the nilpotent cone $\mathcal{N}$.
\end{defn} \vspace{4.4 mm}
\begin{lemma}\label{density}
Let $\mathcal{N}_s$ denote the set of smooth points of $\mathcal{N}$. Then $\mu^{-1}(\mathcal{N}_s)$ is dense in $\widetilde{\mathcal{N}}$.
\end{lemma}
\begin{proof}
$\mathcal{N}_s$ is an open and non-empty subset of $\mathcal{N}$. Hence it is dense, and its preimage is open and non-empty in $\widetilde{\mathcal{N}}$. By Lemma \ref{enhanced nilpotent cone smooth}, $\widetilde{\mathcal{N}}$ is irreducible and so $\mu^{-1}(\mathcal{N}_s)$ is dense.
\end{proof}
\begin{lemma}\label{birationality}
Let $GZ$ denote the orbit of all regular nilpotent elements. The morphism $\mu^{-1}(GZ) \to GZ$ is an isomorphism of varieties.
\end{lemma}
\begin{proof}
By \cite[Corollary 6.8]{JN}, $GZ$ is an open subset of $\mathcal{N}$, and $|\mu^{-1}(X)| = 1$ for $X \in GZ$. Hence $\mu$ induces a bijection $\mu^{-1}(GZ) \to GZ$. Since the morphism $\mu: T^*\mathcal{B} \to \mathcal{N}$ is given by projection onto the first coordinate, from Definition \ref{enhanced nilpotent cone}, it is a morphism of varieties and hence so is the restriction $\mu \mid_{\mu^{-1}(GZ)}: \mu^{-1}(GZ) \to GZ$. The result follows.
\end{proof}
Recall from Theorem \ref{polynomial theorem} that $S(\mathfrak{h})^W$ is a polynomial ring, with algebraically independent generators $f_1, \cdots, f_n$. \\
\begin{thm}\label{KW}
Let $G$ be a simple algebraic group, and suppose $(G,p) \neq (B,2)$. There is a projection map $\mathfrak{g} = \mathfrak{n}^- \oplus \mathfrak{h} \oplus \mathfrak{n} \to \mathfrak{h}$, which induces a map $S(\mathfrak{g}) \to S(\mathfrak{h})$. \\
This map induces a map $\eta: S(\mathfrak{g})^G \to S(\mathfrak{h})^W$, which is an isomorphism.
\end{thm}
\begin{proof}
This is \cite[Theorem 4]{KW}.
\end{proof}
When $G = PGL_n$ and $p|n$, the hypotheses of Theorem \ref{KW} are satisfied. This allows us to make sense of the following definition.
\begin{defn}\label{Steinberg quotient}
The $\emph{Steinberg quotient}$ is the map $\chi: \mathfrak{g} \to K^n$ defined by $\chi(Z) = (\eta^{-1}(f_1)(Z), \cdots, \eta^{-1}(f_n)(Z))$. Note that the nilpotent cone $\mathcal{N} = \chi^{-1}(0)$.
\end{defn} \vspace{4.4 mm}
\begin{lemma}\label{smooth elements}
The smooth points of $\mathcal{N}$ are precisely the regular nilpotent elements.
\end{lemma}
\begin{proof}
By the assumptions on the prime $p$, applying Theorem \ref{polynomial theorem} and Theorem \ref{freeness polynomial} shows that $S(\mathfrak{h})$ is a free $S(\mathfrak{h})^W$-module and $S(\mathfrak{h})^W$ is a polynomial ring, with generators $f_1, \cdots, f_n$. Hence the argument for \cite[Claim 6.7.10]{CG} applies and the Steinberg quotient $\chi$ satisfies, for $Z \in \mathfrak{g}$, the condition that $(d\chi)_Z$ is surjective if and only if $Z$ is regular. By \cite[Proposition 7.11]{JN}, for each $b = (b_1, \cdots, b_n) \in K^n$, the ideal of $\chi^{-1}(b)$ is generated by all $\eta^{-1}(f_i) - b_i$. \\
By \cite[I.5]{Ha}, $Z \in \chi^{-1}(b)$ is a smooth point if and only if the $d(\eta^{-1}(f_i) - b_i)$ are linearly independent at $Z$, if and only if the map $(d\chi)_Z$ is surjective. Let $b=0$. Then the smooth points in $\chi^{-1}(0)$ are the regular elements contained in $\chi^{-1}(0)$, and so the smooth points of $\mathcal{N}$ are precisely the regular nilpotent elements.
\end{proof}
\begin{thm}\label{Springer resolution}
$\mu: T^*\mathcal{B} \to \mathcal{N}$ is a resolution of singularities for $\mathcal{N}$.
\end{thm}
\begin{proof}
By Lemma \ref{enhanced nilpotent cone smooth} and Lemma \ref{enhanced nilpotent cone cotangent}, $\widetilde{\mathcal{N}}$ is a smooth irreducible variety. Furthermore, $\mu$ is proper by \cite[Lemma 6.10(1)]{JN}. By Lemma \ref{density}, $\mu^{-1}(\mathcal{N}_s)$ is dense in $\widetilde{\mathcal{N}}$, and by Lemma \ref{birationality}, $\mu$ is a birational morphism between $\mu^{-1}(\mathcal{N}_s)$ and $\mathcal{N}_s$. Hence $\mu$ is a resolution of singularities.
\end{proof}
\subsection{The dual nilpotent cone is a normal variety}\label{Geometric results}
In this section, we demonstrate that the dual nilcone $\mathcal{N}^*$ is a normal variety in the case $G = PGL_n$, $p|n$.
\begin{defn}
Since we have a $G$-equivariant isomorphism $\kappa: \mathfrak{g} \to \mathfrak{g}^*$ by \cite[Section 6.5]{JN}, the $\emph{dual nilcone}$ $\mathcal{N}^*$ may be defined as:
\begin{gather*}
\mathcal{N}^* = \lbrace X \in \mathfrak{g}^* \mid f(X) = 0 \text{ for all } f \in S^+(\mathfrak{g})^G \rbrace.
\end{gather*}
\end{defn}
The same argument as in Proposition \ref{affine nilpotent cone} shows that $\mathcal{N}^* = V(S^+(\mathfrak{g})^G)$ is an affine variety.
We next review some basic properties of normal rings and varieties. \\
\begin{defn} \cite[Definition 2.2.12]{CG}
A finitely generated commutative $K$-algebra $A$ is $\textit{Cohen-Macaulay}$ if it contains a subalgebra of the form $\mathcal{O}(V)$ such that $A$ is a free $\mathcal{O}(V)$-module of finite rank, and $V$ is a smooth affine scheme. \\
A scheme $X$ defined over $K$ is $\emph{Cohen-Macaulay}$ if, at each point $x \in X$, the local ring $\mathcal{O}_{X,x}$ is a Cohen-Macaulay ring.
\end{defn} \vspace{4.4 mm}
\begin{defn}
A commutative ring $A$ is $\textit{normal}$ if the localization $A_{\mathfrak{p}}$ for each prime ideal $\mathfrak{p}$ is an integrally closed domain. \\
A variety $V$ is $\textit{normal}$ if, for any $x \in V$, the local ring $\mathcal{O}_{V,x}$ is a normal ring. \\
\end{defn}
We now begin the proof of the normality of the dual nilpotent cone $\mathcal{N}^*$. We adapt the arguments in \cite{BL} to our situation. \\
\begin{thm}\label{important lemma}
Let $X$ be an irreducible affine Cohen-Macaulay scheme defined over $K$ and $U \subseteq X$ an open subscheme. Suppose $\text{dim } (X \setminus U) \leq \text{dim } X - 2$ and that the scheme $U$ is normal. Then the scheme $X$ is normal.
\end{thm}
\begin{proof}
This is \cite[Corollary 2.3]{BL}.
\end{proof}
We aim to apply Theorem \ref{important lemma} to our situation. We begin with the following lemma, a variant on Hartogs' lemma. \\
\begin{lemma}\label{stackexchange lemma}
Let $Y$ be an affine normal variety and $Z \subseteq Y$ be a subvariety of codimension at least 2. Then any rational function on $Y$ which is regular on $Y \setminus Z$ can be extended to a regular function on $Y$.
\end{lemma}
\begin{proof}
Write $Y = \text{Spec } B$, where $B$ is a normal domain. Set $Z := V(I)$ for some ideal $I$, and write $U := Y \setminus Z$. Then $U = \bigcup_{f \in I} D(f)$, where $D(f)$ denotes the basic open sets in the Zariski topology. \\
Let $\mathfrak{p}$ be a prime ideal of height 1. By assumption, $\text{ht } I \geq 2$, and so there exists some $f \in I$ with $f \notin \mathfrak{p}$. It follows that $B_f \subseteq B_{\mathfrak{p}}$. \\
Let $a/b$ be a regular function on $U$, with $a/b \in \text{Frac}\, B$, the field of fractions of $B$. Since $\mathfrak{p}$ has height 1, we can find $f \in I \setminus \mathfrak{p}$. Then $a/b$ is regular on $D(f)$, and so $a/b \in \mathcal{O}(D(f)) = B_f \subseteq B_{\mathfrak{p}}$. As $\mathfrak{p}$ was arbitrary, $a/b \in \bigcap_{\text{ht}\,\mathfrak{p} = 1} B_{\mathfrak{p}} = B$, where the last equality holds because $B$ is a normal Noetherian domain. Hence $a/b$ can be extended to a regular function on $Y$.
\end{proof}
\begin{lemma}\label{function field isomorphism}
Let $X$ be an affine Cohen-Macaulay scheme with an open subscheme $U$. Let $r: \mathcal{O}(X) \to \mathcal{O}(U)$ be the restriction morphism. Then: \\
(a) if $\text{dim }(X \setminus U) < \text{dim } X$, then $r$ is injective, \\
(b) if $\text{dim } (X \setminus U) \leq \text{dim } X - 2$, then $r$ is an isomorphism.
\end{lemma}
\begin{proof}
We expand on the proof given in \cite[Lemma 2.2]{BL}. For ease of notation, we suppose $\mathcal{O}(X)$ is a finitely generated $\mathcal{O}(Y)$-module for some smooth affine scheme $Y$. Now the projection map $p: X \to Y$ is a finite morphism and hence is closed. Without loss of generality, we can shrink $U$, replacing it by a smaller open subset $p^{-1}(W)$, where $W = Y \setminus p(X \setminus U)$ is an open subset of $Y$. \\
Let $F := p_*(\mathcal{O}_X)$. This is a free $\mathcal{O}_Y$-module and we clearly have $\Gamma(Y,F) = p_*(\mathcal{O}_X)(Y) = \mathcal{O}_X(p^{-1}(Y)) = \mathcal{O}_X(X)$, and similarly $\Gamma(W,F) = p_*(\mathcal{O}_X)(W) = \mathcal{O}_X(p^{-1}(W)) = \mathcal{O}_X(U)$. Hence the restriction morphism $r$ agrees with the natural restriction map $r: \Gamma(Y,F) \to \Gamma(W,F)$. \\
If $\text{dim }(X \setminus U) < \text{dim } X$, then $\text{dim }(p(X \setminus U)) < \text{dim } (p(X))$, so $\text{dim }(Y \setminus W) < \text{dim } Y$, and so $r$ is injective. \\
Similarly, if $\text{dim } (X \setminus U) \leq \text{dim } X - 2$, then $\text{dim } (Y \setminus W) \leq \text{dim } Y - 2$. Hence, by Lemma \ref{stackexchange lemma}, any regular function on $W$ can be extended to a regular function on $Y$. Furthermore, $F$ is a free $\mathcal{O}_Y$-module; it follows that $r$ is surjective.
\end{proof}
As an immediate consequence, we see that if the scheme $U$ is reduced and normal, then so is $X$. \\
We now demonstrate that the hypotheses in Theorem \ref{important lemma} are satisfied in our situation. Recall that $\mathcal{N}^*$ is an affine variety. It suffices to show that $\mathcal{N}^*$ is irreducible and Cohen-Macaulay. \\
\begin{defn}
$\lambda \in \mathfrak{h}^*$ is $\emph{regular}$ if its centraliser in $\mathfrak{g}$ under the natural $\mathfrak{g}$-action on $\mathfrak{g}^*$ coincides with the Cartan subalgebra $\mathfrak{h}$. A general $\lambda \in \mathfrak{g}^*$ is $\emph{regular}$ if its coadjoint orbit contains a regular element of $\mathfrak{h}^*$.
\end{defn}
The subvariety $U$ in Lemma \ref{function field isomorphism} will be taken to be the subset of regular nilpotent elements. \\
\begin{prop} \label{codimension prop}
Suppose $p$ is nonspecial for $G$. Then:
(a) the dual nilcone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a closed irreducible subvariety of $\mathfrak{g}^*$, and it has codimension $r$ in $\mathfrak{g}^*$, where $r$ is the rank of $G$. \\
(b) Let $U$ denote the set of regular elements of $\mathcal{N}^*$. Then $U$ is a single coadjoint orbit, which is open in $\mathcal{N}^*$, and its complement has codimension $\geq 2$.
\end{prop}
\begin{proof}
(a) We define an auxiliary variety $S$ via:
\begin{gather*}
S := \lbrace (gB, \zeta) \in G/B \times \mathfrak{g}^* \mid g \cdot \zeta \in \mathfrak{b}^{\perp} \rbrace.
\end{gather*}
This subset of $G/B \times \mathfrak{g}^*$ is closed. Define a map $\phi: G \times \mathfrak{b}^{\perp} \to G/B \times \mathfrak{g}^*$ by $\phi(g, \zeta) = (gB, g^{-1} \cdot \zeta)$. Now the image of $\phi$ is contained in $S$, and we can also see that $\text{im}(\phi) \cong S$ since we have a linear isomorphism $\mathfrak{b}^{\perp} \to g^{-1} \cdot \mathfrak{b}^{\perp}$. Hence the image of $\phi$ coincides with $S$. It follows that $S$ is a morphic image of an irreducible variety, and hence $S$ is itself an irreducible subvariety. \\
Let $p_1: G/B \times \mathfrak{g}^* \to G/B$ and $p_2: G/B \times \mathfrak{g}^* \to \mathfrak{g}^*$ be the obvious projection maps. Clearly $p_1(S) = G/B$. The fiber of $gB$ under the map $p_1$ is $g^{-1} \cdot \mathfrak{n}$, which is isomorphic to $\mathfrak{n}$. Hence the fibers are equidimensional, and we have:
\begin{gather*}
\text{dim } S = \text{dim } G/B + \text{dim } \mathfrak{n} = \text{dim } G - r.
\end{gather*}
Using the second projection, $\text{dim } p_2(S) \leq \text{dim } G - r$, with equality if some fibre is finite (as a set). First notice that:
\begin{gather*}
p_2(S) = \lbrace \zeta \in \mathfrak{g}^* \mid \exists g \in G \text{ s.t. } g \cdot \zeta \in \mathfrak{b}^{\perp} \rbrace = \mathcal{N}^*.
\end{gather*}
Hence $\mathcal{N}^*$ is irreducible, and, since the flag variety $G/B$ is complete by \cite{Bo}, $\mathcal{N}^*$ is closed. We show that there exists some $\zeta \in \mathfrak{g}^*$ with:
\begin{gather*}
| \lbrace gB \mid g \cdot \zeta \in \mathfrak{b}^{\perp} \rbrace | = | \lbrace gB \mid \zeta(\text{Ad}_g^{-1}(\mathfrak{b})) = 0 \rbrace | < \infty.
\end{gather*}
By \cite[Proposition 2]{HS}, we have the following dimension formula:
\begin{gather*}
\text{dim } p_1(p_2^{-1}(\zeta)) = \frac{\text{dim } Z_G(\zeta) - r}{2}.
\end{gather*}
Since $p$ is nonspecial for $G$, the set of regular nilpotent elements $U$ in $\mathcal{N}^*$ is non-empty, by \cite[Section 6.4]{GKM}, and thus we can pick some $\zeta \in U$ with $\text{dim } Z_G(\zeta) - r = 0$. Thus there exists $\zeta$ with $| p_1(p_2^{-1}(\zeta)) | < \infty$. \\
Now consider two points $(gB, \zeta), (hB, \zeta) \in p_2^{-1}(\zeta)$. If they have the same image under $p_1$, then $gB = hB$ and the two points coincide; that is, $p_1$ is injective when restricted to the fibre $p_2^{-1}(\zeta)$. Combined with the finiteness of $p_1(p_2^{-1}(\zeta))$, it follows that there is a fibre of $p_2$ which is finite as a set. \\
Given the existence of a finite fibre of $p_2$, we have $\text{dim } S = \text{dim } p_2(S) = \text{dim } \mathcal{N}^* = \text{dim } G - r$. \\
(b) Now $\mathcal{N}^*$ has only finitely many $G$-orbits by \cite{Xu2} and \cite[Proposition 7.1]{Xu3}, so the dimension of $\mathcal{N^*}$ is equal to the dimension of at least one of these orbits. Since $\text{dim } \mathcal{N^*} = \text{dim } G - r$, some orbit in $\mathcal{N^*}$ also has dimension equal to $\text{dim } G - r$. This orbit is regular and its closure is all of $\mathcal{N^*}$, since the dimensions are equal and $\mathcal{N^*}$ is irreducible. Since any $G$-orbit is open in its closure, by \cite[1.13, Corollary 1]{St3}, this orbit is open in $\mathcal{N^*}$ and thus is dense. \\
Let $R$ be the root system of $G$ and fix a subset of positive roots $R^+ \subseteq R$. For each root $\alpha$, let $X_{\alpha}$ denote the corresponding root subgroup. Given a simple root $\alpha_i$, set $U_i := \prod_{\alpha \in R^+, \alpha \neq \alpha_i} X_{\alpha}$. Let $T$ be the maximal torus of $G$ defined by this root system and let $P_i := T \cdot \langle X_{\alpha_i}, X_{-\alpha_i} \rangle \cdot U_i$. Since both $T$ and $\langle X_{\alpha_i}, X_{-\alpha_i} \rangle$ normalise $U_i$ by the commutation formulae in \cite[3.7]{St3}, we see that $P_i$ is a rank 1 parabolic subgroup of $G$, $U_i$ is its unipotent radical and $T \cdot \langle X_{\alpha_i}, X_{-\alpha_i} \rangle$ is a Levi subgroup of $P_i$. \\
Note that $\text{dim } T \cdot \langle X_{\alpha_i}, X_{-\alpha_i} \rangle = r+2$ and so $\text{dim } P_i - \text{dim } U_i = r+2$. \\
Parallel to the definition of the variety $S$, we set:
\begin{gather*}
S_i := \lbrace (gP_i, \zeta) \in G/P_i \times \mathfrak{g}^* \mid g \cdot \zeta \in \mathfrak{b}^{\perp}_i \rbrace
\end{gather*}
where $\mathfrak{b}^{\perp}_i = \lbrace \zeta \in \mathfrak{g}^* \mid \zeta(\text{Lie}(U_iT)) = 0 \rbrace$. Then $S_i$ is a closed and irreducible variety and, by the same argument as in part (a) of the proposition:
\begin{gather*}
\text{dim } S_i = \text{dim } G/P_i + \text{dim } U_i \\
= \text{dim } G - (r+2).
\end{gather*}
Projecting onto the second factor, we see that:
\begin{gather*}
\text{dim } p_2(S_i) \leq \text{dim } S_i = \text{dim } G - (r+2).
\end{gather*}
But an element $\zeta \in \mathcal{N}^*$ fails to be regular if and only if $G \cdot \zeta \cap \mathfrak{h}^*_{\text{reg}} = \emptyset$. By the decomposition in \cite[Section 6.4]{GKM}, this occurs precisely when the centraliser of each $\xi \in G \cdot \zeta \cap \mathfrak{h}^*$ contains some non-zero root $\alpha$ such that $\xi(\alpha^{\vee}(1)) = 0$, where $\alpha^{\vee}$ is the coroot corresponding to $\alpha$. It follows that $\zeta \in \mathcal{N}^*$ fails to be regular if and only if it lies in $p_2(S_i)$ for some $i$. Then:
\begin{gather*}
\text{dim }(\mathcal{N}^* \setminus U) = \text{sup}_i \text{ dim } p_2(S_i) \leq \text{dim } G - (r+2).
\end{gather*}
\end{proof}
\begin{lemma} \label{Kostant}
Let $r: S(\mathfrak{g}) \to S(\mathfrak{h})$ be the natural map, and $r^{\prime}$ its restriction to the graded subalgebra $S(\mathfrak{g})^G$. Suppose that $r^{\prime}$ is an isomorphism onto its image $S(\mathfrak{h})^W$ and $S(\mathfrak{h})$ is a free $S(\mathfrak{h})^W$-module. Then $S(\mathfrak{g})$ is a free $R$-module, where $R := S(\mathfrak{g}/\mathfrak{h}) \otimes S(\mathfrak{g})^G$, and hence is a free $S(\mathfrak{g})^G$-module.
\end{lemma}
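As a toy illustration of the hypotheses (a standard rank-one example, included only for orientation): if $\mathfrak{h}$ is one-dimensional and $W = \lbrace \pm 1 \rbrace$ acts by the sign representation, with the characteristic of the base field different from $2$, then $S(\mathfrak{h}) = K[x]$, $S(\mathfrak{h})^W = K[x^2]$, and $S(\mathfrak{h})$ is free over $S(\mathfrak{h})^W$ with basis $\lbrace 1, x \rbrace$. \\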
\begin{proof}
The argument is similar to that which is set out in \cite[2.2.12]{CG} and the following discussion. Consider the projection map $\mathfrak{g} \to \mathfrak{g}/\mathfrak{h}$. This makes $\mathfrak{g}$ a vector bundle over $\mathfrak{g}/\mathfrak{h}$, and defines a natural increasing filtration on $S(\mathfrak{g})$ via:
\begin{gather*}
F_pS(\mathfrak{g}) = \lbrace P \in S(\mathfrak{g}) \mid P \text{ has degree } \leq p \text{ along the fibers} \rbrace.
\end{gather*}
Let $\text{gr}_F(S(\mathfrak{g}))$ denote the associated graded ring corresponding to this filtration, and write $S(\mathfrak{g})(p)$ for the $p$-th graded component. Clearly $S(\mathfrak{g})(0) = S(\mathfrak{g}/\mathfrak{h})$, and each graded component is a free $S(\mathfrak{g}/\mathfrak{h})$-module of finite rank. There is a $K$-algebra isomorphism:
\begin{gather*}
S(\mathfrak{g})(p) \cong S(\mathfrak{g}/\mathfrak{h}) \otimes_K S^p(\mathfrak{h}),
\end{gather*}
where $S^p(\mathfrak{h})$ denotes the space of degree $p$ homogeneous polynomials on $\mathfrak{h}$. \\
Let $\sigma_p: F_pS(\mathfrak{g}) \to S(\mathfrak{g})(p)$ be the principal symbol map. Suppose $f \in F_pS(\mathfrak{g})$ is a homogeneous degree $p$ polynomial whose restriction $r(f)$ to $\mathfrak{h}$ is non-zero. Then $\sigma_p(f)$ equals the image of the element $1 \otimes_K r(f)$ under the above isomorphism, and so is non-zero in $S(\mathfrak{g})(p)$. \\
To see this, choose a vector subspace $\mathfrak{j}$ of $\mathfrak{g}$ such that $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{j}$. This yields a graded algebra isomorphism $S(\mathfrak{g}) = S(\mathfrak{h}) \otimes S(\mathfrak{j})$, and so one writes $F_pS(\mathfrak{g}) = \sum_{i \leq p} S^i(\mathfrak{h}) \otimes S^{p-i}(\mathfrak{j})$. Hence $f \in F_pS(\mathfrak{g})$ has the form:
\begin{gather*}
f = e_p \otimes 1 + \sum_{i < p} e_i \otimes w_{p-i},
\end{gather*}
where $e_i \in S^i(\mathfrak{h})$ and $w_{p-i} \in S^{p-i}(\mathfrak{j})$. Hence $r(f) = e_p$ and $\sigma_p(f) = e_p \otimes 1$, as required. \\
Given this claim, consider the filtration $F_p(S(\mathfrak{g})^G)$ induced on $S(\mathfrak{g})^G$. For any homogeneous element $f \in S(\mathfrak{g})^G$ of degree $p$, its symbol $\sigma_p(f)$ coincides with $r(f) \in S(\mathfrak{h}) \subseteq \text{gr}_F (S(\mathfrak{g}))$. Hence the subalgebra $\sigma_F(S(\mathfrak{g})^G) \subseteq \text{gr}_F (S(\mathfrak{g}))$ coincides with $r(S(\mathfrak{g})^G) = S(\mathfrak{h})^W$. \\
Let $\lbrace a_k \rbrace$ be a free basis for the $S(\mathfrak{h})^W$-module $S(\mathfrak{h})$, which we may take to consist of homogeneous elements, and fix $b_k \in S(\mathfrak{g})$ with $r(b_k) = a_k$. Then $\sigma_p(b_k) = a_k$ by the claim above. The $a_k$ form a free basis of the $\text{gr}_F(R)$-module $\text{gr}_F (S(\mathfrak{g})) = S(\mathfrak{h}) \otimes S(\mathfrak{g}/\mathfrak{h})$, by tensoring with $S(\mathfrak{g}/\mathfrak{h})$ and applying the second part of the claim. It follows that the $\lbrace b_k \rbrace$ form a free basis of the $R$-module $S(\mathfrak{g})$.
\end{proof}
\begin{thm}\label{main theorem 1}
Let $G = PGL_n$ and suppose $p|n$. Then the dual nilpotent cone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a normal variety.
\end{thm}
\begin{proof}
Recall that $\mathcal{N}^*$ is an affine variety, cut out by the ideal $J := (S^+(\mathfrak{g})^G)$ generated by the invariants of positive degree. It follows that its algebra of global functions is $\mathcal{O}(\mathcal{N}^*) = S(\mathfrak{g})/J$. Then Lemma $\ref{Kostant}$ implies that $\mathcal{O}(\mathcal{N}^*)$ is a finitely generated free module over the polynomial algebra $S(\mathfrak{g}/\mathfrak{h})$. Hence $\mathcal{N}^*$ is a Cohen-Macaulay variety. \\
By Proposition \ref{codimension prop}, $\mathcal{N}^*$ is a closed irreducible subvariety of $\mathfrak{g}^*$, and the complement of the set of regular elements $U$ in $\mathcal{N}^*$ has codimension $\geq 2$. Hence all conditions in the statement of Theorem \ref{important lemma} are satisfied, and so $\mathcal{N}^*$ is normal.
\end{proof}
$\textbf{Proof of Theorem A:}$ This is immediate from Theorem \ref{main theorem 1}. \\
We conclude this section with an application of this result, which will be used in later sections. \\
\begin{cor} \label{global sections isomorphism}
We have an isomorphism $\mu^*: \mathcal{O}(\mathcal{N}^*) \to \mathcal{O}(T^*\mathcal{B})$.
\end{cor}
\begin{proof}
The map $\mu: T^*\mathcal{B} \to \mathcal{N}$ is a resolution of singularities by Theorem \ref{Springer resolution}. Let $\tau: T^*\mathcal{B} \to \mathcal{N}^*$ be the composition of $\mu$ with the $G$-equivariant isomorphism $\kappa: \mathcal{N} \to \mathcal{N}^*$ from \cite[Section 6.5]{JN}. This induces an isomorphism $\tau^s: \tau^{-1}((\mathcal{N^*})^s) \to (\mathcal{N^*})^s$ on the smooth points. These are non-empty open subsets of $T^*\mathcal{B}$ and $\mathcal{N^*}$ respectively, and so $T^*\mathcal{B}$ and $\mathcal{N^*}$ are birationally equivalent. \\
Let $\mathcal{Q}(A)$ denote the field of fractions of an integral domain $A$. By \cite[I.4.5]{Ha}, $\tau$ induces an isomorphism $\mathcal{Q}(\mathcal{O}(\mathcal{N^*})) \to \mathcal{Q}(\mathcal{O}(T^*\mathcal{B}))$, and so $\mathcal{O}(T^*\mathcal{B})$ can be considered as a subring of $\mathcal{Q}(\mathcal{O}(\mathcal{N^*}))$. \\
Since the map $T^*\mathcal{B} \to \mathcal{N^*}$ is surjective, and $\mathcal{O}(T^*\mathcal{B})$, $\mathcal{O}(\mathcal{N^*})$ are integral domains, there is an inclusion $\mathcal{O}(\mathcal{N^*}) \to \mathcal{O}(T^*\mathcal{B})$. The map $\tau$ is proper, and so the direct image sheaf $\tau_*\mathcal{O}_{T^*\mathcal{B}}$ is a coherent $\mathcal{O}_{\mathcal{N^*}}$-module. In particular, taking global sections, we have that $\Gamma(\mathcal{N^*}, \tau_*\mathcal{O}_{T^*\mathcal{B}})$ is a finitely generated $\mathcal{O}(\mathcal{N^*})$-module. By definition, $\Gamma(\mathcal{N^*}, \tau_*\mathcal{O}_{T^*\mathcal{B}}) = \mathcal{O}(T^*\mathcal{B})$, so $\mathcal{O}(T^*\mathcal{B})$ is a finitely generated $\mathcal{O}(\mathcal{N^*})$-module. \\
The variety $\mathcal{N}^*$ is normal, and so $\mathcal{O}(\mathcal{N}^*)$ is an integrally closed domain. Let $b \in \mathcal{O}(T^*\mathcal{B})$. Then $b \cdot \mathcal{O}(T^*\mathcal{B}) \subseteq \mathcal{O}(T^*\mathcal{B})$, and since $\mathcal{O}(T^*\mathcal{B})$ is a faithful finitely generated $\mathcal{O}(\mathcal{N}^*)$-module, $b$ is integral over $\mathcal{O}(\mathcal{N}^*)$. Hence, since $\mathcal{O}(\mathcal{N}^*)$ is integrally closed, $b \in \mathcal{O}(\mathcal{N}^*)$, and so the inclusion $\mathcal{O}(\mathcal{N}^*) \to \mathcal{O}(T^*\mathcal{B})$ is an isomorphism.
\end{proof}
\subsection{Analogous results when $G$ is not of type A}
The restriction that $G = PGL_n$, $p|n$ plays a role in only a few places in the argument that $\mathcal{N}^*$ is a normal variety. In this section, we indicate some of the issues that arise when we replace $PGL_n$ by a more general simple algebraic group of adjoint type. \\
Theorem \ref{polynomial theorem} demonstrated that, in case $G = PGL_n$, $p|n$, the ring of Weyl group invariants $S(\mathfrak{h})^W$ is a polynomial ring. This result is usually false in bad characteristic. In case the $W$-action on $\mathfrak{h}$ is irreducible, \cite[Theorem 3]{Bro} gives a full classification of the types in which this result holds, drawing on \cite[Theorem 7.2]{KM}. \\
\begin{prop}\label{fulton harris}
Suppose the pair (Dynkin diagram of $G$, $p$) lies in the following list: \\
(a) ($E_7$, 3), \\
(b) ($E_8$, 2), \\
(c) ($E_8$, 3), \\
(d) ($E_8$, 5), \\
(e) ($F_4$, 3), \\
(f) ($G_2$, 2). \\
Then the $W$-action on $\mathfrak{h}$ is irreducible.
\end{prop}
\begin{proof}
In all of these cases, the argument in \cite[Section 6.5]{JN} demonstrates that there is a $G$-equivariant bijection $\kappa: \mathfrak{g} \to \mathfrak{g}^*$, which restricts to a $G$-equivariant bijection $\mathfrak{h} \to \mathfrak{h}^*$. Furthermore, the classification in \cite[Section 0.13]{Hu2} demonstrates that $\mathfrak{g}$ is simple. Given these two statements, we may apply the same proof as that given in \cite[Proposition 14.31]{FH} to obtain the result.
\end{proof}
\begin{thm}\label{G2 polynomial}
Suppose $G$ is of type $G_2$ and $p = 2$. Then the invariant ring $S(\mathfrak{h})^W$ is polynomial.
\end{thm}
\begin{proof}
This follows from the calculations in \cite[Theorem 7.2]{KM}.
\end{proof}
In case $G$ is of type $G_2$ and $p = 2$, we may apply the same argument as for $G = PGL_n$ to obtain the following result. \\
\begin{thm}\label{G2 case}
Let $(G,p) = (G_2, 2)$. Then the dual nilpotent cone $\mathcal{N}^* \subseteq \mathfrak{g}^*$ is a normal variety.
\end{thm}
$\textbf{Proof of Theorem B:}$ This is immediate from Theorem \ref{G2 case}. \\
If $S(\mathfrak{h})^W$ is not polynomial, there are significant obstacles to generalising the result that $\mathcal{N}^*$ is a normal variety. In particular, the following behaviour may be observed. \\
- Kostant's freeness theorem, stated as Theorem \ref{freeness polynomial}, fails: $S(\mathfrak{h})$ is not free as an $S(\mathfrak{h})^W$-module, and so we cannot apply the argument in Lemma \ref{Kostant} to show that $\mathcal{N^*}$ is a Cohen-Macaulay variety. \\
- The Steinberg quotient $\chi: \mathfrak{g} \to K^n$, defined in Definition \ref{Steinberg quotient}, still makes sense as an abstract function. However, since the generators $\lbrace f_1, \cdots, f_n \rbrace$ of $S(\mathfrak{h})^W$ are not algebraically independent, we cannot apply the argument in Lemma \ref{smooth elements} to show that the smooth elements of $\mathcal{N}$ coincide with the regular elements; this is a key step in the proof that the Springer map $\mu: T^*\mathcal{B} \to \mathcal{N}$ is a resolution of singularities for $\mathcal{N}$. \\
Calculations in \cite[Section 3.2]{Bro} show that, in the following cases (Dynkin diagram of $G$, $p$), the invariant ring $S(\mathfrak{h})^W$ is not even Cohen-Macaulay. \\
(a) $(E_7, 3)$, \\
(b) $(E_8, 3)$, \\
(c) $(E_8, 5)$. \\
\begin{conj}
Suppose that the invariant ring $S(\mathfrak{h})^W$ is not Cohen-Macaulay. Then the dual nilpotent cone $\mathcal{N}^*$ is not a normal variety.
\end{conj}
\newpage
\section{Applications to representations of $p$-adic Lie groups}\label{annals chapter}
\subsection{Generalising the Beilinson-Bernstein theorem for $\widehat{\mathcal{D}^{\lambda}_{n,K}}$}
In this section, we apply the results of Section \ref{nilcone chapter} to the constructions given in \cite{AW}. This allows us to weaken the restrictions on the characteristic of the base field given in \cite[Section 6.8]{AW}, thereby providing us with generalisations of their results. \\
Throughout Section \ref{annals chapter}, we suppose $R$ is a fixed complete discrete valuation ring with uniformiser $\pi$, residue field $k$ and field of fractions $K$. Assume throughout this section that $K$ has characteristic 0 and $k$ is algebraically closed. \\
We recall some of the arguments from \cite[Section 4]{AW}, to define the sheaf of enhanced vector fields $\widetilde{\mathcal{T}}$ on a smooth scheme $X$, and the relative enveloping algebra $\widetilde{\mathcal{D}}$ of an $\textbf{H}$-torsor $\xi: \widetilde{X} \to X$. \\
Let $X$ be a smooth separated $R$-scheme that is locally of finite type. Let $\textbf{H}$ be a flat affine algebraic group of finite type defined over $R$, and let $\widetilde{X}$ be a scheme equipped with an $\textbf{H}$-action. \\
\begin{defn}\label{torsor def}
A morphism $\xi: \widetilde{X} \to X$ is an $\textbf{H}-\emph{torsor}$ if: \\
(i) $\xi$ is faithfully flat and locally of finite type, \\
(ii) $\xi$ is invariant under the action of $\textbf{H}$, that is, $\xi(h \cdot x) = \xi(x)$, \\
(iii) the map $\widetilde{X} \times \textbf{H} \to \widetilde{X} \times_X \widetilde{X}$, $(x, h) \mapsto (x, hx)$, is an isomorphism. \\
An open subscheme $U$ of $X$ $\emph{trivialises the torsor}$ $\xi$ if there is an $\textbf{H}$-invariant isomorphism:
\begin{gather*}
U \times \textbf{H} \to \xi^{-1}(U)
\end{gather*}
where $\textbf{H}$ acts on $U \times \textbf{H}$ by left translation on the second factor.
\end{defn} \vspace{4.4 mm}
\begin{defn}\label{locally trivial defn}
Let $\mathcal{S}_X$ denote the set of open subschemes $U$ of $X$ such that: \\
(i) $U$ is affine, \\
(ii) $U$ trivialises $\xi$, \\
(iii) $\mathcal{O}(U)$ is a finitely generated $R$-algebra. \\
$\xi$ is $\emph{locally trivial}$ for the Zariski topology if $X$ can be covered by open sets in $\mathcal{S}_X$.
\end{defn} \vspace{4.4 mm}
\begin{lemma}\label{locally trivial lemma}
If $\xi$ is locally trivial, then $\mathcal{S}_X$ is a base for $X$.
\end{lemma}
\begin{proof}
Since $X$ is separated, $\mathcal{S}_X$ is stable under intersections. If $U \in \mathcal{S}_X$ and $W$ is an open affine subscheme of $U$, then $W \in \mathcal{S}_X$. Hence $\mathcal{S}_X$ is a base for $X$.
\end{proof}
The action of $\textbf{H}$ on $\widetilde{X}$ induces a rational action of $\textbf{H}$ on $\mathcal{O}(V)$ for any $\textbf{H}$-stable open subscheme $V \subseteq \widetilde{X}$, and therefore induces an action of $\textbf{H}$ on $\mathcal{T}_{\widetilde{X}}$ via:
\begin{gather*}
(h \cdot \partial)(f) = h \cdot \partial(h^{-1} \cdot f)
\end{gather*}
for $\partial \in \mathcal{T}_{\widetilde{X}}, f \in \mathcal{O}(\widetilde{X})$ and $h \in \textbf{H}$. The $\emph{sheaf of enhanced vector fields}$ on $X$ is:
\begin{gather*}
\widetilde{\mathcal{T}} := (\xi_*\mathcal{T}_{\widetilde{X}})^{\textbf{H}}.
\end{gather*}
Differentiating the $\textbf{H}$-action on $\widetilde{X}$ gives an $R$-linear Lie algebra homomorphism:
\begin{gather*}
j: \mathfrak{h} \to \mathcal{T}_{\widetilde{X}}
\end{gather*}
where $\mathfrak{h}$ is the Lie algebra of $\textbf{H}$. \\
\begin{defn}\label{enhanced cotangent bundle}
Let $\xi: \widetilde{X} \to X$ be an $\textbf{H}$-torsor. Then $\xi_*\mathcal{D}_{\widetilde{X}}$ is a sheaf of $R$-algebras with an $\textbf{H}$-action. The $\emph{relative enveloping algebra}$ of the torsor is the sheaf of $\textbf{H}$-invariants of $\xi_*\mathcal{D}_{\widetilde{X}}$:
\begin{gather*}
\widetilde{\mathcal{D}} := (\xi_*\mathcal{D}_{\widetilde{X}})^{\textbf{H}}.
\end{gather*}
\end{defn}
This sheaf has a natural filtration:
\begin{gather*}
F_m\widetilde{\mathcal{D}} := (\xi_*F_m\mathcal{D}_{\widetilde{X}})^{\textbf{H}}
\end{gather*}
induced by the filtration on $\mathcal{D}_{\widetilde{X}}$ by order of differential operator. \\
Let $\textbf{B}$ be a Borel subgroup of $\textbf{G}$. Let $\textbf{N}$ be the unipotent radical of $\textbf{B}$, and $\textbf{H} := \textbf{B}/\textbf{N}$ the abstract Cartan group. Let $\widetilde{\mathcal{B}}$ denote the homogeneous space $\textbf{G}/\textbf{N}$. There is an $\textbf{H}$-action on $\widetilde{\mathcal{B}}$ defined by:
\begin{gather*}
b\textbf{N} \cdot g \textbf{N} := gb\textbf{N}
\end{gather*}
which is well-defined since $[\textbf{B}, \textbf{B}]$ is contained in $\textbf{N}$. $\mathcal{B} := \textbf{G}/\textbf{B}$ is the $\emph{flag variety}$ of $\textbf{G}$. $\widetilde{\mathcal{B}}$ is the $\emph{basic affine space}$ of $\textbf{G}$. \\
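For completeness, the verification is the elementary check (spelled out for the reader's convenience): replacing the representatives $b$ and $g$ by $bn^{\prime}$ and $gn$ with $n, n^{\prime} \in \textbf{N}$ changes $gb\textbf{N}$ to $gnbn^{\prime}\textbf{N} = gb(b^{-1}nb)n^{\prime}\textbf{N} = gb\textbf{N}$, since $\textbf{N}$ is normal in $\textbf{B}$; and the formula defines an action of $\textbf{H}$ because $(b_1\textbf{N}) \cdot \big( (b_2\textbf{N}) \cdot g\textbf{N} \big) = gb_2b_1\textbf{N}$ agrees with $(b_1b_2\textbf{N}) \cdot g\textbf{N} = gb_1b_2\textbf{N}$, as $b_2^{-1}b_1^{-1}b_2b_1 \in [\textbf{B}, \textbf{B}] \subseteq \textbf{N}$. \\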
Since $\textbf{G}$ is assumed to be split, we can find a Cartan subgroup $\textbf{T}$ of $\textbf{G}$ complementary to $\textbf{N}$ in $\textbf{B}$. This subgroup is naturally isomorphic to $\textbf{H}$, and this isomorphism induces an isomorphism of the corresponding Lie algebras $\mathfrak{t} \to \mathfrak{h}$. \\
We let $\textbf{W}$ denote the Weyl group of $\textbf{G}$, and let $\textbf{W}_k$ denote the Weyl group of $\textbf{G}_k$, the $k$-points of the algebraic group $\textbf{G}$. \\
We may differentiate the natural $\textbf{G}$-action on $\widetilde{\mathcal{B}}$ to obtain an $R$-linear Lie homomorphism:
\begin{gather*}
\varphi: \mathfrak{g} \to \mathcal{T}_{\widetilde{\mathcal{B}}}.
\end{gather*}
Since the $\textbf{G}$-action commutes with the $\textbf{H}$-action on $\widetilde{\mathcal{B}}$, this map descends to an $R$-linear Lie homomorphism $\varphi: \mathfrak{g} \to \widetilde{\mathcal{T}}_{\mathcal{B}}$ and an $\mathcal{O}_{\mathcal{B}}$-linear morphism:
\begin{gather*}
\varphi: \mathcal{O}_{\mathcal{B}} \otimes \mathfrak{g} \to \widetilde{\mathcal{T}}_{\mathcal{B}}
\end{gather*}
of locally free sheaves on $\mathcal{B}$. Dualising, we obtain a morphism of vector bundles over $\mathcal{B}$:
\begin{gather*}
\varphi^*: \widetilde{T^*\mathcal{B}} \to \mathcal{B} \times \mathfrak{g}^*
\end{gather*}
from the enhanced cotangent bundle to the trivial vector bundle of rank dim $\mathfrak{g}$. \\
\begin{defn}\label{enhanced moment map}
The $\emph{enhanced moment map}$ is the composition of $\varphi^*$ with the projection onto the second coordinate:
\begin{gather*}
\beta: \widetilde{T^*\mathcal{B}} \to \mathfrak{g}^*.
\end{gather*}
\end{defn}
We may apply the deformation functor (\cite[Section 3.5]{AW}) to the map $j: U(\mathfrak{h}) \to \widetilde{\mathcal{D}}$, defined above Definition \ref{enhanced cotangent bundle}, to obtain a central embedding of the constant sheaf $U(\mathfrak{h})_n$ into $\widetilde{\mathcal{D}}_n$. This gives $\widetilde{\mathcal{D}}_n$ the structure of a $U(\mathfrak{h})_n$-module. \\
Let $\lambda \in \text{Hom}_R(\pi^n\mathfrak{h}, R)$ be a linear functional. This extends to an $R$-algebra homomorphism $U(\mathfrak{h})_n \to R$, which gives $R$ the structure of a $U(\mathfrak{h})_n$-module, denoted $R_{\lambda}$. \\
\begin{defn}
The $\emph{sheaf of deformed twisted differential operators}$ $\mathcal{D}^{\lambda}_n$ on $\mathcal{B}$ is the sheaf:
\begin{gather*}
\mathcal{D}^{\lambda}_n := \widetilde{\mathcal{D}_n} \otimes_{U(\mathfrak{h})_n} R_{\lambda}
\end{gather*}
\end{defn}
By \cite[Lemma 6.4(b)]{AW}, this is a sheaf of deformable $R$-algebras. \\
\begin{defn}
The $\pi$-$\emph{adic completion}$ of $\mathcal{D}^{\lambda}_n$ is $\widehat{\mathcal{D}^{\lambda}_n} := \varprojlim \mathcal{D}^{\lambda}_n /\pi^a\mathcal{D}^{\lambda}_n$. Furthermore, set $\widehat{\mathcal{D}^{\lambda}_{n,K}} := \widehat{\mathcal{D}^{\lambda}_n} \otimes_R K$.
\end{defn} \vspace{4.4 mm}
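For orientation, the basic commutative example (not the sheaf setting above, but the same completion procedure): for the polynomial algebra $R[x]$ one has $\varprojlim R[x]/\pi^a R[x] = R\langle x \rangle$, the ring of power series whose coefficients tend to $0$ $\pi$-adically, and $R\langle x \rangle \otimes_R K = K\langle x \rangle$ is the Tate algebra in one variable. \\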
The adjoint action of $\textbf{G}$ on $\mathfrak{g}$ extends to an action on $U(\mathfrak{g})$ by ring automorphisms, which is filtration-preserving and so descends to an action on $\text{gr } U(\mathfrak{g}) \cong S(\mathfrak{g})$. Let:
\begin{gather*}
\psi: S(\mathfrak{g})^{\textbf{G}} \to S(\mathfrak{t})
\end{gather*}
denote the composition of the inclusion $S(\mathfrak{g})^{\textbf{G}} \to S(\mathfrak{g})$ with the projection $S(\mathfrak{g}) \to S(\mathfrak{t})$. By \cite[Theorem 7.3.7]{Di}, the image of $\psi$ is contained in $S(\mathfrak{t})^{\textbf{W}}$, and $\psi$ is injective. \\
Since taking $\textbf{G}$-invariants is left exact, we have an inclusion $\text{gr }(U(\mathfrak{g})^{\textbf{G}}) \to S(\mathfrak{g})^{\textbf{G}}$. Our next proposition gives a description of the associated graded ring of $U(\mathfrak{g})^{\textbf{G}}$. \\
\begin{prop}\label{6.9}
The rows of the diagram:
\begin{figure}[H]
\centering
\begin{tikzcd}
0 \arrow{r} & \text{gr}(U(\mathfrak{g})^{\textbf{G}}) \arrow{r}{\pi} \arrow{d}{\iota} & \text{gr}(U(\mathfrak{g})^{\textbf{G}}) \arrow{r} \arrow {d}{\iota} & \text{gr}(U(\mathfrak{g}_k)^{\textbf{G}_k}) \arrow{d}{\iota_k} \arrow{r} & 0 \\
0 \arrow{r} & S(\mathfrak{g})^{\textbf{G}} \arrow{r}{\pi} \arrow{d}{\psi} & S(\mathfrak{g})^{\textbf{G}} \arrow{r} \arrow{d}{\psi} & S(\mathfrak{g}_k)^{\textbf{G}_k} \arrow{d}{\psi_k} \arrow{r} & 0 \\
0 \arrow{r} & S(\mathfrak{t})^{\textbf{W}} \arrow{r}{\pi} & S(\mathfrak{t})^{\textbf{W}} \arrow{r} & S(\mathfrak{t}_k)^{\textbf{W}_k} \arrow{r} & 0 \\
\end{tikzcd}
\end{figure}
are exact, and each vertical map is an isomorphism.
\end{prop}
\begin{proof}
View the diagram as a sequence of complexes $C^{\bullet} \to D^{\bullet} \to E^{\bullet}$. Since $\pi$ generates the maximal ideal $\mathfrak{m}$ of $R$ by definition, and $R/\mathfrak{m} = k$, it is clear that each complex is exact on the left and in the middle. The exactness of $E^{\bullet}$ follows from the fact that $S(\mathfrak{t}_k)^{\textbf{W}_k}$ is a polynomial ring by Theorem \ref{polynomial theorem}: since $n > 2$, we may fix homogeneous generators $s_1, \cdots, s_l$ and lift them to homogeneous generators $S_1, \cdots, S_l$ of the ring $S(\mathfrak{t})^{\textbf{W}}$ with $s_i = S_i \pmod{\mathfrak{m}}$, by the proof of \cite[Proposition 5.1]{KM}. Hence the map $S(\mathfrak{t})^{\textbf{W}} \to S(\mathfrak{t}_k)^{\textbf{W}_k}$ is surjective, and the complex $E^{\bullet}$ is exact. \\
By \cite[Theorem 7.3.7]{Di}, $\psi$ is injective, and since $p$ is nonspecial in the sense of Definition \ref{nonspecial defn}, $\psi_k$ is an isomorphism by Theorem \ref{KW}. Thus the composite map of complexes $\psi^{\bullet} \circ \iota^{\bullet}$ is injective. Set $F^{\bullet} := \text{coker}(\psi^{\bullet} \circ \iota^{\bullet})$: by definition, the sequence of complexes $0 \to C^{\bullet} \to E^{\bullet} \to F^{\bullet} \to 0$ is exact. \\
Since $C^{\bullet}$ is exact in the left and in the middle, $H^0(C^{\bullet}) = H^1(C^{\bullet}) = 0$. As $E^{\bullet}$ is exact, taking the long exact sequence of cohomology shows that $H^0(F^{\bullet}) = H^2(F^{\bullet}) = 0$ and yields an isomorphism $H^1(F^{\bullet}) \cong H^2(C^{\bullet})$. \\
Since $K$ is a field of characteristic zero, the map $\psi_K \circ \iota_K: \text{gr}(U(\mathfrak{g}_K)^{\textbf{G}_K}) \to S(\mathfrak{t}_K)^{\textbf{W}_K}$ is an isomorphism by \cite[Theorem 7.3.7]{Di}. Hence $F^0 = F^1 = \text{coker}(\psi \circ \iota)$ is $\pi$-torsion. Now $H^0(F^{\bullet}) = 0$, and so we have an exact sequence $0 \to F^0 \to F^1$. So $F^0 = F^1 = 0$, and hence $H^1(F^{\bullet}) = H^2(C^{\bullet}) = 0$. It follows that the top row $C^{\bullet}$ is exact. \\
Hence $\psi^{\bullet} \circ \iota^{\bullet}: C^{\bullet} \to E^{\bullet}$ is an isomorphism in all degrees except possibly 2, and so is an isomorphism via the Five Lemma. The result follows from the fact that $\psi^{\bullet}$ and $\iota^{\bullet}$ are both injections.
\end{proof}
It follows that, since $\psi \circ \iota$ is a graded isomorphism and $p$ is nonspecial, $\text{gr}(U(\mathfrak{g})^\textbf{G})$ is isomorphic to a commutative polynomial algebra over $R$ in $l$ variables by Theorem \ref{polynomial theorem}. The commutative polynomial algebra $R[x_1, \cdots, x_l]$ is a free $R$-module and hence flat, and so $U(\mathfrak{g})^\textbf{G}$ is a deformable $R$-algebra by \cite[Definition 3.5]{AW}. Furthermore, $\text{gr}(U(\mathfrak{g})^\textbf{G}_n)$ is also a commutative polynomial algebra over $R$ in $l$ variables, so the $\pi$-adic completion $\widehat{U(\mathfrak{g})^\textbf{G}_{n,K}}$ is a commutative Tate algebra. \\
By \cite[Proposition 4.10]{AW}, we have a commutative square consisting of deformable $R$-algebras:
\begin{figure}[H]
\centering
\begin{tikzcd}
(U(\mathfrak{g})^{\textbf{G}})_n \arrow{r}{\phi_n} \arrow{d}{i_n} & U(\mathfrak{t})_n \arrow{d}{(j \circ i)_n} \\
U(\mathfrak{g})_n \arrow{r}{U(\phi)_n} & \widetilde{\mathcal{D}_n},
\end{tikzcd}
\end{figure}
We set:
\begin{gather*}
\mathcal{U}^{\lambda}_n := U(\mathfrak{g})_n \otimes_{(U(\mathfrak{g})^{\textbf{G}})_n} R_{\lambda}, \\
\widehat{\mathcal{U}^{\lambda}_n} := \varprojlim \frac{\mathcal{U}^{\lambda}_n}{\pi^a \mathcal{U}^{\lambda}_n}, \\
\widehat{\mathcal{U}^{\lambda}_{n,K}} := \widehat{\mathcal{U}^{\lambda}_n} \otimes_R K.
\end{gather*}
By commutativity of the diagram, the map:
\begin{gather*}
U(\phi)_n \otimes (j \circ i)_n: U(\mathfrak{g})_n \otimes U(\mathfrak{t})_n \to \widetilde{\mathcal{D}_n}
\end{gather*}
factors through $(U(\mathfrak{g})^\textbf{G})_n$, and we obtain the algebra homomorphisms:
\begin{gather*}
\phi^{\lambda}_n: \mathcal{U}^{\lambda}_n \to \mathcal{D}^{\lambda}_n, \\
\widehat{\phi^{\lambda}_n}: \widehat{\mathcal{U}^{\lambda}_n} \to \widehat{\mathcal{D}^{\lambda}_n}, \\
\widehat{\phi^{\lambda}_{n,K}}: \widehat{\mathcal{U}^{\lambda}_{n,K}} \to \widehat{\mathcal{D}^{\lambda}_{n,K}}.
\end{gather*}
\begin{thm}\label{6.10}
(a) $\widehat{\mathcal{U}^{\lambda}_{n,K}} \cong \widehat{U(\mathfrak{g})_{n,K}} \otimes_{\widehat{U(\mathfrak{g})^\textbf{G}_{n,K}}} K_{\lambda}$ is an almost commutative affinoid $K$-algebra. \\
(b) The map $\widehat{\phi^{\lambda}_{n,K}}: \widehat{\mathcal{U}^{\lambda}_{n,K}} \to \Gamma(\mathcal{B}, \widehat{\mathcal{D}^{\lambda}_{n,K}})$ is an isomorphism of complete doubly filtered $K$-algebras. \\
(c) There is an isomorphism $S(\mathfrak{g}_k) \otimes_{S(\mathfrak{g}_k)^{\textbf{G}_k}} k \cong \text{Gr }(\widehat{\mathcal{U}^{\lambda}_{n,K}})$.
\end{thm}
\begin{proof}
(a): This is identical to the proof given in \cite[Theorem 6.10(a)]{AW}. \\
(b): Let $ \lbrace U_1, \cdots, U_m \rbrace$ be an open cover of $\mathcal{B}$ by open affines that trivialise the torsor $\xi$, which exists by \cite[Lemma 4.7(c)]{AW}. The special fibre $\mathcal{B}_k$ is covered by the special fibres $U_{i,k}$. It suffices to show that the complex:
\begin{gather*}
C^{\bullet}: 0 \to \widehat{\mathcal{U}^{\lambda}_{n,K}} \to \bigoplus_{i=1}^m \widehat{\mathcal{D}^{\lambda}_{n,K}}(U_i) \to \bigoplus_{i<j} \widehat{\mathcal{D}^{\lambda}_{n,K}}(U_i \cap U_j)
\end{gather*}
is exact. \\
Clearly, $C^{\bullet}$ is a complex in the category of complete doubly-filtered $K$-algebras, and so it suffices to show that the associated graded complex $\text{Gr}(C^{\bullet})$ is exact. By \cite[Corollary 3.7]{AW}, there is a commutative diagram with exact rows:
\begin{figure}[ht!]
\centering
\begin{tikzcd}
0 \arrow{r} & \text{gr}(U(\mathfrak{g})^{\textbf{G}}) \arrow{r}{\pi} \arrow{d} & \text{gr}(U(\mathfrak{g})^{\textbf{G}}) \arrow{r} \arrow {d} & \text{Gr}(\widehat{U(\mathfrak{g})^{\textbf{G}}_K}) \arrow{d} \arrow{r} & 0 \\
0 \arrow{r} & \text{gr}(U(\mathfrak{g})) \arrow{r}{\pi} & \text{gr}(U(\mathfrak{g})) \arrow{r} & \text{Gr}(\widehat{U(\mathfrak{g})_{n,K}}) \arrow{r} & 0. \\
\end{tikzcd}
\end{figure}
Via the identification $\text{gr}(U(\mathfrak{g})) = S(\mathfrak{g})$, Proposition \ref{6.9} induces a commutative square:
\begin{figure}[ht!]
\centering
\begin{tikzcd}
\text{Gr}(\widehat{U(\mathfrak{g})^{\textbf{G}}_{n,K}}) \arrow{r} \arrow{d} & S(\mathfrak{g}_k)^{\textbf{G}_k} \arrow{d} \\
\text{Gr}(\widehat{U(\mathfrak{g})_{n,K}}) \arrow{r} & S(\mathfrak{g}_k)
\end{tikzcd}
\end{figure}
where the horizontal maps are isomorphisms and the vertical maps are inclusions. Since $\text{Gr}(K_{\lambda})$ is the trivial $\text{Gr}(\widehat{U(\mathfrak{g})^{\textbf{G}}_{n,K}})$-module $k$, we have a natural surjection:
\begin{gather*}
S(\mathfrak{g}_k) \otimes_{S(\mathfrak{g}_k)^{{\textbf{G}}_k}} k \cong \text{Gr}(\widehat{U(\mathfrak{g})_{n,K}} \otimes_{\text{Gr}(\widehat{U(\mathfrak{g})^{\textbf{G}}_{n,K}})} \text{Gr}(K_{\lambda})) \to \text{Gr}(\widehat{\mathcal{U}^{\lambda}_{n,K}}).
\end{gather*}
This surjection fits into the commutative diagram:
\begin{figure}[ht!]
\centering
\begin{tikzcd}
0 \arrow{r} & S(\mathfrak{g}_k) \otimes_{S(\mathfrak{g}_k)^{{\textbf{G}}_k}} k \arrow{r} \arrow{d} & \bigoplus_{i=1}^m \mathcal{O}(T^*U_{i,k}) \arrow{r} & \bigoplus_{i<j} \mathcal{O}(T^*(U_{i,k} \cap U_{j,k})) \\
0 \arrow{r} & \text{Gr}(\widehat{\mathcal{U}^{\lambda}_{n,K}}) \arrow{r} & \bigoplus_{i=1}^m \text{Gr}(\widehat{\mathcal{D}^{\lambda}_{n,K}}(U_i)) \arrow{r} \arrow{u} & \bigoplus_{i<j} \text{Gr}(\widehat{\mathcal{D}^{\lambda}_{n,K}}(U_i \cap U_j)). \arrow{u} \\
\end{tikzcd}
\end{figure}
The bottom row is $\text{Gr}(C^{\bullet})$ by definition, and the top row is induced by the moment map $T^*\mathcal{B}_k \to \mathfrak{g}_k^*$. To see this, note that by Lemma \ref{enhanced nilpotent cone cotangent}, we have an identification $\widetilde{\mathcal{N}^*} \to T^*\mathcal{B}$ under our assumptions on $p$, and so exactness of the top row is equivalent to the existence of an isomorphism:
\begin{gather*}
S(\mathfrak{g}) \otimes_{S(\mathfrak{g})^{\textbf{G}}} k \cong \Gamma(\widetilde{\mathcal{N}^*}, \mathcal{O}_{\widetilde{\mathcal{N}^*}}).
\end{gather*}
By Theorem \ref{main theorem 1}, $\mathcal{N}^*$ is a normal variety and, by Theorem \ref{Springer resolution}, the map $\gamma: T^*\mathcal{B} \to \mathcal{N}^*$ is a resolution of singularities. It follows, by Corollary \ref{global sections isomorphism}, that there is an isomorphism of global sections:
\begin{gather*}
\gamma^*: \Gamma(\mathcal{N}^*, \mathcal{O}_{\mathcal{N}^*}) \to \Gamma(T^*\mathcal{B}, \mathcal{O}_{T^*\mathcal{B}}).
\end{gather*}
Recall from the proof of Theorem \ref{main theorem 1} that $\mathcal{O}(\mathcal{N}^*) = S(\mathfrak{g}) \otimes_{S(\mathfrak{g})^{\textbf{G}}} k$. Putting these isomorphisms together, we see that $S(\mathfrak{g}) \otimes_{S(\mathfrak{g})^{\textbf{G}}} k \cong \Gamma(\widetilde{\mathcal{N}^*}, \mathcal{O}_{\widetilde{\mathcal{N}^*}})$. \\
Now the second and third vertical arrows are isomorphisms by \cite[Proposition 6.5(a)]{AW}, which shows that $\text{Gr}(C^{\bullet})$ is exact. \\
(c) This is immediate, since one can also show that the first vertical arrow in the above diagram is an isomorphism via the Five Lemma.
\end{proof}
\begin{defn}\label{twisted localisation}
For each $\lambda \in \text{Hom}_R(\pi^n\mathfrak{h}, R)$, we define a functor:
\begin{gather*}
\text{Loc}^{\lambda}: \widehat{U(\mathfrak{g})^{\lambda}_{n,K}}-\text{mod} \to \widehat{\mathcal{D}^{\lambda}_{n,K}}-\text{mod}
\end{gather*}
given by $M \mapsto \widehat{\mathcal{D}^{\lambda}_{n,K}} \otimes_{\widehat{\mathcal{U}^{\lambda}_{n,K}}} M$.
\end{defn}
\subsection{Modules over completed enveloping algebras}
The adjoint action of $\textbf{G}$ on $\mathfrak{g}$ induces an action of $\textbf{G}$ on $U(\mathfrak{g})$ by algebra automorphisms. Composing the inclusion $U(\mathfrak{g})^{\textbf{G}} \to U(\mathfrak{g})$ with the projection $U(\mathfrak{g}) \to U(\mathfrak{t})$ defined by the direct sum decomposition $\mathfrak{g} = \mathfrak{n} \oplus \mathfrak{t} \oplus \mathfrak{n}^+$ yields the $\emph{Harish-Chandra}$ $\emph{homomorphism}$:
\begin{gather*}
\phi: U(\mathfrak{g})^{\textbf{G}} \to U(\mathfrak{t})
\end{gather*}
This is a morphism of deformable $R$-algebras, so by applying the deformation and $\pi$-adic completion functors, one obtains the $\emph{deformed Harish}$-$\emph{Chandra homomorphism}$:
\begin{gather*}
\widehat{\phi_{n,K}}: \widehat{U(\mathfrak{g})^\textbf{G}_{n,K}} \to \widehat{U(\mathfrak{t})_{n,K}}
\end{gather*}
which we will denote via the shorthand $\widehat{\phi}: Z \to \widetilde{Z}$. We have an action of the Weyl group $\textbf{W}$ on the dual Cartan subalgebra $\mathfrak{t}^*_K$ via the shifted dot-action:
\begin{gather*}
w \bullet \lambda = w(\lambda + \rho^{\prime}) - \rho^{\prime}
\end{gather*}
where $\rho^{\prime}$ is the half-sum of the $\textbf{T}$-roots on $\mathfrak{n}^+$. Viewing $U(\mathfrak{t})_K$ as an algebra of polynomial functions on $\mathfrak{t}^*_K$, we obtain a dot-action of $\textbf{W}$ on $U(\mathfrak{t})_K$. This action preserves the $R$-subalgebra $U(\mathfrak{t})_n$ of $U(\mathfrak{t})_K$ and so extends naturally to an action of $\textbf{W}$ on $\widetilde{Z}$. \\
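For instance, in rank one (say $\textbf{G} = PGL_2$, for illustration), the non-trivial element $s \in \textbf{W}$ acts on $\mathfrak{t}^*_K$ by $-1$, so the dot-action reads $s \bullet \lambda = s(\lambda + \rho^{\prime}) - \rho^{\prime} = -\lambda - 2\rho^{\prime}$. \\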
\begin{thm}\label{9.3}
Suppose that $\textbf{G} = PGL_n$, $p|n$, and $n > 2$. Then: \\
(a) Set $A := \widehat{U(\mathfrak{g})_{n,K}}$. The algebra $Z$ is contained in the centre of $A$. \\
(b) the map $\widehat{\phi}$ is injective, and its image is the ring of invariants $\widetilde{Z}^{\textbf{W}}$. \\
(c) the algebra $\widetilde{Z}$ is free of rank $|\textbf{W}|$ as a module over $\widetilde{Z}^{\textbf{W}}$. \\
(d) $\widetilde{Z}^{\textbf{W}}$ is isomorphic to a Tate algebra $K \langle S_1, \cdots, S_l \rangle$ as complete doubly filtered $K$-algebras.
\end{thm}
\begin{proof}
(a): The algebra $U(\mathfrak{g})^{\textbf{G}}_K$ is central in $U(\mathfrak{g})_K$ by \cite[Lemma 23.2]{Hu3}. Since $U(\mathfrak{g})_K$ is dense in $A$ and multiplication in $A$ is continuous, $U(\mathfrak{g})^{\textbf{G}}_K$ is also contained in the centre of $A$. But $U(\mathfrak{g})^{\textbf{G}}_K$ is dense in $Z$, and so $Z$ is central in $A$. \\
(b): By the Harish-Chandra homomorphism (see \cite[Theorem 7.4.5]{Di}), $\phi$ sends $U(\mathfrak{g})^{\textbf{G}}_K$ onto $U(\mathfrak{t})^{\textbf{W}}_K$, and so $\widehat{\phi}(Z)$ is contained in $\widetilde{Z}^{\textbf{W}}$. This is a complete doubly filtered algebra whose associated graded ring $\text{Gr}(\widetilde{Z}^{\textbf{W}})$ can be identified with $S(\mathfrak{t}_k)^{\textbf{W}_k}$. This induces a morphism of complete doubly filtered $K$-algebras $\alpha: Z \to \widetilde{Z}^{\textbf{W}}$. Its associated graded map $\text{Gr}(\alpha): \text{Gr}(Z) \to \text{Gr}(\widetilde{Z}^{\textbf{W}})$ can be identified with the isomorphism $\psi_k: S(\mathfrak{g}_k)^{\textbf{G}_k} \to S(\mathfrak{t}_k)^{\textbf{W}_k}$ by Proposition \ref{6.9}. Hence $\text{Gr}(\alpha)$ is an isomorphism, and so $\alpha$ is an isomorphism by completeness. \\
(c): By Theorem \ref{polynomial theorem} and Theorem \ref{G2 polynomial}, $S(\mathfrak{t}_k)$ is a free graded $S(\mathfrak{t}_k)^{\textbf{W}_k}$-module of rank $|\textbf{W}|$. Hence, by \cite[Lemma 3.2(a)]{AW}, $\widetilde{Z}$ is finitely generated over $Z$, and in fact is free of rank $|\textbf{W}|$. \\
(d): By Theorem \ref{polynomial theorem} and Theorem \ref{G2 polynomial}, $S(\mathfrak{t}_k)^{\textbf{W}_k}$ is a polynomial algebra in $l$ variables. Fix double lifts $s_1, \cdots, s_l \in U(\mathfrak{t})^{\textbf{W}}$ of these generators, as in the proof of Proposition \ref{6.9}. Define an $R$-algebra homomorphism $R[S_1, \cdots, S_l] \to \widetilde{Z}^{\textbf{W}}$ which sends $S_i$ to $s_i$. This extends to an isomorphism $K \langle S_1, \cdots, S_l \rangle \to \widetilde{Z}^{\textbf{W}}$ of complete doubly filtered $K$-algebras.
\end{proof}
We identify the $k$-points of the scheme $\mathfrak{g}^* := \text{Spec}(\text{Sym}_R \mathfrak{g})$ with the dual of the $k$-vector space $\mathfrak{g}_k$, so $\mathfrak{g}^*(k) = \mathfrak{g}^*_k$. Let $G$ denote the $k$-points of the algebraic group scheme $\textbf{G}$. Then $G$ acts on $\mathfrak{g}_k$ and $\mathfrak{g}^*_k$ via the adjoint and coadjoint actions respectively.
Recall the definition of the enhanced moment map $\beta: \widetilde{T^*\mathcal{B}}(k) \to \mathfrak{g}^*_k$ from Definition \ref{enhanced moment map}. Given $y \in \mathfrak{g}^*_k$, write $G.y$ to denote the $G$-orbit of $y$ under the coadjoint action. We write $\mathcal{N}$ (resp. $\mathcal{N}^*$) to denote the nilpotent cone (resp. dual nilpotent cone) of the $k$-vector spaces $\mathfrak{g}_k$ and $\mathfrak{g}^*_k$. \\
\begin{prop}\label{9.8}
Suppose $p$ is nonspecial for $G$. For any $y \in \mathcal{N}^*$, we have $\text{dim } \beta^{-1}(y) = \text{dim } \mathcal{B} - \frac{1}{2} \text{dim } G.y$.
\end{prop}
\begin{proof}
This is stated for $\mathcal{N}$ as \cite[Theorem 10.11]{JN}. The result follows by applying the $G$-equivariant bijection $\kappa: \mathcal{N} \to \mathcal{N}^*$ from \cite[Section 6.5]{JN}.
\end{proof}
We now let $\mathfrak{g}_{\mathbb{C}}$ denote the complex semisimple Lie algebra with the same root system as $G$, and let $G_{\mathbb{C}}$ be the corresponding adjoint algebraic group. By \cite[Remark 4.3.4]{CM}, there is a unique non-zero nilpotent $G_{\mathbb{C}}$-orbit in $\mathfrak{g}^*_{\mathbb{C}}$, under the coadjoint action, of minimal dimension. Since each coadjoint $G_{\mathbb{C}}$-orbit is a symplectic manifold, it follows that each of these dimensions is an even integer. We set:
\begin{gather*}
r := \frac{1}{2} \text{min } \lbrace \text{dim } G_{\mathbb{C}} \cdot y \mid 0 \neq y \in \mathfrak{g}^*_{\mathbb{C}} \text{ nilpotent} \rbrace.
\end{gather*}
\begin{prop}\label{9.9}
For any non-zero $y \in \mathcal{N}^*$, we have $\frac{1}{2} \text{dim } G \cdot y \geq r$, with no restrictions on $(G,p)$.
\end{prop}
\begin{proof}
We will demonstrate that this inequality holds for all split semisimple algebraic groups $G$ defined over an algebraically closed field $k$ of positive characteristic. When the characteristic $p$ is small, we will proceed via a case-by-case calculation of the maximal dimension of the centraliser $Z_G(y)$ of $y \in \mathcal{N}^*$. \\
By Proposition \ref{9.8}, $\text{dim } \beta^{-1}(y) = \text{dim } \mathcal{B} - \frac{1}{2} \text{dim } G \cdot y$. We may assume $y \in \mathcal{N}$ and $G$ acts on $\mathfrak{g}$ via the adjoint action by \cite[Section 6.5]{JN}. By \cite[Theorem 2]{HS}, we see that:
\begin{gather*}
\text{dim }\beta^{-1}(y) = \frac{1}{2}(\text{dim } Z_G(y) - \text{rk }(G))
\end{gather*}
where $Z_G(y)$ denotes the centraliser of $y$ in $G$. Hence it suffices to demonstrate that the following inequality:
\begin{gather*}
\text{dim } \mathcal{B} - \frac{1}{2}(\text{dim } Z_G(y) - \text{rk }(G)) \geq r
\end{gather*}
holds in all types. We evaluate on a case-by-case basis, aiming to find the maximal dimension of the centraliser. We first note that, using the work of \cite[1.6]{Sm1}, we have the following table:
\begin{center}
\begin{tabular}{ c|c|c }
Type & dim $\mathcal{B}$ & $r$ \\
\hline
$A_n$ & $\frac{1}{2}n(n+1)$ & $n$ \\
$B_n$ & $n^2$ & $2n-2$ \\
$C_n$ & $n^2$ & $n$ \\
$D_n$ & $n^2 - n$ & $2n-3$ \\
$E_6$ & 36 & 11 \\
$E_7$ & 63 & 17 \\
$E_8$ & 120 & 29 \\
$F_4$ & 24 & 8 \\
$G_2$ & 6 & 3
\end{tabular}
\end{center}
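As a consistency check on the table (a standard fact, not needed in the sequel): in type $A_n$ the minimal non-zero nilpotent orbit of $\mathfrak{g}_{\mathbb{C}} = \mathfrak{sl}_{n+1}$ consists of the rank-one square-zero matrices; writing such a matrix as $vw^T$ with $w^Tv = 0$ shows that this orbit has dimension $2n$, giving $r = n$ as in the first row. \\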
By \cite[Theorem 2.33]{LS}, when $p$ is nonspecial, the dimension of the centraliser is independent of the isogeny type of $G$. \\
Since $p$ is always nonspecial for a group of type $A$, it therefore suffices to consider $Z_{\mathfrak{sl}_n}(y)$. Since $p$ is good and $SL_n$ is a simply connected algebraic group, by \cite[Lemma 2.15]{LS}, it suffices to consider the centraliser of a non-identity unipotent element in $SL_n$. Via the identification $GL_n(k) = SL_n(k)Z(GL_n(k))$, it is sufficient to compute $Z_{GL_n(k)}(u)$, for some unipotent matrix $u$. This dimension is bounded above by $n^2$, the dimension of $GL_n(k)$ as an algebraic group, and so we have the expression:
\begin{gather*}
\text{dim } \mathcal{B} - \frac{1}{2}(\text{dim } Z_G(y) - \text{rk }(G)) \geq \frac{1}{2} n(n+1) - \frac{1}{2}(n^2 - n) = n.
\end{gather*}
Hence the inequality is verified in type $A$. \\
For the remaining classical groups, view $y \in \mathcal{N}$ as a nilpotent matrix, which without loss of generality may be taken to be in Jordan normal form. Let $m_1 \geq \cdots \geq m_s$ be the sizes of the Jordan blocks, with $\sum_{i=1}^s m_i = n$, the rank of the group. By \cite[Theorem 4.4]{He}, we have:
\begin{gather*}
\text{dim } Z_G(y) = \sum_{i=1}^s (im_i - \chi_V(m_i))
\end{gather*}
where $\chi_V$ is a function $\chi_V: \mathbb{N} \to \mathbb{N}$. It follows that:
\begin{gather*}
\text{dim } Z_G(y) \leq \sum_{i=1}^s im_i = \sum_{j=1}^n \sum_{i=j}^s m_i.
\end{gather*}
Since $m_1 \geq \cdots \geq m_r$ by construction, the maximum value of this sum is attained when $m_k = 1$ for all $k$. Hence we obtain the inequality $\text{dim } Z_G(y) \leq \frac{1}{2}n(n+1)$. Using this, it is easy to see that the required inequality holds except possibly in the cases $B_2, B_3, D_4$ and $D_5$. \\
For these cases, along with all exceptional cases, we directly verify that the inequality holds using the calculations on dimensions of centralisers in \cite[Chapter 8 and Chapter 22]{LS}. \\
\end{proof}
This allows us to prove our generalisation of \cite[Theorem 9.10]{AW}, a result on the minimal canonical dimension of finitely generated modules over $\pi$-adically completed enveloping algebras. \\
\begin{defn}\label{canonical dimension def}
Let $A$ be a Noetherian ring. $A$ is $\emph{Auslander-Gorenstein}$ if the left and right self-injective dimensions of $A$ are finite and every finitely generated left or right $A$-module $M$ satisfies the following condition: for every $i \geq 0$ and every submodule $N$ of $\text{Ext}^i_A(M,A)$, we have $\text{Ext}^j_A(N,A) = 0$ for all $j < i$. \\
In this case, the $\emph{grade}$ of $M$ is given by:
\begin{gather*}
j_A(M) := \text{inf} \lbrace j \mid \text{Ext}^j_A(M,A) \neq 0 \rbrace
\end{gather*}
and the $\emph{canonical dimension}$ of $M$ is given by:
\begin{gather*}
d_A(M) := \text{inj.dim}_A(A) - j_A(M).
\end{gather*}
\end{defn}
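For orientation (a standard commutative example, not used below): if $A = k[x_1, \cdots, x_d]$ is a polynomial algebra in $d$ variables, then $A$ is Auslander-Gorenstein with $\text{inj.dim}_A(A) = d$, and for a finitely generated module $M$ one has $j_A(M) = d - \text{dim}(\text{Supp}(M))$, so that $d_A(M)$ recovers the dimension of the support of $M$. \\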
By the discussion in \cite[Section 9.1]{AW}, the ring $\widehat{U(\mathfrak{g})_{n,K}}$ is Auslander-Gorenstein and so it makes sense to define the canonical dimension function:
\begin{gather*}
d: \lbrace \text{finitely generated } \widehat{U(\mathfrak{g})_{n,K}}-\text{modules} \rbrace \to \mathbb{N}.
\end{gather*}
\begin{thm}\label{9.10}
Suppose $n > 0$ and let $M$ be a finitely generated $\widehat{U(\mathfrak{g})_{n,K}}$-module with $d(M) \geq 1$. Then $d(M) \geq r$.
\end{thm}
\begin{proof}
By \cite[Proposition 9.4]{AW}, we may assume that $M$ is $Z$-locally finite. We may also assume that $M$ is a $\widehat{\mathcal{U}^{\lambda}_{n,K}}$-module for some $\lambda \in \mathfrak{h}^*_K$, by passing to a finite field extension if necessary and applying \cite[Theorem 9.5]{AW}. \\
By Theorem $\ref{9.3}$(b), $\lambda \circ (i \circ \widehat{\phi}) = (w \bullet \lambda) \circ (i \circ \widehat{\phi})$ for any $w \in \textbf{W}$. Hence we may assume $\lambda$ is $\rho$-dominant by \cite[Lemma 9.6]{AW}. Hence $\text{Gr }(M)$ is a $\text{Gr }(\widehat{\mathcal{U}^{\lambda}_{n,K}}) \cong S(\mathfrak{g}_k) \otimes_{S(\mathfrak{g}_k)^{\textbf{G}_k}} k$-module by Theorem $\ref{6.10}$. If $\mathcal{M} := \text{Loc}^{\lambda}(M)$ is the corresponding coherent $\widehat{\mathcal{D}^{\lambda}_{n,K}}$-module in the sense of Definition \ref{twisted localisation}, then $\beta(\text{Ch}(\mathcal{M})) = \text{Ch}(M)$ via \cite[Corollary 6.12]{AW}. \\
Let $X$ and $Y$ denote the $k$-points of the characteristic varieties $\text{Ch}(\mathcal{M})$ and $\text{Ch}(M)$ respectively. Now $\text{Gr}(M)$ is annihilated by $S^+(\mathfrak{g}_k)^{\textbf{G}_k}$, and so $Y \subseteq \mathcal{N}^*$. We see that the map $\beta: \widetilde{T^*\mathcal{B}} \to \mathfrak{g}^*$ maps $X$ onto $Y$. \\
Let $f: X \to Y$ be the restriction of $\beta$ to $X$. By \cite[Corollary 9.1]{AW}, since $\text{dim } Y = d(M) \geq 1$ we can find a non-zero smooth point $y \in Y$. By surjectivity, we have a smooth point $x \in f^{-1}(y)$. The induced differential $df_x: T_{X,x} \to T_{Y,y}$ on Zariski tangent spaces yields the inequality:
\begin{gather*}
\text{dim } Y + \text{dim } f^{-1}(y) \geq \text{dim } T_{X,x}
\end{gather*}
By \cite[Theorem 7.5]{AW}, $\text{dim } T_{X,x} \geq \text{dim } \mathcal{B}$. Hence:
\begin{gather*}
d(M) = \text{dim } Y \geq \text{dim } \mathcal{B} - \text{dim } \beta^{-1}(y)
\end{gather*}
By Proposition $\ref{9.8}$, the right-hand side equals $\frac{1}{2} \text{dim } G \cdot y$, which is at least $r$ by Proposition $\ref{9.9}$.
\end{proof}
$\textbf{Proof of Theorem C:}$ This follows from Theorem \ref{9.10} and \cite[Section 10]{AW} in the split semisimple case. We may then apply the same argument as in \cite{AJ} to remove the split hypothesis on the Lie algebra.
\bibliographystyle{plain}
\section{Introduction}
\subsection{The gauged linear sigma model}
In 1993, Witten gave a physical derivation of the Landau--Ginzburg
(LG)/Calabi--Yau (CY) correspondence by constructing a family of
theories, known as the \emph{gauged linear sigma model} or GLSM
\cite{Wi93}.
A mathematical realization of the LG model, called the Fan--Jarvis--Ruan--Witten (FJRW) theory, has been established in \cite{FJR13} via topological and analytical methods.
On the algebraic side, an approach using the cosection localization
\cite{KiLi13} to construct the GLSM virtual cycle was discovered in
\cite{ChLi12, CLL15} in the narrow case, and in general in
\cite{CKL18, KL18}.
Along the cosection approach, some hybrid models were studied in
\cite{Cl17}, and a general algebraic theory of GLSM for
(\emph{compact-type sectors} of) GIT targets was put on a firm
mathematical footing by Fan, Jarvis and the third author \cite{FJR18}.
A further algebraic approach for broad sectors using matrix
factorizations has been developed in \cite{PoVa16, CFGKS18P}, while an
analytic approach has been developed in \cite{FJR16}, \cite{TX18}.
As discovered in \cite{ChLi12} and further developed in \cite{KiOh18P,
ChLi18P, CJW19P}, GLSM can be viewed as a deep generalization of the
hyperplane property of Gromov--Witten (GW) theory for arbitrary
genus.
However, compared to GW theory, a major difference as well as a main difficulty of GLSM is the appearance of an extra torus action on the target, called the \emph{R-charge}, which makes the moduli stacks under consideration for defining the GLSM virtual cycles non-proper in general.
This makes the powerful tool of virtual localization \cite{GrPa99}
difficult to apply.
This is the second paper of our project aiming at a logarithmic GLSM
theory that solves the non-properness issue and provides a
localization formula by combining the cosection approach with the
logarithmic maps of Abramovich--Chen--Gross--Siebert \cite{AbCh14,
Ch14, GrSi13}.
This leads to a very powerful technique for computing higher genus
GW/FJRW-invariants of complete intersections in GIT quotients.
Applications include computing higher genus invariants of quintic
$3$-folds \cite{GJR17P, GJR18P}\footnote{A
different approach to higher genus Gromov--Witten invariants of
quintic threefolds has been developed by Chang--Guo--Li--Li--Liu
\cite{CGL18Pb, CGL18Pa, CGLL18P, CLLL15P, CLLL16P}.}, and the cycle
of holomorphic differentials \cite[Conjecture~A.1]{PPZ16P} by
establishing a localization formula of $r$-spin cycles conjectured by
the second author \cite{CJRSZ19P}.
This conjectural localization formula was the original motivation of
this project.
In our first paper \cite{CJRS18P}, we developed a \emph{principalization} of the boundary of the moduli of log maps, which provides a natural framework for extending cosections to the boundary of the logarithmic compactification. The simple but important $r$-spin case was studied in \cite{CJRS18P} via the log compactification, for maximal explicitness.
The goal of the current paper is to further establish a log GLSM
theory in the hybrid model case which allows possibly \emph{non-GIT
quotient} targets.
As our theory naturally carries two different perfect obstruction
theories, we further prove explicit relations among these virtual
cycles.
This will provide a solid foundation for our forthcoming paper
\cite{CJR20P2} where various virtual cycles involved in log GLSM will
be further decomposed using torus localizations and the developments
in \cite{ACGS20P,ACGS17}.
In the case of GIT quotient targets, another aspect of GLSM moduli
spaces is that they depend on a stability parameter and exhibit a
rich wall-crossing phenomenon.
To include general targets, the current paper concerns only the $\infty$-stability that is closely related to stable maps. We leave the study of other stability conditions in the case of GIT quotient targets to future research.
\subsection{$R$-maps}
The following fundamental notion of $R$-maps is the result of our
effort to generalize pre-stable maps to the setting of GLSM with
possibly non-GIT quotient targets.
While the definition makes essential use of stacks, it is what makes
various constructions in this paper transparent.
\begin{definition}\label{def:R-map}
Let $\mathfrak{P} \to \mathbf{BC}^*_\omega$ be a proper, DM-type morphism of log stacks, where $\mathbf{BC}^*_\omega := \mathbf{BG}_m$ is the stack parameterizing line bundles, equipped with the trivial log structure.
A \emph{logarithmic $R$-map} (or, for short, log $R$-map) over a log
scheme $S$ with target $\mathfrak{P} \to \mathbf{BC}^*_\omega$ is a commutative diagram:
\[
\xymatrix{
&& \mathfrak{P} \ar[d] \\
\cC \ar[rru]^{f} \ar[rr]_{\omega^{\log}_{\cC/S}}&& \mathbf{BC}^*_\omega
}
\]
where $\cC \to S$ is a log curve (see \S\ref{sss:log-curves}) and the
bottom arrow is induced by the log cotangent bundle
$\omega^{\log}_{\cC/S}$.
The notation $\mathbf{BC}^*_\omega$ is reserved for parameterizing the line bundle
$\omega^{\log}_{\cC/S}$ of the source curve.
\emph{Pull-backs} of log $R$-maps are defined as usual via pull-backs
of log curves.
For simplicity, we will call $f\colon \cC \to \mathfrak{P}$ a log $R$-map without specifying arrows to $\mathbf{BC}^*_\omega$. Such $f$ is called an \emph{$R$-map} if it factors through the open substack $\mathfrak{P}^{\circ}\subset \mathfrak{P}$ with the trivial log structure.
A pre-stable map $\underline{f}\colon \underline{\cC} \to \underline{\mathfrak{P}}$ over $\underline{S}$ with
compatible arrows to $\mathbf{BC}^*_\omega$ is called an \emph{underlying $R$-map}.
Here $\underline{\mathfrak{P}}$ is the underlying stack obtained by removing the log
structure of $\mathfrak{P}$.
\end{definition}
\begin{remark}
Our notion of $R$-maps originates from the $R$-charge in physics.
In the physical formulation of GLSM \cite{Wi93}, there is a target
space $X$ which is a K\"ahler manifold (usually a GIT quotient of a
vector space) and a superpotential $W \colon X \to \mathbb C$.
To define the A-topological twist, one needs to choose a
$\mathbb C^*$-action on $X$, called the R-charge, such that $W$ has $R$-charge
(weight) $2$.
The weights of the R-charge on the coordinates of $X$ are used to
twist the map or the fields of the theory \cite{Wi93,FJR18,GS09}.
As pointed out by one of the referees, the setting of this article
is more general than GLSM in physics in the sense that $X$ does not
necessarily have an explicit coordinate description.
For this purpose, we formulate the more abstract notion of $R$-maps
as above.
Our notion of $R$-maps agrees with those of \cite{Wi93,FJR18,GS09}
when $X$ is a GIT quotient of a vector space.
The moduli space of $R$-maps should give a mathematical
description of the spaces on which the general A-twist localizes in
physics.
\end{remark}
\begin{example}[quintic threefolds]
Consider the target $\mathfrak{P}^\circ = [\vb_{\mathbb{P}^4}(\cO(-5))/\mathbb C^*_{\omega}]$,
where $\mathbb C^*_{\omega} \cong \mathbb{G}_{m}$ acts on the line bundle $\vb_{\mathbb{P}^4}(\cO(-5))$ by
scaling the fibers with weight one.
The map $\mathfrak{P}^{\circ} \to \mathbf{BC}^*_\omega$ is the canonical map from the quotient
description of $\mathfrak{P}^{\circ}$.
In this case, an $R$-map $f\colon \cC \to \mathfrak{P}^{\circ}$ is equivalent to the data of a map $g\colon \cC \to \mathbb{P}^4$ together with a section (or ``$p$-field'') $p \in H^0(\omega_{\cC}^{\log} \otimes g^* \cO(-5))$.
Therefore, if $\cC$ is unmarked, we recover the moduli space of
stable maps with $p$-fields \cite{ChLi12}, which is the GLSM moduli
space \cite{FJR18} for a quintic hypersurface in $\mathbb{P}^4$. The construction of this paper will provide a compactification of $\mathfrak{P}^{\circ}$ relative to $\mathbf{BC}^*_\omega$, and a compactification of the moduli of $p$-fields.
We refer the reader to Section~\ref{sec:examples} for more examples in a general situation.
\end{example}
Just like in Gromov--Witten theory, various assumptions on $\mathfrak{P}$ are
needed to build a proper moduli space as well as a virtual cycle.
While a theory of stable log $R$-maps for general $\mathfrak{P}$ seems to
require much further development using the full machinery of
logarithmic maps, we choose to restrict ourselves to the so called
\emph{hybrid targets} which already cover a large class of interesting
examples including both FJRW theory and complete intersections in
Gromov--Witten theory.
We leave the general case to future research.
\subsection{$R$-maps to hybrid targets}
\label{ss:target-data}
\subsubsection{The input}\label{sss:input}
A hybrid target is determined by the following data:
\begin{enumerate}
\item A proper Deligne--Mumford stack $\cX$ with a projective coarse moduli scheme $X$.
\item A vector bundle $\mathbf{E}$ over $\cX$ of the form
\begin{equation*}
\mathbf{E} = \bigoplus_{i \in \ZZ_{> 0}} \mathbf{E}_i
\end{equation*}
where $\mathbf{E}_i$ is a vector bundle with the positive grading $i$. Write $d := \gcd\big(i \ | \ \mathbf{E}_i \neq 0 \big)$.
\item A line bundle $\mathbf{L}$ over $\cX$.
\item A positive integer $r$.
\end{enumerate}
For later use, fix an ample line bundle $H$ over $X$, and denote by $\cH$ its pull-back over $\cX$.
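For instance (an instantiation included only for orientation), one expects to recover the quintic example above from, roughly, the choices $\cX = \mathbb{P}^4$, $\mathbf{E} = \mathbf{E}_1 = \cO(5)$, $\mathbf{L} = \cO$, $r = 1$ and twisting choice $a = 1$; then $d = 1$, $\tilde{r} = 1$, and the construction below yields $\mathfrak{P}^{\circ} = \vb(\cO(-5) \otimes \mathcal L_{\omega})$, matching the weight-one $\mathbb C^*_{\omega}$-action on the fibres of $\vb_{\mathbb{P}^4}(\cO(-5))$.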
\subsubsection{The $r$-spin structure} The R-charge leads to the universal $r$-spin structure as follows.
Consider the cartesian diagram
\begin{equation}\label{diag:spin-target}
\xymatrix{
\mathfrak{X} \ar[rrr]^\Sp \ar[d] &&& \mathbf{BG}_m \ar[d]^{\nu_r} \\
\mathbf{BC}^*_\omega\times\cX \ar[rrr]^{\mathcal L_\omega\boxtimes\mathbf{L}^{\vee}} &&& \mathbf{BG}_m
}
\end{equation}
where $\mathcal L_\omega$ is the universal line bundle over $\mathbf{BC}^*_\omega$,
$\nu_r$ is the $r$th power map, the bottom arrow is defined by $\mathcal L_\omega\boxtimes\mathbf{L}^{\vee}$, and the top arrow is
defined by the universal $r$-th root of
$\mathcal L_\omega\boxtimes\mathbf{L}^{\vee}$, denoted by $\Sp$.
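Unwinding the fibre product (the standard description of such root constructions): a $T$-point of $\mathfrak{X}$ consists of a map $t\colon T \to \cX$, a line bundle $A$ on $T$ (the $T$-point of $\mathbf{BC}^*_\omega$), a line bundle $S$ on $T$, and an isomorphism $S^{\otimes r} \cong A \otimes t^*\mathbf{L}^{\vee}$; the universal such $S$ is $\Sp$.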
\subsubsection{The targets}
Fix a \emph{twisting choice} $a \in \frac{1}{d}\cdot\ZZ_{>0}$, and
set $\tilde{r} = a \cdot r$.
We form the weighted projective stack bundle over $\mathfrak{X}$:
\begin{equation}\label{equ:universal-proj}
\underline{\mathfrak{P}} := \underline{\mathbb{P}}^{\mathbf w}\left(\bigoplus_{i > 0}(\mathbf{E}^{\vee}_{i,\mathfrak{X}}\otimes\mathcal L_{\mathfrak{X}}^{\otimes i})\oplus \cO_{\mathfrak{X}} \right),
\end{equation}
where $\mathbf w$ is the collection of the weights of the $\mathbb{G}_m$-action such that the weight on the $i$-th factor is the positive integer
$a\cdot i$, while the weight of the last factor $\cO$ is 1.
Here for any vector bundle $V = \oplus_{i=1}^{m} V_i$ with a $\mathbb{G}_m$-action of weight $\mathbf w$, we use the notation
\begin{equation}\label{equ:def-proj-bundle}
\mathbb{P}^\mathbf w(V) = \left[\Big(\vb(V)\setminus \mathbf{0}_V \Big) \Big/ \mathbb{G}_m \right],
\end{equation}
where $\vb(V)$ is the total space of $V$, and $\mathbf{0}_V$ is the zero section of $V$. Intuitively, $\underline\mathfrak{P}$ compactifies the GLSM given by
\[
\mathfrak{P}^{\circ} := \vb\left(\bigoplus_{i > 0}(\mathbf{E}^{\vee}_{i,\mathfrak{X}}\otimes\mathcal L_{\mathfrak{X}}^{\otimes i})\right).
\]
The boundary $\infty_{\mathfrak{P}} = \underline{\mathfrak{P}} \setminus \mathfrak{P}^{\circ}$ is the Cartier divisor
defined by the vanishing of the coordinate corresponding to
$\cO_{\mathfrak{X}}$.
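For instance (included only as an illustration), when $\mathbf{E} = \mathbf{E}_1$ has rank one and the twisting choice is $a = 1$, all weights in $\mathbf w$ equal one, so $\underline{\mathfrak{P}}$ is the ordinary $\mathbb{P}^1$-bundle $\mathbb{P}\big(\mathbf{E}^{\vee}_{1,\mathfrak{X}}\otimes\mathcal L_{\mathfrak{X}} \oplus \cO_{\mathfrak{X}}\big)$ over $\mathfrak{X}$, and $\infty_{\mathfrak{P}}$ is its section at infinity; this is the situation of the quintic example of the introduction.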
We make $\underline\mathfrak{P}$ into a log stack $\mathfrak{P}$ by equipping it with the log structure corresponding to the Cartier divisor $\infty_\mathfrak{P}$. Denote by $\mathbf{0}_\mathfrak{P}$ the zero section of the vector bundle $\mathfrak{P}^{\circ}$.
We arrive at the following commutative diagram
\begin{equation}\label{equ:hyb-target}
\xymatrix{
\mathfrak{P} \ar[r]^{\fp} \ar[rd]_{\ft}& \mathfrak{X} \ar[r]^{\zeta} \ar[d] & \mathbf{BC}^*_\omega \\
&\cX&
}
\end{equation}
where $\zeta$ is the composition $\mathfrak{X} \to \mathbf{BC}^*_\omega \times \cX \to \mathbf{BC}^*_\omega$, the second arrow being the projection to $\mathbf{BC}^*_\omega$. By construction, $\zeta\circ\fp$ is proper and of Deligne--Mumford type.
The general notion of log $R$-maps formulated using $\mathfrak{P}$ can be
described more concretely in terms of maps with log fields, see
\S~\ref{ss:log-fields}.
\subsubsection{The stability}\label{sss:stability}
A log $R$-map $f\colon \cC \to \mathfrak{P}$ over $S$ is \emph{stable} if $f$ is
representable, and if for a sufficiently small $\delta_0 \in (0,1)$ there exists $k_0 > 1$ such that for any pair $(k,\delta)$ satisfying $k > k_0$ and $\delta_0 > \delta > 0$, the following holds
\begin{equation}\label{equ:hyb-stability}
(\omega_{\cC/S}^{\log})^{1 + \delta} \otimes (\ft \circ f)^* \cH^{\otimes k} \otimes f^* \cO(\tilde{r}\infty_{\mathfrak{P}}) > 0.
\end{equation}
The notation $>$ in \eqref{equ:hyb-stability} means that the left-hand side has strictly positive degree when restricted to each irreducible component of the source curve.
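Concretely, writing $\deg_{\mathcal{Z}}$ for the degree of a $\QQ$-line bundle restricted to an irreducible component $\mathcal{Z} \subset \cC$, \eqref{equ:hyb-stability} requires
\[
(1+\delta)\deg_{\mathcal{Z}} \omega_{\cC/S}^{\log} + k \deg_{\mathcal{Z}} (\ft \circ f)^* \cH + \tilde{r} \deg_{\mathcal{Z}} f^* \cO(\infty_{\mathfrak{P}}) > 0
\]
for every $\mathcal{Z}$. Informally, the large parameter $k$ rules out components contracted by $\ft \circ f$ that are unstable in the usual sense, while the small parameter $\delta$ controls components mapped into $\infty_{\mathfrak{P}}$, see Proposition \ref{prop:curve-in-infinity}.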
\begin{remark}
It has been shown that the stack of pre-stable log maps is proper
over the stack of usual pre-stable maps \cite{AbCh14, Ch14, GrSi13}.
Even given this, establishing a proper moduli stack remains a rather
difficult and technical step in developing our theory.
One piece of evidence is that the moduli of underlying $R$-maps fails to be
universally closed \cite[Section~4.4.6]{CJRS18P} even in the most basic
cases.
The log structure of $\mathfrak{P}$ plays an important role in the properness
as evidenced by the subtle stability \eqref{equ:hyb-stability} which
was found after many failed attempts.
\end{remark}
\begin{remark}
In the case that $\mathbf{E}$ has rank one, the stability \eqref{equ:hyb-stability}
is equivalent to a formulation similar to
\cite[Definition~4.9]{CJRS18P} using $\mathbf{0}_{\mathfrak{P}}$, see
Remark~\ref{rem:r-spin-stability}.
However, the latter does not generalize to the higher rank case,
especially when $\mathbf{E}$ does not split.
Consequently, we have to look for a stability condition of very
different form, and a very different strategy for the proof of
properness compared to the intuitive proof in \cite{CJRS18P}, see
Section~\ref{sec:properties} for details.
\end{remark}
\subsubsection{The moduli stack}
Denote by $\SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ the category of stable log $R$-maps fibered over the category of log schemes with fixed discrete data $(g, \vec{\varsigma}, \beta)$ such that
\begin{enumerate}
\item $g$ is the genus of the source curve.
\item The composition of the log $R$-map with $\ft$ has curve class $\beta \in H_2(\cX)$.
\item $\vec{\varsigma} = \{(\gamma_i, c_i)\}_{i=1}^n$ is a collection of pairs
such that $c_i$ is the contact order of the $i$-th marking with
$\infty_{\mathfrak{P}}$, and $\gamma_i$ is a component of the inertia stack
recording the local monodromy at the $i$-th (orbifold) marking
(Definition~\ref{def:hyb-sector}).
\end{enumerate}
The first main result of the current article is the compactification:
\begin{theorem}[Theorem~\ref{thm:representability}]
\label{thm:intro-representability}
The category $\SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ is
represented by a proper logarithmic Deligne--Mumford stack.
\end{theorem}
\begin{remark}
Different choices of data in Section \ref{sss:input} may lead to the
same $\mathfrak{P}$, hence the same $\SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$.
The ambiguity in our set-up is analogous to the non-unique choice
of the R-charge of a general GLSM \cite[Section 3.2.3]{FJR18}.
\end{remark}
\subsection{Virtual cycles}
Another goal of this paper is to construct various virtual cycles of (log) $R$-maps. For this purpose, we now impose the condition that $\cX$ is smooth.
\subsubsection{The canonical virtual cycles}
Olsson's logarithmic cotangent complex \cite{LogCot} provides a \emph{canonical perfect obstruction theory} for $\SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$, see Section \ref{ss:canonical-obs}. If $c_i = 0$ for all $i$, we refer to it as a \emph{holomorphic theory}; otherwise, we call it a \emph{meromorphic theory}. For our purposes, we are particularly interested in the holomorphic theory and the closed substack
\[
\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta) \subset \SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)
\]
where log $R$-maps factor through $\mathbf{0}_{\mathfrak{P}}$ along all marked
points.
We call $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ the \emph{stack of log
R-maps with compact-type evaluations}.
In this case, $\vec{\varsigma}$ is simply a collection of connected components
of the inertia stack $\ocI\mathbf{0}_{\mathfrak{P}_{\mathbf{k}}} $ of $\mathbf{0}_{\mathfrak{P}_{\mathbf{k}}} := \mathbf{0}_{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}\spec \mathbf{k}$ as all contact orders are zero.
The canonical perfect obstruction theory of $\SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ induces a canonical perfect obstruction theory of $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$, see \eqref{equ:obs-compact-evaluation}, hence defines the \emph{canonical virtual cycle} $[\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)]^{\mathrm{vir}}$.
\subsubsection{Superpotentials and cosection localized virtual cycles}
A \emph{superpotential} is a morphism of stacks $W \colon \mathfrak{P}^{\circ} \to \mathcal L_\omega$ over $\mathbf{BC}^*_\omega$. Its {\em critical locus} $\crit(W)$ is the closed substack of $\mathfrak{P}^{\circ}$ where $\diff W\colon T_{\mathfrak{P}^{\circ}/\mathbf{BC}^*_\omega} \to W^*T_{\mathcal L_\omega/\mathbf{BC}^*_\omega}$ degenerates. We will consider the case where $\crit(W)$ is proper over $\mathbf{BC}^*_\omega$.
This $W$ induces a canonical Kiem--Li cosection of the canonical obstruction of the open sub-stack $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta) \subset \SR_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$. This leads to a \emph{cosection localized virtual cycle} $[\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)]_{\sigma}$ which represents $[\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)]^{\mathrm{vir}}$, and is supported on the proper substack
\begin{equation}\label{equ:red-cycle-support}
\IR_W \subset \SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)
\end{equation} parameterizing $R$-maps to the critical locus of $W$, see Section \ref{sss:cosection-localized-class}.
The virtual cycle $[\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)]_{\sigma}$ is the \emph{GLSM virtual cycle} that we next recover as a virtual cycle over a {\em proper} moduli stack.
\subsubsection{The reduced virtual cycles}
In general, the canonical cosection over
$\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$ does not have a nice
extension to $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$.
The key to resolving this issue is a proper morphism constructed in \cite{CJRS18P},
called a \emph{modular principalization}:
\[
F\colon \UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta) \to \SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)
\]
where $\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ is the moduli of stable log $R$-maps with {\em uniform maximal degeneracy}. Note that $F$ restricts to the identity on the common open substack $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$ of both its source and target.
The canonical perfect obstruction theory of $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$ pulls back to a canonical perfect obstruction theory of $\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$, hence the canonical virtual cycle $[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)]^{\mathrm{vir}}$. Though $F$ does not change the virtual cycles in that $F_*[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{},\beta)]^{\mathrm{vir}} = [\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{},\beta)]^{\mathrm{vir}}$, the cosection over $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$ extends to the boundary
\begin{equation}\label{equ:boundary}
\Delta_{\UH} := \UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta) \setminus \SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)
\end{equation}
with explicit poles \eqref{equ:canonical-cosection}. The general
machinery developed in Section~\ref{sec:POT-reduction} then produces a
\emph{reduced perfect obstruction theory} of
$\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)$, hence the \emph{reduced virtual
cycle} $[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)]^{\mathrm{red}}$, see
Section~\ref{sss:reduced-theory}.
\begin{remark}
The two virtual cycles $[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)]^{\mathrm{vir}}$ and $[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P},\beta)]^{\mathrm{red}}$ have the same virtual dimension.
\end{remark}
\subsection{Comparing virtual cycles}
\subsubsection{Reduced versus cosection localized cycle}
We first show that log GLSM recovers GLSM:
\begin{theorem}[First comparison theorem \ref{thm:reduced=local}]
Let $\iota\colon \IR_W \to \UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ be the inclusion induced by \eqref{equ:red-cycle-support}. Then we have
$$\iota_*[\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)]_{\sigma} = [\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)]^{\mathrm{red}}.$$
\end{theorem}
In Section~\ref{sec:examples}, we study a few examples explicitly. By the
first comparison theorem, the reduced virtual cycle of the compact moduli space of stable log $R$-maps recovers FJRW-theory and Clader's hybrid model when they are constructed using cosection localized virtual cycles \cite{CLL15, Cl17}.
Our machinery also applies to the Gromov--Witten theory of a complete
intersection, or more generally the zero locus $\mathcal{Z}$ of a non-degenerate section $s$ of
a vector bundle $\mathbf{E}$, Section \ref{ss:examples-GW}.
Examples include quintic threefolds in $\mathbb{P}^4$, and Weierstrass
elliptic fibrations, which are hypersurfaces in a $\mathbb{P}^2$-bundle over
a base $B$ that is not necessarily toric.
In this case, we may choose $r=1$ and
$\underline{\mathfrak{P}} = \mathbb{P}(\mathbf{E}^\vee \otimes \mathcal L_\omega \oplus \cO)$.
Combining with the results in \cite{ChLi12, KiOh18P, ChLi18P}, and
more generally in \cite{CJW19P, Pi20P}, we have
\begin{corollary}[Proposition \ref{prop:glsm-gw}]
With notation as above, we have
\begin{equation*}
p_*[\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)]^{\mathrm{red}}
= (-1)^{\rk(\mathbf{E})(1 - g) + \int_\beta c_1(\mathbf{E}) - \sum_{j = 1}^n \age_j(\mathbf{E})} \iota_*[\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)]^\mathrm{vir}
\end{equation*}
where
$p\colon \UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta) \to \scrM_{g,
\vec{\varsigma}}(\cX, \beta)$ sends a log $R$-map to the underlying stable
map to $\cX$, $\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)$ is the moduli of
stable maps to $\mathcal{Z}$, and
$\iota\colon \scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta) \to \scrM_{g,
\vec{\varsigma}}(\cX, \beta)$ is the inclusion.
\end{corollary}
Therefore Gromov--Witten invariants of $\mathcal{Z}$ (involving only cohomology classes
from the ambient $\cX$) can be computed in terms of (log) GLSM
invariants defined using $(\cX, W)$.
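For example, for a quintic threefold $\mathcal{Z} \subset \mathbb{P}^4$ (so $\cX = \mathbb{P}^4$, $\mathbf{E} = \cO(5)$ and $r = 1$) with no markings, the sign in the corollary specializes to $(-1)^{(1-g) + 5\deg\beta}$, the sign familiar from the theory of $p$-fields \cite{ChLi12}.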
\subsubsection{Canonical versus reduced virtual cycles}
The canonical perfect obstruction theory and the canonical cosection of $\UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ together define a reduced perfect obstruction theory of $\Delta_{\UH}$, hence a reduced virtual cycle $[\Delta_{\UH}]^{\mathrm{red}}$, see Section \ref{ss:comparison-2}. The following theorem relates the reduced virtual cycle to the canonical virtual cycle via a third virtual cycle:
\begin{theorem}[Second comparison theorem \ref{thm:comparison-2}]
\begin{equation*}
[\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}, \beta)]^\mathrm{vir}
= [\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}, \beta)]^\mathrm{red} + \tilde{r} [\Delta_{\UH}]^\mathrm{red}.
\end{equation*}
\end{theorem}
By Lemma \ref{lem:pole-of-potential}, $\tilde{r}$ is the order of poles
of $W$ along $\infty_{\mathfrak{P}}$.
In particular, it is a positive integer.
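For instance, in the quintic example with $r = d = 1$ and the minimal twist $a = 1$, we have $\tilde{r} = 1$, and the second comparison theorem reads
$[\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}, \beta)]^\mathrm{vir}
= [\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}, \beta)]^\mathrm{red} + [\Delta_{\UH}]^\mathrm{red}$.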
The fact that the difference between the reduced and canonical virtual
cycles is again virtual allows us to further decompose
$[\Delta_{\UH}]^{\mathrm{red}}$ in \cite{CJR20P2} in terms of canonical and
reduced virtual cycles of punctured and meromorphic theories using
\cite{ACGS17, ACGS20P}.
This is an important ingredient in the proof of the structural
properties of Gromov--Witten invariants of quintics in \cite{GJR18P}.
\subsubsection{Change of twists}
Let $a_1, a_2$ be two twisting choices leading to two targets $\mathfrak{P}_1$ and $\mathfrak{P}_2$ respectively. Assume that $\frac{a_1}{a_2} \in \ZZ$. Then there is a morphism $\mathfrak{P}_1 \to \mathfrak{P}_2$ by taking $\frac{a_1}{a_2}$-th root stack along $\infty_{\mathfrak{P}_2}$.
\begin{theorem}[Change of twist theorem \ref{thm:red-ind-twists}]
There is a canonical morphism
$$
\nu_{a_1/a_2} \colon \UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_1,\beta) \to \UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta)
$$
induced by $\mathfrak{P}_1 \to \mathfrak{P}_2$. Pushing forward virtual cycles along $\nu_{a_1/a_2}$, we have
\begin{enumerate}
\item $\nu_{{a_1/a_2},*}[\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_1,\beta)]^{\mathrm{vir}} = [\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta)]^{\mathrm{vir}}$,
\item $\nu_{{a_1/a_2},*}[\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_1,\beta)]^{\mathrm{red}} = [\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta)]^{\mathrm{red}}$,
\item $\nu_{{a_1/a_2},*}[\Delta_{\UH,1}]^{\mathrm{red}} = \frac{a_2}{a_1} \cdot [\Delta_{\UH,2}]^{\mathrm{red}}.$
\end{enumerate}
where $\Delta_{\UH,i} \subset \UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_i,\beta)$ is the boundary \eqref{equ:boundary} for $i=1,2$.
\end{theorem}
\begin{remark}
The flexibility of twisting choices allows different targets with
isomorphic infinity hyperplanes.
The above push-forwards together with the decomposition formulas in
\cite{CJR20P2} will provide relations among invariants of different
targets.
For example, they can be used to prove the
Landau--Ginzburg/Calabi--Yau correspondence for quintic threefolds
\cite{GJR19P}, as well as to prove a formula \cite[Conjecture~A.1]{PPZ16P}
for the class of the locus of holomorphic differentials with
specified zeros \cite{CJRSZ19P}.
\end{remark}
\subsection{Plan of Paper}
The paper is organized as follows.
In Section~\ref{sec:rmap}, we introduce stable $R$-maps and collect
the basic properties of their moduli spaces.
The canonical and reduced virtual cycles are constructed and the comparison theorems are
proven in Section~\ref{sec:tale}.
In Section~\ref{sec:examples}, we work out several examples
explicitly.
Theorem~\ref{thm:intro-representability} is proven in
Section~\ref{sec:properties}.
Section~\ref{sec:POT-reduction} discusses reducing virtual cycles
along the boundary in more generality, and is used extensively in
Section~\ref{sec:tale}.
\subsection{Acknowledgments}
The first author would like to thank Dan Abramovich, Mark Gross and
Bernd Siebert for the collaborations on foundations of stable log maps which
influenced the development of this project.
The last two authors wish to thank Shuai Guo for the collaborations which
inspired the current work.
The authors would like to thank Adrien Sauvaget, Rachel Webb and Dimitri Zvonkine
for discussions related to the current work.
The authors thank Huai-Liang Chang, Young-Hoon Kiem, Jun Li and
Wei-Ping Li for their inspiring works on cosection localization needed in our construction.
Part of this research was carried out during a visit to the Institute
for Advanced Study in Mathematics at Zhejiang University. Three of us would like to thank the
Institute for its support.
The first author was partially supported by NSF grants DMS-1700682 and DMS-2001089.
The second author was partially supported by an AMS Simons Travel
Grant and NSF grants DMS-1901748 and DMS-1638352.
The last author was partially supported by the Institute for Advanced Study in Mathematics of Zhejiang University,
NSF grant DMS-1405245 and
NSF FRG grant DMS-1159265.
\subsection{Notations}
In this paper, we work over an algebraically closed field of characteristic zero, denoted by $\mathbf{k}$. All log structures are assumed to be \emph{fine and saturated} \cite{Ka88} unless otherwise specified. A list of notations is provided below:
\begin{description}[labelwidth=2cm, align=right]
\item[$\vb(V)$] the total space of a vector bundle $V$
\item[$\mathbb{P}^\mathbf w(V)$] the weighted projective bundle stack with weights $\mathbf w$
\item[$\cX$] a proper Deligne--Mumford stack with a projective coarse moduli scheme
\item[$\cX \to X$] the coarse moduli morphism
\item[$\mathbf{BC}^*_\omega$] the universal stack of $\mathbb C^*_{\omega}$-torsors
\item[$r$] a positive integer
\item[$\Sp \to \mathfrak{X}$] universal $r$-spin bundle
\item[$\mathfrak{P} \to \mathbf{BC}^*_\omega$] the target of log $R$-maps
\item[$\uC \to \uS$] a family of underlying curves over $\uS$
\item[$\uC \to \ucC$] the coarse moduli morphism of underlying curves
\item[$\cC \to S$] a family of log curves over $S$
\item[$\cC \to C$] the coarse moduli morphism of log curves
\item[$f\colon \cC \to \mathfrak{P}$] a log $R$-map
\item[$\beta$] a curve class in $\cX$
\item[$n$] the number of markings
\item[$\vec{\varsigma}$] collection of discrete data at all markings
\item[$\mathscr{R}_{g,\vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$] the moduli stack of stable R-maps
\item[$\mathscr{R}_{g,\vec{\varsigma}}(\mathfrak{P},\beta)$] the moduli stack of stable log $R$-maps
\item[$\UH_{g,\vec{\varsigma}}(\mathfrak{P},\beta)$] the moduli stack of stable log $R$-maps with uniform maximal degeneracy
\item[$\mathscr{R}^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$] the moduli stack of stable R-maps with compact type evaluations
\item[$\mathscr{R}^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P},\beta)$] the moduli stack of stable log $R$-maps with compact type evaluations
\item[$\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P},\beta)$] the moduli stack of stable log $R$-maps with compact type evaluations and uniform maximal degeneracy
\item[$W\colon \mathfrak{P}^\circ \to \mathcal L_\omega$] the superpotential
\end{description}
\section{Logarithmic $R$-maps}
\label{sec:rmap}
\subsection{Twisted curves and pre-stable maps}
We first collect some basic notions needed in our construction.
\subsubsection{Twisted curves}
Recall from \cite{AbVi02} that a \emph{twisted $n$-pointed curve} over a scheme $\underline{S}$ consists of the following data
\[
(\underline{\cC} \to \underline{C} \to \underline{S}, \{p_i\}_{i=1}^n)
\]
where
\begin{enumerate}
\item $\underline{\cC}$ is a Deligne--Mumford stack proper over $\underline{S}$, and \'etale locally is a nodal curve over $\underline{S}$.
\item $p_i \subset \underline{\cC}$ are disjoint closed substacks in the smooth locus of $\underline{\cC} \to \underline{S}$.
\item $p_i \to \underline{S}$ are \'etale gerbes banded by the multiplicative group $\mu_{r_i}$ for some positive integer $r_i$.
\item the morphism $\underline{\cC} \to \underline{C}$ is the coarse moduli morphism.
\item Each node of $\underline{\cC} \to \underline{S}$ is balanced.
\item $\underline{\cC} \to \underline{C}$ is an isomorphism over $\underline{\cC}_{gen}$, where $\underline{\cC}_{gen}$ is the complement of the markings and the stacky critical locus of $\underline{\cC} \to \underline{S}$.
\end{enumerate}
The balancing condition means that formally locally near a node, the
geometric fiber is isomorphic to the stack quotient
\[
[\spec \big(\mathbf{k}[x,y]/(xy)\big) \big/ \mu_k]
\]
where $\mu_k$ is some cyclic group with the action
$\zeta(x,y) = (\zeta\cdot x, \zeta^{-1}\cdot y)$.
Given a twisted curve as above, by \cite[4.11]{AbVi02} the coarse
space $\underline{C} \to \underline{S}$ is a family of $n$-pointed usual pre-stable
curves over $\underline{S}$ with the markings determined by the images of
$\{p_i\}$.
The \emph{genus} of the twisted curve $\underline{\cC}$ is defined as the
genus of the corresponding coarse pre-stable curve $\underline{C}$.
When there is no danger of confusion, we will simply write
$\underline{\cC} \to \underline{S}$, and use the terms twisted curve and pre-stable curve interchangeably.
\subsubsection{Logarithmic curves}
\label{sss:log-curves}
An \emph{$n$-pointed log curve} over a fine and saturated log scheme $S$ in the sense of \cite{Ol07} consists of
\[
(\pi\colon \cC \to S, \{p_i\}_{i=1}^n)
\]
such that
\begin{enumerate}
\item The underlying data $(\underline{\cC} \to \underline{C} \to \underline{S}, \{p_i\}_{i=1}^n)$ is a twisted $n$-pointed curve over $\underline{S}$.
\item $\pi$ is a proper, logarithmically smooth, and integral morphism of fine and saturated logarithmic stacks.
\item If $\underline{U} \subset \underline{\cC}$ is the non-singular locus of $\underline{\pi}$, then $\ocM_{\cC}|_{\underline{U}} \cong \pi^*\ocM_{S}\oplus\bigoplus_{i=1}^{n}\NN_{p_i}$ where $\NN_{p_i}$ is the constant sheaf over $p_i$ with fiber $\NN$.
\end{enumerate}
For simplicity, we may refer to $\pi\colon \cC \to S$ as a log curve
when there is no danger of confusion.
The \emph{pull-back} of a log curve $\pi\colon \cC \to S$ along an
arbitrary morphism of fine and saturated log schemes $T \to S$ is the
log curve $\pi_T\colon \cC_T:= \cC\times_S T \to T$ with the fiber
product taken in the category of fine and saturated log stacks.
Given a log curve $\cC \to S$, we associate the \emph{log cotangent bundle} $\omega^{\log}_{\cC/S} := \omega_{\uC/\uS}(\sum_i p_i)$ where $\omega_{\uC/\uS}$ is the relative dualizing line bundle of the
underlying family $\uC \to \uS$.
\subsection{Logarithmic $R$-maps as logarithmic fields}\label{ss:log-fields}
In this subsection, we reformulate the notion of a log $R$-map in
terms of the more concrete notion of spin-maps with fields.
This will be useful for relating to previous constructions in GLSM (see Section \ref{sec:examples}), and for some of the proofs in
Section~\ref{sec:properties}.
\begin{definition}\label{def:spin}
Let $\underline{g}\colon \underline{\cC} \to \underline{\cX}$ be a pre-stable map
over $\underline{S}$.
An \emph{$r$-spin structure} of $\underline{g}$ is a line bundle $\mathcal L$ over
$\ucC$ together with an isomorphism
\[\mathcal L^{\otimes r} \cong \omega^{\log}_{\underline{\cC}/\underline{S}}\otimes \underline{g}^*\mathbf{L}^{\vee}.\]
The pair $(\underline{g}, \mathcal L)$ is called an {\em $r$-spin map}.
\end{definition}
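For example (included only for orientation), when $\cX$ is a point and $\mathbf{L} = \cO$, an $r$-spin structure is an $r$-th root of $\omega^{\log}_{\underline{\cC}/\underline{S}}$, recovering the classical notion of an $r$-spin curve; when $r = 1$ and $\mathbf{L} = \cO_{\cX}$, it is simply an identification $\mathcal L \cong \omega^{\log}_{\underline{\cC}/\underline{S}}$.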
Given a log map $g\colon \cC \to \cX$ over $S$ and an $r$-spin
structure $\mathcal L$ of the underlying map $\underline{g}$, we introduce a
weighted projective stack bundle over $\uC$:
\begin{equation}\label{equ:proj-bundle}
\underline{\cP}_{\cC} := \mathbb{P}^\mathbf w\left(\bigoplus_{i > 0} (g^*(\mathbf{E}_i^\vee) \otimes \mathcal L^{\otimes i}) \oplus \cO\right)
\end{equation}
where $\mathbf w$ indicates the weights of the $\mathbb{G}_m$-action as in
\eqref{equ:universal-proj}.
The Cartier divisor $\infty_{\cP} \subset \underline{\cP}_{\cC}$ defined by the vanishing of the last coordinate
is called the \emph{infinity
hyperplane}.
Let $\cM_{\infty_{\cP}}$ be the log structure on $\underline{\cP}_{\cC}$
associated to the Cartier divisor $\infty_{\cP}$, see \cite{Ka88}. Form the log stack
\[
\cP_{\cC} := (\underline{\cP}_{\cC}, \cM_{\cP_\cC} := \cM_{\cC}|_{\underline{\cP}_{\cC}}\oplus_{\cO^*}\cM_{\infty_{\cP}}),
\]
with the natural projection $\cP_{\cC} \to \cC$.
\begin{definition}
\label{def:log-field}
A \emph{log field} over an $r$-spin map $(g, \mathcal L)$ is a section
$\rho\colon \cC \to \cP_{\cC}$ of $\cP_{\cC} \to \cC$.
The triple $(g, \mathcal L, \rho)$ over $S$ is called an \emph{$r$-spin map
with a log field}.
The \emph{pull-back} of an $r$-spin map with a log field is
defined as the pull-back of log maps.
\end{definition}
We now show that the two notions --- log $R$-maps and pre-stable maps
with log fields --- are equivalent.
\begin{proposition}\label{prop:map-field-equiv}
Fix a log map $g\colon \cC \to \cX$ over $S$, and consider the following diagram of solid arrows
\[
\xymatrix{
\cC \ar@{-->}[rrd] \ar@/^2ex/@{-->}[rrrd] \ar@/^4ex/[rrrrd]^{g} \ar@/_6ex/[rrrdd]_{\omega^{\log}_{\cC/S}} &&& \\
&& \mathfrak{P} \ar[r] & \mathfrak{X} \ar[d]^{\zeta} \ar[r] & \cX \\
&& & \mathbf{BC}^*_\omega &
}
\]
We have the following equivalences:
\begin{enumerate}
\item The data of an $r$-spin map $(g, \mathcal L)$ is equivalent to
a morphism $\cC \to \mathfrak{X}$ making the above diagram
commutative.
\item The data of a log field $\rho$ over a given $r$-spin map
$(g, \mathcal L)$ is equivalent to giving a log $R$-map
$f\colon \cC \to \mathfrak{P}$ making the above diagram commutative.
\end{enumerate}
\end{proposition}
\begin{proof}
The first equivalence follows from Definition \ref{def:spin} and \eqref{diag:spin-target}.
Note that $(g, \mathcal L)$ induces a commutative diagram of solid arrows with all squares cartesian:
\begin{equation}\label{diag:map-field}
\xymatrix{
\cP_\cC \ar[d] \ar[r] &\mathfrak{P}_{\cC} \ar[r] \ar[d] & \mathfrak{P} \ar[d] \\
\cC \ar[r] \ar[rd]_{=} \ar@/^1pc/@{-->}[u]^{\rho}& \mathfrak{X}_{\cC} \ar[r] \ar[d] & \mathfrak{X} \ar[d] \\
& \cC \ar[r]^{\omega_{\cC/S}^{\log}} & \mathbf{BC}^*_\omega.
}
\end{equation}
Thus (2) follows from the universal property of cartesian squares.
\end{proof}
\begin{definition}\label{def:field-stability}
An $r$-spin map with a log field is \emph{stable} if the
corresponding $R$-map is stable.
\end{definition}
Let $f\colon \cC \to \mathfrak{P}$ be a logarithmic $R$-map over $S$, and
$\rho\colon\cC \to \cP_\cC$ be the corresponding logarithmic field.
Using \eqref{diag:map-field}, we immediately obtain
\[
f^* \cO(\tilde{r}\infty_{\mathfrak{P}}) = \rho^* \cO(\tilde{r}\infty_{\cP_{\cC}}),
\]
hence the following equivalent description of the stability condition:
\begin{corollary}\label{cor:field-stability}
A pre-stable map with log field $(g, \mathcal L, \rho)$ over $S$ is
stable iff the corresponding $R$-map $f$ is representable, and if for a sufficiently small $\delta_0 \in (0,1)$ there exists $k_0 > 1$ such that for any pair $(k,\delta)$ satisfying $k > k_0$ and $\delta_0 > \delta > 0$, the following holds
\begin{equation}\label{equ:field-stability}
(\omega_{\cC/S}^{\log})^{1 + \delta} \otimes g^* \cH^{\otimes k} \otimes \rho^* \cO(\tilde{r}\infty_{\cP_{\cC}}) > 0.
\end{equation}
\end{corollary}
\begin{remark}\label{rem:r-spin-stability}
The condition \eqref{equ:field-stability} is compatible
with the stability of log $r$-spin fields in
\cite[Definition~4.9]{CJRS18P}.
Let $\cX = \spec \mathbf{k}$ and $\rho\colon \cC \to \cP_\cC$ be a log $r$-spin
field over $S$ as in \cite{CJRS18P}.
The stability of $\rho$ is equivalent to
\begin{align*}
0 & < \omega^{\log}_{\cC/S}\otimes \rho^*\cO(k\cdot \mathbf{0}_{\cP}) \\
&= \omega^{\log}_{\cC/S}\otimes \mathcal L^{\otimes k}\otimes\rho^*\cO(k\cdot \infty_{\cP}) \\
&= \left((\omega^{\log}_{\cC/S})^{1+\frac{r}{k}}\otimes\rho^*\cO(r\cdot \infty_{\cP})\right)^{\otimes \frac{k}{r}}
\end{align*}
for $k \gg 0$. Now replacing $\frac{r}{k}$ by $\delta$ in
\[
(\omega^{\log}_{\cC/S})^{1+\frac{r}{k}}\otimes\rho^*\cO(r\cdot \infty_{\cP}) > 0,
\]
we recover \eqref{equ:field-stability} as desired.
\end{remark}
\subsection{The structure of the infinity divisor}
For later use, we would like to study the structure of $\infty_{\mathfrak{P}}$.
Let $\mathbf w$ and $\mathbf w'$ be two weights as in \eqref{equ:proj-bundle} such that $\mathbf w'$ corresponds to $a = \frac{1}{d}$. Consider $\mathbf w_{\infty}$ (resp.\ $\mathbf w'_{\infty}$) obtained by removing the weight of the $\cO$ factor from $\mathbf w$ (resp.\ $\mathbf w'$).
Since $\gcd(\mathbf w'_{\infty}) = 1$, we observe that
\[
\infty_{\mathfrak{P}'} = \mathbb{P}^{\mathbf w'_{\infty}}\big(\bigoplus_i\mathbf{E}^{\vee}_{i,\mathfrak{X}}\otimes \Sp^{\otimes i}\big) \cong \mathbb{P}^{\mathbf w'_{\infty}}\big(\bigoplus_i\mathbf{E}^{\vee}_{i,\mathfrak{X}}\big).
\]
In particular, there is a cartesian diagram
\begin{equation}\label{diag:infinity-pullback}
\xymatrix{
\infty_{\mathfrak{P}'} \ar[rr] \ar[d] && \infty_{\cX} := \mathbb{P}^{\mathbf w'_{\infty}}\big(\bigoplus_i\mathbf{E}^{\vee}_{i}\big) \ar[d] \\
\mathfrak{X} \ar[rr] && \cX.
}
\end{equation}
To fix the notation, denote by $\cO_{\infty_{\cX}}(1)$ the tautological line bundle over $\infty_{\cX}$ associated to the upper right corner, and by $\cO_{\infty_{\mathfrak{P}'}}(1)$ the pull-back of $\cO_{\infty_{\cX}}(1)$ via the top horizontal arrow. Let $\cO_{\mathfrak{P}'}(1)$ be the tautological line bundle associated to the expression of $\mathfrak{P}'$ as in \eqref{equ:universal-proj}.
Let $\ell = \gcd(\mathbf w_{\infty})$. Observe that
$
\underline{\mathfrak{P}} \to \underline{\mathfrak{P}}'
$
is an $\ell$-th root stack along $\infty_{\mathfrak{P}'}$. Thus, $\infty_{\mathfrak{P}}$ parameterizes $\ell$-th roots of the normal bundle $N_{\infty_{\mathfrak{P}'}/\mathfrak{P}'}$ over $\infty_{\mathfrak{P}'}$. In particular, the morphism
\[
\infty_{\mathfrak{P}} \to \infty_{\mathfrak{P}'}
\]
is a $\mu_{\ell}$-gerbe.
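In particular, for the minimal twist $a = \frac{1}{d}$ we have $\ell = \gcd(\mathbf w_{\infty}) = a\cdot d = 1$, the root stack is trivial, and $\infty_{\mathfrak{P}} \to \infty_{\mathfrak{P}'}$ is an isomorphism.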
As shown below, the small number ``$\delta$'' in the stability condition \eqref{equ:hyb-stability} plays an important role in stabilizing components in $\infty_{\mathfrak{P}}$.
\begin{proposition}\label{prop:curve-in-infinity}
Consider an underlying $R$-map
\[
\xymatrix{
&& \infty_{\mathfrak{P}} \ar[d]^{\zeta\circ\fp} \\
\uC \ar[rru]^{\underline{f}} \ar[rr]_{\omega^{\log}_{\cC/S}}&& \mathbf{BC}^*_\omega
}
\]
over a geometric point. Consider the following commutative diagram
\begin{equation}\label{equ:map-to-rigid-infinity}
\xymatrix{
\uC \ar[r] \ar@/^3ex/[rr]^{\underline{f}'} \ar@/_3ex/[rrr]_{\underline{f}_{\cX}} & \infty_{\mathfrak{P}} \ar[r] & \infty_{\mathfrak{P}'} \ar[r] & \infty_{\cX}
}
\end{equation}
Then we have
\begin{equation}\label{equ:curve-in-infinity}
\underline{f}^* \cO_{\mathfrak{P}}(\tilde{r} \infty_{\mathfrak{P}})
= (\omega^{\log}_{\uC})^{\vee}\otimes \underline{f}^*\mathbf{L}\otimes (\underline{f}_\cX)^*\cO_{\infty_{\cX}}(\frac{r}{d}).
\end{equation}
Furthermore, we have
\begin{equation}\label{equ:infinity-stability}
(\omega_{\uC}^{\log})^{1 + \delta} \otimes (\ft \circ f)^* \cH^{\otimes k} \otimes f^* \cO(\tilde{r}\infty_{\mathfrak{P}}) > 0
\end{equation}
if and only if the coarse map induced by $\underline{f}_{\cX}$ is stable in the usual sense.
\end{proposition}
\begin{proof}
Recall $\mathbf w$ corresponds to the choice $a = \frac{\ell}{d}$. We have
\[
\underline{f}^* \cO_{\mathfrak{P}}(\tilde{r} \infty_{\mathfrak{P}}) = (\underline{f}')^*\cO_{\mathfrak{P}'}(\frac{r}{d}\cdot \infty_{\mathfrak{P}'}).
\]
Since $\cO_{\mathfrak{P}'}(\infty_{\mathfrak{P}'}) \cong \cO_{\mathfrak{P}'}(1)$, we calculate
\[
(\underline{f}')^*\cO_{\mathfrak{P}'}(\infty_{\mathfrak{P}'})|_{\infty_{\mathfrak{P}'}} \cong (\underline{f}')^*\cO_{\infty_{\mathfrak{P}'}}(1)\otimes\mathcal L^{\otimes -d} = (\underline{f}_{\cX})^*\cO_{\infty_{\cX}}(1)\otimes \mathcal L^{\otimes -d}.
\]
Equation \eqref{equ:curve-in-infinity} is proved by combining the above calculation and Definition \ref{def:spin}.
Now using \eqref{equ:curve-in-infinity}, we obtain
\begin{multline}
\label{eq:positive-in-infinity}
(\omega_{\underline{\cC}}^{\log})^{1 + \delta} \otimes (\ft \circ f)^* \cH^{\otimes k} \otimes\underline{f}^* \cO(\tilde{r}\infty_{\mathfrak{P}}) \\
\cong (\omega_{\underline{\cC}}^{\log})^\delta \otimes (\ft \circ f)^* \cH^{\otimes k} \otimes (\ft \circ f)^*\mathbf{L}\otimes (\underline{f}_\cX)^*\cO_{\infty_{\cX}}(\frac{r}{d}).
\end{multline}
Let $\underline{\mathcal{Z}} \subset \underline{\cC}$ be an irreducible component.
Note that \eqref{equ:infinity-stability} holds over $\underline{\mathcal{Z}}$ for
$k \gg 0$ unless $\ft \circ f$ contracts $\mathcal{Z}$ to a point.
Suppose we are in the latter situation, hence both
$(\ft \circ f)^* \cH^{\otimes k}$ and $(\ft \circ f)^*\mathbf{L}$ have
degree zero over $\underline{\mathcal{Z}}$.
Since $\underline{f}_{\cX}^*\cO_{\infty_\cX}(1)|_{\underline{\mathcal{Z}}}$ has
non-negative degree and $1 \gg\delta > 0$,
\eqref{equ:infinity-stability} holds if and only if either
$\underline{f}_{\cX}(\underline{\mathcal{Z}})$ is not a point, or
$\omega^{\log}_{\underline{\cC}}|_{\mathcal{Z}}$ has positive degree.
This proves the second statement.
\end{proof}
\begin{corollary}\label{cor:rat-bridge-stability}
Let $\underline{f}\colon \underline{\cC} \to \underline{\mathfrak{P}}$ be
an underlying R-map.
Then a rational bridge $\mathcal{Z} \subset \underline{\cC}$ (i.e.\ a genus-zero component with exactly two special points) fails to satisfy the
stability condition \eqref{equ:hyb-stability} if and only if
$\deg \underline{f}^*\cO_{\mathfrak{P}}(\infty_{\mathfrak{P}})|_{\mathcal{Z}} = 0$ and
$\deg (\ft \circ f)^*\cH|_{\mathcal{Z}} = 0$.
\end{corollary}
\begin{proof}
Suppose $\mathcal{Z}$ is unstable.
Then \eqref{equ:hyb-stability} implies that
$\deg (\ft \circ f)^*\cH^{\otimes k}|_{\mathcal{Z}} = 0$ for any $k \gg 0$,
hence $\deg (\ft \circ f)^*\cH|_{\mathcal{Z}} = 0$.
Since $\omega^{\log}_{\cC/S}|_{\mathcal{Z}} = \cO_{\mathcal{Z}}$, we have
$\deg \underline{f}^*\cO_{\mathfrak{P}}(\infty_{\mathfrak{P}})|_{\mathcal{Z}} \leq 0$.
It is clear that
$\deg \underline{f}^*\cO_{\mathfrak{P}}(\infty_{\mathfrak{P}})|_{\mathcal{Z}} \geq 0$ if
$f(\mathcal{Z}) \not\subset \infty_{\mathfrak{P}}$.
If $f(\mathcal{Z}) \subset \infty_{\mathfrak{P}}$, then \eqref{equ:curve-in-infinity}
implies $\deg \underline{f}^*\cO_{\mathfrak{P}}(\infty_{\mathfrak{P}})|_{\mathcal{Z}} \geq 0$.
Thus we have $\deg \underline{f}^*\cO_{\mathfrak{P}}(\infty_{\mathfrak{P}})|_{\mathcal{Z}} = 0$.
The other direction follows immediately from
\eqref{equ:hyb-stability}.
\end{proof}
\subsection{The combinatorial structures}
The \emph{minimality} or \emph{basicness} of stable log maps, which
plays a crucial role in constructing the moduli of stable log maps,
was introduced in \cite{AbCh14, Ch14, GrSi13}.
Based on their construction, a modification called \emph{minimality
with uniform maximal degeneracy} has been introduced in
\cite{CJRS18P} for the purpose of constructing reduced virtual cycles.
We recall these constructions for later reference.
\subsubsection{Degeneracies and contact orders}\label{ss:combinatorics}
We fix a log $R$-map $f\colon \cC \to \mathfrak{P}$ over $S$. Consider the induced morphism of characteristic sheaves:
\begin{equation}\label{equ:combinatorics}
\bar{f}^{\flat} \colon f^*\ocM_{\mathfrak{P}} \to \ocM_{\cC}.
\end{equation}
Note that characteristic sheaves are constructible. We recall the following terminology.\\
\noindent{(1)\em Degeneracies of irreducible components.}
An irreducible component $\mathcal{Z}\subset \cC$ is called \emph{degenerate} if $(f^*\ocM_{\mathfrak{P}})_{\eta_\mathcal{Z}}\cong \NN$ where $\eta_\mathcal{Z} \in \mathcal{Z}$ is the generic point, and \emph{non-degenerate} otherwise.
Equivalently, $\mathcal{Z}$ is degenerate iff $f(\mathcal{Z}) \subset \infty_{\mathfrak{P}}$.
For a degenerate $\mathcal{Z}$, write
\[
e_\mathcal{Z} := \bar{f}^{\flat}(1)_{\eta_\mathcal{Z}} \in \ocM_{\cC, \eta_\mathcal{Z}} = \ocM_{S}
\]
and call it the \emph{degeneracy} of $\mathcal{Z}$. If $\mathcal{Z}$ is non-degenerate, set $e_\mathcal{Z} = 0$.
An irreducible component $\mathcal{Z}$ is called \emph{a maximally degenerate component} if $e_{\mathcal{Z}'} \poleq e_{\mathcal{Z}}$ for any irreducible component $\mathcal{Z}'$. Here for $e_1, e_2 \in \ocM_{S}$, we define $e_1 \poleq e_2$ iff $(e_2 - e_1) \in \ocM_S$.
\noindent{(2)\em The structure at markings.}
Let $p \in \mathcal{Z}$ be a marked point. Consider
\[
(f^*\ocM_{\mathfrak{P}})_{p} \stackrel{\bar{f}^{\flat}}{\longrightarrow} \ocM_{\cC,p} \cong \ocM_S\oplus \NN \longrightarrow \NN
\]
where the arrow on the right is the projection.
If $(f^*\ocM_{\mathfrak{P}})_{p} \cong \NN$, or equivalently $f(p) \in \infty_{\mathfrak{P}}$, we denote by
$c_p \in \ZZ_{\geq 0}$ the image of $1 \in \NN$ under the above
composition; otherwise we set $c_p = 0$.
We call $c_p$ the \emph{contact order} at the marking $p$. Contact orders are a generalization of tangency multiplicities in the log setting.\\
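As a local model (included only for intuition), suppose $S$ has the trivial log structure and, in a local coordinate $t$ at $p$, the pull-back of a local equation of $\infty_{\mathfrak{P}}$ is $t^c$. Then $(f^*\ocM_{\mathfrak{P}})_p \cong \NN \to \ocM_{\cC,p} \cong \NN$ sends $1 \mapsto c$, so the contact order $c_p = c$ is the usual tangency multiplicity of $f$ with $\infty_{\mathfrak{P}}$ at $p$.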
\noindent{(3)\em The structure at nodes.}
Define the \emph{natural partial order} $\poleq$ on the set of irreducible components of $\cC$ such that $\mathcal{Z}_1 \poleq \mathcal{Z}_2$ iff $(e_{\mathcal{Z}_2} - e_{\mathcal{Z}_1}) \in \ocM_S$.
Let $q \in \cC$ be a node joining two irreducible components $\mathcal{Z}_1$ and $\mathcal{Z}_2$ with $\mathcal{Z}_1 \poleq \mathcal{Z}_2$.
Then \'etale locally at $q$, \eqref{equ:combinatorics} is of the form
\[
(\bar{f}^{\flat})_q \colon (f^*\ocM_{\mathfrak{P}})_q \to \ocM_{\cC,q} \cong \ocM_{S}\oplus_{\NN}\NN^2,
\]
where the two generators $\sigma_1$ and $\sigma_2$ of $\NN^2$
correspond to the coordinates of $\mathcal{Z}_1$ and $\mathcal{Z}_2$ at $q$
respectively, and the arrow $\NN := \langle\ell_q\rangle \to \NN^2$ is
the diagonal $\ell_q \mapsto \sigma_1 + \sigma_2$.
If $(f^*\ocM_{\mathfrak{P}})_q \cong \NN$ or equivalently
$f(q) \in \infty_{\mathfrak{P}}$, we have
\[
(\bar{f}^{\flat})_q (1) = e_{\mathcal{Z}_1} + c_q \cdot \sigma_1,
\]
where the non-negative integer $c_q$ is called the \emph{contact order} at $q$. In this case, we have a relation between the two degeneracies
\begin{equation}\label{equ:edge-relation}
e_{\mathcal{Z}_1} + c_q \cdot \ell_q = e_{\mathcal{Z}_2}.
\end{equation}
If $(f^*\ocM_{\mathfrak{P}})_q$ is trivial, then we set the contact order $c_q = 0$. Note that in this case $e_{\mathcal{Z}_1} = e_{\mathcal{Z}_2} = 0$, and \eqref{equ:edge-relation} still holds.
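For example (a direct consequence of the above), if $\mathcal{Z}_1$ is non-degenerate and $\mathcal{Z}_2$ is degenerate, then $e_{\mathcal{Z}_1} = 0$ and \eqref{equ:edge-relation} becomes $e_{\mathcal{Z}_2} = c_q \cdot \ell_q$; in particular, the contact order $c_q$ at such a node is necessarily positive.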
\subsubsection{Minimality}
We recall the construction of minimal monoids in \cite{Ch14, AbCh14,
GrSi13}.
The \emph{log combinatorial type} of the $R$-map $f$ consists of:
\begin{equation}\label{equ:combinatorial-type}
G = \big(\underline{G}, V(G) = V^{n}(G) \cup V^{d}(G), \poleq, (c_i)_{i\in L(G)}, (c_l)_{l\in E(G)} \big)
\end{equation}
where
\begin{enumerate}[(i)]
\item $\underline{G}$ is the dual intersection graph of the underlying curve $\underline{\cC}$.
\item $V^{n}(G) \cup V^{d}(G)$ is a partition of $V(G)$, where $V^{d}(G)$ consists of the vertices with non-zero degeneracies.
\item $\poleq$ is the natural partial order on the set $V(G)$.
\item We associate to a leg $i\in L(G)$ the contact order $c_i \in \NN$ of the corresponding marking $p_i$.
\item We associate to an edge $l\in E(G)$ the contact order $c_l \in \NN$ of the corresponding node.
\end{enumerate}
We introduce a variable $\ell_l$ for each edge $l \in E(G)$, and a variable $e_v$ for each vertex $v \in V(G)$. Denote by $h_l$ the relation
$ e_{v'} = e_v + c_l\cdot\ell_l
$
for each edge $l$ with the two ends $v \poleq v'$ and contact order $c_l$. Denote by $h_v$ the following relation
$
e_v = 0
$
for each $v \in V^{n}(G)$. Consider the following abelian group
\[
\mathcal{G} = \left(\big(\bigoplus_{v \in V(G)} \ZZ e_v\big) \oplus \big( \bigoplus_{l \in E(G)} \ZZ \ell_l \big) \right) \big/ \langle h_v, h_l \ | \ v\in V^{n}(G), \ l \in E(G) \rangle.
\]
Let $\mathcal{G}^{t} \subset \mathcal{G}$ be the torsion subgroup. Consider the following composition
\[
\big( \bigoplus_{v \in V(G)} \NN e_v \big) \oplus \big( \bigoplus_{l \in E(G)} \NN \ell_l\big) \to \mathcal{G} \to \mathcal{G}/\mathcal{G}^{t}.
\]
Let $\oM(G)$ be the smallest saturated submonoid of $\mathcal{G}/\mathcal{G}^{t}$ containing the image of the above composition. We call $\oM(G)$ the \emph{minimal or basic monoid} associated to $G$.
Recall from \cite[Proposition 3.4.2]{Ch14}, or \cite[Proposition 2.5]{CJRS18P} that there is a canonical map of monoids
\begin{equation}\label{equ:minimal}
\oM(G) \to \ocM_S
\end{equation}
induced by sending $e_v$ to the degeneracy of the component associated to $v$, and sending $\ell_l$ to the element $\ell_{q}$ as in \eqref{equ:edge-relation} associated to $l$. In particular, the monoid $\oM(G)$ is fine, saturated, and sharp.
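To illustrate the construction (in the simplest non-trivial case), suppose $\cC$ has exactly two irreducible components, a non-degenerate one corresponding to $v_1 \in V^{n}(G)$ and a degenerate one corresponding to $v_2 \in V^{d}(G)$, joined at a single node with contact order $c > 0$. The relations are $e_{v_1} = 0$ and $e_{v_2} = e_{v_1} + c\cdot\ell_l$, hence $\mathcal{G}/\mathcal{G}^{t} \cong \ZZ\cdot\ell_l$ and $\oM(G) = \NN\cdot\ell_l \cong \NN$. Under \eqref{equ:minimal}, the generator $\ell_l$ maps to the smoothing parameter $\ell_q$ of the node.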
\begin{definition}\label{def:minimal}
A log $R$-map is \emph{minimal} or \emph{basic} if over each geometric fiber, the natural morphism \eqref{equ:minimal} is an isomorphism.
\end{definition}
\subsubsection{Logarithmic $R$-map with uniform maximal degeneracy}
\label{sss:UMD}
\begin{definition}
A log $R$-map is said to have \emph{uniform maximal degeneracy} if there exists a maximally degenerate component over each geometric fiber, see Section \ref{ss:combinatorics} (1).
\end{definition}
Let $f \colon \cC \to \mathfrak{P}$ be a log $R$-map over a geometric log point $S$, and $G$ be its log combinatorial type. Assume that $f$ has uniform maximal degeneracy, and denote by $V_{\max} \subset V(G)$ the collection of vertices with the maximal degeneracy. We call $(G, V_{\max})$ the \emph{log combinatorial type with uniform maximal degeneracy}, and form the corresponding minimal monoid below.
Consider the torsion-free abelian group
\[
\big( \oM(G)^{gp}\big/ \sim \big)^{tf}
\]
where $\sim$ is given by the relations $(e_{v_1} - e_{v_2}) = 0$ for any $v_1, v_2 \in V_{\max}$. By abuse of notation, we write $e_v$ for the image of the degeneracy of the vertex $v$ in $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$. Thus the degeneracies $e_v$ for $v \in V_{\max}$ are all identical in $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$; we denote this common element by $e_{\max}$. Let $\oM(G,V_{\max})$ be the saturated submonoid of $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$ generated by
\begin{enumerate}
\item the image of $\oM(G) \to \big( \oM(G)^{gp}\big/ \sim \big)^{tf}$, and
\item the elements $(e_{\max} - e_v)$ for any $v \in V(G)$.
\end{enumerate}
By \cite[Proposition 3.7]{CJRS18P}, there is a natural morphism of monoids $\oM(G) \to \oM(G,V_{\max})$ which fits in a commutative diagram
\[
\xymatrix{
\oM(G) \ar[r] \ar[rd]_{\phi} & \oM(G,V_{\max}) \ar[d]^{\phi_{\max}}\\
& \ocM_S
}
\]
We call $\oM(G,V_{\max})$ the \emph{minimal monoid with uniform maximal degeneracy} associated to $(G, V_{\max})$, or simply the \emph{minimal monoid} associated to $(G,V_{\max})$.
\begin{definition}\label{def:umd-minimal}
A log $R$-map is \emph{minimal with uniform maximal degeneracy} if over each geometric fiber the morphism $\phi_{\max}$ is an isomorphism.
\end{definition}
Note that, in general, a log $R$-map that is minimal with
uniform maximal degeneracy need not be minimal in the sense of
Definition \ref{def:minimal}.
\subsubsection{The universal logarithmic target}\label{sss:universal-log-target}
Consider the log stack $\cA$ with the underlying stack $[\A^1/\mathbb{G}_m]$ and log structure induced by its toric boundary. It parameterizes Deligne--Faltings log structures of rank one \cite[A.2]{Ch14}. Thus there is a canonical strict morphism of log stacks
\begin{equation}\label{equ:universal-log-target}
\mathfrak{P} \to \cA.
\end{equation}
Let $\infty_{\cA} \subset \cA$ be the strict closed substack; then $\infty_{\mathfrak{P}} = \infty_{\cA}\times_{\cA}\mathfrak{P}$.
Given any log R-map $f \colon \cC \to \mathfrak{P}$, we obtain a log map
$f' \colon \cC \to \cA$ via composing with
\eqref{equ:universal-log-target}.
Then $f'$ and $f$ share the same log combinatorial type (with uniform
maximal degeneracy) since
\[
(f')^*\cM_{\cA} \cong f^*\cM_{\mathfrak{P}} \ \ \ \mbox{and} \ \ \ f^{\flat} = (f')^{\flat}.
\]
This point of view will be used later in our construction.
\subsection{The evaluation morphism of the underlying structure}\label{ss:underlying-evaluation}
Set $\mathfrak{P}_\mathbf{k} := \mathfrak{P}\times_{\mathbf{BC}^*_\omega}\spec \mathbf{k}$,
where the arrow on the right is the universal $\mathbb{G}_m$-torsor.
Let $\cI\mathfrak{P}_\mathbf{k}$ be the cyclotomic inertia stack of $\mathfrak{P}_{\mathbf{k}}$ \cite[Definition 3.1.5]{AGV08}.
Then $\cI\infty_{\mathfrak{P}_\mathbf{k}} = \cI\mathfrak{P}_\mathbf{k}\times_{\cA}\infty_{\cA}$ is the
cyclotomic inertia stack of $\infty_{\mathfrak{P}_\mathbf{k}}$ equipped with the
pull-back log structure from $\cA$.
\begin{lemma}\label{lem:underlying-evaluation}
Let $f\colon \cC \to \mathfrak{P}$ be a log $R$-map over $S$, and $p \subset \cC$ be a marking. Then the restriction $f|_{p}$ factors through $\mathfrak{P}_\mathbf{k} \to \mathfrak{P}$. Furthermore, $f$ is representable along $p$ if the induced morphism $p \to \mathfrak{P}_\mathbf{k}$ is representable.
\end{lemma}
\begin{proof}
Since $\omega^{\log}_{\cC/S}|_{p} \cong \cO_{p}$, the composition $\underline{p} \to \underline{\cC} \to \mathbf{BC}^*_\omega$ factors through $\spec \mathbf{k} \to \mathbf{BC}^*_\omega$. This proves the first statement; the second follows since $\mathfrak{P}_\mathbf{k} \to \mathfrak{P}$ is representable.
\end{proof}
Consider the universal gerbes $\cI\mathfrak{P}_\mathbf{k} \to \ocI\mathfrak{P}_\mathbf{k}$ and
$\cI\infty_{\mathfrak{P}_\mathbf{k}} \to \ocI\infty_{\mathfrak{P}_\mathbf{k}}$ in $\mathfrak{P}_\mathbf{k}$ and
$\infty_{\mathfrak{P}_\mathbf{k}}$ \cite[Section 3]{AGV08}.
Let $f\colon \cC \to \mathfrak{P}$ be a log $R$-map over $S$ with constant
contact order $c_i$ along its $i$-th marking $p_i \subset \cC$.
Write $\ocI^i = \ocI\mathfrak{P}_\mathbf{k}$ if $c_i = 0$, and
$\ocI^i = \ocI\infty_{\mathfrak{P}_\mathbf{k}}$, otherwise.
By the above lemma, the restriction $f|_{p_i}$ induces the
\emph{$i$-th evaluation morphism of the underlying structures}
\begin{equation}\label{equ:underlying-evaluation}
\ev_i\colon \underline{S} \to \ocI^{i}
\end{equation}
such that $p_i \to \underline{S}$ is given by the pull-back of the universal gerbe over $\ocI^{i}$. Thus, connected components of $\ocI\mathfrak{P}_\mathbf{k} \cup \ocI\infty_{\mathfrak{P}_\mathbf{k}}$ provide discrete data for log $R$-maps. Note that $\ocI\mathfrak{P}_\mathbf{k} \cup \ocI\infty_{\mathfrak{P}_\mathbf{k}}$ is smooth provided that $\mathfrak{P} \to \mathbf{BC}^*_\omega$, and hence $\mathfrak{P}_\mathbf{k}$, is smooth.
\begin{definition}\label{def:hyb-sector}
A \emph{log sector} $\gamma$ is a connected component of
either $\ocI\mathfrak{P}_\mathbf{k}$ or $\ocI\infty_{\mathfrak{P}_\mathbf{k}}$.
It is \emph{narrow} if the gerbes parameterized by $\gamma$ all avoid $\infty_{\mathfrak{P}_\mathbf{k}}$.
A {\em sector of compact type} is a connected component of $\ocI\mathbf{0}_{\mathfrak{P}_\mathbf{k}}$. In particular all narrow sectors are of compact type.
\end{definition}
Due to the fiberwise $\mathbb C^*_\omega$-action of $\mathfrak{P} \to \mathfrak{X}$, it is easy to
see that a sector is narrow iff it parameterizes gerbes in
$\mathbf{0}_{\mathfrak{P}_\mathbf{k}}$.
Thus, the above definition is compatible with
\cite[Definition~4.1.3]{FJR18}.
Furthermore, since $\mathbf{0}_{\mathfrak{P}_\mathbf{k}}$ and $\infty_{\mathfrak{P}_\mathbf{k}}$ are
disjoint, the compact-type condition forces the contact order to be
trivial.
\subsection{The stack of logarithmic $R$-maps}\label{ss:hybrid-stack}
The discrete data of a log $R$-map $f\colon\cC \to \mathfrak{P}$
consists of the genus $g$, and the curve class $\beta \in H_2(\cX)$ of
$\ft \circ f$.
Furthermore, each marking has {\em discrete data} given by its contact
order $c$ and the log sector $\gamma$.
Let $\vec{\varsigma} = \{(\gamma_i, c_i)\}_{i=1}^n$ be the collection of discrete data at all markings where $n$ is the number of markings.
Denote by $\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ the stack of stable log $R$-maps
over the category of logarithmic schemes with discrete data $g$,
$\beta$, and $\vec{\varsigma}$.
Let $\UH_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$
be the category of objects with uniform maximal
degeneracy.
There is a tautological morphism \cite[Theorem~3.14]{CJRS18P}
\begin{equation}\label{equ:forget-uniform-degeneracy}
\UH_{g, \vec{\varsigma}}(\mathfrak{P}, \beta) \to \mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)
\end{equation}
which is representable, proper, log \'etale, and surjective. Furthermore, \eqref{equ:forget-uniform-degeneracy} restricts to the identity over the open substack parameterizing log $R$-maps with images in $\mathfrak{P}^{\circ}$.
\begin{theorem}\label{thm:representability}
The categories $\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ and $\UH_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ are represented by proper log Deligne--Mumford stacks.
\end{theorem}
\begin{proof}
Since \eqref{equ:forget-uniform-degeneracy} is representable and proper, it suffices to verify the statement for
$\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$, which will be done in Section \ref{sec:properties}. A key to the representability is the fact discovered in \cite{AbCh14, Ch14,GrSi13} that the underlying stack $\underline{\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)}$ is the stack of minimal objects in Definition \ref{def:minimal}, and $\underline{\UH_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)}$ is the stack of minimal objects in Definition \ref{def:umd-minimal}, see also \cite[Theorem 2.11]{CJRS18P}.
\end{proof}
\subsection{Change of twists}\label{ss:change-twists}
This section studies $R$-maps under the change of twists in preparation
for the proof of the change of twist theorem
(Theorem~\ref{thm:red-ind-twists}).
The reader may skip this section on first reading, and return when
studying the proof of Theorem~\ref{thm:red-ind-twists}.
Consider two twisting choices
$a_1, a_2 \in \frac{1}{d}\cdot \ZZ_{>0}$ such that
$\frac{a_1}{a_2} \in \ZZ$.
Let $\mathfrak{P}_1$ and $\mathfrak{P}_2$ be the hybrid targets corresponding to the
choices of $a_1$ and $a_2$ respectively as in
\eqref{equ:universal-proj}.
Then there is a cartesian diagram of log stacks
\begin{equation}\label{diag:targets-twists}
\xymatrix{
\mathfrak{P}_1 \ar[rr] \ar[d] && \mathfrak{P}_2 \ar[d] \\
\cA_1 \ar[rr]^{\nu} && \cA_2
}
\end{equation}
where $\cA_{1}$ and $\cA_2$ are two copies of $\cA$, the vertical arrows are given by \eqref{equ:universal-log-target}, and $\nu$ is the morphism induced by $\NN \to \NN, \ 1 \mapsto \frac{a_1}{a_2}$ on the level of characteristic monoids. Note that the top arrow is the $\frac{a_1}{a_2}$-th root stack of $\mathfrak{P}_2$ along $\infty_{\mathfrak{P}_2}$, and is compatible with the arrows to $\mathbf{BC}^*_\omega$.
\begin{proposition}\label{prop:change-twists}
Let $f'\colon \cC' \to \mathfrak{P}_1$ be a stable log $R$-map over $S$. Then the composition $\cC' \to \mathfrak{P}_1 \to \mathfrak{P}_2$ factors through a stable log $R$-map $f\colon \cC \to \mathfrak{P}_2$ over $S$ such that
\begin{enumerate}
\item The morphism $\cC' \to \cC$ induces an isomorphism of their coarse curves, denoted by $C$.
\item The underlying coarse morphisms of $\cC' \to C\times_{\mathbf{BC}^*_\omega}\mathfrak{P}_1$ and $\cC \to C\times_{\mathbf{BC}^*_\omega}\mathfrak{P}_2$ are isomorphic.
\item If $f'$ has uniform maximal degeneracy, so does $f$.
\end{enumerate}
Furthermore, this factorization is unique up to a unique isomorphism.
\end{proposition}
\begin{proof}
Consider the stable log map
$\cC' \to \mathfrak{P}_{1,C} := \mathfrak{P}_1\times_{\mathbf{BC}^*_\omega}C$ induced by $f'$.
By \cite[Proposition~9.1.1]{AbVi02}, the underlying map of the
composition $\cC' \to \mathfrak{P}_{1,C} \to \mathfrak{P}_{2,C} := \mathfrak{P}_2\times_{\mathbf{BC}^*_\omega}C$
factors through a stable map $\underline{\cC} \to \underline{\mathfrak{P}_{2,C}}$ which
yields an induced underlying $R$-map $\underline{f}\colon \underline{\cC} \to \mathfrak{P}_{2}$.
We first construct the log curve $\cC \to S$. Let $\cC^{\sharp} \to S^{\sharp}$ be the canonical log structure associated to the underlying curve. Since $\mathfrak{P}_{1,C} \to \mathfrak{P}_{2,C}$ is quasi-finite, the morphism $\underline{\cC'} \to \underline{\cC}$ induces an isomorphism of coarse curves. Thus we obtain a log morphism $\cC' \to \cC^{\sharp}$ over $S \to S^{\sharp}$. This yields the log curve $\cC := S\times_{S^{\sharp}}\cC^{\sharp} \to S$.
Next we show that $f'$ descends to a log map $f\colon \cC \to \mathfrak{P}_2$.
Since the underlying structure $\underline{f}$ has already been
constructed, by \eqref{diag:targets-twists} it suffices to show that
the morphism $h'\colon \cC' \to \cA_1$ induced by $f'$ descends to
$h\colon \cC \to \cA_2$ with $\underline{h}$ induced by $\underline{f}$.
Since $\cA$ is an Artin cone, it suffices to check on the level of
characteristic sheaves over the log \'etale cover $\cC' \to \cC$,
i.e.\ we need to construct the dashed arrow making the following
diagram commutative
\[
\xymatrix{
\ocM_{\cA_2}|_{\cC'} \ar@{-->}[r]^{\bar{h}^{\flat}} \ar@{^{(}->}[d] & \ocM_{\cC}|_{\cC'} \ar@{^{(}->}[d] \\
\ocM_{\cA_1} \ar[r]^{(\bar{h}')^{\flat}} & \ocM_{\cC'}.
}
\]
Thus, it suffices to consider the case where $\underline{S}$ is a geometric point.
Note that both vertical arrows are injective, hence we may view the monoids on the top as the submonoids of the bottom ones.
Let $\delta_1$ and $\delta_2$ be local generators of $\cM_{\cA_1}$ and $\cM_{\cA_2}|_{\cC'}$ respectively. Denote by $\bar{\delta}_1 \in \ocM_{\cA_1}$ and $\bar{\delta}_2 \in \ocM_{\cA_2}|_{\cC'}$ the corresponding elements. Since $\bar{\delta}_2 \mapsto \frac{a_1}{a_2}\cdot \bar{\delta}_1$, it suffices to show that $m := (\bar{h}')^{\flat}(\frac{a_1}{a_2}\cdot \bar{\delta}_1) \in \ocM_{\cC}|_{\cC'}$.
Indeed, the morphism
$\underline{\cC'} \to \underline{\cA_1}\times_{\underline{\cA_2}}\underline{\cC}$ lifting the
identity of $\underline{\cC}$ is representable.
Hence along any marking, the morphism $\underline{\cC'} \to \underline{\cC}$ is a
$\rho$-th root stack with $\rho | \frac{a_1}{a_2}$.
Along each node, the morphism $\underline{\cC'} \to \underline{\cC}$ is a
$\rho$-th root stack with $\rho | \frac{a_1}{a_2}$ on
each component of the node.
By the definition of log curves, we have
$\frac{a_1}{a_2}\cdot \ocM_{\cC'} \subset
\ocM_{\cC}|_{\cC'}$.
This proves $m \in \ocM_{\cC}|_{\cC'}$ as needed for constructing
$h$ hence $f$.
Finally, consider any component $Z \subset \cC$ and the unique component $Z' \subset \cC'$ dominating $Z$. Then we have $e_{Z} = \frac{a_1}{a_2}\cdot e_{Z'}$, where $e_Z, e_{Z'} \in \ocM_S$ are the degeneracies of $Z$ and $Z'$ respectively. Therefore (3) holds, since if $Z'$ is maximally degenerate, so is $Z$.
\end{proof}
Consider log R-maps $f'$ and $f$ as in Proposition
\ref{prop:change-twists}.
Let $\vec{\varsigma}' = \{(\gamma'_i, c'_i)\}_{i=1}^n$ (resp.\
$\vec{\varsigma} = \{(\gamma_i, c_i)\}_{i=1}^n$) be the discrete data of $f'$
(resp.\ $f$) along markings.
Observe that $(\gamma_i, c_i)$ is uniquely determined by
$(\gamma'_i, c'_i)$ as follows.
First, since $\mathfrak{P}_{1,\mathbf{k}} \to \mathfrak{P}_{2,\mathbf{k}}$ is the $\frac{a_1}{a_2}$-th
root stack along $\infty_{\mathfrak{P}_{2,\mathbf{k}}}$, the sector $\gamma_i$ is uniquely
determined by $\gamma'_i$ \cite[Section~1.1.10]{AbFa16}.
Then the morphism $\cC' \to \cC$ is an $\varrho_i$-th root stack along
the $i$-th marking for some $\varrho_i | \frac{a_1}{a_2}$ uniquely
determined by the natural morphism $\gamma'_i \to \gamma_i$
\cite[Lemma~1.1.11]{AbFa16}.
The contact orders $c_i$ and $c_i'$ are then related by
\begin{equation}\label{equ:changing-contact-order}
c_i = \frac{a_1}{a_2}\cdot \frac{c'_i}{\varrho_i}.
\end{equation}
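For instance, with purely hypothetical numerical values of the discrete data: if $\frac{a_1}{a_2} = 6$, $c'_i = 2$ and $\varrho_i = 3$, then \eqref{equ:changing-contact-order} gives
\[
c_i = \frac{a_1}{a_2}\cdot \frac{c'_i}{\varrho_i} = 6\cdot\frac{2}{3} = 4.
\]
This is consistent with Lemma \ref{lem:lift-root} below, which for $c_i = 4$ yields $\gcd(c_i, a_1/a_2) = 2$, hence $\varrho_i = \frac{6}{2} = 3$ and $c'_i = \frac{4}{2} = 2$.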
\begin{corollary}\label{cor:changing-twists}
There are canonical morphisms
\[
\mathscr{R}_{g,\vec{\varsigma}'}(\mathfrak{P}_1, \beta) \to \mathscr{R}_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta) \ \ \ \mbox{and} \ \ \ \UH_{g,\vec{\varsigma}'}(\mathfrak{P}_1, \beta) \to \UH_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta).
\]
For convenience, we denote both morphisms by $\nu_{a_1/a_2}$ when there is no danger of confusion.
\end{corollary}
\section{A tale of two virtual cycles}
\label{sec:tale}
This section forms the heart of this paper.
We first introduce the canonical perfect obstruction theory and
virtual cycle in Section~\ref{ss:canonical-obs}, and prove a change of
twist theorem in this setting in
Section~\ref{ss:can-theory-ind-twist}.
We then introduce the compact type locus, its canonical virtual cycle
(Section~\ref{ss:compact-type}) and the superpotentials
(Section~\ref{ss:superpotential}), in preparation for defining the
canonical cosection in Section~\ref{ss:can-cosection}.
This allows us to construct the reduced theory in
Section~\ref{ss:reduced}.
We then prove the comparison theorems in
Sections~\ref{ss:comparison-1}--\ref{ss:red-theory-ind-twist}.
The first-time reader may skip Sections~\ref{ss:can-theory-ind-twist}
and \ref{ss:red-theory-ind-twist}, which concern the change of twists.
In addition, under the further simplification that the set $\Sigma$ of markings is
empty, the reader may skip Sections~\ref{ss:compact-type},
\ref{ss:modify-target} and \ref{sss:superpotential-modified-target}.
In this situation, the sup- or subscripts, ``$\mathrm{cpt}$'', ``$\mathrm{reg}$'' and
``$-$'' may be dropped.
\subsection{The canonical theory}\label{ss:canonical-obs}
For the purposes of perfect obstruction theory and virtual fundamental
classes, we impose in this section:
\begin{assumption}\label{assu:smooth-target}
$\cX$ is smooth.
\end{assumption}
This assumption implies that $\mathfrak{P} \to \mathbf{BC}^*_\omega$ is log smooth with smooth underlying morphism.
To simplify notation, we introduce
\[
\UH := \UH_{g, \vec{\varsigma}}(\mathfrak{P}, \beta), \ \ \ \ \ \ \mathscr{R} := \mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta),
\]
for stacks of log R-maps as in Section \ref{ss:hybrid-stack}.
We also introduce
\[
\fU:= \fU_{g,\vec{c}}(\cA), \ \ \ \ \ \ \fM := \fM_{g,\vec{c}}(\cA)
\]
where $\fM_{g,\vec{c}}(\cA)$ (resp.\ $\fU_{g,\vec{c}}(\cA)$) is the stack parameterizing log
maps (resp.\ with uniform maximal degeneracy) to $\cA$ of genus $g$
and with contact orders $\vec{c}$ induced by $\vec{\varsigma}$.
These stacks fit in a cartesian diagram
\[
\xymatrix{
\UH \ar[rr]^{F} \ar[d] && \mathscr{R} \ar[d] \\
\fU \ar[rr] && \fM
}
\]
where the vertical arrows are canonical strict morphisms by Section \ref{sss:universal-log-target}, the bottom is given by \cite[Theorem 3.14]{CJRS18P}, and the top is \eqref{equ:forget-uniform-degeneracy}.
Let $\bullet$ be one of the stacks $\UH, \mathscr{R}, \fU$ or $\fM$, and $\pi_{\bullet}\colon \cC_{\bullet} \to \bullet$ be the universal curve. Denote by $\cP_{\bullet} := \cC_{\bullet}\times_{\mathbf{BC}^*_\omega}\mathfrak{P}$ where $\cC_{\bullet} \to \mathbf{BC}^*_\omega$ is induced by $\omega^{\log}_{\cC_{\bullet}/\bullet}$. Let $f_{\bullet}\colon \cC_{\bullet} \to \cP_{\bullet}$ be the section induced by the universal log $R$-map for $\bullet = \UH$ or $\mathscr{R}$.
Consider the commutative diagram
\[
\xymatrix{
\cC_{\mathscr{R}} \ar@/^1pc/[rrd]^{=} \ar[rd]^{f_{\mathscr{R}}} \ar@/_1pc/[ddr]_{f}&&& \\
& \cP_{\mathscr{R}} \ar[r] \ar[d] & \cC_{\mathscr{R}} \ar[r]^{\pi_{\mathscr{R}}} \ar[d] & \mathscr{R} \ar[d] \\
& \cP_{\fM} \ar[r] & \cC_{\fM} \ar[r]_{\pi_{\fM}} & \fM
}
\]
where the three vertical arrows are strict, and the two squares are
Cartesian.
We use $\LL$ to denote the log cotangent complexes in the sense of
Olsson \cite{LogCot}.
The lower and upper triangles yield
\[
\LL_{f_{\mathscr{R}}} \to f^*_{\mathscr{R}}\LL_{\cP_{\mathscr{R}}/\cP_{\fM}}[1] \cong \pi^*_{\mathscr{R}}\LL_{\mathscr{R}/\fM}[1] \qquad \mbox{and} \qquad \LL_{f_{\mathscr{R}}} \cong f^*_{\mathscr{R}}\LL_{\cP_{\mathscr{R}}/\cC_{\mathscr{R}}}[1],
\]
respectively.
Hence we obtain
\[f^*_{\mathscr{R}}\LL_{\cP_{\mathscr{R}}/\cC_{\mathscr{R}}} \to \pi^*_{\mathscr{R}}\LL_{\mathscr{R}/\fM}.\]
Tensoring both sides by the dualizing complex
$\omega_{\pi_{\mathscr{R}}}^\bullet = \omega_{\cC_{\mathscr{R}}/\mathscr{R}}[1]$ and applying
$\pi_{\mathscr{R},*}$, we obtain
\[
\pi_{\mathscr{R},*}\big(f^*_{\mathscr{R}}\LL_{\cP_{\mathscr{R}}/\cC_{\mathscr{R}}}\otimes\omega_{\pi_{\mathscr{R}}}^\bullet\big) \to \pi_{\mathscr{R},*}\pi^{!}_{\mathscr{R}} \LL_{\mathscr{R}/\fM} \to \LL_{\mathscr{R}/\fM}
\]
where the last arrow follows from the fact that $\pi_{\mathscr{R},*}$ is left
adjoint to $\pi^{!}(-) := \omega_{\pi_{\mathscr{R}}}^\bullet\otimes\pi^*(-)$.
Further observe that
$\LL_{\cP_{\mathscr{R}}/\cC_{\mathscr{R}}} = \Omega_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cP_\mathscr{R}}$ is the log
cotangent bundle. Hence, we obtain
\begin{equation}\label{equ:log-POS-R}
\varphi^{\vee}_{\mathscr{R}/\fM}\colon \EE^{\vee}_{\mathscr{R}/\fM} := \pi_{\mathscr{R},*}\big(f^*_{\mathscr{R}}\Omega_{\mathfrak{P}/\mathbf{BC}^*_\omega}\otimes\omega_{\pi_{\mathscr{R}}}^\bullet\big) \to \LL_{\mathscr{R}/\fM}.
\end{equation}
The same proof as in \cite[Proposition 5.1]{CJRS18P} shows that
$\varphi^{\vee}_{\mathscr{R}/\fM}$ is a perfect obstruction theory of
$\mathscr{R} \to \fM$ in the sense of \cite{BeFa97}.
Recall that $\fM$ is log smooth and equidimensional
\cite[Proposition~2.13]{CJRS18P}.
Denote by $[\mathscr{R}]^{\mathrm{vir}}$ the virtual cycle given by the virtual
pull-back of the fundamental class $[\fM]$ using
$\varphi^{\vee}_{\mathscr{R}/\fM}$.
Pulling back $\varphi^{\vee}_{\mathscr{R}/\fM}$ along $\UH \to \mathscr{R}$, we obtain a perfect obstruction theory of $\UH \to \fU$:
\begin{equation}\label{equ:log-POS}
\varphi^{\vee}_{\UH/\fU}\colon \EE^{\vee}_{\UH/\fU} := \pi_{\UH,*}\big(f^*_{\UH}\Omega_{\mathfrak{P}/\mathbf{BC}^*_\omega}\otimes\omega_{\pi_{\UH}}^\bullet\big) \to \LL_{\UH/\fU}.
\end{equation}
A standard calculation shows that $\EE_{\UH/\fU} = \pi_{\UH,*}\big(f^*_{\UH}T_{\mathfrak{P}/\mathbf{BC}^*_\omega}\big)$.
Let $[\UH]^{\mathrm{vir}}$ be the corresponding virtual cycle.
Since $\fU \to \fM$ is proper and birational
\cite[Theorem~3.17]{CJRS18P}, the virtual push-forward property of
\cite{Co06, Ma12} yields
\begin{equation}\label{equ:push-log-virtual-cycle}
F_*[\UH]^{\mathrm{vir}} = [\mathscr{R}]^{\mathrm{vir}}.
\end{equation}
\subsection{Independence of twists I: the case of the canonical theory}\label{ss:can-theory-ind-twist}
In this section, using the results from
Section~\ref{ss:change-twists}, we study the behavior of the canonical
virtual cycle under the change of twists.
\begin{proposition}\label{prop:can-ind-twists}
Given the situation as in Corollary \ref{cor:changing-twists}, we
have the following push-forward properties of virtual cycles
\begin{enumerate}
\item $\nu_{a_1/a_2,*}[\mathscr{R}_{g,\vec{\varsigma}'}(\mathfrak{P}_1, \beta)]^\mathrm{vir} = [\mathscr{R}_{g,\vec{\varsigma}}(\mathfrak{P}_2, \beta)]^\mathrm{vir}$,
\item $\nu_{a_1/a_2,*}[\UH_{g,\vec{\varsigma}'}(\mathfrak{P}_1, \beta)]^\mathrm{vir} = [\UH_{g,\vec{\varsigma}}(\mathfrak{P}_2, \beta)]^\mathrm{vir}$.
\end{enumerate}
\end{proposition}
\begin{proof}
We will only consider (1). Statement (2) can be proved identically
by considering only log maps with uniform maximal degeneracy, thanks
to Proposition \ref{prop:change-twists} (3).
Since $\mathfrak{P}_1 \to \mathfrak{P}_2$ is a log \'etale birational modification, (1) follows from a proof similar to that of \cite{AbWi18}, in a simpler situation, except that we need to take orbifold structures into account. In what follows, we will only specify the differences, and refer to \cite{AbWi18} for complete details.
First, consider the stack $\fM' := \fM_{g,\vec{c}'}'(\cA_1 \to \cA_2)$, the analogue of the one in \cite[Proposition 1.6.2]{AbWi18}, parameterizing commutative diagrams
\begin{equation}\label{eq:middle-stack}
\xymatrix{
\cC' \ar[r] \ar[d] & \cA_1 \ar[d] \\
\cC \ar[r] & \cA_2
}
\end{equation}
where $\cC' \to \cC$ is a morphism of log curves over $S$ inducing an
isomorphism of underlying coarse curves, the top and bottom are log maps with discrete data along markings given by $\vec{c}' = \{(r'_i, c'_i)\}$ (see Lemma \ref{lem:lift-root}) and $\vec{c} = \{(r_i, c_i)\}$ respectively, and the induced morphism
$\cC' \to \cC\times_{\cA_2}\cA_1$ is representable, hence is stable.
We first show that $\fM'$ is algebraic.
Indeed, let $\fM_{g,\vec{c}}(\cA)$ be the stack of genus $g$ log maps
to $\cA$ with discrete data $\vec{c}$ along markings.
Let $\fM_1$ be the stack parameterizing sequences
$\cC' \to \cC \to \cA_2$ where $\cC' \to \cC$ is a morphism of genus
$g$, $n$-marked log curves over $S$ with isomorphic underlying coarse
curves, which is a $\varrho_i$-th root stack along the $i$-th marking for each
$i$, see \eqref{equ:changing-contact-order}.
$\fM_1$ is algebraic as the morphism $\fM_1 \to \fM_{g,\vec{c}}(\cA_2)$ defined by
\begin{equation}\label{eq:construct-mid-stack}
[\cC' \to \cC \to \cA_2] \mapsto [\cC \to \cA_2]
\end{equation}
is algebraic and $\fM_{g,\vec{c}}(\cA_2) = \fM_{g,\vec{c}}(\cA)$ is
algebraic.
Now $\fM'$ is given by the open substack of
$\fM_1\times_{\fM_{g,\vec{c}''}(\cA_2)}\fM_{g,\vec{c}'}(\cA_1)$ where
the representability of $\cC' \to \cC\times_{\cA_2}\cA_1$ holds.
Here
$\fM_{g,\vec{c}'}(\cA_1) = \fM_{g, \vec{c}'}(\cA) \to
\fM_{g,\vec{c}''}(\cA_2)$ is given by composing log maps to $\cA_1$
with $\cA_1 \to \cA_2$ hence
$\vec{c}'' = \{(r'_i, \frac{a_1}{a_2}\cdot c'_i)\}$, and
$\fM_1 \to \fM_{g,\vec{c}''}(\cA_2)$ is given by
$[\cC' \to \cC \to \cA_2] \mapsto [\cC' \to \cA_2]$.
Next, by Proposition~\ref{prop:change-twists} and
\eqref{diag:targets-twists}, we obtain the commutative diagram
\[
\xymatrix{
\mathscr{R}_{g,\vec{\varsigma}'}(\mathfrak{P}_1, \beta) \ar[rr] \ar[d]^{G'_1} \ar@/_2pc/[dd]_{G_1}&& \mathscr{R}_{g,\vec{\varsigma}}(\mathfrak{P}_2, \beta) \ar[d]^{G_2} \\
\fM' \ar[rr]_{F_2} \ar[d]^{F_1} && \fM_{g,\vec{c}}(\cA_2) \\
\fM_{g,\vec{c}'}(\cA_1) &&
}
\]
where we define a morphism $F_1 \colon \eqref{eq:middle-stack} \mapsto [\cC' \to \cA_1]$ and a proper morphism $F_2 \colon \eqref{eq:middle-stack} \mapsto [\cC \to \cA_2]$, and the square is cartesian.
Since the horizontal arrows in \eqref{diag:targets-twists} are logarithmic modifications in the sense of \cite{AbWi18}, the same proof as in \cite[Lemma 4.1, Section 5.1]{AbWi18} shows that $\fM' \to \fM_{g,\vec{c}'}(\cA_1)$ is strict and \'etale. Using the identical method as in \cite[Section 6.2]{AbWi18}, one constructs a perfect obstruction theory of $G'_1$ which is identical to that of $G_1$ as in \eqref{equ:log-POS-R}.
Furthermore Lemma \ref{lem:lift-root} and the same proof as in \cite[Proposition 5.2.1]{AbWi18} imply that $\fM' \to \fM_{g,\vec{c}'}(\cA_1)$ and $\fM' \to \fM_{g,\vec{c}}(\cA_2)$ are both birational. Finally, following the same lines of proof as in \cite[Section 6]{AbWi18} and using Costello's virtual push-forward \cite[Theorem 5.0.1]{Co06}, we obtain (1).
\end{proof}
Since log maps to $\cA$ are unobstructed \cite[Proposition 1.6.1]{AbWi18} (see also \cite[Proposition 2.13]{CJRS18P}), the discrete data along markings can be determined by studying the following non-degenerate situation.
\begin{lemma}\label{lem:lift-root}
Let $f \colon \cC \to \cA_2$ be a log map with discrete data
$(r_i, c_i)$ at the $i$-th marking.
Assume that no component of $\cC$ has image entirely contained in
$\infty_{\cA_2}$.
Let $\cC' \to \cC$ be obtained by taking the $\varrho_i$-th root
along the $i$-th marking for each $i$. Then
\begin{enumerate}
\item $f \colon \cC \to \cA_2$ lifts to $f' \colon \cC' \to \cA_1$ if and only if $\frac{a_1}{a_2} | c_i \cdot \varrho_i$. In this case, the lift is unique up to a unique isomorphism.
\item Furthermore, the induced $\cC' \to \cC\times_{\cA_2}\cA_1$ by
$f'$ is representable if and only if
$\varrho_i = \frac{a_1/a_2}{\gcd(c_i,a_1/a_2)}$.
In this case, let $(r'_i, c'_i)$ be the discrete data of $f'$ at
the $i$-th marking.
Then for each $i$ we have
\[
r'_i = \varrho_i \cdot r_i \ \ \ \mbox{and} \ \ \ c'_i = \frac{c_i}{\gcd(c_i,a_1/a_2)}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Finding a lift $f'$ amounts to finding
$\cC' \to \cC\times_{\cA_2}\cA_1$ that lifts the identity
$\cC \to \cC$.
Thus, both (1) and (2) follow from \cite[Lemma~1.3.1]{AbFa16} and
\cite[Theorem~3.3.6]{Ca07}.
\end{proof}
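\begin{remark}
To illustrate the lemma with hypothetical numbers, take $\frac{a_1}{a_2} = 4$ and $c_i = 6$. By (1), a lift exists for the $\varrho_i$-th root stack if and only if $4 \mid 6\varrho_i$, so $\varrho_i = 1$ fails while $\varrho_i = 2$ works. By (2), representability forces precisely
\[
\varrho_i = \frac{4}{\gcd(6,4)} = 2, \qquad c'_i = \frac{6}{\gcd(6,4)} = 3, \qquad r'_i = 2\, r_i,
\]
in agreement with \eqref{equ:changing-contact-order}: $c_i = 4\cdot\frac{3}{2} = 6$.
\end{remark}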
\subsection{The compact type locus and its canonical virtual cycle}\label{ss:compact-type}
We next introduce the closed substack over which the reduced theory will be constructed.
\subsubsection{The logarithmic evaluation stacks}\label{sss:log-ev-stack}
Let $\fY = \fM$ (resp.\ $\fU$), and $\scrY = \mathscr{R}$ (resp.\ $\UH$) with the strict
canonical morphism $\scrY \to \fY$.
The \emph{$i$-th evaluation stack} $\fY^{\ev}_i$ associates to any
$\fY$-log scheme $S$ the category of commutative diagrams:
\begin{equation}\label{diag:log-evaluation}
\xymatrix{
p_i \ar[rr] \ar[d] && \mathfrak{P}_{\mathbf{k}} \ar[d] \\
\cC \ar[rr]^{\omega^{\log}_{\cC/S}} &&\cA
}
\end{equation}
where $\cC \to \cA$ is the log map over $S$ given by $S \to \fY$,
$p_i \subset \cC$ is the $i$-th marking with the pull-back log
structure from $\cC$, and the top horizontal arrow is representable.
There is a canonical strict morphism $ \fY^{\ev}_i \to \fY $
forgetting the top arrow in \eqref{diag:log-evaluation}.
By Lemma~\ref{lem:underlying-evaluation}, the morphism $\scrY \to \fY$
factors through the \emph{$i$-th evaluation morphism}
\[
\fev_i \colon \scrY \to \fY^{\ev}_i.
\]
For the reduced theory, we introduce substacks
\[
\fY^{\mathrm{cpt}}_i \subset \fY^{\mathring{\ev}}_i \subset \fY^{\ev}_i,
\]
where $\fY^{\mathring{\ev}}_i$ parameterizes diagrams \eqref{diag:log-evaluation} whose image at the $i$-th marking avoids $\infty_{\cA}$ (or equivalently $\infty_{\mathfrak{P}}$), and $\fY^{\mathrm{cpt}}_i$ parameterizes diagrams \eqref{diag:log-evaluation} whose image at the $i$-th marking is contained in $\mathbf{0}_{\mathfrak{P}}$. Recall that $(\gamma_i, c_i)$ are the sector and contact order at the $i$-th marking, see Section \ref{ss:hybrid-stack}.
\begin{proposition}\label{prop:compact-evaluation}
Both $\fY^{\mathrm{cpt}}_i$ and $\fY^{\mathring{\ev}}_i$ are log algebraic stacks. Furthermore, we have
\begin{enumerate}
\item If $c_i > 0$, then $\fY^{\mathrm{cpt}}_i = \fY^{\mathring{\ev}}_i = \emptyset$.
\item The strict morphisms $\fY^{\mathrm{cpt}}_{i} \to \fY$ and $\fY^{\mathring{\ev}}_{i} \to \fY$ are smooth.
\end{enumerate}
\end{proposition}
\begin{remark}
$\fY^{\ev}_i$ is also algebraic, but we do not need this fact here.
\end{remark}
\begin{proof}
(1) follows from the definition of $\fY^{\mathrm{cpt}}_i$ and $\fY^{\mathring{\ev}}_i$. We now assume $c_i = 0$.
Let $\fY_{i,0} \subset \fY$ be the open dense sub-stack over which the
image of $p_i$ avoids $\infty_{\cA}$.
Let $\ocI \mathfrak{P}_\mathbf{k}^\circ \subset \ocI \mathfrak{P}_\mathbf{k}$ be the open substack
parameterizing gerbes avoiding $\infty_{\mathfrak{P}_{\mathbf{k}}}$.
By \eqref{diag:log-evaluation}, it follows that
\begin{equation}\label{equ:open-type-loci}
\fY^{\mathring{\ev}}_{i} = \fY_{i,0}\times \ocI \mathfrak{P}_\mathbf{k}^\circ
\end{equation}
hence $\fY^{\mathring{\ev}}_{i}$ is algebraic. Similarly $\fY^{\mathrm{cpt}}_i$ is a closed substack of $\fY^{\mathring{\ev}}_{i}$ given by
\begin{equation}\label{equ:compact-type-loci}
\fY^{\mathrm{cpt}}_{i} = \fY_{i,0}\times \ocI \mathbf{0}_{\mathfrak{P}_\mathbf{k}},
\end{equation}
hence is also algebraic.
(2) follows from the smoothness of $\ocI \mathbf{0}_{\mathfrak{P}_\mathbf{k}}$ and
$\ocI \mathfrak{P}_\mathbf{k}^\circ$.
\end{proof}
Consider the following fiber products both taken over $\fY$:
\begin{equation}\label{equ:compact-type-universal-stack}
\fY^{\mathrm{cpt}} := \prod_i \fY^{\mathrm{cpt}}_{i} \ \ \ \mbox{and} \ \ \ \fY^{\mathring{\ev}} := \prod_i \fY^{\mathring{\ev}}_i,
\end{equation}
where $i$ runs through all markings with contact order zero.
Consider fiber products
\begin{equation}\label{equ:compact-type-stack}
\scrY^\mathrm{cpt} := \scrY \times_{\fY^{\ev}}\fY^{\mathrm{cpt}} \ \ \ \mbox{and} \ \ \ \scrY^{\mathring{\ev}}:= \scrY \times_{\fY^{\ev}}\fY^{\mathring{\ev}}.
\end{equation}
Then $\scrY^{\mathring{\ev}} \subset \scrY$ (resp.\
$\scrY^\mathrm{cpt} \subset \scrY$) is the open (resp.\ closed) sub-stack
parameterizing stable log R-maps whose images at markings with zero contact order avoid
$\infty_{\mathfrak{P}}$ (resp.\ lie in $\mathbf{0}_{\mathfrak{P}}$).
\subsubsection{The canonical perfect obstruction theory of $\scrY^\mathrm{cpt} \to \fY^{\mathrm{cpt}}$}
Consider the universal map and projection over $\fY^{\mathring{\ev}}$ respectively:
\[
\fev \colon \cup_i p_i \to \mathfrak{P} \ \ \ \mbox{and} \ \ \ \pi_{\ev}\colon \cup_{i}p_i \to \fY^{\mathring{\ev}}.
\]
By \eqref{equ:open-type-loci}, \eqref{equ:compact-type-universal-stack} and \cite[Lemma 3.6.1]{AGV08}, we have an isomorphism of vector bundles
\[
\varphi^{\vee}_{\fY^{\mathring{\ev}}/\fY}\colon \EE^{\vee}_{\fY^{\mathring{\ev}}/\fY} := (\pi_{\ev,*} \fev^*T_{\mathfrak{P}/\mathbf{BC}^*_\omega})^{\vee} \stackrel{\cong}{\longrightarrow} \LL_{\fY^{\mathring{\ev}}/\fY}.
\]
The perfect obstruction theory \eqref{equ:log-POS} restricts to a relative perfect obstruction theory
\[
\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY}\colon \EE^{\vee}_{\scrY^{\mathring{\ev}}/\fY} := \EE^{\vee}_{\mathscr{R}/\fM}|_{\scrY^{\mathring{\ev}}} \to \LL_{\scrY^{\mathring{\ev}}/\fY}.
\]
A standard construction as in \cite[A.2]{BrLe00} or \cite[Proposition 4.4, Lemma 4.5]{ACGS20P} yields a morphism of triangles
\begin{equation}\label{diag:compatible-obs}
\xymatrix{
\ev^*\EE^{\vee}_{\fY^{\mathring{\ev}}/\fY} \ar[r] \ar[d]_{\varphi^{\vee}_{\fY^{\mathring{\ev}}/\fY}} & \EE^{\vee}
_{\scrY^{\mathring{\ev}}/\fY} \ar[r] \ar[d]_{\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY}} & \EE^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}} \ar[d]_{\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}} \ar[r]^-{[1]} & \\
\ev^*\LL_{\fY^{\mathring{\ev}}/\fY} \ar[r] & \LL_{\scrY^{\mathring{\ev}}/\fY} \ar[r] & \LL_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}} \ar[r]^-{[1]} &
}
\end{equation}
where $\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}$ is a perfect obstruction theory of $\scrY^{\mathring{\ev}} \to \fY^{\mathring{\ev}}$ of the form
\[
\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}\colon \EE^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}} := \pi_{\scrY^{\mathring{\ev}},*} f_{\scrY^{\mathring{\ev}}}^*(T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(- \Sigma))^{\vee} \to \LL_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}.
\]
Here $\Sigma$ is the divisor given by the sum of all markings with zero contact order. Thus, the two perfect obstruction theories $\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY}$ and $\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}$ are compatible in the sense of \cite{BeFa97}.
Pulling back $\varphi^{\vee}_{\scrY^{\mathring{\ev}}/\fY^{\mathring{\ev}}}$ to $\scrY^{\mathrm{cpt}}$ we obtain the {\em canonical perfect obstruction theory} of $\scrY^{\mathrm{cpt}} \to \fY^{\mathrm{cpt}}$
\begin{equation}\label{equ:obs-compact-evaluation}
\varphi^{\vee}_{\scrY^{\mathrm{cpt}}/\fY^{\mathrm{cpt}}}\colon \EE^{\vee}_{\scrY^{\mathrm{cpt}}/\fY^{\mathrm{cpt}}} := \pi_{\scrY^{\mathrm{cpt}},*} f_{\scrY^{\mathrm{cpt}}}^*(T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(- \Sigma))^{\vee} \to \LL_{\scrY^{\mathrm{cpt}}/\fY^{\mathrm{cpt}}}.
\end{equation}
Denote by $[\scrY^{\mathrm{cpt}}]^{\mathrm{vir}}$ the {\em canonical virtual cycle} of $\scrY^{\mathrm{cpt}}$ defined via \eqref{equ:obs-compact-evaluation}.
\subsubsection{The compact type locus}\label{sss:compact-type-loci}
We call $\scrY^\mathrm{cpt} \subset \scrY$ the {\em compact type locus} if the contact orders of all markings are equal to zero.
For the purpose of constructing the reduced virtual cycles over the compact type locus, we will impose for the rest of this section that
\begin{assumption}\label{assu:zero-contacts}
All contact orders are equal to zero.
\end{assumption}
This assumption is needed in our construction to apply the cosection localization of Kiem-Li \cite{KiLi13}.
In this case $\vec{\varsigma}$ is the same as a collection of log sectors which
are further restricted to be sectors of compact type (see
Definition~\ref{def:hyb-sector}).
Note that if all sectors are narrow, then $\scrY^\mathrm{cpt} = \scrY$.
\subsection{The superpotentials}\label{ss:superpotential}
Our next goal is to construct the reduced virtual cycle $[\scrY^{\mathrm{cpt}}]^{\mathrm{red}}$ for $\scrY^{\mathrm{cpt}} = \UH^{\mathrm{cpt}}$. The superpotential is a key ingredient which we discuss now.
\subsubsection{The definition}
A \emph{superpotential} is a morphism of stacks
\[
W\colon \mathfrak{P}^{\circ} \to \mathcal L_\omega
\]
over $\mathbf{BC}^*_\omega$. Equivalently, $W$ is a section of the line bundle $\mathcal L_\omega|_{\mathfrak{P}^{\circ}}$ over $\mathfrak{P}^{\circ}$.
Pulling-back $W$ along the universal torsor $\spec\mathbf{k} \to \mathbf{BC}^*_\omega$, we obtain a $\mathbb C^*_\omega$-equivariant function
\[
W_\mathbf{k} \colon \mathfrak{P}^{\circ}_\mathbf{k} \to \mathbf{k}
\]
which recovers the information of $W$.
Denote by $\crit(W_{\mathbf{k}}) \subset \mathfrak{P}^{\circ}_{\mathbf{k}}$ the {\em critical locus} of the holomorphic function $W_\mathbf{k}$. It descends to a closed substack $\crit(W) \subset \mathfrak{P}^{\circ}$.
\begin{definition}
We call $\crit(W)$ the {\em critical locus} of the superpotential
$W$.
We say that $W$ has \emph{proper critical locus} if $\crit(W_{\mathbf{k}})$
is proper over $\mathbf{k}$, or equivalently $\crit(W)$ is proper over
$\mathbf{BC}^*_\omega$.
\end{definition}
Let $\mathfrak{X}_\mathbf{k} := \mathfrak{X}\times_{\mathbf{BC}^*_\omega}\spec\mathbf{k}$ where the left arrow is
given by $\zeta$ in \eqref{equ:hyb-target} and the right one is the
universal torsor.
Since $\mathfrak{P}^{\circ}_\mathbf{k}$ is a vector bundle over $\mathfrak{X}_\mathbf{k}$, the
critical locus of $W_\mathbf{k}$, if proper, is necessarily supported on
the fixed locus $\mathbf{0}_{\mathfrak{P}_\mathbf{k}} \subset \mathfrak{P}_\mathbf{k}$ of the
$\mathbb C^*_\omega$-action.
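\begin{remark}
As a toy illustration (a hypothetical special case, not the general setup of this paper): suppose $\mathfrak{P}^{\circ}_{\mathbf{k}} \to \mathfrak{X}_{\mathbf{k}}$ is a vector bundle with fiber coordinates $x_1, \dots, x_n$, and $W_{\mathbf{k}}$ restricts to the Fermat polynomial $\sum_{j} x_j^5$ on each fiber. Away from the zero section some $x_j \neq 0$, so the fiberwise derivative $5x_j^4 \neq 0$; along the zero section $W_{\mathbf{k}}$ vanishes to order $5$, so $\diff W_{\mathbf{k}} = 0$ there. Hence $\crit(W_{\mathbf{k}})$ is the zero section, which is proper over $\mathbf{k}$ precisely when $\mathfrak{X}_{\mathbf{k}}$ is.
\end{remark}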
\subsubsection{The extended superpotential}
To extend $W$ to $\mathfrak{P}$, we first observe the following:
\begin{lemma}\label{lem:pole-of-potential}
Suppose there exists a non-zero superpotential $W$. Then the order of poles of $W_{\mathbf{k}}$ along $\infty_{\mathfrak{P}_{\mathbf{k}}}$ is the positive integer $\tilde{r} = a \cdot r$.
\end{lemma}
\begin{proof}
The existence of a non-zero $W$ implies that there is a sequence of non-negative integers $k_i$ such that
\[
r = \sum_i k_i \cdot i,
\]
where $i$ runs through the gradings of the non-trivial $\mathbf{E}_i$. The integrality of $\tilde{r}$ follows from the choice of $a$, and the order of poles of $W_{\mathbf{k}}$ follows from the choice of weights $\mathbf w$ in \eqref{equ:universal-proj}.
\end{proof}
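\begin{remark}
As a hedged illustration of the pole order, suppose $W_{\mathbf{k}}$ contains a monomial $\prod_j x_j^{k_{i_j}}$ in fiber coordinates $x_j$ of gradings $i_j$, so that $\sum_j k_{i_j}\cdot i_j = r$. If, as the choice of weights $\mathbf w$ in \eqref{equ:universal-proj} suggests, a coordinate of grading $i$ acquires a pole of order $a\cdot i$ along $\infty_{\mathfrak{P}_{\mathbf{k}}}$, then this monomial has a pole of order
\[
\sum_j k_{i_j}\cdot(a\cdot i_j) = a\cdot r = \tilde{r}.
\]
\end{remark}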
Consider the $\mathbb{P}^1$-bundle over $\mathbf{BC}^*_\omega$
\[
\mathbb{P}_\omega = \mathbb{P}(\mathcal L_\omega\oplus\cO).
\]
We further equip $\mathbb{P}_\omega$ with the log structure given by its reduced
infinity divisor $\infty_{\mathbb{P}_\omega} := \mathbb{P}_\omega \setminus \vb(\mathcal L_\omega)$.
The superpotential $W$ extends to a rational map of log stacks
$\oW\colon \mathfrak{P} \dashrightarrow \mathbb{P}_\omega$ over $\mathbf{BC}^*_\omega$ with the
indeterminacy locus $\overline{(W^{-1}(\mathbf{0}_{\mathcal L_\omega}))}\cap \infty_{\mathfrak{P}}$ by Lemma \ref{lem:pole-of-potential}.
Equivalently, $\oW$ can be viewed as a rational section of $\mathcal L_\omega|_{\mathfrak{P}^{\circ}}$ extending $W$, and having poles along $\infty_{\mathfrak{P}}$ of order $\tilde{r}$.
\subsubsection{The twisted superpotential}
Next, we discuss how to extend the superpotential $W$ across the
boundary.
This will be shown to be the key to extending cosections to the
boundary of the log moduli stacks.
It should be noticed that the non-empty indeterminacy locus of $\oW$
is a new phenomenon compared to the $r$-spin case \cite{CJRS18P}, and
requires a somewhat different treatment as shown below.
Consider the log \'etale morphism of log stacks
\begin{equation}\label{equ:partial-exp}
\cA^e \to \cA\times\cA
\end{equation}
given by the blow-up of the origin of $\cA\times\cA$.
Denote by $\mathfrak{P}^e$ and $\mathbb{P}^e_\omega$ the pull-backs of
\eqref{equ:partial-exp} along the following two morphisms, respectively:
\[
\mathfrak{P}\times\cA_{\max} \stackrel{(\cM_{\mathfrak{P}}, id)}{\longrightarrow} \cA\times\cA_{\max} \ \ \ \mbox{and} \ \ \ \mathbb{P}_\omega\times\cA_{\max} \stackrel{(\cM_{\mathbb{P}_\omega}, \nu_{\tilde{r}})}{\longrightarrow} \cA\times \cA_{\max}.
\]
Here $\cA_{\max} = \cA$,
and $\nu_{\tilde{r}}$ is the degree $\tilde{r}$ morphism induced by $\NN \to \NN, \ 1 \mapsto \tilde{r}$ on the level of characteristics. Recall from Lemma \ref{lem:pole-of-potential} that $\tilde{r}$ is a positive integer given $W \neq 0$.
Denote by $\infty_{\mathfrak{P}^e} \subset \mathfrak{P}^e$ and $\infty_{\mathbb{P}^e} \subset \mathbb{P}^e_\omega$ the proper transforms of $\infty_{\mathfrak{P}}\times\cA_{\max}$ and $\infty_{\mathbb{P}_{\omega}}\times\cA_{\max}$ respectively. Consider
\begin{equation}\label{equ:remove-infinity}
\mathfrak{P}^{e,\circ} := \mathfrak{P}^{e}\setminus \infty_{\mathfrak{P}^e} \ \ \ \mbox{and} \ \ \ \mathbb{P}^{e,\circ}_\omega := \mathbb{P}^e_\omega \setminus \infty_{\mathbb{P}^e_\omega}.
\end{equation}
We obtain a commutative diagram with rational horizontal maps
\[
\xymatrix{
\mathfrak{P}^{e,\circ} \ar[d] \ar@{-->}[rr]^{\oW^{e,\circ}} && \mathbb{P}_\omega^{e,\circ} \ar[d] \\
\mathfrak{P}\times\cA_{\max} \ar@{-->}[rr]^{\oW\times id} && \mathbb{P}_\omega\times\cA_{\max}
}
\]
\begin{lemma}
There is a canonical surjective log morphism
\[
\fc\colon \mathbb{P}^{e,\circ}_\omega \to \vb\big(\mathcal L_\omega\boxtimes\cO_{\cA_{\max}}(\tilde{r}\Delta_{\max})\big)
\]
by contracting the proper transform of $\mathbb{P}_\omega\times \Delta_{\max}$ where $\Delta_{\max} \subset \cA_{\max}$ is the closed point, and the target of $\fc$ is equipped with the pull-back log structure from $\cA_{\max}$.
\end{lemma}
\begin{proof}
This follows from a local coordinate calculation.
\end{proof}
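\begin{remark}
The following is a minimal coordinate sketch of $\fc$, suppressing the stack structure; the chart coordinate $w$ below is ours and purely illustrative. Let $x$ be the fiber coordinate of $\vb(\mathcal L_\omega) \subset \mathbb{P}_\omega$, let $y = 1/x$ be the coordinate near $\infty_{\mathbb{P}_\omega}$, and let $t$ be the coordinate of $\cA_{\max}$, so that $\mathbb{P}^e_\omega$ is locally the blow-up of the $(y, t^{\tilde{r}})$-plane at the origin. On the chart $y = t^{\tilde{r}} w$, the proper transform of $\infty_{\mathbb{P}_\omega}\times\cA_{\max}$ is $\{w = 0\}$, which is removed in $\mathbb{P}^{e,\circ}_\omega$. In the trivialization of $\cO_{\cA_{\max}}(\tilde{r}\Delta_{\max})$ by $t^{-\tilde{r}}$, the contraction $\fc$ should then be given by
\[
x \longmapsto x\, t^{\tilde{r}},
\]
which equals $w^{-1}$ on the chart above, and which sends the proper transform of $\mathbb{P}_\omega\times\Delta_{\max}$ to the zero section.
\end{remark}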
\begin{proposition}\label{prop:twisted-potential}
The composition $\tW := \fc\circ\oW^{e,\circ}$ is a surjective morphism that contracts the proper transform of $\mathfrak{P}\times\Delta_{\max}$.
\end{proposition}
\begin{proof}
A local calculation shows that the proper transform of
$\mathfrak{P}\times\Delta_{\max}$ dominates the proper transform of
$\mathbb{P}_\omega\times\Delta_{\max}$ in $\mathbb{P}^{e,\circ}_\omega$, hence is contracted by $\fc$.
The surjectivity of $\oW^{e,\circ}$ follows from the pole order computed in Lemma \ref{lem:pole-of-potential} and the above construction. Hence the surjectivity in the statement follows from the surjectivity of $\fc$ in the above lemma.
It remains to show that $\tW$ is well-defined everywhere. Let
$E^{\circ} \subset \mathfrak{P}^{e,\circ}$ be the exceptional divisor of
$\mathfrak{P}^{e,\circ} \to \mathfrak{P}\times \cA$.
Then $E^{\circ} \cong N_{\infty_{\mathfrak{P}}/\mathfrak{P}}$ is the total space
of the normal bundle of $\infty_{\mathfrak{P}} \subset \mathfrak{P}$.
The indeterminacy locus of $\oW^{e,\circ}$ is the fiber of
$E^{\circ} \to \infty_{\mathfrak{P}}$ over
$\overline{(W^{-1}(\mathbf{0}_{\mathcal L_\omega}))}\cap \infty_{\mathfrak{P}}$.
One checks that $\tW$ contracts the indeterminacy locus of
$\oW^{e,\circ}$ to the zero section of its target.
\end{proof}
\begin{definition}\label{def:twisted-potential-non-deg}
We call $\tW$ the \emph{twisted superpotential}.\footnote{This is
different from the ``twisted superpotential'' used in the
physics literature \cite[(2.27)]{Wi93}.}
It is said to have \emph{proper critical locus} if the vanishing
locus of the log differential $\diff \tW$, defined as a closed
strict substack of $\mathfrak{P}^{e,\circ}$, is proper over
$\mathbf{BC}^*_\omega \times \cA_{\max}$.
\end{definition}
\begin{proposition}
$\tW$ has proper critical locus if and only if $W$ has proper critical locus.
\end{proposition}
\begin{proof}
Since $W$ is the fiber of $\tW$ over the open dense point of $\cA_{\max}$, one direction is clear. We next assume that $W$ has proper critical locus.
Consider the substack $\mathfrak{P}^{e,*} \subset \mathfrak{P}^{e,\circ}$ obtained by
removing the zero section $\mathbf{0}_{\mathfrak{P}^{e}}$ and the proper transform of
$\mathfrak{P}\times_{\cA_{\max}}\Delta_{\max}$.
Since the proper transform of $\mathfrak{P}\times_{\cA_{\max}} \Delta_{\max}$ is proper over $\mathbf{BC}^*_\omega\times\cA_{\max}$, it suffices to show that the morphism
\[
\tW|_{\mathfrak{P}^{e,*}}\colon \mathfrak{P}^{e,*} \to \vb\big(\mathcal L_\omega\boxtimes\cO_{\cA_{\max}}(\tilde{r}\Delta_{\max})\big)
\]
has no critical points fiberwise over $\mathbf{BC}^*_\omega\times\cA_{\max}$, as
otherwise the critical locus would be non-proper due to the
$\mathbb C^*$-scaling of $\mathfrak{P}^{e,*}$.
On the other hand, $\mathfrak{P}^{e,*}$ can be expressed differently as follows
\[
\mathfrak{P}^{e,*} = \vb\big(\bigoplus_{i > 0}(\mathbf{E}^{\vee}_{i,\mathfrak{X}}\otimes\Sp^{\otimes i}\boxtimes \cO_{\cA_{\max}}(a i \Delta_{\max}))\big)\setminus \mathbf{0}
\]
where $\mathbf{0}$ is the zero section of the corresponding vector bundle.
Note that $W$ induces a morphism over $\mathbf{BC}^*_\omega\times\cA_{\max}$:
\[
\vb\big(\bigoplus_{i > 0}(\mathbf{E}^{\vee}_{i,\mathfrak{X}}\otimes\Sp^{\otimes i}\boxtimes \cO(a i \Delta_{\max}))\big) \to \vb\big(\mathcal L_\omega\boxtimes\cO(\tilde{r} \Delta_{\max}) \big)
\]
whose restriction to $\mathfrak{P}^{e,*}$ is precisely $\tW|_{\mathfrak{P}^{e,*}}$.
Since $\crit(W) $ is contained in the zero section, $\tW|_{\mathfrak{P}^{e,*}}$ has no
critical points on $\mathfrak{P}^{e,*}$.
\end{proof}
\subsection{The canonical cosection}\label{ss:can-cosection}
Next we construct the canonical cosection for the moduli of log
R-maps.
For this purpose, we adopt the assumptions in Section
\ref{sss:compact-type-loci} by assuming all contact orders are zero
and working with the compact type locus for the rest of this section.
Furthermore, in order for the canonical cosection to behave well along
the boundary of the moduli, it is important to work with log R-maps
with uniform maximal degeneracy, see Section \ref{sss:UMD}.
As already exhibited in the $r$-spin case \cite{CJRS18P}, this will be
shown to be the key to constructing the reduced theory in the general
case in later sections.
\subsubsection{Modifying the target}\label{ss:modify-target}
We recall the short-hand notation $\fU^{\mathrm{cpt}}$ and $\UH^\mathrm{cpt}$ as in
\eqref{equ:compact-type-stack} and
\eqref{equ:compact-type-universal-stack}. Consider the universal log $R$-map and the projection over $\UH^\mathrm{cpt}$ respectively:
\[
f_{\UH^\mathrm{cpt}}\colon \cC_{\UH^\mathrm{cpt}} \to \mathfrak{P} \ \ \ \mbox{and} \ \ \ \pi\colon \cC_{\UH^\mathrm{cpt}} \to \UH^\mathrm{cpt}.
\]
Denote again by
$f_{\UH^\mathrm{cpt}}\colon \cC_{\UH^\mathrm{cpt}} \to \cP_{\UH^\mathrm{cpt}} :=
\mathfrak{P}\times_{\mathbf{BC}^*_\omega}\cC_{\UH^\mathrm{cpt}}$ the corresponding section.
To obtain a cosection, we modify the target $\cP_{\UH^\mathrm{cpt}}$ as
follows.
Consider $\mathfrak{X}_{\UH^\mathrm{cpt}} := \cC_{\UH^\mathrm{cpt}}\times_{\omega^{\log},\mathbf{BC}^*_\omega,\zeta}\mathfrak{X}$.
Recall $\Sigma$ is the sum of all markings. We define $\cP_{\UH^\mathrm{cpt},-}$ to be the log stack with the underlying stack
\begin{equation}\label{equ:modified-target}
\underline{\cP_{\UH^\mathrm{cpt},-}} := \underline{\mathbb{P}}^{\mathbf w}\left(\bigoplus_{i > 0}(\mathbf{E}^{\vee}_i|_{\mathfrak{X}_{\UH^\mathrm{cpt}}}\otimes\mathcal L_{\mathfrak{X}}^{\otimes i}|_{\mathfrak{X}_{\UH^\mathrm{cpt}}}(-\Sigma))\oplus \cO_{\mathfrak{X}_{\UH^\mathrm{cpt}}} \right).
\end{equation}
The log structure on $\cP_{\UH^\mathrm{cpt},-}$ is defined to be the direct
sum of the log structures from the curve $\cC_{\UH^\mathrm{cpt}}$ and the
Cartier divisor $\infty_{\cP_{\UH^\mathrm{cpt},-}}$
similarly to $\cP_{\UH^\mathrm{cpt}}$ in Section \ref{ss:canonical-obs}.
Denote by $\cP^{\circ}_{\UH^\mathrm{cpt},-} = \cP_{\UH^\mathrm{cpt},-} \setminus \infty_{\cP_{\UH^\mathrm{cpt},-}}$. We have a morphism of vector bundles over $\mathfrak{X}_{\UH^\mathrm{cpt}}$
\[
\cP^{\circ}_{\UH^\mathrm{cpt},-} \to \cP^{\circ}_{\UH^\mathrm{cpt}}
\]
which contracts the fiber over $\Sigma$ and is
an isomorphism everywhere else.
This extends to a birational map
\[
\cP_{\UH^\mathrm{cpt},-} \dashrightarrow \cP_{\UH^\mathrm{cpt}}
\]
whose indeterminacy locus is precisely
$\infty_{\cP_{\UH^\mathrm{cpt},-}}|_{\Sigma}$. Denote by
\[
\cP_{\UH^\mathrm{cpt},\mathrm{reg}} = \cP_{\UH^\mathrm{cpt},-} \setminus \infty_{\cP_{\UH^\mathrm{cpt},-}}|_{\Sigma}.
\]
\begin{lemma}
There is a canonical factorization
\begin{equation}\label{equ:hmap-modify-target}
\xymatrix{
\cC_{\UH^\mathrm{cpt}} \ar[rr]^{f_{\UH^\mathrm{cpt}}} \ar[rd]_{f_{\UH^\mathrm{cpt},-}} && \cP_{\UH^\mathrm{cpt}} \\
&\cP_{\UH^\mathrm{cpt},\mathrm{reg}} \ar[ru]&
}
\end{equation}
\end{lemma}
\begin{proof}
Note that $f_{\UH^\mathrm{cpt},-}$ and $f_{\UH^\mathrm{cpt}}$ coincide when restricted
away from $\Sigma$.
The lemma follows from the constraint
$f_{\UH^\mathrm{cpt}}(\Sigma) \subset \mathbf{0}_{\cP_{\UH^\mathrm{cpt}}}$ of the compact type locus.
\end{proof}
The following lemma will be used to show the compatibility of perfect
obstruction theories constructed in \eqref{equ:obs-compact-evaluation}
and in \cite{CJW19P}.
\begin{lemma}
There is a canonical exact sequence
\begin{equation}\label{equ:modified-tangent}
0 \to f^*_{\UH^\mathrm{cpt}}T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma) \to f^*_{\UH^\mathrm{cpt},-}T_{\cP_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}} \to T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\Sigma} \to 0.
\end{equation}
\end{lemma}
\begin{proof}
Consider the following commutative diagram of solid arrows
\[
\xymatrix{
0 \ar[r] & T_{\mathfrak{P}/\mathfrak{X}}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \ar[r] \ar[d]^{\cong} & T_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \ar[r] \ar@{-->}[d] & T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \ar[r] \ar@{_{(}->}[d] & 0 \\
0 \ar[r] & T_{\cP_{\UH^\mathrm{cpt},\mathrm{reg}}/\mathfrak{X}_{\UH^\mathrm{cpt}}}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] \ar@{_{(}->}[d] & T_{\cP_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] \ar@{_{(}->}[d] & T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] \ar[d]^{\cong} & 0 \\
0 \ar[r] & T_{\mathfrak{P}/\mathfrak{X}}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] & T_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] & T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}} \ar[r] & 0
}
\]
where the horizontal lines are exact, the top exact sequence is the twist of the bottom one, and the lower middle vertical arrow is induced by \eqref{equ:hmap-modify-target}. Note that the sheaves in the first two columns are naturally viewed as sub-sheaves of $T_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}$. The injection $T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \hookrightarrow T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}$ on the upper right corner can be viewed as an inclusion of quotients by the same sub-bundle
\[
T_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \big/T_{\mathfrak{P}/\mathfrak{X}}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \subset T_{\cP_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}}|_{\cC_{\UH^\mathrm{cpt}}}\big/T_{\mathfrak{P}/\mathfrak{X}}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma),
\]
which lifts to $T_{\mathfrak{P}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \subset T_{\cP_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}}|_{\cC_{\UH^\mathrm{cpt}}}$ by Lemma~\ref{lem:comm-alg} below. This defines the dashed arrow.
Finally, \eqref{equ:modified-tangent} follows from combining the following exact sequence with the top two rows of the above commutative diagram:
\[
0 \to T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}}(-\Sigma) \to T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\cC_{\UH^\mathrm{cpt}}} \to T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\Sigma} \to 0.
\]
\end{proof}
\begin{lemma}
\label{lem:comm-alg}
Suppose $R$ is a commutative ring, and $A, B, C$ are submodules of
an $R$-module $M$ satisfying $A \subset B$, $A \subset C$ and
$B/A \subset C/A$ as submodules of $M/A$.
Then $B \subset C$ as submodules of $M$.
\end{lemma}
\begin{proof}
For $b \in B$, the image of $b$ in $M/A$ lies in $B/A \subset C/A$, so $b = c + a$ for some $c \in C$ and $a \in A \subset C$; hence $b \in C$.
\end{proof}
\subsubsection{The boundary of the moduli stacks}
Recall from \cite[Section 3.5]{CJRS18P} that the maximal degeneracy induces canonical morphisms to $\cA_{\max}$
\[
\UH^\mathrm{cpt} \to \fU^{\mathrm{cpt}} \to \fU \to \cA_{\max}.
\]
Consider the Cartier divisors
\[
\Delta_{\fU} = \Delta_{\max}\times_{\cA_{\max}}\fU \subset \fU \ \ \ \mbox{and} \ \ \ \Delta_{\fU^{\mathrm{cpt}}} = \Delta_{\max}\times_{\cA_{\max}}\fU^{\mathrm{cpt}} \subset \fU^{\mathrm{cpt}}
\]
and their pre-image $\Delta_{\UH^\mathrm{cpt}} \subset \UH^\mathrm{cpt}$. Hence we have the line bundle
\[
\mathbf{L}_{\max} = \cO_{\fU^{\mathrm{cpt}}}(-\Delta_{\fU^{\mathrm{cpt}}}) = \cO_{\cA_{\max}}(-\Delta_{\max})|_{\fU^{\mathrm{cpt}}}.
\]
\begin{definition}
We call $\Delta_{\fU}$ (resp.\ $\Delta_{\UH^\mathrm{cpt}}$ and
$\Delta_{\fU^{\mathrm{cpt}}}$) the \emph{boundary of maximal degeneracy} of
$\fU$ (resp.\ $\UH^\mathrm{cpt}$ and $\fU^{\mathrm{cpt}}$).
We further introduce the \emph{interiors}
\begin{equation}\label{equ:interior-stack}
\IR^\mathrm{cpt} := \UH^\mathrm{cpt}\setminus \Delta_{\UH^\mathrm{cpt}} \ \ \ \mbox{and} \ \ \ \IU^{\mathrm{cpt}} := \fU^{\mathrm{cpt}}\setminus \Delta_{\fU^{\mathrm{cpt}}}.
\end{equation}
\end{definition}
By construction, $\IR^\mathrm{cpt}$ (resp.\ $\IU^{\mathrm{cpt}}$) parameterizes stable
log $R$-maps (resp.\ log maps) whose image avoids $\infty_{\mathfrak{P}}$
(resp.\ $\infty_{\cA}$).
In this case, $\IU^{\mathrm{cpt}}$ is the stack of pre-stable curves since all
maps to $\cA$ factor through its unique open dense point.
In particular, $\IU^{\mathrm{cpt}}$ is smooth and log smooth.
\subsubsection{The twisted superpotential over the modified target}\label{sss:superpotential-modified-target}
Consider the two morphisms
\[
\cP_{\UH^\mathrm{cpt},\mathrm{reg}} \to \cA\times\cA_{\max} \ \ \ \mbox{and} \ \ \ \cP_{\UH^\mathrm{cpt}} \to \cA\times\cA_{\max}
\]
where the morphisms to the first copy of $\cA$ are induced by their infinity divisors. Pulling back \eqref{equ:partial-exp} along the above two morphisms, we obtain
\[
\cP^e_{\UH^\mathrm{cpt},\mathrm{reg}} \to \cP_{\UH^\mathrm{cpt},\mathrm{reg}} \ \ \ \mbox{and} \ \ \ \cP^e_{\UH^\mathrm{cpt}} \to \cP_{\UH^\mathrm{cpt}}.
\]
Further removing the proper transforms of
their infinity divisors from both, we obtain
$\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}}$ and $\cP^{e,\circ}_{\UH^\mathrm{cpt}}$.
Note that
$\cP^{e,\circ}_{\UH^\mathrm{cpt}} \cong
\mathfrak{P}^{e,\circ}\times_{\mathbf{BC}^*_\omega}\cC_{\UH^\mathrm{cpt}}$.
Consider the short-hand
\begin{equation}\label{equ:twisted-omega}
\tomega := \omega_{\cC_{\UH^\mathrm{cpt}}/\UH^\mathrm{cpt}}\otimes \pi^*\mathbf{L}_{\max}^{-\tilde{r}} \ \ \ \mbox{and} \ \ \ \tomega_{\log} := \omega^{\log}_{\cC_{\UH^\mathrm{cpt}}/\UH^\mathrm{cpt}}\otimes \pi^*\mathbf{L}_{\max}^{-\tilde{r}}|_{\UH^\mathrm{cpt}}
\end{equation}
with the natural inclusion $\tomega \to \tomega_{\log}$.
\begin{lemma}\label{lem:potential-restrict}
There is a commutative diagram
\[
\xymatrix{
\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \ar[rr]^{\tcW_{-}} \ar[d] && \tomega \ar[d] \\
\cP^{e,\circ}_{\UH^\mathrm{cpt}} \ar[rr]^{\tcW} && \tomega_{\log}
}
\]
where the two vertical arrows are the natural inclusions, $\tcW$ is the pull-back of $\tW$, and the two horizontal arrows are isomorphic away from the fibers over $\Sigma$.
\end{lemma}
\begin{proof}
It suffices to construct the following commutative diagram
\[
\xymatrix{
\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \ar[rr]^{\tcW'_{-}} \ar[d] && \tomega_{\mathfrak{X}_{\UH^\mathrm{cpt}}} \ar[d] \\
\cP^{e,\circ}_{\UH^\mathrm{cpt}} \ar[rr]^{\tcW'} && \tomega_{\log, \mathfrak{X}_{\UH^\mathrm{cpt}}}
}
\]
where the right vertical arrow is the pull-back of
$\tomega \to \tomega_{\log}$ along
$\mathfrak{X}_{\UH^\mathrm{cpt}} \to \cC_{\UH^\mathrm{cpt}}$.
By Proposition~\ref{prop:twisted-potential}, the composition
\[
\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \to \cP^{e,\circ}_{\UH^\mathrm{cpt}} \to \tomega_{\log, \mathfrak{X}_{\UH^\mathrm{cpt}}}
\]
contracts the fiber of $\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \to \mathfrak{X}_{\UH^\mathrm{cpt}}$
over $\Sigma$ to the zero section of $\tomega_{\log, \mathfrak{X}_{\UH^\mathrm{cpt}}}$, hence
factors through
$\tomega_{\mathfrak{X}_{\UH^\mathrm{cpt}}} \cong
\tomega_{\log,\mathfrak{X}_{\UH^\mathrm{cpt}}}(-\Sigma)$.
\end{proof}
\subsubsection{The relative cosection}
By \cite[Lemma 3.18]{CJRS18P}, \eqref{equ:hmap-modify-target} canonically lifts to a commutative triangle
\begin{equation}\label{equ:hmap-e-target}
\xymatrix{
\cC_{\UH^\mathrm{cpt}} \ar[rr]^{f_{\UH^\mathrm{cpt}}} \ar[rd]_{f_{\UH^\mathrm{cpt},-}} && \cP^{e,\circ}_{\UH^\mathrm{cpt}} \\
&\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \ar[ru]&
}
\end{equation}
where the corresponding arrows are denoted again by
$f_{\UH^\mathrm{cpt}}$ and $f_{\UH^\mathrm{cpt},-}$.
Now we have
\[
f^*_{\UH^\mathrm{cpt},-}\diff\tcW_- \colon f^*_{\UH^\mathrm{cpt},-}T_{\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}} \longrightarrow (\tcW_-\circ f_{\UH^\mathrm{cpt},-})^*T_{\tomega/\cC_{\UH^\mathrm{cpt}}} \cong \tomega.
\]
By \eqref{equ:modified-tangent}, we have a composition
\begin{equation}\label{eq:composing-cosection}
f^*_{\UH^\mathrm{cpt}}T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma) \longrightarrow f^*_{\UH^\mathrm{cpt},-}T_{\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}}/\cC_{\UH^\mathrm{cpt}}} \to \tomega,
\end{equation}
again denoted by $f^*_{\UH^\mathrm{cpt},-}\diff\tcW_-$. Pushing forward along $\pi$ and using \eqref{equ:obs-compact-evaluation}, we have
\[
\sigma^{\bullet}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} := \pi_*\big(f^*_{\UH^\mathrm{cpt},-}\diff\tcW_-\big) \colon \EE_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \longrightarrow \pi_*\tomega \cong \pi_*\omega_{\cC_{\UH^\mathrm{cpt}}/\UH^\mathrm{cpt}}\otimes\mathbf{L}_{\max}^{-\tilde{r}}|_{\UH^\mathrm{cpt}},
\]
where the isomorphism follows from the projection formula and \eqref{equ:twisted-omega}.
Finally, taking the first cohomology we obtain the \emph{canonical cosection}:
\begin{equation}\label{equ:canonical-cosection}
\sigma_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \colon \obs_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} := H^1(\EE_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}) \longrightarrow \mathbf{L}_{\max}^{-\tilde{r}}|_{\UH^\mathrm{cpt}}.
\end{equation}
\subsubsection{The degeneracy locus of $\sigma_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$}
Denote by $\IR_W$ the stack of $R$-maps in $\UH^\mathrm{cpt}$
which factor through $\crit(W)$.
Since $\crit(W)$ is a closed sub-stack of $\mathfrak{P}$, $\IR_W$ is a strict closed substack of $\IR^\mathrm{cpt}$. The stack $\UH^{\mathrm{cpt}}$ plays a key role in the following crucial result.
\begin{proposition}\label{prop:cosection-degeneracy-loci}
Suppose $W$ has proper critical locus.
Then the degeneracy locus of $\sigma_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$ is
supported on $\IR_W \subset \UH^\mathrm{cpt}$.
\end{proposition}
\begin{proof}
It suffices to check the statement at each geometric point.
Let $f\colon \cC \to \mathfrak{P}$ be a stable log $R$-map given by a
geometric point $S \to \UH^\mathrm{cpt}$.
Following the same line of proof as in \cite{CJW19P}, consider the
cosection:
\[
\sigma_S:= \sigma_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}|_S \colon H^1(f^*T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma)) \to H^1(\tomega|_{\cC}).
\]
Applying Serre duality and dualizing, we have
\[
\sigma^{\vee}_S \colon H^0(\omega_{\cC/S}\otimes\tomega^{\vee}|_{\cC}) \to H^0(\omega_{\cC/S}\otimes f^*\Omega_{\mathfrak{P}/\mathbf{BC}^*_\omega}(\Sigma)).
\]
Note that $\omega_{\cC/S}\otimes\tomega^{\vee}|_{\cC} = \mathbf{L}_{\max}^{\tilde{r}}|_{\cC} \cong \cO_{\cC}$, since $\mathbf{L}_{\max}$ is pulled back from the geometric point $S$. Thus $\sigma_S$ degenerates iff
\[
id\otimes\big(f^*_{-}\diff\tcW_-\big)^{\vee} \colon \omega_{\cC/S}\otimes\tomega^{\vee}|_{\cC} \to \omega_{\cC/S}\otimes f^*\Omega_{\mathfrak{P}/\mathbf{BC}^*_\omega}(\Sigma)
\]
degenerates, which translates to the vanishing of
\begin{equation}\label{equ:cosection-degeneracy}
\big(f^*_{-}\diff\tcW_-\big) \colon f^*T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma) \to \cO_{\cC}.
\end{equation}
Note that away from markings, $\tcW_-$ is the same as $\tcW$ which is
the pull-back of $\tW$.
If $S \not\in \Delta_{\UH^{\mathrm{cpt}}}$, then
\eqref{equ:cosection-degeneracy} degenerates iff $f$ factors through
$\crit(W)$.
Consider a geometric point $S \in \Delta_{\UH^{\mathrm{cpt}}}$.
By \cite[Lemma~3.18~(2)]{CJRS18P}, $\cC$ has at least one component
$\mathcal{Z}$ whose image via $f_{-}$ is contained in the exceptional locus of
$\cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}} \to \cP_{\UH^\mathrm{cpt},\mathrm{reg}}$.
Because $\tcW$ has proper critical locus,
\eqref{equ:cosection-degeneracy} is non-zero along $\mathcal{Z}$.
This completes the proof.
\end{proof}
\subsection{The reduced theory}\label{ss:reduced}
Next we fix a superpotential $W$, and hence $\tW$, with proper critical locus,
and apply the general machinery in Section~\ref{sec:POT-reduction} to construct
the reduced theory.
\subsubsection{The twisted Hodge bundle}
Consider
\[
\tomega_{\fU^{\mathrm{cpt}}} := \omega_{\cC_{\fU^{\mathrm{cpt}}}/\fU^{\mathrm{cpt}}} \otimes \pi^*_{\fU^{\mathrm{cpt}}}\mathbf{L}^{-\tilde{r}}_{\max}
\]
and its direct image cone
$ \fH :=\mathbf{C}(\pi_{\fU^{\mathrm{cpt}},*}\tomega_{\fU^{\mathrm{cpt}}}) $
as in \cite[Definition 2.1]{ChLi12}.
It is an algebraic stack over $\fU^{\mathrm{cpt}}$ parameterizing sections of
$\tomega_{\fU^{\mathrm{cpt}}}$ \cite[Proposition 2.2]{ChLi12}.
Indeed, $\fH$ is the total space of the vector bundle
\[
R^0\pi_{\fU^{\mathrm{cpt}},*} \tomega_{\fU^{\mathrm{cpt}}} \cong R^0\pi_{\fU^{\mathrm{cpt}},
*}\omega_{\fU^{\mathrm{cpt}}}\otimes \mathbf{L}^{- \tilde{r}}_{\max}|_{\fU^{\mathrm{cpt}}}
\]
over $\fU^{\mathrm{cpt}}$ by \cite[Section 5.3.5]{CJRS18P}. We further equip $\fH$ with the log structure pulled back from $\fU^{\mathrm{cpt}}$.
By \cite[Proposition 2.5]{ChLi12}, $\fH \to \fU^{\mathrm{cpt}}$ has a perfect obstruction theory
\begin{equation}\label{equ:Hodge-perfect-obs}
\varphi_{\fH/\fU^{\mathrm{cpt}}} \colon \TT_{\fH/\fU^{\mathrm{cpt}}} \to \EE_{\fH/\fU^{\mathrm{cpt}}} := \pi_{\fH,*}\tomega_{\fH}.
\end{equation}
By the projection formula, we have
\begin{equation}\label{equ:fake-obs}
H^1(\EE_{\fH/\fU^{\mathrm{cpt}}}) = R^1\pi_{\fH,*} \tomega_{\fH} = R^1\pi_{\fH, *}\omega_{\fH}\otimes \mathbf{L}^{-\tilde{r}}_{\max}|_{\fH} \cong \mathbf{L}^{-\tilde{r}}_{\max}|_{\fH}.
\end{equation}
Let $\bs_{\fH}\colon \cC_{\fH} \to \vb(\tomega_{\fH})$ be the universal section over $\fH$. The morphism $\UH^\mathrm{cpt} \to \fU^{\mathrm{cpt}}$ factors through the tautological morphism
\[
\UH^\mathrm{cpt} \to \fH
\]
such that $\bs_{\fH}|_{\UH^\mathrm{cpt}} = \tcW_{-} \circ f_{\UH^\mathrm{cpt},-}$.
\subsubsection{Verifying the assumptions in Section \ref{ss:reduction-set-up}}\label{sss:verify-reduction-assumption}
First, the sequence \eqref{equ:stacks-reduction} in consideration is
\[
\UH^\mathrm{cpt} \to \fH \to \fU^{\mathrm{cpt}}
\]
with the perfect obstruction theories $\varphi_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$
in \eqref{equ:obs-compact-evaluation} and $\varphi_{\fH/\fU^{\mathrm{cpt}}}$
in \eqref{equ:Hodge-perfect-obs}.
Choose the Cartier divisor $\Delta = \tilde{r} \Delta_{\fU^{\mathrm{cpt}}}$ with the pre-images
$\tilde{r}\Delta_{\UH^\mathrm{cpt}} \subset \UH^\mathrm{cpt}$ and $\tilde{r}\Delta_{\fH} \subset \fH$.
Thus we have the two term complex
$\FF = [\cO_{\fU^{\ev}_{0}} \stackrel{\epsilon}{\to}
\mathbf{L}^{-\tilde{r}}_{\max}]$ in degrees $[0,1]$.
The commutativity of \eqref{diag:compatible-POT} is verified in Lemma
\ref{lem:obs-commute} below, and the surjectivity of
\eqref{equ:general-cosection} along $\Delta_{\fU_0}$ follows from
Proposition \ref{prop:cosection-degeneracy-loci}.
\begin{lemma}\label{lem:obs-commute}
There is a canonical commutative diagram
\begin{equation}\label{diag:rel-obs-commute}
\xymatrix{
\TT_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \ar[rr] \ar[d]_{\varphi_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}} && \TT_{\fH/\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}} \ar[d]^{\varphi_{\fH/\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}}} \\
\EE_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \ar[rr]^{\sigma^{\bullet}_{\fU^{\mathrm{cpt}}}} && \EE_{\fH/\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}}
}
\end{equation}
where the two vertical arrows are the perfect obstruction theories.
\end{lemma}
\begin{proof}
As in Section \ref{ss:modify-target}, we may construct the log weighted projective bundle
\[
\cP_{\fU^{\mathrm{cpt}}} \to \mathfrak{X}_{\fU^{\mathrm{cpt}}} := \cC_{\fU^{\mathrm{cpt}}}\times_{\mathbf{BC}^*_\omega} \mathfrak{X}
\]
and its modification $\cP^{e,\circ}_{\fU^{\mathrm{cpt}},\mathrm{reg}}$ with the pull-backs
\[
\cP_{\fU^{\mathrm{cpt}}}\times_{\mathfrak{X}_{\fU^{\mathrm{cpt}}}}\mathfrak{X}_{\UH^\mathrm{cpt}} \cong \cP_{\UH^\mathrm{cpt}} \ \ \ \mbox{and} \ \ \ \cP^{e,\circ}_{\fU^{\mathrm{cpt}},\mathrm{reg}}\times_{\mathfrak{X}_{\fU^{\mathrm{cpt}}}}\mathfrak{X}_{\UH^\mathrm{cpt}} \cong \cP^{e,\circ}_{\UH^\mathrm{cpt},\mathrm{reg}}.
\]
We may also define the line bundle $\tilde{\omega}_{\fU^{\mathrm{cpt}}}$ over $\cC_{\fU^{\mathrm{cpt}}}$, similarly to \eqref{equ:twisted-omega}.
The same proof as in Lemma \ref{lem:potential-restrict} yields a morphism
\[
\tcW_{\fU^{\mathrm{cpt}},-} \colon \cP^{e,\circ}_{\fU^{\mathrm{cpt}},\mathrm{reg}} \to \tilde{\omega}_{\fU^{\mathrm{cpt}}}
\]
which pulls back to $\tcW_{-}$ over $\UH^\mathrm{cpt}$. We obtain a commutative diagram
\[
\xymatrix{
\cC_{\UH^\mathrm{cpt}} \ar[rr] \ar[d]_{f_{\UH^\mathrm{cpt},-}} && \cC_\fH \ar[d]^{\bs_\fH} \\
\cP^{e,\circ}_{\fU^{\mathrm{cpt}},\mathrm{reg}} \ar[rr]^{\tcW_{\fU^{\mathrm{cpt}},-}} && \tilde{\omega}_{\fU^{\mathrm{cpt}}}
}
\]
where, by abuse of notation, the two vertical arrows are labeled by the morphisms inducing them. This leads to a commutative diagram of log tangent complexes:
\[
\xymatrix{
\pi^* \TT_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \cong \TT_{\cC_{\UH^\mathrm{cpt}}/\cC_{\fU^{\mathrm{cpt}}}} \ar[rr] \ar[d] && \pi^* \TT_{\cC_{\fH}/\cC_{\fU^{\mathrm{cpt}}}}|_{\cC_{\UH^\mathrm{cpt}}} \cong \TT_{\fH/{\fU^{\mathrm{cpt}}}}|_{\cC_{\UH^\mathrm{cpt}}} \ar[d]\\
(f_{\UH^\mathrm{cpt},-})^* \TT_{\cP^{e,\circ}_{\fU^{\mathrm{cpt}},\mathrm{reg}}/\cC_{\fU^{\mathrm{cpt}}}} \ar[rr]^{(\diff \tcW_{\fU^{\mathrm{cpt}},-})|_{\cC_{\UH^\mathrm{cpt}}}} && (\bs_\fH)^* \TT_{\tilde{\omega}_{\fU^{\mathrm{cpt}}}/\cC_{\fU^{\mathrm{cpt}}}}|_{\cC_{\UH^\mathrm{cpt}}}
}
\]
Diagram \eqref{diag:rel-obs-commute} follows from first applying $\pi_*$ to the above diagram and then using adjunction.
\end{proof}
\subsubsection{The reduced perfect obstruction theory}\label{sss:reduced-theory}
Applying Theorem \ref{thm:reduction} to the situation above, we obtain the \emph{reduced perfect obstruction theory}
\begin{equation}\label{equ:red-OPT}
\varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \colon \TT_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}} \to \EE^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}
\end{equation}
and the \emph{reduced cosection}
\[
\sigma^{\mathrm{red}}_{\fU^{\mathrm{cpt}}} \colon H^1(\EE^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}) \to \cO_{\UH^\mathrm{cpt}}
\]
with the following properties
\begin{enumerate}
\item The morphism $\varphi_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$ factors through $\varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$ such that
\[
\varphi_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}\setminus\Delta_{\UH^\mathrm{cpt}}} = \varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}\setminus\Delta_{\UH^\mathrm{cpt}}}
\]
\item $\sigma^{\mathrm{red}}_{\fU^{\mathrm{cpt}}}$ is surjective along $\Delta_{\UH^\mathrm{cpt}}$, and satisfies
\[
\sigma^{\mathrm{red}}_{\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}\setminus\Delta_{\UH^\mathrm{cpt}}} = \sigma_{\fU^{\mathrm{cpt}}}|_{\UH^\mathrm{cpt}\setminus\Delta_{\UH^\mathrm{cpt}}}.
\]
\end{enumerate}
The virtual cycle $[\UH^\mathrm{cpt}]^{\mathrm{red}}$ associated to $\varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$ is called the {\em reduced virtual cycle} of $\UH^\mathrm{cpt}$. We emphasize that the reduced theory depends on the superpotential $W$.
\subsubsection{The cosection localized virtual cycle of $\IR^\mathrm{cpt}$}\label{sss:cosection-localized-class}
Recall from Proposition \ref{prop:cosection-degeneracy-loci} that the
degeneracy locus of $\sigma_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}$ is supported on
the proper substack $\IR_W \subset \IR^\mathrm{cpt}$.
We have canonical embeddings
\[
\iiota\colon \IR_W \hookrightarrow \IR^\mathrm{cpt} \ \ \ \mbox{and} \ \ \ \iota\colon \IR_W \hookrightarrow \UH^\mathrm{cpt}.
\]
Since $\IU^{\mathrm{cpt}}$ is smooth, we are in the situation of Section \ref{ss:general-absolut-theory}. Applying Theorem \ref{thm:generali-localized-cycle}, we obtain the \emph{cosection localized virtual cycle}
\begin{equation}\label{equ:localized-cycle}
[\IR^\mathrm{cpt}]_{\sigma} \in A_*(\IR_W)
\end{equation}
with the property that $\iiota_*[\IR^\mathrm{cpt}]_{\sigma} = [\IR^\mathrm{cpt}]^{\mathrm{vir}}$. Since the canonical theory, the reduced theory, and their cosections all agree over $\IR^\mathrm{cpt}$, the existence of the cycle $[\IR^\mathrm{cpt}]_{\sigma}$ does \emph{not} require the compactification $\UH^\mathrm{cpt}$ of $\IR^\mathrm{cpt}$.
\subsection{The first comparison theorem}\label{ss:comparison-1}
We now show that the reduced virtual cycle and the cosection localized virtual cycle agree.
\begin{theorem}\label{thm:reduced=local}
$\iota_*[\IR^\mathrm{cpt}]_{\sigma} = [\UH^\mathrm{cpt}]^{\mathrm{red}}$.
\end{theorem}
\begin{proof}
Since $\UH^\mathrm{cpt}$ is of finite type, replacing $\fU$ by an open set containing the image of $\UH^\mathrm{cpt}$, we may assume that $\fU$, and hence $\fU^{\mathrm{cpt}}$, is also of finite type. By \cite[Lemma 5.25]{CJRS18P}, there is a birational projective resolution $\fr\colon \widetilde{\fU} \to \fU$ which restricts to the identity on $\IU = \fU\setminus(\Delta_{\max}|_{\fU})$. Let
\begin{equation}\label{equ:resolution}
\widetilde{\fU}^{\mathrm{cpt}} =\fU^{\mathrm{cpt}}\times_{\fU}\widetilde{\fU} \to \fU^{\mathrm{cpt}} \ \ \ \mbox{and} \ \ \ \widetilde{\UH}^\mathrm{cpt} = \UH^\mathrm{cpt}\times_{\fU}\widetilde{\fU} \to \UH^\mathrm{cpt}.
\end{equation}
By abuse of notation, both morphisms are denoted by $\fr$ when there
is no danger of confusion.
Then the two morphisms restrict to the identity on $\IU^{\mathrm{cpt}}$ and
$\IR^{\mathrm{cpt}}$ respectively.
Furthermore, $\widetilde{\fU}^{\mathrm{cpt}} \to \fU^{\mathrm{cpt}}$ is a birational projective
resolution by Proposition \ref{prop:compact-evaluation}.
Let $(\varphi^{\mathrm{red}}_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}}, \sigma_{\widetilde{\fU}^{\mathrm{cpt}}})$ be the pull-back of $(\varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}, \sigma_{{\fU}^{\mathrm{cpt}}})$ along $\fr$. Then $\varphi^{\mathrm{red}}_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}}$ defines a perfect obstruction theory of $\widetilde{\UH}^\mathrm{cpt} \to \widetilde{\fU}^{\mathrm{cpt}}$ hence a virtual cycle $[\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{red}}$. By the virtual push-forward of \cite{Co06, Ma12}, we have
\begin{equation}\label{equ:vc-push-forward-along-resolution}
\fr_*[\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{red}} = [\UH^\mathrm{cpt}]^{\mathrm{red}}.
\end{equation}
On the other hand, since $(\varphi^{\mathrm{red}}_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}}, \sigma_{\widetilde{\fU}^{\mathrm{cpt}}})$ is the pull-back of $(\varphi^{\mathrm{red}}_{\UH^\mathrm{cpt}/\fU^{\mathrm{cpt}}}, \sigma_{{\fU}^{\mathrm{cpt}}})$, the same properties listed in Section \ref{sss:reduced-theory} also pull back to $(\varphi^{\mathrm{red}}_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}}, \sigma_{\widetilde{\fU}^{\mathrm{cpt}}})$. Since $\widetilde{\fU}^{\mathrm{cpt}}$ is smooth, Theorem \ref{thm:generali-localized-cycle} implies
\begin{equation}\label{equ:reduced=local-resolution}
\iota_*[\widetilde{\UH}^\mathrm{cpt}]_{\sigma_{\widetilde{\fU}^{\mathrm{cpt}}}} = [\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{red}}.
\end{equation}
Since $\fr$ does not modify the interior $\IR^\mathrm{cpt}$ and $\IU^{\mathrm{cpt}}$, we have
\begin{equation}\label{equ:local=local}
[\widetilde{\UH}^\mathrm{cpt}]_{\sigma_{\widetilde{\fU}^{\mathrm{cpt}}}} = [\IR^\mathrm{cpt}]_{\sigma_{{\fU}^{\mathrm{cpt}}}}.
\end{equation}
Finally, \eqref{equ:vc-push-forward-along-resolution},
\eqref{equ:reduced=local-resolution}, and \eqref{equ:local=local}
together imply the statement.
\end{proof}
\subsection{The second comparison theorem}\label{ss:comparison-2}
By Section \ref{sss:verify-reduction-assumption} and Theorem \ref{thm:boundary-cycle} (1), we obtain a factorization of perfect obstruction theories of $\Delta_{\UH^\mathrm{cpt}} \to \Delta_{\fU^{\mathrm{cpt}}}$
\[
\xymatrix{
\TT_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}} \ar[rr]^{\varphi_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}}} \ar[rd]_{\varphi^{\mathrm{red}}_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}}} && \EE_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}} \\
&\EE^{\mathrm{red}}_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}} \ar[ru]&
}
\]
where the top is the pull-back of \eqref{equ:red-OPT}.
Let $[\Delta_{\UH^\mathrm{cpt}}]^{\mathrm{red}}$ be the \emph{reduced boundary virtual
cycle} associated to
$\varphi^{\mathrm{red}}_{\Delta_{\UH^\mathrm{cpt}}/\Delta_{\fU^{\mathrm{cpt}}}}$. We then
have:
\begin{theorem}\label{thm:comparison-2}
$[\UH^\mathrm{cpt}]^{\mathrm{vir}} = [\UH^\mathrm{cpt}]^{\mathrm{red}} + \tilde{r} [\Delta_{\UH^\mathrm{cpt}}]^{\mathrm{red}}$.
\end{theorem}
\begin{proof}
The pull-back $\varphi_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}} := \varphi_{\UH^\mathrm{cpt}/{\fU}^{\mathrm{cpt}}}|_{\widetilde{\UH}^\mathrm{cpt}}$ defines a perfect obstruction theory of $\widetilde{\UH}^\mathrm{cpt} \to \widetilde{\fU}^{\mathrm{cpt}}$ with the corresponding virtual cycle $[\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{vir}}$. Applying the virtual push-forward \cite{Co06, Ma12}, we have
\begin{equation}\label{equ:con-vc-push-forward}
\fr_*[\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{vir}} = [\UH^\mathrm{cpt}]^{\mathrm{vir}}.
\end{equation}
Consider the resolution \eqref{equ:resolution}, and write
\[
\Delta_{\widetilde{\fU}^{\mathrm{cpt}}} = \Delta_{\fU^{\mathrm{cpt}}}\times_{\fU^{\mathrm{cpt}}}\widetilde{\fU}^{\mathrm{cpt}} \ \ \ \mbox{and} \ \ \ \Delta_{\widetilde{\UH}^\mathrm{cpt}} = \Delta_{\widetilde{\fU}^{\mathrm{cpt}}}\times_{\widetilde{\fU}^{\mathrm{cpt}}}\widetilde{\UH}^\mathrm{cpt}.
\]
Applying Theorem \ref{thm:boundary-cycle} to the data $(\widetilde{\UH}^\mathrm{cpt}, \tilde{r}\Delta_{\widetilde{\UH}^\mathrm{cpt}}, \varphi_{\widetilde{\UH}^\mathrm{cpt}/\widetilde{\fU}^{\mathrm{cpt}}}, \sigma_{\widetilde{\fU}^{\mathrm{cpt}}})$, we obtain the reduced boundary cycle $[\tilde{r}\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}} = \tilde{r}[\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}}$ and the following relation
\begin{equation}
[\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{vir}} = [\widetilde{\UH}^\mathrm{cpt}]^{\mathrm{red}} + \tilde{r} \cdot [\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}}.
\end{equation}
Applying $\fr_*$ and using \eqref{equ:vc-push-forward-along-resolution} and \eqref{equ:con-vc-push-forward}, we have
\[
[\UH^\mathrm{cpt}]^{\mathrm{vir}} = [\UH^\mathrm{cpt}]^{\mathrm{red}} + \tilde{r} \cdot \fr_*[\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}}.
\]
It remains to verify that $[\Delta_{\UH^\mathrm{cpt}}]^{\mathrm{red}} = \fr_*[\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}}$.
Recall the degeneracy locus $\IR_{W} \subset \IR^\mathrm{cpt}$ of
$\sigma_{\widetilde{\fU}^{\mathrm{cpt}}}$.
Write $V = \UH^\mathrm{cpt} \setminus \IR_{W}$ and
$\widetilde{V} = \widetilde{\UH}^\mathrm{cpt}\setminus \IR_{W}$.
In the same way as in \eqref{equ:t-red-POT} we construct the totally
reduced perfect obstruction theory $\EE^{\tred}_{V/\fU^{\mathrm{cpt}}}$ for
$V \to \fU^{\mathrm{cpt}}$ which pulls back to the totally reduced perfect
obstruction theory $\EE^{\tred}_{\widetilde{V}/\widetilde{\fU}^{\mathrm{cpt}}}$ for
$\widetilde{V} \to \widetilde{\fU}^{\mathrm{cpt}}$.
Let $[V]^{\tred}$ and $[\widetilde{V}]^{\tred}$ be the corresponding
virtual cycles.
Then the virtual push-forward implies
$\fr_*[\widetilde{V}]^{\tred} = [V]^{\tred}$.
We calculate
\[
\tilde{r}\cdot \fr_*[\Delta_{\widetilde{\UH}^\mathrm{cpt}}]^{\mathrm{red}} = \fr_* i^![\widetilde{V}]^{\tred} = i^![V]^{\tred} = \tilde{r} \cdot [\Delta_{\UH^\mathrm{cpt}}]^{\mathrm{red}}
\]
where the first and the last equalities follow from \eqref{equ:tred=bred}, and the middle one follows from the compatibility of proper push-forward with the refined Gysin map. This completes the proof.
\end{proof}
\subsection{Independence of twists II: the case of the reduced theory}\label{ss:red-theory-ind-twist}
In this section, we complete the proof of the change of twists theorem.
Consider the two targets $\mathfrak{P}_1$ and $\mathfrak{P}_2$ as in Section \ref{ss:change-twists}. Since $\mathfrak{P}_1 \to \mathfrak{P}_2$ restricts to an isomorphism along $\mathbf{0}_{\mathfrak{P}_1} \cong \mathbf{0}_{\mathfrak{P}_2}$ and $\vec{\varsigma}$ is a collection of compact type sectors, the morphism in Corollary \ref{cor:changing-twists} restricts to
\[
\nu_{a_1/a_2} \colon \UH^{\mathrm{cpt}}_1 := \UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_1,\beta) \to \UH^{\mathrm{cpt}}_2 :=\UH^{\mathrm{cpt}}_{g,\vec{\varsigma}}(\mathfrak{P}_2,\beta).
\]
We compare the virtual cycles:
\begin{theorem}\label{thm:red-ind-twists}
\begin{enumerate}
\item
$\nu_{{a_1/a_2},*}[\UH^{\mathrm{cpt}}_1]^{\mathrm{red}} =
[\UH^{\mathrm{cpt}}_2]^{\mathrm{red}}$.
\item
$\nu_{{a_1/a_2},*}[\UH^{\mathrm{cpt}}_1]^\mathrm{vir} =
[\UH^{\mathrm{cpt}}_2]^\mathrm{vir}$.
\item
$
\nu_{{a_1/a_2},*}[\Delta_{\UH^{\mathrm{cpt}}_1}]^{\mathrm{red}} = \frac{a_2}{a_1} \cdot [\Delta_{\UH^{\mathrm{cpt}}_2}]^{\mathrm{red}}.
$
\end{enumerate}
\end{theorem}
\begin{proof}
By Theorem \ref{thm:reduced=local}, both $[\UH^{\mathrm{cpt}}_1]^{\mathrm{red}}$ and $[\UH^{\mathrm{cpt}}_2]^{\mathrm{red}}$ are represented by the same cosection localized virtual cycle contained in the common open set $\IR^\mathrm{cpt}$ of both $\UH^{\mathrm{cpt}}_1$ and $\UH^{\mathrm{cpt}}_2$, hence are independent of the choice of $a_i$. This proves (1).
Part (2) is proved similarly to Proposition \ref{prop:can-ind-twists}. The only modification needed is to work over the log evaluation stack of Section \ref{sss:log-ev-stack}.
Finally, (3) follows from (1), (2) and Theorem \ref{thm:comparison-2}.
\end{proof}
\section{Examples}
\label{sec:examples}
\subsection{Gromov--Witten theory of complete intersections}\label{ss:examples-GW}
One of the most direct applications of log GLSM is to the
Gromov--Witten theory of complete intersections, and more generally,
of zero loci of non-degenerate sections of vector bundles.
Here, the most prominent examples are quintic threefolds in $\mathbb{P}^4$.
The input of this log GLSM is a proper smooth
Deligne--Mumford stack $\cX$ with projective coarse moduli space, a vector
bundle $\mathbf{E} = \mathbf{E}_1$ over $\cX$, and a section $s \in H^0(\mathbf{E})$ whose zero
locus $\mathcal{Z}$ is smooth of codimension $\rk \mathbf{E}$. In this case we may choose
$\mathbf{L} = \cO_{\cX}$, $r = 1$, and, for simplicity, $a = 1$.
Then the universal targets are
$\mathfrak{P} = \mathbb{P}(\mathbf{E}^\vee \otimes \mathcal L_\omega \oplus \cO)$ and
$\mathfrak{P}^\circ = \vb(\mathbf{E}^\vee \otimes \mathcal L_\omega)$.
We may also view them as the quotients of $\mathbb{P}(\mathbf{E}^\vee \oplus \cO)$
and $\vb(\mathbf{E}^\vee)$ under the $\mathbb C^*_\omega = \mathbb C^*$-scalar
multiplication on $\mathbf{E}^\vee$.
By Proposition \ref{prop:map-field-equiv}, the data of a stable R-map
$f\colon \cC\ \to \mathfrak{P}^{\circ}$ with compact type evaluation over $S$
is equivalent to a stable map $g\colon \cC \to \cX$ over $S$ together
with a section $\rho \in H^0(\omega_\cC \otimes g^*(\mathbf{E}^\vee))$.
Thus $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$ is the same as the
\emph{moduli space of stable maps to $\cX$ with $p$-fields} studied in
\cite{ChLi12, KiOh18P, ChLi18P, CJW19P}.
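For concreteness, consider the running example of the quintic: take $\cX = \mathbb{P}^4$, $\mathbf{E} = \cO_{\mathbb{P}^4}(5)$, and $s$ a general quintic polynomial, so that $\mathcal{Z}$ is a smooth quintic threefold. Then
\[
\mathfrak{P}^{\circ} = \vb\big(\cO_{\mathbb{P}^4}(-5) \otimes \mathcal L_\omega\big),
\]
and a stable $R$-map with compact type evaluation is the data of a stable map $g\colon \cC \to \mathbb{P}^4$ together with a $p$-field $\rho \in H^0\big(\omega_\cC \otimes g^*\cO_{\mathbb{P}^4}(-5)\big)$, recovering the moduli space of \cite{ChLi12}.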
In this situation, the superpotential
\begin{equation*}
W\colon \vb(\mathbf{E}^\vee \boxtimes \mathcal L_\omega) \to \vb(\mathcal L_\omega)
\end{equation*}
is defined as the pairing with $s$.
It has a proper critical locus whenever $\mathcal{Z}$ is smooth of expected
dimension \cite[Lemma 2.2.2]{CJW19P}, and then the degeneracy locus $\IR_W$ is supported on
$\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)$ embedded in the subset
$\scrM_{g, \vec{\varsigma}}(\cX, \beta) \subset \SR^{\mathrm{cpt}}_{g,
\vec{\varsigma}}(\mathfrak{P}^{\circ},\beta)$, which is defined by log $R$-maps
mapping into $\mathbf{0}_\mathfrak{P}$.
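In the quintic example, suppressing the twist by $\mathcal L_\omega$, the superpotential takes the familiar form
\[
W(x, p) = p\, s(x) = p \sum_{i=1}^{5} x_i^{5},
\]
and a direct computation shows that its critical locus $\{s(x) = 0,\ p = 0\}$ is $\mathcal{Z}$ itself (in particular proper), consistently with the description of $\IR_W$ above.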
Recall that $\vec{\varsigma}$ is a collection of connected components of the
inertia stack of $\cX$.
The moduli space $\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)$ parameterizes stable
maps $\cC \to \mathcal{Z}$ such that the composition $\cC \to \mathcal{Z} \to \cX$ has
curve class $\beta$, and sectors $\vec{\varsigma}$.
In particular, $\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)$ is a disjoint union
parameterized by curve classes $\beta'$ on $\mathcal{Z}$ such that
$\iota_* \beta' = \beta$ under the inclusion
$\iota\colon \mathcal{Z} \to \cX$.
Combining Theorem \ref{thm:reduced=local} with the results in \cite{ChLi12, KiOh18P, ChLi18P}, and more
generally in \cite{CJW19P, Pi20P}, we obtain:
\begin{proposition}
\label{prop:glsm-gw}
In the above setting, we have
\begin{equation*}
[ \UH^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)]^{\mathrm{red}}
= (-1)^{\rk(\mathbf{E})(1 - g) + \int_\beta c_1(\mathbf{E}) - \sum_{j = 1}^n \age_j(\mathbf{E})} [\scrM_{g, \vec{\varsigma}}(\mathcal{Z}, \beta)]^\mathrm{vir},
\end{equation*}
where $\age_j(\mathbf{E})$ is the age of $\mathbf{E}|_\cC$ at the $j$th
marking (see \cite[Section~7]{AGV08}).
\end{proposition}
Therefore Gromov--Witten invariants of $\mathcal{Z}$ (involving only cohomology classes
from $\cX$) can be computed in terms of log GLSM
invariants.
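As a consistency check in the quintic example: $\rk \mathbf{E} = 1$, $\int_{\beta} c_1(\mathbf{E}) = 5d$ for $\beta$ of degree $d$, and the age terms vanish since $\mathbb{P}^4$ carries no orbifold structure, so the sign in Proposition \ref{prop:glsm-gw} specializes to
\[
(-1)^{(1 - g) + 5d},
\]
matching the sign in the comparison theorem of \cite{ChLi12}.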
\begin{proof}
We will show that the perfect obstruction theory and cosection used
in this paper are compatible with those in \cite{CJW19P}.
Recall the notations
$\IR^\mathrm{cpt} = \SR^\mathrm{cpt}_{g, \vec{\varsigma}}(\mathfrak{P}^\circ, \beta)$ and
$\IU^{\mathrm{cpt}} = \fU^{\mathrm{cpt}}\setminus(\Delta_{\fU^{\mathrm{cpt}}})$ from
\eqref{equ:interior-stack}.
Note that $\IU^{\mathrm{cpt}} = \IU \times (\ocI\cX)^n$ where $ \ocI\cX$ is
the rigidified cyclotomic inertia stack of $\cX$ as in
\cite[Section~3.4]{AGV08}, and $\IU$ is simply the moduli stack of twisted curves.
Note that we have a morphism of distinguished triangles over
$\IR^\mathrm{cpt}$
\[
\xymatrix{
\TT_{\IR^\mathrm{cpt}/\IU^{\mathrm{cpt}}} \ar[r] \ar[d] & \TT_{\IR^\mathrm{cpt}/\IU} \ar[r] \ar[d] & T_{(\ocI\cX)^n}|_{\IR^\mathrm{cpt}} \ar[d]^{\cong} \ar[r] &\\
\pi_{\IR^\mathrm{cpt},*}f^*_{\IR^\mathrm{cpt}}T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma) \ar[r] & \pi_{\IR^\mathrm{cpt},*}f^*_{\IR^\mathrm{cpt},-}T_{\cP_{\IR^\mathrm{cpt},\mathrm{reg}}/\cC_{\IR^\mathrm{cpt}}} \ar[r] & \pi_{\IR^\mathrm{cpt},*}T_{\mathfrak{X}/\mathbf{BC}^*_\omega}|_{\Sigma} \ar[r] & \\
}
\]
where the left vertical arrow is the restriction of the perfect obstruction theory \eqref{equ:obs-compact-evaluation} to $\IR^\mathrm{cpt}$, the middle vertical arrow is precisely the perfect obstruction theory \cite[(18)]{CJW19P}, the vertical arrow on the right follows from \cite[Lemma 3.6.1]{AGV08}, and the bottom is obtained by applying the derived pushforward $\pi_{\IR^\mathrm{cpt},*}$ to \eqref{equ:modified-tangent}. Thus, the perfect obstruction theory defined in this paper is compatible with that of \cite{CJW19P}, hence they define the same absolute perfect obstruction theory of $\IR^\mathrm{cpt}$.
Now applying $R^1\pi_{\IR^\mathrm{cpt},*}$ to the composition \eqref{eq:composing-cosection}, we have
\[
\xymatrix{
R^1\pi_{\IR^\mathrm{cpt},*}f^*_{\IR^\mathrm{cpt}}T_{\mathfrak{P}/\mathbf{BC}^*_\omega}(-\Sigma) \ar[r]^{\cong \ \ } \ar[rd] & R^1\pi_{\IR^\mathrm{cpt},*}f^*_{\IR^\mathrm{cpt},-}T_{\cP_{\IR^\mathrm{cpt},\mathrm{reg}}/\cC_{\IR^\mathrm{cpt}}} \ar[d]\\
& \cO_{\IR^\mathrm{cpt}}
}
\]
where the horizontal isomorphism follows from the compatibility of perfect obstruction theories above, the vertical arrow on the right is the relative cosection \cite[(25)]{CJW19P}, and the skew arrow is the relative cosection \eqref{equ:canonical-cosection} restricted to the open substack $\IR^\mathrm{cpt}$. This means that the cosection in this paper over $\IR^\mathrm{cpt}$ is identical to the one in \cite{CJW19P}. Therefore, the statement follows from \cite[Theorem 1.1.1]{CJW19P}.
\end{proof}
\subsection{FJRW theory}
We discuss in this section how our set-up includes all of FJRW theory,
which is traditionally \cite{FJR13} stated in terms of a
quasi-homogeneous polynomial $W$ defining an isolated singularity at
the origin, and a diagonal symmetry group $G$ of $W$.
We first recall a more modern perspective on the input data for the
FJRW moduli space following \cite[Section~2.2]{FJR18} and
\cite[Section~3]{PoVa16}.
Fix an integer $N$, a finite subgroup $G \subset \mathbb{G}_{m}^N$, and positive
integers $c_1, \dotsc, c_N$ such that $\gcd(c_1, \dotsc, c_N) = 1$.
Let $\mathbb C^*_R$ be the one dimensional sub-torus
$\{(\lambda^{c_1}, \dotsc, \lambda^{c_N})\} \subset \mathbb{G}_{m}^N$, and assume
that $G \cap \mathbb C^*_R$ is a cyclic group of order $r$, which is usually
denoted by $\langle J\rangle$.
Consider the subgroup $\Gamma = G \cdot \mathbb C^*_R \subset \mathbb{G}_{m}^N$.
There is a homomorphism
$\zeta\colon \Gamma \to \mathbb C^*_\omega \cong \mathbb{G}_{m}$ defined by
$G \mapsto 1$ and
$(\lambda^{c_1}, \dotsc, \lambda^{c_N}) \mapsto \lambda^r$.
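Note that $\zeta$ is well-defined: since $\gcd(c_1, \dotsc, c_N) = 1$, the elements of $G \cap \mathbb C^*_R = \langle J \rangle$ are exactly the tuples $(\lambda^{c_1}, \dotsc, \lambda^{c_N})$ with $\lambda \in \mu_r$, and the two prescriptions agree on them since $\lambda^r = 1$.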
\begin{definition}
A {\em $\Gamma$-structure} on a twisted stable curve $\cC$ is a commutative
diagram
\begin{equation*}
\xymatrix{
& \mathbf{B\Gamma} \ar[d] \\
\cC \ar[r] \ar[ur] & \mathbf{BC}^*_\omega.
}
\end{equation*}
A {\em $\Gamma$-structure with fields} \cite{CLL15} is a commutative diagram
\begin{equation*}
\xymatrix{
& [\mathbb C^N / \Gamma] \ar[d] \\
\cC \ar[r] \ar[ur] & \mathbf{BC}^*_\omega.
}
\end{equation*}
\end{definition}
\begin{remark}
A special case of FJRW theory is the $r$-spin theory, whose logarithmic
GLSM was discussed in \cite{CJRS18P}.
In this case, $N = 1$, $\mathbb C^*_R = \Gamma$, and
$G = \mu_r \subset \mathbb C^*_R$ is the subgroup of $r$th roots of unity.
\end{remark}
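Unwinding the definitions in this case (we only sketch the standard identifications): since $\zeta\colon \Gamma = \mathbb C^*_R \to \mathbb C^*_\omega$ is $\lambda \mapsto \lambda^r$, a $\Gamma$-structure amounts to a line bundle $\mathcal L$ on $\cC$ together with an isomorphism $\mathcal L^{\otimes r} \cong \omega^{\log}_{\cC}$, i.e. an $r$-spin structure, and a $\Gamma$-structure with fields further records a section of $\mathcal L$. At the other extreme, the Fermat quintic $W = x_1^5 + \dotsb + x_5^5$ with its minimal group of diagonal symmetries corresponds to the data $N = 5$, $c_1 = \dotsb = c_5 = 1$, and $G = \langle J \rangle \cong \mu_5$ embedded diagonally, so that $r = 5$.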
\begin{lemma}
\label{lem:hybrid-vs-CLL}
There is hybrid target data (as in Section~\ref{ss:target-data})
such that there is a commutative diagram
\begin{equation*}
\xymatrix{
[\mathbb C^N/\Gamma] \ar[r]^{\sim} \ar[dr] & \mathfrak{P}^\circ \ar[d] \\
& \mathbf{BC}^*_\omega.
}
\end{equation*}
\end{lemma}
\begin{proof}
This is a special case of Lemma~\ref{lem:hybrid-match} below.
\end{proof}
There are several constructions of the FJRW virtual cycle in full
generality \cite{CKL18, FJR08, KL18, PoVa16}.
The construction closest to ours, and the one we follow here, is
the approach of \cite{CLL15} using cosection localized virtual classes
in the special case of narrow insertions at all markings.
In the FJRW situation, by Lemma~\ref{lem:hybrid-vs-CLL}, the moduli space $\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^\circ, \beta)$ of stable $R$-maps is
the same as the moduli of $G$-spin curves with fields in \cite{CLL15}.
Indeed, $\cX$ is a point, and all compact-type
sectors are narrow.
In this case, Proposition~\ref{prop:compact-evaluation} (1) implies that
$\SR^{\mathrm{cpt}}_{g, \vec{\varsigma}}(\mathfrak{P}^\circ, \beta) = \SR_{g, \vec{\varsigma}}(\mathfrak{P}^\circ, \beta)$.
The perfect obstruction theories in this paper are constructed in a
slightly different way from the ones in \cite{CLL15} or \cite{CJRS18P}, in
that we construct them relative to a moduli space of twisted curves instead
of a moduli space of $G$-spin curves.
These constructions are related via a base-change of obstruction
theories as in \cite[Lemma~A.2.2]{CJW19P}, and in particular give rise
to the same virtual class.
Given a superpotential $W$ with proper critical locus, the cosection
constructed in Section~\ref{sss:cosection-localized-class} is easily
seen to agree with the one in \cite{CLL15}.
Therefore, $[\IR^\mathrm{cpt}]^\mathrm{vir}$ is the FJRW virtual class, and log GLSM
recovers FJRW theory in the narrow case.
\subsection{Hybrid models}\label{ss:ex-hyb-model}
The hybrid GLSMs considered in the literature \cite{CFGKS18P, Cl17, FJR18}
fit neatly into our set-up, and they generalize
the examples of the previous sections.
In this paper, though, we restrict ourselves to the case of compact
type insertions, and to $\infty$-stability in order to include non-GIT quotients.
The input data of a hybrid GLSM is the following: Let
$G \subset \mathbb{G}_{m}^{K + N}$ be a sub-torus, and $\theta\colon G \to \mathbb C^*$
be a character such that the stable locus $\mathbb C^{K, s}$ and the
semi-stable locus $\mathbb C^{K, ss}$ for the $G$-action on
$\mathbb C^K = \mathbb C^K \times \{0\}$ agree, and that
$\mathbb C^{K+N, ss} = \mathbb C^{K,ss}\times \mathbb C^{N}$.
Then $[\mathbb C^{K, ss} \times \mathbb C^N / G]$ is the total space of a vector
bundle $\mathbf{E}^\vee$ on a Deligne--Mumford stack
$\cX = [\mathbb C^{K, ss} / G]$.
Furthermore, assume that there is a one-dimensional subtorus
$\mathbb C^*_R = \{(1, \dotsc, 1, \lambda^{c_1}, \dotsc, \lambda^{c_N})\}
\subset \mathbb{G}_{m}^{K + N}$ such that $c_i > 0$ for all $i$, and
$G \cap \mathbb C^*_R \cong \ZZ/r\ZZ$.
Let $\Gamma = G\cdot \mathbb C^*_R$, and define
$\zeta\colon \Gamma \to \mathbb C^*_\omega$ via $G \mapsto 1$ and
$(\lambda^{c_1}, \dotsc, \lambda^{c_N}) \mapsto \lambda^r$.
Given this set-up, the moduli space of $\infty$-stable LG quasi-maps
\cite[Definition~1.3.1]{CFGKS18P} is the same as the moduli space of
$R$-maps to the target $[\mathbb C^{K, ss} \times \mathbb C^N/\Gamma] \to \mathbf{BC}^*_\omega$.
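A standard example, stated here only as an illustration: for the complete intersection of two cubics in $\mathbb{P}^5$, the LG phase takes $G = \mathbb C^* \subset \mathbb{G}_m^{2+6}$ acting with weights $(-3,-3,1,\dotsc,1)$, with $\theta$ chosen so that $\mathbb C^{2,ss} = \mathbb C^2 \setminus \{0\}$. Then $\cX = [\mathbb C^{2,ss}/G]$ is a $\mu_3$-gerbe over $\mathbb{P}^1$, the bundle $\mathbf{E}^\vee$ has rank $6$, the subtorus $\mathbb C^*_R$ acts with $c_1 = \dotsb = c_6 = 1$, and $G \cap \mathbb C^*_R \cong \ZZ/3\ZZ$, so that $r = 3$; cf. the hybrid model for $X_{3,3}$ studied in \cite{Cl17}.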
Analogously to the previous section, we have the following:
\begin{lemma}
\label{lem:hybrid-match}
There is hybrid target data (as in Section~\ref{ss:target-data})
such that there is a commutative diagram
\begin{equation*}
\xymatrix{
[\mathbb C^{K, ss} \times \mathbb C^N/\Gamma] \ar[r]^-{\sim} \ar[dr] & \mathfrak{P}^\circ \ar[d] \\
& \mathbf{BC}^*_\omega.
}
\end{equation*}
\end{lemma}
\begin{proof}
Choose a splitting
$\mathbb{G}_{m}^{K + N} \cong T \times \mathbb C^*_R$ into
tori.
Let $H$ be the projection of $G$ to $T$.
Then there is an isomorphism $\Gamma \cong H \times \mathbb C^*_R$
defined by the projections, and the homomorphism
$\zeta\colon \Gamma \to \mathbb C^*_\omega$ becomes of the form
$(\lambda, h) \mapsto \lambda^r \chi(h)$ for the character
$\chi := \zeta|_{H} \colon H \to \mathbb C^*_\omega$.
Set $\cX = [\mathbb C^{K, ss}/H]$ and let $\mathbf{L}$ be the line bundle induced
by $\chi$.
Then $[\mathbb C^{K, ss} \times \mathbb C^N / H]$ is a rank $N$ vector bundle
over $\cX$ with a splitting $\mathbf{E}^\vee = \oplus_j \mathbf{E}^{\vee}_{j}$
according to the weights $c_j$ of the $\mathbb C^*_R$-action.
Consider $\mathfrak{X} := [\mathbb C^{K, ss}/ \Gamma] \cong \mathbf{BC}^*_R \times \cX \to \mathbf{BC}^*_\omega \times \cX$ induced by the line bundle
$\mathcal L_R^{\otimes r} \boxtimes \mathbf{L}$ and the identity on the second
factor.
Here, $\mathcal L_R$ is the universal line bundle on $\mathbf{BC}^*_R$.
The universal spin structure $\Sp$ is the pull-back of $\mathcal L_R$. We then have $\mathfrak{P}^{\circ} \cong [\vb(\oplus_i \mathbf{E}^{\vee}_i)/\mathbb C^*_R] \to \mathbf{BC}^*_\omega$ which is the same as $[\mathbb C^{K, ss} \times \mathbb C^N / \Gamma] \to \mathbf{BC}^*_\omega$.
\end{proof}
It is a straightforward verification that the hybrid GLSM virtual
cycles constructed in our paper agree with those constructed in
\cite{Cl17, FJR18}.
Indeed, the absolute perfect obstruction theory and cosection for
$\IR^{\mathrm{cpt}}$ constructed in this paper agree with the ones in the
literature (to see this, we again need the base-change lemma
\cite[Lemma~A.2.2]{CJW19P}).
We leave the comparison to \cite{CFGKS18P} for future work.
\section{Properties of the stack of stable logarithmic $R$-maps}
\label{sec:properties}
In this section, we establish Theorem \ref{thm:representability}.
\subsection{The representability}
For convenience, we prove the algebraicity of the stack $\mathscr{R}(\mathfrak{P})$ of
all log R-maps with all possible discrete data, since the discrete data
specifies open and closed components, and stability is an open
condition.
Consider the stack of underlying R-maps $\fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)$ which associates to any scheme $\underline{S}$ the category of commutative diagrams
\[
\xymatrix{
\uC\ar[rd]_{\omega^{\log}_{\ucC/\underline{S}}} \ar[r]^{\underline{f}} & \underline{\mathfrak{P}} \ar[d] \\
& \mathbf{BC}^*_\omega
}
\]
where $\uC \to \underline{S}$ is a family of twisted curves.
As proved in \cite{AbCh14, Ch14, GrSi13}, the tautological morphism
$\mathscr{R}(\mathfrak{P}) \to \fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)$ is represented by log
algebraic spaces, see also \cite[Theorem~2.11]{CJRS18P}.
To show that $\mathscr{R}(\mathfrak{P})$ is algebraic, it remains to prove the
algebraicity of $\fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)$.
Now consider the tautological morphism
\[
\fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega) \to \fMtw
\]
where $\fMtw$ is the stack of twisted pre-stable curves.
For any morphism $\underline{S} \to \fMtw$, the corresponding pre-stable
curve $\underline{\cC} \to \underline{S}$ defines a fiber product
$\underline{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}\underline{\cC}$.
For any $\underline{T} \to \underline{S}$, the fiber product
$$\fS_{\underline{S}} := \underline{S}\times_{\fMtw}\fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)$$
has $\underline{T}$-points parameterizing sections of the projection $\underline{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}\underline{\cC}_{\underline{T}} \to \underline{\cC}_{\underline{T}} := \underline{\cC}\times_{\underline{S}}\underline{T}$.
Note that the composition $\underline{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}\underline{\cC} \to \underline{\cC} \to \underline{S}$ is proper and of Deligne--Mumford type.
Since being a section is an open condition, the stack $\fS_{\underline{S}}$ is an open substack of the stack parameterizing pre-stable maps to the family of targets $\underline{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}\underline{\cC} \to \underline{S}$, which is algebraic by the algebraicity of Hom-stacks in \cite[Theorem 1.2]{HaRy19}.
Hence, $\fS_{\underline{S}}$ is algebraic over $\underline{S}$.
This proves the algebraicity of $\fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)$.
\subsection{Finiteness of automorphisms}
We now verify that $\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ is of
Deligne--Mumford type.
Let $f\colon \cC \to \mathfrak{P}$ be a pre-stable $R$-map.
An automorphism of $f$ over $\underline{S}$ is an automorphism of the log
curve $\cC \to S$ over $\underline{S}$ which fixes $f$.
Denote by $\Aut(f/S)$ the sheaf of automorphism groups of $f$ over
$\underline{S}$.
Since the underlying stack $\underline{\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)}$
parameterizes minimal objects as in Definition \ref{def:minimal}, it
suffices to prove the following:
\begin{proposition}\label{prop:finite-auto}
Assume $f$ as above is minimal and stable, and that $\underline{S}$ is a
geometric point.
Then $\Aut(f/S)$ is a finite group.
\end{proposition}
\begin{proof}
By \cite{Ch14} and \cite{GrSi13}, it suffices to show that the
automorphism group of the underlying object is finite, see also
\cite[Propositions~2.10 and~3.13]{CJRS18P}.
By abuse of notation, we leave out the underlines, and assume all
stacks and morphisms are equipped with the trivial logarithmic
structures.
Since the dual graph of $\cC$ has finitely many automorphisms, it
suffices to consider the case that $\cC$ is irreducible.
After possibly taking normalization, and marking the preimages of
nodes, we may further assume that $\cC$ is smooth.
Suppose $f$ has infinitely many automorphisms.
Then either $\cC$ is smooth and rational with fewer than three
markings, or $\cC$ is an unmarked genus one curve.
In both cases, the morphism $g := \ft \circ f\colon \cC \to \cX$
contracts the curve to a point $x \in \cX$.
We first consider the cases that $\cC$ is rational with two
markings, or it is of genus one without any markings.
In both cases, we have $\omega^{\log}_{\cC/S} \cong \cO_{\cC}$.
Thus the morphism $\cC \to \mathbf{BC}^*_\omega$ induced by $\omega^{\log}_{\cC/S}$
factors through the universal quotient $\spec \mathbf{k} \to \mathbf{BC}^*_\omega$.
We obtain a commutative diagram
\begin{equation}\label{diag:finite-auto-trivial-omega}
\xymatrix{
\cC \ar@/_3ex/[rd] \ar[r]_{f_\mathbf{k}} \ar@/^3ex/[rr]^{f} & \mathfrak{P}_\mathbf{k} \ar[r] \ar[d] & \mathfrak{P} \ar[d] \\
& \spec \mathbf{k} \ar[r] & \mathbf{BC}^*_\omega
}
\end{equation}
where the square is cartesian. Since the automorphism group of $f$
is infinite, the automorphism group of $f_\mathbf{k}$ is infinite as well.
Thus $f_\mathbf{k}$ contracts $\cC$ to a point of the Deligne--Mumford stack
$\mathfrak{P}_\mathbf{k}$.
Then we have
\[
\deg\big(f_\mathbf{k}^*\cO(\infty_{\mathfrak{P}_\mathbf{k}})\big) = \deg\big( f^*\cO(\infty_{\mathfrak{P}})\big) = 0
\]
which contradicts the stability of $f$ as in \eqref{equ:hyb-stability}.
Now assume that $\cC$ is rational with at most one marking.
Suppose there is no point $q \in \cC$ such that $f(q) \in \mathbf{0}_\mathfrak{P}$.
Let $f_{\cX}\colon \cC \to \infty_{\cX}$ be the composition
$\cC \to \mathfrak{P} \setminus \mathbf{0}_\mathfrak{P} \to \infty_{\mathfrak{P}} \to \infty_{\cX}$
where $\mathfrak{P} \setminus \mathbf{0}_\mathfrak{P} \to \infty_{\mathfrak{P}}$ is the projection from
$\mathbf{0}_\mathfrak{P}$ to $\infty_\mathfrak{P}$, see
Proposition~\ref{prop:curve-in-infinity}.
Since automorphisms of $f$ fix $f_{\cX}$, the map $f_{\cX}$
contracts $\cC$ to a point of $\infty_{\cX}$, hence
$\deg \big(f_{\cX}^*\cO_{\infty_{\cX}}(\frac{r}{d})\big) = 0$.
Proposition \ref{prop:curve-in-infinity} immediately leads to a
contradiction to the stability condition \eqref{equ:hyb-stability}.
Thus there must be a point $q \in \cC$ such that $f(q) \in \mathbf{0}_\mathfrak{P}$.
On the other hand, since $\deg \omega^{\log}_{\cC/S} < 0$ and
$\deg g^*\cH = 0$, by the stability condition
\eqref{equ:hyb-stability} we must have
$\deg\big( f^* \cO(\infty_{\mathfrak{P}})\big) > 0$. Thus $\cC$
intersects $\infty_{\mathfrak{P}}$ properly at its unique marking, denoted by
$\sigma$, as the morphism $f$ comes from a log map.
Clearly, $q \neq \sigma$.
Consider the $\mathbb{G}_m$-invariant open subset
$U = \cC \setminus \{q\}$.
Note that $\omega_U^{\log}$ is $\mathbb{G}_m$-equivariantly trivial.
We thus arrive at the same diagram
\eqref{diag:finite-auto-trivial-omega} with $\cC$ replaced by $U$.
The infinite automorphism group implies that $f_\mathbf{k}|_{U}$ is constant.
On the other hand, the image of $U$ must intersect $\infty_{\mathfrak{P}_\mathbf{k}}$
properly.
This is impossible, which completes the proof.
\end{proof}
\subsection{Boundedness}
We next show that the stack $\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$ is of finite type. Consider the following composition
\begin{equation}\label{equ:take-curve}
\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta) \to \fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega) \to \fM_{g,n}
\end{equation}
where $\fM_{g,n}$ is the stack of genus $g$, $n$-marked pre-stable curves, the first arrow is obtained by removing log structures, and the second arrow is obtained by taking coarse source curves. We divide the proof into two steps.
\subsubsection{The composition \eqref{equ:take-curve} is of finite type.}
Let $T \to \fM_{g,n}$ be a morphism from a finite type scheme $T$, and $C \to T$ be the universal curve. Since the question is local on $\fM_{g,n}$, it suffices to prove that
\[
\mathscr{R}_T := \mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)\times_{\fM_{g,n}} T \to \fS_{T} := \fS(\underline{\mathfrak{P}}/\mathbf{BC}^*_\omega)\times_{\fM_{g,n}} T \to T
\]
is of finite type.
For any object $(f\colon \cC_S \to \underline{\mathfrak{P}}) \in \fS_T(S)$, let $C_T$ be
the pull-back of $C \to T$ via $S \to T$.
Then $\cC_S \to C_T$ is the coarse moduli morphism.
Note that $\omega^{\log}_{C/T}$ pulls back to
$\omega^{\log}_{\cC_S/S}$.
We thus obtain a commutative diagram of solid arrows with the unique
square cartesian:
\begin{equation}\label{diag:factor-through-coarse}
\xymatrix{
&& \underline{\mathfrak{P}}_{T} \ar[rr] \ar[d] && \underline{\mathfrak{P}} \ar[d] \\
\cC_S \ar[rr] \ar@{-->}[rru]^{\tilde{f}} \ar@/_1pc/[rrrr]_{\omega^{\log}_{\cC_S/S}} && C \ar[rr]^{\omega^{\log}_{C/T}} && \mathbf{BC}^*_\omega
}
\end{equation}
Then it follows that $f$ factors through a unique dashed arrow
$\tilde{f}$ making the above diagram commutative.
Note that $\underline{\mathfrak{P}}_T \to T$ is a family of proper Deligne--Mumford
stacks with projective coarse moduli spaces over $T$.
Let $\tilde{\beta}$ be the curve class of the fiber of
$\underline{\mathfrak{P}}_T \to T$ corresponding to objects in $\mathscr{R}_T$.
Note that $\tilde{\beta}$ is uniquely determined by the curve class
$\beta$ in $\cX$ and the contact orders.
Thus, the morphism $\mathscr{R}_T \to \fS_T$ factors through the open substack
$\fS_T(\tilde{\beta}) \subset \fS_T$ consisting of the induced maps with curve
class $\tilde{\beta}$.
First, note that the morphism $\mathscr{R}_T \to \fS_T(\tilde{\beta})$ is of
finite type. Indeed, using the same proof as in
\cite[Lemma~4.15]{CJRS18P}, one shows that the morphism
$\mathscr{R}_T \to \fS_T(\tilde{\beta})$ is combinatorially finite
(\cite[Definition 2.14]{CJRS18P}), hence is of finite type by
\cite[Proposition 2.16]{CJRS18P}.
Now let $\scrM_{g,n}(\underline{\mathfrak{P}}_T/T,\tilde{\beta})$ be the stack of
genus $g$, $n$-marked stable maps to the family of targets
$\underline{\mathfrak{P}}_T/T$ with curve class $\tilde{\beta}$.
Then $\fS_T(\tilde{\beta})$ is identified with the locally closed
substack of $\scrM_{g,n}(\underline{\mathfrak{P}}_T/T,\tilde{\beta})$ which to any
$T$-scheme $S$ associates the category of stable maps
$\tilde{f}\colon \cC_S \to \underline{\mathfrak{P}}_T$ over $S$ such that the induced
map $C_S \to C$ from the coarse curve $C_S$ of $\cC_S$ to $C \to T$ is
stable, and is compatible with the marked points.
Since $\scrM_{g,n}(\underline{\mathfrak{P}}_T/T,\tilde{\beta})$ is of finite type over
$T$ by \cite[Theorem 1.4.1]{AbVi02}, $\fS_T(\tilde{\beta})$ is of
finite type.
\subsubsection{The image of \eqref{equ:take-curve} is of finite type}
It remains to show that the image of \eqref{equ:take-curve} is contained in a finite type substack of $\fM_{g,n}$. For this purpose, it suffices to bound the number of unstable components of the source curves in $\mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$.
Let $f\colon \cC \to \mathfrak{P}$ be an $R$-map corresponding to a geometric
log point $S \to \mathscr{R}_{g, \vec{\varsigma}}(\mathfrak{P}, \beta)$.
Observe that the number
\begin{equation}\label{equ:boundedness-degree}
d_{\beta}:= \deg \big(\omega^{\log}_{\cC/S}\otimes {f}^*\cO(\tilde{r} \infty_{\mathfrak{P}})\big)
\end{equation}
is a constant depending only on the genus $g$, the orbifold structure at the markings, and the contact orders. Let $\mathcal{Z} \subset \cC$ be an irreducible component, and set
\[
d_{\beta,\mathcal{Z}} := \deg \big( \omega^{\log}_{\cC/S}\otimes {f}^*\cO(\tilde{r} \infty_{\mathfrak{P}})\big)|_{\mathcal{Z}}.
\]
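Note that, since $\omega^{\log}_{\cC/S}$ is pulled back from the log dualizing sheaf of the coarse curve, the constant above is explicitly
\[
d_{\beta} = (2g - 2 + n) + \deg f^{*}\cO(\tilde{r}\,\infty_{\mathfrak{P}}),
\]
where $n$ is the number of markings, and the last degree is determined by the contact orders at the markings (cf. the separatedness argument below).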
Let $g := \ft \circ f$ be the pre-stable map underlying $f$.
An irreducible component $\mathcal{Z}\subset \cC$ is called \emph{$g$-stable}
if $(\deg g^*\cH)|_{\mathcal{Z}} > 0$ or $\mathcal{Z}$ is a stable component of the
curve $\cC$.
Otherwise, $\mathcal{Z}$ is called \emph{$g$-unstable}.
Suppose $\mathcal{Z}$ is $g$-unstable.
Then by the stability condition \eqref{equ:hyb-stability} and
$(\deg g^*\cH)|_{\mathcal{Z}} = 0$, we have
\[
d_{\beta,\mathcal{Z}} \ge \deg \big( (\omega^{\log}_{\cC/S})^{1 + \delta}\otimes {f}^*\cO(\tilde{r} \infty_{\mathfrak{P}})\big)|_{\mathcal{Z}} > 0.
\]
Note that $\mathfrak{P}_\mathbf{k}$ is a proper Deligne--Mumford stack, and the stack of
cyclotomic gerbes in $\mathfrak{P}_\mathbf{k}$ is of finite type, see
\cite[Definition~3.3.6, Corollary~3.4.2]{AGV08}.
Thus there exists a positive integer $\lambda$ such that if $\mathfrak{P}_\mathbf{k}$
has a cyclotomic gerbe banded by $\mu_k$, then $k \mid \lambda$.
Since $\underline{f}$ factors through a representable morphism $\tilde{f}$ in
\eqref{diag:factor-through-coarse}, we have
$d_{\beta,\mathcal{Z}} \ge \frac{1}{\lambda}$.
We now turn to the $g$-stable components.
Since the genus is fixed, and the orbifold structure of $\mathcal{Z}$ is
bounded, the number of $g$-stable components is bounded.
Let $\mathcal{Z}$ be a $g$-stable component.
We have the following two possibilities.
Suppose $f(\mathcal{Z}) \not\subset \infty_{\mathfrak{P}}$, hence
$\deg \big({f}^*\cO(\tilde{r} \infty_{\mathfrak{P}})\big)|_{\mathcal{Z}} \geq 0$.
Then we have $d_{\beta,\mathcal{Z}} \geq -1$ where the equality holds only if
$\mathcal{Z}$ is a rational tail.
Now assume that $f(\mathcal{Z}) \subset \infty_{\mathfrak{P}}$.
By Proposition~\ref{prop:curve-in-infinity}, we have
\[
d_{\beta,\mathcal{Z}} = \deg \big(g^*\mathbf{L}\otimes (\underline{f}_{\cX})^*\cO_{\infty_{\cX}}(\tfrac{r}{d})\big).
\]
Since $\deg (\underline{f}_{\cX})^*\cO_{\infty_{\cX}}(\frac{r}{d}) \geq 0$
and $\deg g^*\mathbf{L}$ is bounded below by some number only depending on
$\mathbf{L}$ and the curve class $\beta$, we conclude that $d_{\beta,\mathcal{Z}}$ is
also bounded below by some rational number independent of the choice of
$\mathcal{Z}$.
Finally, note that
\[
d_{\beta} = \sum_{\mathcal{Z}\colon\text{ $g$-stable}} d_{\beta,\mathcal{Z}} + \sum_{\mathcal{Z}\colon\text{ $g$-unstable}} d_{\beta,\mathcal{Z}}.
\]
The above discussion implies that the first summation is bounded below by a number depending only on the discrete data $\beta$, while each term in the second summation is at least $\frac{1}{\lambda}$. We thus conclude that the number of irreducible components of the source curve $\cC$ is bounded.
This finishes the proof of the boundedness.
\subsection{The set-up of the weak valuative criterion}\label{ss:valuative-set-up}
Let $R$ be a discrete valuation ring (DVR), $K$ its quotient field, $\fm \subset R$ the maximal ideal, and $k = R/\fm$ the residue field. Write $\underline{S} = \spec R$, $\underline{\eta} = \spec K$, and $\underline{s} = \spec k$.
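(The reader may keep in mind the typical case $R = \mathbf{k}[[t]]$ and $K = \mathbf{k}((t))$, so that $\underline{s}$ is the closed point and $\underline{\eta}$ the generic point of $\underline{S}$.)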
Our next goal is to prove the weak valuative criterion of stable log R-maps.
\begin{theorem}\label{thm:weak-valuative}
Let $f_{\eta}\colon \cC_{\eta} \to \mathfrak{P}$ be a minimal log $R$-map
over a log $K$-point $\eta$.
Possibly replacing $R$ by a finite extension of DVRs, there is a
minimal log $R$-map $f_S \colon \cC \to \mathfrak{P}$ over $S$ extending
$f_{\eta}$ over $\underline{\eta}$.
Furthermore, the extension $f_S$ is unique up to a unique
isomorphism.
\end{theorem}
We will break the proof into several steps.
Since the stability condition in Section \ref{sss:stability}
constrains only the underlying structures, by the relative properness of log maps over underlying stable maps \cite{Ch14, AbCh14, GrSi13}, see also \cite[Proposition 2.17]{CJRS18P}, it suffices to prove the existence
and uniqueness of an underlying stable $R$-map
$\underline{f}\colon \underline{\cC} \to \mathfrak{P}$ extending $f_{\eta}$ over $S$,
possibly after a finite extension of the base.
Since the focus is now the underlying structure, we will leave out the
underlines to simplify the notations, and assume all stacks are
equipped with the trivial logarithmic structure for the rest of this
section.
Normalizing along the nodes of $\cC_{\eta}$, possibly taking a further base
change, and marking the preimages of the nodes, we obtain a possibly
disconnected union of smooth curves $\cC_{\eta}^{n}$.
Observe that $\cC_{\eta}^n \to \mathbf{BC}^*_\omega$ induced by
$\omega^{\log}_{\cC_{\eta}^n}$ factors through the corresponding
$\cC_{\eta} \to \mathbf{BC}^*_\omega$ before taking normalization.
Thus, we may assume that $\cC_{\eta}$ is smooth and irreducible.
It is important to notice that every isolated intersection of
$\cC_{\eta}$ with $\infty_{\mathfrak{P}}$ via $f$ is marked.
This will be crucial in the proof below.
\subsection{The separatedness}
We first verify the uniqueness in Theorem \ref{thm:weak-valuative}.
The strategy is similar to \cite[Section~4.4.5]{CJRS18P}, but is carried
out in a more complicated situation.
\subsubsection{Reduction to the comparison of coarse curves}
Let $f_i\colon \cC_i \to \mathfrak{P}$ be a stable underlying R-map over $S$
extending $f_{\eta}$ for $i=1,2$.
Let $\cC_i \to C_i$ and $\cC_{\eta} \to C_{\eta}$ be the corresponding
coarse moduli.
By \eqref{diag:factor-through-coarse}, the morphism $f_i$ factors
through a twisted stable map
$\cC_i \to \cP_i := \underline{\mathfrak{P}}\times_{\mathbf{BC}^*_\omega}C_i$, where $\cP_i$ is a
proper Deligne--Mumford stack over $S$.
By the properness of \cite[Theorem~1.4.1]{AbVi02}, to show that $f_1$
and $f_2$ are canonically isomorphic, it suffices to show that the two
coarse curves $C_1$ and $C_2$ extending $C_{\eta}$ are canonically
isomorphic.
\subsubsection{Merging two maps}
Let $C_3$ be a family of prestable curves over $\spec R$ extending
$C_{\eta}$ with dominant morphisms $C_3 \to C_i$ for $i=1,2$.
By further contracting, we may assume that $C_3$ has no rational
component with at most two special points that is contracted in both
$C_1$ and $C_2$.
Let $\cC_3 \to \cC_1\times \cC_2\times C_3$ be the family of twisted
stable maps over $\spec R$ extending the obvious one
$\cC_\eta \to \cC_1\times \cC_2\times C_3$.
Observe that the composition
$\cC_3 \to \cC_1\times \cC_2\times C_3 \to C_3$ is the coarse moduli
morphism.
Indeed, if there is a component of $\cC_3$ contracted in $C_3$, then
it will be contracted in both $\cC_1$ and $\cC_2$ as well.
Set $U_i^{(0)} = C_3$ for $i=1,2$. Let $U_i^{(k+1)}$ be obtained by
removing from $U_i^{(k)}$ the rational components that have precisely one
special point in $U_i^{(k)}$ and are contracted in $C_i$.
Note that these removed rational components need not be proper, and
their closures may have more than one special point in $C_3$.
We observe that this process must stop after finitely many steps.
Denote by $U_i \subset C_3$ the resulting open subset.
\begin{lemma}\label{lem:cover}
$U_1 \cup U_2 = C_3$.
\end{lemma}
\begin{proof}
Suppose there is a point $z \in C_3 \setminus (U_1\cup U_2)$. Then there is a tree of rational curves in $C_3$ attached to $z$ and contracted in both $C_1$ and $C_2$. This contradicts the assumption on $C_3$.
\end{proof}
We then construct an underlying $R$-map $f_3\colon \cC_3 \to \mathfrak{P}$ by
merging $f_1$ and $f_2$ as follows.
Denote by $\cU_i := \cC_3\times_{C_3}U_i$ for $i=1,2$.
Note that $U_{i} \to C_i$, hence $\cU_i \to \cC_i$, contracts only
rational components with precisely two special points in $U_i$.
In particular, we have
$\omega^{\log}_{\cC_3/S}|_{\cU_i} = \omega^{\log}_{\cC_i/S}|_{\cU_i}$.
This leads to the following commutative diagram:
\begin{equation}\label{diag:merging-maps}
\xymatrix{
&&&& \mathfrak{P} \ar[d] \\
\cU_i \ar@/^1.5pc/[urrrr]^{f_3|_{\cU_i}} \ar[rr] \ar@/_1pc/[rrrr]_{\omega^{\log}_{\cC_3/S}|_{\cU_i}} && \cC_i \ar@/^.5pc/[rru]^{f_i} \ar[rr]^{\omega^{\log}_{\cC_i/S}} && \mathbf{BC}^*_\omega.
}
\end{equation}
where $f_{3}|_{\cU_i}$ is given by the obvious composition. We then observe that the two morphisms $f_3|_{\cU_1}$ and $f_3|_{\cU_2}$ coincide along $\cU_1\cap \cU_2$. Indeed, both morphisms $f_3|_{\cU_i}$ restrict to $f_{\eta}$ over the open dense set $\cC_{\eta} \subset \cU_i$, and $\mathfrak{P} \to \mathbf{BC}^*_\omega$ is proper of Deligne--Mumford type. Thus, $f_3|_{\cU_1}$ and $f_3|_{\cU_2}$ can be glued to an underlying $R$-map $f_3\colon \cC_3 \to \mathfrak{P}$ over $S$.
\subsubsection{Comparing the underlying $R$-maps}
Denote by $\overline{\cU_{i,s}}$ the closure of the closed fiber $\cU_{i,s}$ in $\cC_3$.
\begin{lemma}\label{lem:degree-comparison}
With notation as above, we have
\begin{equation}\label{eq:degree-comparison}
\deg \left(\omega^{\log}_{\cC_3/S} \otimes f_{3}^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\overline{\cU_{i,s}}}
\geq \deg \left(\omega^{\log}_{\cC_i/S} \otimes f_{i}^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\cC_{i,s}}
\end{equation}
\end{lemma}
\begin{proof}
We prove \eqref{eq:degree-comparison} by checking the following
\begin{equation}\label{eq:degree-comparison-local}
\deg \left(\omega^{\log}_{\cC_3/S} \otimes f_3^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\mathcal{Z}}
\geq \deg \left(\omega^{\log}_{\cC_i/S} \otimes f_i^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\mathcal{Z}}
\end{equation}
for each irreducible component $\mathcal{Z} \subset \overline{\cU_{i,s}}$.
Since $\cU_{i,s} \to \cC_{i,s}$ is a dominant morphism contracting
rational components with precisely two special points in $\cU_{i,s}$,
there is an effective divisor $D'$ of $\cC_{3,s}$ supported on
$\overline{\cU_{i}} \setminus \cU_{i}$ such that
\begin{equation}\label{equ:comparing-omega}
\omega^{\log}_{\cC_3/S}|_{\overline{\cU_{i,s}}} = \omega^{\log}_{\cC_i/S}|_{\overline{\cU_{i,s}}}(D').
\end{equation}
Restricting to $\mathcal{Z}$, we obtain
\begin{equation}\label{equ:component-comparing-omega}
\deg \omega^{\log}_{\cC_3/S}|_{\mathcal{Z}} = \deg \omega^{\log}_{\cC_i/S}|_{\mathcal{Z}} + \deg D'|_{\mathcal{Z}}.
\end{equation}
Suppose $\mathcal{Z}$ is contracted in $\cC_i$. By \eqref{diag:merging-maps}, we obtain
\[
\deg f_3^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}} = \deg f_i^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}} = 0.
\]
Then \eqref{eq:degree-comparison-local} immediately follows from \eqref{equ:component-comparing-omega}.
Now assume $\mathcal{Z}$ is mapped to an irreducible component $\mathcal{Z}' \subset \cC_i$. Consider the case that $f_{3}(\mathcal{Z}) \subset \infty_{\mathfrak{P}}$, hence $f_i(\mathcal{Z}') \subset \infty_{\mathfrak{P}}$ by \eqref{diag:merging-maps}. By \eqref{equ:curve-in-infinity}, we obtain the equality in \eqref{eq:degree-comparison-local}.
It remains to consider the case $f_{3}(\mathcal{Z}) \not\subset \infty_{\mathfrak{P}}$.
Let $\mathcal L_3$ and $\mathcal L_i$ be the corresponding spin structures over $\cC_3$
and $\cC_i$ respectively, see Proposition \ref{prop:map-field-equiv}.
Note that $\mathcal L_3|_{\cU_i} \cong \mathcal L_i|_{\cU_i}$ by
\eqref{diag:merging-maps}.
By \eqref{equ:comparing-omega} and Definition \ref{def:spin}, there is
an effective divisor $D$ supported on $\overline{\cU_i} \setminus \cU_i$
such that $r\cdot D = D'$ and
$\mathcal L_{3}|_{\mathcal{Z}} \cong \mathcal L_i|_{\mathcal{Z}}(D|_{\mathcal{Z}})$. By
\eqref{equ:proj-bundle}, we have
\[
\deg f_3^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}} - \deg f_i^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}'} \geq \frac{1}{a}\deg D|_{\mathcal{Z}}.
\]
Combining this with \eqref{equ:component-comparing-omega}, we obtain \eqref{eq:degree-comparison-local}.
\end{proof}
Suppose $C_1 \neq C_2$.
Then we have $U_i \neq C_3$ for some $i$, say $i=1$.
By construction each connected component of $C_3 \setminus U_1$ is a
tree of proper rational curves in $U_2$ with no marked point, hence
$\cT := (\cC_3 \setminus \cU_1) \subset \cU_2$.
By construction, the composition $\cT \to \cC_3 \to \cC_2$ is a closed
immersion and $f_{3}|_{\cT} = f_{2}|_{\cT}$.
Since $\deg \omega^{\log}_{\cC_3/S}|_{\cT} < 0$ (unless
$\cT = \emptyset$), and $\cT$ is contracted in $\cC_1$ and hence maps
to a point in $\cX$, the stability of $f_2$ implies
\[
\deg\left(\omega^{\log}_{\cC_3/S} \otimes f_3^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\cT}
= \deg\left(\omega^{\log}_{\cC_2/S} \otimes f_2^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\cT} > 0.
\]
Using Lemma~\ref{lem:degree-comparison}, we calculate
\begin{multline*}
\deg\left(\omega^{\log}_{\cC_3/S} \otimes f_{3}^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\cC_{3,s}} \\
= \deg\left(\omega^{\log}_{\cC_3/S} \otimes f_3^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\overline{\cU_{1, s}}} + \deg\left(\omega^{\log}_{\cC_3/S} \otimes f_3^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right) \big|_{\cT} \\
\geq \deg\left(\omega^{\log}_{\cC_1/S} \otimes f_1^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_{\cC_{1,s}} + \deg\left(\omega^{\log}_{\cC_3/S} \otimes f_3^*\cO(\tilde{r}\infty_{\mathfrak{P}})\right)\big|_\cT.
\end{multline*}
Since
$\deg f_{3,s}^*\cO(\tilde{r}\infty_{\mathfrak{P}}) = \deg
f_{1,s}^*\cO(\tilde{r}\infty_{\mathfrak{P}})$ is given by the sum of contact orders, we conclude that
$\cT = \cC_3 \setminus \cU_1 = \emptyset$.
Observe that $C_3 = U_1 \to C_1$ contracts proper rational components
with precisely two special points.
Let $Z \subset C_3$ be such a component, and let
$\mathcal{Z} = Z \times_{C_3}\cC_3$.
Since $\cC_3 = \cU_1$, the morphism $f_3$ factors through $f_1$, and we have
\begin{equation}\label{equ:bridge-trivial-degree}
\deg f_3^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}} = 0.
\end{equation}
On the other hand, since $Z$ has two special points in $C_3$ and is
contracted in $C_1$, it is not contracted in $C_2$.
Denote by $\mathcal{Z}' \subset \cC_2$ the component dominating the image of
$Z$ in $C_2$.
Then $\mathcal{Z}'$ has precisely two special points.
Furthermore $f_2|_{\mathcal{Z}'}$ and $f_3|_{\mathcal{Z}}$ coincide away from
the two special points.
Using \eqref{equ:bridge-trivial-degree}, we observe that
$\deg f_2^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}'} = 0$, which contradicts the
stability of $f_2$.
Thus $C_3 \to C_1$ is an isomorphism, and by symmetry so is $C_3 \to C_2$;
hence $C_1$ and $C_2$ are canonically isomorphic.
This finishes the proof of the separatedness.
\subsection{Rigidifying (pre-)stable reductions}\label{sss:representability}
We start constructing the stable limit as in Theorem \ref{thm:weak-valuative}. Recall from Section \ref{ss:valuative-set-up} that it suffices to construct an extension of the underlying structures where $\cC_{\eta}$ is smooth and irreducible. Suppose we have an underlying R-map extending $f_{\eta}$:
\begin{equation}\label{equ:underlying-triangle}
f \colon \cC \to \mathfrak{P}
\end{equation}
where $\cC \to S$ is a pre-stable curve over $S$.
We modify $f$ to obtain a representable morphism to $\mathfrak{P}$ as follows.
Forming the relative coarse moduli \cite[Theorem~3.1]{AOV11}, we
obtain a diagram
\begin{equation*}
\xymatrix{
\cC \ar[d]_\pi \ar[r]^f & \mathfrak{P} \ar[d] \\
\cC^r \ar[ur]^{f^r} \ar[r] & \mathbf{BC}^*_\omega,
}
\end{equation*}
in which the upper triangle is commutative, $f^r$ is representable,
and $\pi$ is proper and quasi-finite.
Note that since
$\omega^{\log}_{C/S}|_{\cC^r} = \omega^{\log}_{\cC^r/S}$, the lower
triangle is also commutative.
\begin{proposition}\label{prop:valuative-representability}
Notations as above, we have:
\begin{enumerate}
\item $f^r$ is a representable underlying $R$-map over $S$ extending $f_{\eta}$.
\item If $f$ satisfies the positivity condition
\eqref{equ:hyb-stability}, then so does $f^r$.
\end{enumerate}
\end{proposition}
\begin{proof}
Both parts follow easily from the above observations.
\end{proof}
\subsection{Pre-stable reduction}
Next, we construct a (not necessarily stable) family
\eqref{equ:underlying-triangle} across the central fiber.
We will show that such a family can be constructed by taking the
stable map limit twice in a suitable way.
It is worth mentioning that the method here is very different from the
one in \cite{CJRS18P}, in order to handle the general situation of this
paper.
In the following, $g_{\eta}$ and $\mathcal L_{\eta}$ denote the pre-stable
map and the spin structure on $\cC_{\eta}$ associated to $f_\eta$.
\subsubsection{The first stable map limit}\label{sss:stable-map-limit-1}
Let $g_0\colon \cC_0 \to \cX$ be any pre-stable map extending
$g_{\eta}$; its existence follows from \cite{AbVi02}.
Possibly after a further base change, we construct the following
commutative diagram:
\begin{equation}\label{diag:stable-map-limits-1}
\xymatrix{
\cC_0'' \ar[d] \ar[rr]^{f_0} && \mathfrak{P} \ar[d] \\
\cC_0' \ar[d] \ar[rr]^{\mathcal L_0} && \mathfrak{X} \ar[d] \\
\cC_0 \ar[rr]^{(\omega^{\log}_{\cC_0/S},g_0) \ \ \ \ } && \mathbf{BC}^*_\omega\times\cX
}
\end{equation}
First, there is a unique stable map limit
$\cC_0' \to \cC_0\times_{\mathbf{BC}^*_\omega\times\cX}\mathfrak{X}$ extending the one given
by the spin structure $\mathcal L_{\eta}$.
This yields the spin structure $\mathcal L_0$ on $\cC_0'$.
Furthermore, the morphism $\cC'_0 \to \cC_0$ is quasi-finite.
We then take the unique stable map limit
$h\colon \cC_0'' \to \cP_{\cC_0'} := \mathfrak{P}\times_{\mathfrak{X}}\cC_0'$ extending the
one given by $f_{\eta}$.
To see the difference between the above stable map limits and a pre-stable reduction, we observe:
\begin{lemma}\label{lem:unfolding-bridges}
Suppose we are given a commutative diagram
\begin{equation}
\xymatrix{
\cC \ar[d] \ar[rr]^{f} && \mathfrak{P} \ar[d] \\
\cC' \ar[rr]^{\omega^{\log}_{\cC'/S}} && \mathbf{BC}^*_\omega
}
\end{equation}
where $\cC'$ and $\cC$ are two pre-stable curves over $S$ such that $\cC \to \cC'$ contracts only rational components with two special points. Then $f$ is a pre-stable reduction as in \eqref{equ:underlying-triangle}.
\end{lemma}
\begin{proof}
The lemma follows from the fact that $\omega^{\log}_{\cC'/S}|_{\cC} = \omega^{\log}_{\cC/S}$.
\end{proof}
Observe that
$\omega^{\log}_{\cC_0/S}|_{\cC'_0} = \omega^{\log}_{\cC'_0/S}$.
If $\cC''_0 \to \cC'_0$ contracts no rational tails then it can only
contract rational bridges.
Thus we obtain a pre-stable reduction in this case by applying Lemma
\ref{lem:unfolding-bridges}.
Otherwise, we show that a pre-stable reduction can be achieved by
repeating stable map limits one more time as follows.
\subsubsection{The second stable map limit}
Set $\cC_1 = \cC''_0$. We will construct the following commutative diagram:
\begin{equation}\label{diag:stable-map-limits-2}
\xymatrix{
\cC_2 \ar[d] \ar[rrrr]^{f_1} &&&& \mathfrak{P} \ar[d] \\
\tilde{\cC}_1' \ar[rr] && \cC_1' \ar[d] \ar[rr]^{\mathcal L_1} && \mathfrak{X} \ar[d] \\
&& \cC_1 \ar[rr]^{(\omega^{\log}_{\cC_1/S},g_1) \ \ \ \ } && \mathbf{BC}^*_\omega\times\cX
}
\end{equation}
First, $g_1$ is the composition of $\cC_1 \to \cC_0$ with $g_0$, and $\mathcal L_1$ is the spin structure over $\cC'_1$ obtained by taking the stable map limit as in \eqref{diag:stable-map-limits-1}.
\smallskip
Second, we construct a quasi-finite morphism of pre-stable curves $\tilde{\cC}_1' \to \cC_1'$ over $S$ such that over $\eta$ it is the identity $\cC_{\eta} \to \cC_{\eta}$, and the identity $\mathcal L_{\eta} \to \mathcal L_{\eta}$ extends to a morphism of line bundles
\begin{equation}\label{equ:spin-iterate}
\mathcal L_0|_{\tilde{\cC}'_1} \to \mathcal L_1|_{\tilde{\cC}'_1}
\end{equation}
whose $r$-th power is the natural morphism
\begin{equation}\label{equ:log-cot-iterate}
(\omega_{\cC_0/S}^{\log} \otimes g_0^* \mathbf{L}^\vee)|_{\tilde{\cC}'_1} \to
(\omega_{\cC_1/S}^{\log} \otimes g_1^* \mathbf{L}^\vee)|_{\tilde{\cC}'_1}.
\end{equation}
Let $\sqrt[r]{\cC_1'}$ be the $r$-th root stack of
$$(\omega_{\cC_0/S}^{\log} \otimes g_0^* \mathbf{L}^\vee)^{\vee}|_{\cC'_1} \otimes
(\omega_{\cC_1/S}^{\log} \otimes g_1^* \mathbf{L}^\vee)|_{\cC'_1},$$
and $\sqrt[r]{(\cC_1',s)}$ be the $r$-th root
stack of the section $s$ of the above line bundle given by \eqref{equ:log-cot-iterate}. We form the fiber product
\[
\hat{\cC}'_1 := \cC'_1\times_{\sqrt[r]{\cC_1'}}\sqrt[r]{(\cC_1',s)},
\]
where the morphism $\cC'_1 \to \sqrt[r]{\cC_1'}$ is defined via
$\mathcal L_0^\vee|_{\cC'_1} \otimes \mathcal L_1|_{\cC'_1}$.
The identities
$\mathcal L_{\eta} = \mathcal L_0|_{\cC_{\eta}} = \mathcal L_{1}|_{\cC_{\eta}}$ induce a
stable map $\cC_{\eta} \to \hat{\cC}'_1$ which, possibly after a
finite base change of $S$, extends to a quasi-finite stable map
$\tilde{\cC}'_1 \to \hat{\cC}'_1$.
Since $\hat{\cC}'_{1} \to \cC'_1$ is quasi-finite, the composition
$\tilde{\cC}'_1 \to \hat{\cC}'_1 \to \cC'_1$ gives the desired
quasi-finite morphism.
Thus, $\mathcal L_1$ pulls back to a spin structure on $\tilde{\cC}'_1$.
Furthermore, the universal $r$-th root of $\sqrt[r]{(\cC_1',s)}$ pulls
back to a section of $(\mathcal L_1 \otimes \mathcal L_0^\vee)|_{\tilde{\cC}'_1}$ as
needed.
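For orientation, we recall the standard local picture of the root stack used here: over an affine chart $\spec A \subset \cC'_1$ on which the line bundle is trivialized and $s$ is a function, we have
\[
\sqrt[r]{(\cC_1', s)}\big|_{\spec A} \cong \big[\spec\big(A[z]/(z^r - s)\big)\big/\mu_r\big],
\]
with the universal $r$-th root corresponding to $z$.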
\smallskip
Finally, we construct $f_1$ in the same way as the stable map limit in
\eqref{diag:stable-map-limits-1} but using the spin structure
$\mathcal L_1|_{\tilde{\cC}'_1}$.
We will show:
\begin{proposition}\label{prop:pre-stable-reduction}
The morphism $\cC_2 \to \tilde{\cC}'_1$ contracts no rational tails.
\end{proposition}
Together with Lemma \ref{lem:unfolding-bridges}, we obtain a pre-stable reduction \eqref{equ:underlying-triangle}.
\subsubsection{The targets of the two limits}
Consider $ \cP_i := \tilde{\cC}'_1\times_{\mathfrak{X}} \mathfrak{P}$ for $i=0,1$,
where the arrow $\tilde{\cC}'_1 \to \mathfrak{X}$ is induced by $\mathcal L_i$.
The morphism \eqref{equ:spin-iterate} induces a birational map
$ c\colon \cP_0 \dashrightarrow \cP_1 $ whose indeterminacy locus is
precisely the infinity divisor $\infty_{\cP_0} \subset \cP_0$ over the
degeneracy locus of \eqref{equ:spin-iterate}.
Its inverse $c^{-1}\colon \cP_1 \dashrightarrow \cP_0$ is given by the
composition
\begin{multline*}
\cP_1 = \mathbb{P}^\mathbf w\left(\bigoplus_{j > 0} (g_1^*(\mathbf{E}_j^\vee) \otimes \mathcal L_1^{\otimes j})|_{\tilde{\cC}'_1} \oplus \cO\right) \\
\dashrightarrow \mathbb{P}^\mathbf w\left(\bigoplus_{j > 0} (g_1^*(\mathbf{E}_j^\vee) \otimes \mathcal L_1^{\otimes j})|_{\tilde{\cC}'_1} \oplus (\mathcal L_1 \otimes \mathcal L_0^\vee)^{\otimes a}|_{\tilde{\cC}'_1}\right) \\
\cong \mathbb{P}^\mathbf w\left(\bigoplus_{j > 0} (g_0^*(\mathbf{E}_j^\vee) \otimes \mathcal L_0^{\otimes j})|_{\tilde{\cC}'_1} \oplus \cO\right) = \cP_0,
\end{multline*}
where the first map is multiplication of the last coordinate by the
$a$th power of the section of
$(\mathcal L_1 \otimes \mathcal L_0^\vee)|_{\tilde{\cC}'_1}$ given by
\eqref{equ:spin-iterate}.
Therefore, the indeterminacy locus of $c^{-1}$ is the zero section
$\mathbf{0}_{\cP_1} \subset \cP_1$ over the degeneracy locus of
\eqref{equ:spin-iterate}.
We have arrived at the following commutative diagram
\begin{equation}\label{diag:compare-target}
\xymatrix{
&& \cP_0 \ar@/_1pc/@{-->}[dd]_{c} \ar@/^1pc/@{<--}[dd]^{c^{-1}} \ar[rrd] && \\
\cC_2 \ar[rru]^{f_0} \ar[rrd]_{f_1} &&&& \tilde{\cC}'_1 \\
&& \cP_1 \ar[rru] &&
}
\end{equation}
where by abuse of notation $f_0$ and $f_1$ are given by the
corresponding arrows in \eqref{diag:stable-map-limits-1} and
\eqref{diag:stable-map-limits-2}.
Indeed, $f_0\colon \cC_2 \to \cP_0$ is given by the composition
$\cC_2 \to \tilde{\cC}'_1 \to \cC'_1 \to \cC_1 \to \mathfrak{P}$
together with the projection $\cC_2 \to \tilde{\cC}'_1$.
\subsubsection{Comparing the two limits along vertical rational tails} A rational tail of $\cC_2$ over the closed fiber is called \emph{vertical} if it is contracted in $\tilde{\cC}'_1$.
\begin{lemma}\label{lem:VRT-in-infty}
If $\mathcal{Z} \subset \cC_2$ is a vertical rational tail, then $f_0(\mathcal{Z}) \subset \infty_{\cP_0}$.
\end{lemma}
\begin{proof}
Note that $f_0$ contracts any vertical rational tail. Suppose $f_0(\mathcal{Z}) \not\subset \infty_{\cP_0}$. Then $c\circ f_0$ is well-defined along $\mathcal{Z}$, hence $f_1|_\mathcal{Z} = c\circ f_0|_{\mathcal{Z}}$ contracts $\mathcal{Z}$. This contradicts the stability of $f_1$ as a stable map limit.
\end{proof}
For $i=0,1$, denote by $p_i\colon \cP_i \dashrightarrow \infty_{\cP_i}$ the projection from the zero section $\mathbf{0}_{\cP_i}$ to $\infty_{\cP_i}$. Thus $p_i$ is a rational map well-defined away from $\mathbf{0}_{\cP_i}$. Furthermore, we observe that $\infty_{\cP_0} \cong \infty_{\cP_1}$. Using this isomorphism, we have $p_0 = p_1\circ c$ and $p_1 = p_0\circ c^{-1}$.
\begin{lemma}\label{lem:const-proj}
Let $\mathcal{Z} \subset \cC_2$ be a vertical rational tail. Then $p_1\circ f_1$ contracts an open dense subset of $\mathcal{Z}$.
\end{lemma}
\begin{proof}
Since $f_1$ is a stable map and $\mathcal{Z}$ is a vertical rational tail, we have $f_1(\mathcal{Z}) \not\subset \mathbf{0}_{\cP_1}$. Thus $p_1\circ f_1$ is well-defined on an open dense subset $U \subset \mathcal{Z}$ such that $f_1(U)$ avoids $\mathbf{0}_{\cP_1}$. Observe that $c^{-1}$ is well-defined on $f_1(U)$. We then have
$
p_1 \circ f_1 |_U = p_0\circ c^{-1} \circ f_1 |_U = p_0 \circ f_0|_U.
$
Here $p_0$ is well-defined on $f_0(U)$ by Lemma \ref{lem:VRT-in-infty}. The statement then follows from the fact that $f_0$ contracts any vertical rational tail.
\end{proof}
\begin{corollary}\label{cor:no-VRT-in-infty}
If $\mathcal{Z} \subset \cC_2$ is a vertical rational tail, then the image
$f_1(\mathcal{Z})$ dominates a line joining $\mathbf{0}_{\cP_1}$ and a point on
$\infty_{\cP_1}$.
\end{corollary}
\begin{proof}
By Lemma \ref{lem:const-proj}, $f_1(\mathcal{Z})$ has support on a fiber of
$p_1$.
Since $\mathcal{Z}$ intersects $\infty_{\cP_1}$ at its unique node, it
suffices to show that $f_1(\mathcal{Z}) \not\subset \infty_{\cP_1}$ hence
$f_1|_{\mathcal{Z}}$ dominates a fiber of $p_1$.
Otherwise, since $p_1|_{\infty_{\cP_1}}$ is the identity, $f_1$
contracts $\mathcal{Z}$ by Lemma \ref{lem:const-proj}.
This contradicts the stability of $f_1$ constructed as a stable map
limit.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:pre-stable-reduction}]
We show that Corollary~\ref{cor:no-VRT-in-infty} and
Lemma~\ref{lem:VRT-in-infty} contradict each other, hence rule out
the existence of vertical rational tails.
Let $\mathcal{Z}\subset \cC_2$ be a vertical rational tail.
The pre-stable map $\cC_2 \to \cX$ factors through $\cC_2 \to \cC_1$
along which $\mathcal{Z}$ is contracted to a smooth unmarked point on
$\cC_1$.
Thus there is a Zariski neighborhood $U \subset \cC_2$ containing
$\mathcal{Z}$ such that $\mathbf{E}_j|_{U}$ splits.
Denote by $\{H_{ijk}\}_{k=1}^{\rk \mathbf{E}_j}$ the collection of
hyperplanes in $\cP_i|_{U}$ corresponding to each splitting factor
of $\mathbf{E}_j|_{U}$.
By Corollary~\ref{cor:no-VRT-in-infty} there is a smooth unmarked
point $q \in \mathcal{Z}$ such that $f_1(q) \in \mathbf{0}_{\cP_1}$, hence
$f_1(q) \in H_{1jk}$ for all $j$ and $k$.
We will show that $f_0(q) \in H_{0jk}$ for all $j$ and $k$ as well,
hence $f_0(q) \in \mathbf{0}_{\cP_0}$, which contradicts
Lemma~\ref{lem:VRT-in-infty}.
First suppose that $\mathcal{Z}$ intersects $H_{1jk}$ properly at $q$ via $f_1$. Let $D_{1jk} \subset U$ be an irreducible component of $f_1^*(H_{1jk})$ containing $q$. Then $D_{1jk}$ is a multi-section over $S$ whose general fiber satisfies $D_{1jk,\eta} \subset f_{1,\eta}^*(H_{1jk,\eta}) = f_{0,\eta}^*(H_{0jk,\eta})$. Taking closures, we obtain $q \in D_{1jk} \subset f_{0}^*(H_{0jk})$, hence $f_0(q) \in H_{0jk}$.
Now suppose $f_1(\mathcal{Z}) \subset H_{1jk}$. Note that $p_1\circ f_1 = p_0 \circ c^{-1} \circ f_1 = p_0 \circ f_0$ is well-defined along an open dense subset of $\mathcal{Z}$. Then Lemma \ref{lem:VRT-in-infty} together with $p_1 \circ f_1(\mathcal{Z}) \subset \infty_{\cP_1} \cap H_{1jk} \cong \infty_{\cP_0} \cap H_{0jk}$ implies that $f_0$ contracts $\mathcal{Z}$ to a point of $\infty_{\cP_0} \cap H_{0jk}$; in particular $f_0(q) \in H_{0jk}$ in this case as well.
\end{proof}
\subsection{Stabilization}
Let $f\colon \cC \to \mathfrak{P}$ be a pre-stable reduction extending
$f_{\eta}$ over $S$ as in \eqref{equ:underlying-triangle}.
We next show that by repeatedly contracting unstable rational bridges
and rational tails as in Section~\ref{sss:stabilize-bridge} and
\ref{sss:stabilize-tail}, we obtain a pre-stable reduction satisfying
\eqref{equ:hyb-stability}.
Together with Proposition~\ref{prop:valuative-representability}, this
will complete the proof of Theorem~\ref{thm:weak-valuative}.
\subsubsection{Stabilizing unstable rational bridges}\label{sss:stabilize-bridge}
Let $\mathcal{Z} \subset \cC$ be an unstable rational bridge. We contract $\mathcal{Z}$ as follows. Consider $\cC \to C \to C'$ where the first arrow is the map to the coarse curve, and the second arrow contracts the component corresponding to $\mathcal{Z}$. Since $\omega^{\log}_{C'/S}|_{\cC} = \omega^{\log}_{\cC/S}$, we have a commutative diagram
\[
\xymatrix{
\cC \ar@/^1pc/[drr]^{f} \ar@/_1pc/[rdd] \ar@{-->}[dr]^{f_{C'}}&& \\
& \mathfrak{P}_{C'} \ar[r] \ar[d] & \mathfrak{P} \ar[d] \\
& C' \ar[r]^{\omega^{log}_{C'/S}} & \mathbf{BC}^*_\omega
}
\]
where the square is cartesian and the dashed arrow $f_{C'}$ is induced by the fiber product. By Corollary \ref{cor:rat-bridge-stability}, $\mathcal{Z}$ is contracted along $f_{C'}$.
Note that $f_{\eta}\colon \cC_{\eta} \to \mathfrak{P}$ yields a stable map $\cC_{\eta} \to \mathfrak{P}_{C'}$ which, possibly after a finite base change, extends to a stable map $f'_{C'}\colon \cC' \to \mathfrak{P}_{C'}$. Let $q \in C'$ be the node to which $\mathcal{Z}$ contracts.
\begin{lemma}\label{lem:remove-unstable-bridge}
The composition $\cC' \to \mathfrak{P}_{C'} \to C'$ is the coarse moduli morphism. Furthermore, let $\tilde{q} \in \cC'$ be the node above $q \in C'$. Then we have $f|_{\cC\setminus \mathcal{Z}} = f'|_{\cC'\setminus \{\tilde{q}\}}$.
\end{lemma}
\begin{proof}
Let $\bar{f}'\colon \bar{\cC}' \to P_{C'}$ be the coarse stable map of $f'_{C'}$, and $\bar{f}_{C'}\colon C \to P_{C'}$ the coarse map induced by $f_{C'}$. Thus $\bar{f}'$ is the stabilization of $\bar{f}_{C'}$ as a stable map. By construction, the image of $\mathcal{Z}$ in $C$ is the only unstable component of $\bar{f}_{C'}$, hence is the only component contracted along $C \to \bar{\cC}'$. Therefore $\cC' \to C'$ is the coarse moduli. Since the modification is local along $\mathcal{Z}$, the second statement follows from the first one.
\end{proof}
Let $f'$ be the composition $\cC' \to \mathfrak{P}_{C'} \to \mathfrak{P}$. The above lemma implies that $\omega^{\log}_{\cC'/S} = \omega^{\log}_{C'/S}|_{\cC'}$. Thus $f'\colon \cC' \to \mathfrak{P}$ is a new pre-stable reduction extending $f_{\eta}$ with $\mathcal{Z}$ removed.
\subsubsection{Stabilizing rational tails}\label{sss:stabilize-tail}
Let $\mathcal{Z} \subset \cC$ be an unstable rational tail, $\cC \to \cC'$ be the contraction of $\mathcal{Z}$, and $p \in \cC'$ be the image of $\mathcal{Z}$. Possibly after a finite extension, we take the stable map limit
\[
f'\colon \cC'' \to \mathfrak{P}_{\cC'} := \cC'\times_{\mathbf{BC}^*_\omega}\mathfrak{P}
\]
extending the one induced by $f_{\eta}$.
We will also use $f'\colon \cC'' \to \mathfrak{P}$ for the corresponding
morphism.
Let $\cT \subset \cC''$ be the tree of rational components contracted
to $p$.
Since $f'$ is a modification of $f$ around $\mathcal{Z}$, we observe that
$f'|_{\cC'' \setminus \cT} = f|_{\cC\setminus \mathcal{Z}}$.
\begin{proposition}\label{prop:stabilize-tails}
The composition $\cC'' \to \mathfrak{P}_{\cC'} \to \cC'$ is the identity. Therefore, $f'\colon \cC' \to \mathfrak{P}$ is a pre-stable reduction extending $f_{\eta}$ with $\mathcal{Z}$ contracted, and which agrees with $f$ everywhere else.
\end{proposition}
The proof of the above proposition occupies the rest of this section. Since $p$ is a smooth unmarked point of $\cC'$, it suffices to show that $\cT$ contains no irreducible component, i.e.\ that it is a single point. We first consider the following case.
\begin{lemma}
Notations and assumptions as above, suppose that $f(\mathcal{Z}) \subset \mathbf{0}_{\mathfrak{P}}$. Then Proposition \ref{prop:stabilize-tails} holds.
\end{lemma}
\begin{proof}
Since $f'|_{\cC'' \setminus \cT} = f|_{\cC\setminus \mathcal{Z}}$, the
assumption implies
$\deg f^*\cO(\infty_{\mathfrak{P}}) \leq \deg
(f')^*\cO(\infty_{\mathfrak{P}})|_{\overline{\cC''_s\setminus\cT}}$.
On the other hand, we have
$\deg (f')^*\cO(\infty_{\mathfrak{P}})|_{\cT} \geq 0$, with equality iff
$\cT$ is a single point.
Thus, the lemma follows from
$ \deg f^*\cO(\infty_{\mathfrak{P}}) = \deg
(f')^*\cO(\infty_{\mathfrak{P}})|_{\overline{\cC''_s\setminus\cT}} + \deg
(f')^*\cO(\infty_{\mathfrak{P}})|_{\cT} $.
\end{proof}
We now impose the condition $f(\mathcal{Z}) \not\subset \mathbf{0}_{\mathfrak{P}}$.
Observe that the pre-stable map $g\colon \cC \to \cX$ contracts $\mathcal{Z}$,
hence factors through a pre-stable map $g'\colon \cC' \to \cX$.
Since $p$ is a smooth unmarked point, we may choose a Zariski
neighborhood $U' \subset \cC'$ of $p$ such that $(g')^*\mathbf{E}_i|_{U'}$
splits for each $i$.
Set $U = U'\times_{\cC'} \cC$.
Then $g^*\mathbf{E}_i|_{U}$ splits as well for each $i$.
The $j$-th splitting factors of $\oplus\mathbf{E}_i|_{U}$ and
$\oplus\mathbf{E}_i|_{U'}$ define families of hyperplanes
\begin{equation}\label{equ:local-hyperplane}
H_j \subset \mathfrak{P}_U, \ \ \ \mbox{and} \ \ \ H'_j \subset \mathfrak{P}_{U'}
\end{equation}
over $U$ and $U'$ respectively for $j = 1, 2, \ldots, n$.
\begin{lemma}\label{lem:unstable-tail}
Notations and assumptions as above, for each $j$ we have $\deg (f^*H_j)|_{\mathcal{Z}} \leq 0$. In particular, $f(\mathcal{Z}) \not\subset \mathbf{0}_{\mathfrak{P}}$ implies that $f(\mathcal{Z}) \cap \mathbf{0}_{\mathfrak{P}} = \emptyset$.
\end{lemma}
\begin{proof}
Observe that $\bigcap_j H_j$ is the zero section $\mathbf{0}_{\mathfrak{P}_U}$. Thus, it suffices to show that $\deg (f^*H_j)|_{\mathcal{Z}} \leq 0$ for each $j$: indeed, if $f(\mathcal{Z}) \not\subset \mathbf{0}_{\mathfrak{P}}$ then $f(\mathcal{Z}) \not\subset H_j$ for some $j$, so the effective divisor $(f^*H_j)|_{\mathcal{Z}}$ has non-positive degree and must be empty, whence $f(\mathcal{Z}) \cap \mathbf{0}_{\mathfrak{P}} \subset f(\mathcal{Z}) \cap H_j = \emptyset$.
Since $\mathcal{Z}$ is contracted by $f$, $\mathbf{E}_i$ and $\mathbf{L}$ are both trivial along $\mathcal{Z}$. Thus, we have $\mathfrak{P}_{\mathcal{Z}} = \mathbb{P}^\mathbf w(\oplus_j \mathcal L_{\mathcal{Z}}^{\otimes i_j} \oplus \cO)$ where the direct sum is given by the splitting of $\mathbf{E}_i$ for all $i$. The corresponding section $\mathcal{Z} \to \mathfrak{P}_{\mathcal{Z}}$ is defined by a collection of sections $(s_1, \ldots, s_n, s_{\infty})$ with no base point, where $s_j \in H^0(\mathcal L^{i_j}\otimes f^*\cO(w_{i_j} \infty_{\mathfrak{P}})|_{\mathcal{Z}})$ and $s_{\infty} \in H^0(f^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}})$. In particular, we have $f^*\cO(H_j)|_{\mathcal{Z}} = \mathcal L^{i_j}\otimes f^*\cO(w_{i_j} \infty_{\mathfrak{P}})|_{\mathcal{Z}}$. Note that $w_{i_j} = a \cdot i_j$ by the choice of weights \eqref{equ:universal-proj}. We calculate
\[
(\mathcal L^{i_j}\otimes f^*\cO(w_{i_j} \infty_{\mathfrak{P}})|_{\mathcal{Z}})^{r}
= (\mathcal L \otimes f^*\cO(\infty_{\mathfrak{P}})|_{\mathcal{Z}})^{\tilde{r} i_j}
= \big(\omega^{\log}_{\cC/S}\otimes f^*\cO(\tilde{r} \infty_{\mathfrak{P}})|_{\mathcal{Z}}\big)^{i_j}.
\]
Since $\mathcal{Z}$ is unstable, we have
$\deg \omega^{\log}_{\cC/S}\otimes f^*\cO(\tilde{r} \infty_{\mathfrak{P}})|_{\mathcal{Z}}
\leq 0$, which implies $\deg (f^*H_j)|_{\mathcal{Z}} \leq 0$.
\end{proof}
To further proceed, consider the spin structure $\mathcal L'$ over $\cC'$ and
observe that $\mathcal L'|_{\cC'\setminus \{p\}} = \mathcal L|_{\cC\setminus \mathcal{Z}}$.
Using the same construction as for \eqref{equ:spin-iterate}, we obtain
a quasi-finite morphism $\widetilde{\cC} \to \cC$ between two pre-stable
curves over $S$ which is isomorphic away from $\mathcal{Z}$ and its pre-image
in $\widetilde{\cC}$, and a canonical morphism of line bundles
$\mathcal L'|_{\widetilde{\cC}} \to \mathcal L|_{\widetilde{\cC}}$ extending the identity
$\mathcal L'|_{\cC'\setminus \{p\}} = \mathcal L|_{\cC\setminus \mathcal{Z}}$, whose $r$-th
power is the canonical morphism
$\omega^{\log}_{\cC'/S}|_{\widetilde{\cC}} \to
\omega^{\log}_{\cC/S}|_{\widetilde{\cC}}$.
Define:
\[
\mathfrak{P}_{\widetilde{\cC}} := \mathfrak{P}\times_{\mathbf{BC}^*_\omega}\widetilde{\cC} \ \ \ \mbox{and} \ \ \ \mathfrak{P}'_{\widetilde{\cC}} := \mathfrak{P}_{\cC'}\times_{\cC'}\widetilde{\cC}
\]
We have arrived at the following commutative diagram
\[
\xymatrix{
\widetilde{\cC} \ar[rr]^{\widetilde{f}} && \mathfrak{P}_{\widetilde{\cC}} \ar[rrd] \ar@/^1pc/@{-->}[dd]^{c^{-1}}&&&& \\
&&&& \widetilde{\cC} \ar[rrd] && \\
\widetilde{\cC}''' \ar[rr]^{\widetilde{f}'} \ar[uu] \ar@/_1pc/[rrrrd]^{f''}&& \mathfrak{P}'_{\widetilde{\cC}} \ar@/^1pc/@{-->}[uu]^{c} \ar[rru] \ar[rrd] &&&& \cC' \\
&& &&\mathfrak{P}_{\cC'} \ar[rru] &&
}
\]
where $\widetilde{f}$ is the section obtained by pulling back $f$, $c$
and $c^{-1}$ are the two birational maps defined using
$\mathcal L'|_{\widetilde{\cC}} \to \mathcal L|_{\widetilde{\cC}}$ similarly to
\eqref{diag:compare-target}, and $\widetilde{f}'$ is the stable map
limit extending the one given by $f_{\eta}$.
By abuse of notation, denote again by $\mathcal{Z}$ the corresponding rational tail of $\widetilde{\cC}$, and
by $\widetilde{\mathcal{Z}} \subset \widetilde{\cC}'''$ the component dominating $\mathcal{Z}$.
By Lemma \ref{lem:unstable-tail}, the image $\widetilde{f}(\mathcal{Z})$ avoids
$\mathbf{0}_{\mathfrak{P}_{\widetilde{\cC}}}|_{\mathcal{Z}}$ which is the indeterminacy locus of
$c^{-1}$.
This implies that
$\widetilde{f}'(\widetilde{\mathcal{Z}}) \subset c^{-1}\circ \widetilde{f}(\mathcal{Z}) \subset
\infty_{\mathfrak{P}'_{\widetilde{\cC}}}$.
Thus, by the commutativity of the above diagram, any rational tail of
$\widetilde{\cC}'''$ contracted to a point on $\mathcal{Z}$ is also contracted by
$\widetilde{f}'$.
Now the stability of $\widetilde{f}'$ as a stable map implies:
\begin{lemma}
$\widetilde{\cC}''' \to \widetilde{\cC}$ contracts no component.
\end{lemma}
Furthermore:
\begin{lemma}
The rational tail $\widetilde{\mathcal{Z}}$ is contracted by $f''$.
\end{lemma}
\begin{proof}
Write $\widetilde{U} = \widetilde{\cC}\times_{\cC}U$.
By abuse of notations, denote by
$H_{j} \subset \mathfrak{P}_{\widetilde{\cC}}$ and
$H'_{j} \subset \mathfrak{P}'_{\widetilde{\cC}}$ the families of hyperplanes
over $\widetilde{U}$ obtained by pulling back the corresponding
hyperplanes in \eqref{equ:local-hyperplane}.
From the construction of $c^{-1}$, we observe that
$\widetilde{f}(\mathcal{Z}) \subset H_j$ for some $H_j$ implies that
$\widetilde{f}'(\widetilde{\mathcal{Z}}) \subset H'_j$.
Suppose $f''(\widetilde{\mathcal{Z}})$ is one-dimensional.
Then $\widetilde{\mathcal{Z}}$ intersects some $H'_j$ properly and
non-trivially.
Since $H'_{j}$ is a family over $\widetilde U$,
$(\widetilde{f}')^*(H'_j)$ contains a non-empty irreducible
multi-section over $U$ which intersects $\widetilde{\mathcal{Z}}$.
Denote this multi-section by $D$.
Consider the general fiber
$D_{\eta} \subset f_{\eta}^*(H'_{j,\eta}) = f_{\eta}^*(H_{j,\eta})$.
The closure $\overline{D_{\eta}} \subset \widetilde{f}^{*}H_j$
intersects $\mathcal{Z}$ non-trivially.
By Lemma \ref{lem:unstable-tail}, we necessarily have
$\widetilde{f}(\mathcal{Z}) \subset H_j$ hence
$\widetilde{f}'(\widetilde{\mathcal{Z}}) \subset H'_{j}$ by the previous
paragraph.
This contradicts the assumption that $\widetilde{\mathcal{Z}}$ and $H'_j$
intersect properly.
\end{proof}
Finally, observe that the coarse pre-stable map of $f''$ factors
through the coarse stable map of $f'\colon \cC'' \to \mathfrak{P}_{\cC'}$.
The above two lemmas show that the unstable components of
$\widetilde{\cC}'''$ with respect to $f''$ are precisely those
contracted in $\cC'$.
Therefore, the arrow $\cC'' \to \cC'$ contracts no component.
This completes the proof of Proposition \ref{prop:stabilize-tails}.
\section{Reducing perfect obstruction theories along boundary}
\label{sec:POT-reduction}
For various applications in this and our subsequent papers
\cite{CJR20P1, CJR20P2}, we further develop the general machinery,
initiated in \cite{CJRS18P}, for reducing a perfect obstruction theory
along a Cartier divisor using cosections.
Furthermore, in Section~\ref{ss:boundary-cycle} we prove a formula
relating the two virtual cycles defined by a perfect obstruction theory
and by its reduction in this general setting. Since log
structures are irrelevant in this section, we will assume all log
structures to be trivial for simplicity.
\subsection{Set-up of the reduction}\label{ss:reduction-set-up}
Throughout this section we will consider a sequence of morphisms of algebraic stacks
\begin{equation}\label{equ:stacks-reduction}
\scrM \to \fH \to \fM
\end{equation}
where $\scrM$ is a separated Deligne--Mumford stack, and the second morphism is smooth of Deligne--Mumford type.
Let $\Delta \subset \fM$ be an effective Cartier divisor, and let $\Delta_{\fH}$ and $\Delta_{\scrM}$ be its pull-backs in $\fH$ and $\scrM$ respectively.
Let $\FF$ be the complex with amplitude $[0,1]$ over $\fM$
\[
\cO_{\fM} \stackrel{\epsilon}{\longrightarrow} \cO_{\fM}(\Delta)
\]
where $\epsilon$ is the canonical section defining $\Delta$.
We further assume two relative perfect obstruction theories
\begin{equation}\label{equ:canonical-POT}
\varphi_{\scrM/\fM} \colon \TT_{\scrM/\fM} \to \EE_{\scrM/\fM} \ \ \ \mbox{and} \ \ \ \varphi_{\fH/\fM} \colon \TT_{\fH/\fM} \to \EE_{\fH/\fM}
\end{equation}
which fit in a commutative diagram
\begin{equation}\label{diag:compatible-POT}
\xymatrix{
\TT_{\scrM/\fM} \ar[rr] \ar[d]_{\varphi_{\scrM/\fM}} && \TT_{\fH/\fM} \ar[d]^{\varphi_{\fH/\fM}|_{\scrM}} \\
\EE_{\scrM/\fM} \ar[rr]^{\sigma^{\bullet}_{\fM}} && \EE_{\fH/\fM}
}
\end{equation}
such that $H^1(\EE_{\fH/\fM}) \cong \cO_{\fM}(\Delta)|_{\fH}$, and the following cosection
\begin{equation}\label{equ:general-cosection}
\sigma_{\fM} := H^1(\sigma^{\bullet}_{\fM}) \colon H^1(\EE_{\scrM/\fM}) \to H^1(\EE_{\fH/\fM}|_{\scrM}) \cong \cO_{\fM}(\Delta)|_{\scrM}
\end{equation}
is surjective along $\Delta_{\scrM}$.
\subsection{The construction of the reduction}
Consider the composition
\[
\EE_{\fH/\fM} \to H^1(\EE_{\fH/\fM})[-1] \cong \cO_{\fM}(\Delta)|_{\fH}[-1] \twoheadrightarrow \cok(\epsilon)|_{\fH}[-1].
\]
Since $\fH \to \fM$ is smooth, the pulled-back section $\epsilon|_{\fH}$ remains injective, so that $\cok(\epsilon)|_{\fH}[-1] \cong \FF|_{\fH}$. Hence the above composition defines a morphism
\begin{equation}\label{equ:reduction-map}
\EE_{\fH/\fM} \to \FF|_{\fH}
\end{equation}
over $\fH$. We form the distinguished triangles
\begin{equation}\label{equ:reduced-POT}
\EE^{\mathrm{red}}_{\fH/\fM} \to \EE_{\fH/\fM} \to \FF|_{\fH} \stackrel{[1]}{\to} \ \ \ \mbox{and} \ \ \ \EE^{\mathrm{red}}_{\scrM/\fM} \to \EE_{\scrM/\fM} \to \FF|_{\scrM} \stackrel{[1]}{\to},
\end{equation}
where the middle arrow in the second triangle is the composition of \eqref{equ:reduction-map} with $\sigma^{\bullet}_{\fM}$.
\begin{theorem}\label{thm:reduction}
Notations and assumptions as above, we have:
\begin{enumerate}
\item There is a factorization of perfect obstruction theories
\[
\xymatrix{
\TT_{*/\fM} \ar[rr]^{\varphi_{*/\fM}} \ar[rd]_{\varphi^{\mathrm{red}}_{*/\fM}} && \EE_{*/\fM} \\
&\EE^{\mathrm{red}}_{*/\fM} \ar[ru]&
}
\]
such that $\varphi^{\mathrm{red}}_{*/\fM}|_{*\setminus\Delta_*} = \varphi_{*/\fM}|_{*\setminus\Delta_*}$ for $* = \scrM$ or $\fH$.
\item There is a canonical commutative diagram
\[
\xymatrix{
\EE^{\mathrm{red}}_{\scrM/\fM} \ar[rr] \ar[d]_{\sigma^{\bullet,\mathrm{red}}_{\fM}} && \EE_{\scrM/\fM} \ar[d]^{\sigma^{\bullet}_{\fM}} \\
\EE^{\mathrm{red}}_{\fH/\fM}|_{\scrM} \ar[rr] && \EE_{\fH/\fM}|_{\scrM}
}
\]
such that $H^1(\EE^{\mathrm{red}}_{\fH/\fM}) \cong \cO_{\fH}$. Furthermore, the {\em reduced cosection}
\[
\sigma^{\mathrm{red}}_{\fM}:= H^1(\sigma^{\mathrm{red},\bullet}_{\fM}) \colon H^1(\EE^{\mathrm{red}}_{\scrM/\fM}) \to H^1(\EE^{\mathrm{red}}_{\fH/\fM}|_{\scrM}) \cong \cO_{\scrM}
\]
is surjective along $\Delta_{\scrM}$, and satisfies $\sigma^{\mathrm{red}}_{\fM}|_{\scrM\setminus\Delta_{\scrM}} = \sigma_{\fM}|_{\scrM\setminus\Delta_{\scrM}}$.
\end{enumerate}
\end{theorem}
This theorem will be proven below in Section~\ref{ss:proof-reduction}.
In case $\fM$ admits a fundamental class $[\fM]$, denote by $[\scrM]^{\mathrm{vir}}$ and $[\scrM]^{\mathrm{red}}$ the virtual cycles given by the perfect obstruction theories $\varphi_{\scrM/\fM}$ and $\varphi^{\mathrm{red}}_{\scrM/\fM}$ respectively.
\begin{remark}
In order to construct the cone $\EE^\mathrm{red}_{\scrM/\fM}$, instead of having the auxiliary stack $\fH$, it suffices to
assume the existence of a cosection
$\sigma_\fM\colon H^1(\EE_{\scrM/\fM}) \to \cO_{\fM}(\Delta)|_\scrM$.
Furthermore, the proof of Theorem~\ref{thm:reduction} shows that if
$\sigma_\fM$ is surjective along $\Delta_\scrM$, then
$\EE^\mathrm{red}_{\scrM/\fM}$ is perfect of amplitude $[0, 1]$.
On the other hand, in practice the auxiliary stack $\fH$ provides a convenient criterion to
ensure the factorization of Theorem~\ref{thm:reduction} (1).
\end{remark}
\subsection{Descending to the absolute reduced theory}\label{ss:general-absolut-theory}
We further assume $\fM$ is smooth. Consider the morphism of triangles:
\begin{equation}\label{diag:relative-to-absolute}
\xymatrix{
\TT_{{*}/\fM} \ar[r] \ar[d]_{\varphi_{*/\fM}^{\mathrm{red}}} & \TT_{{*}} \ar[r] \ar[d]_{\varphi^{\mathrm{red}}_{*}} & \TT_{\fM}|_{{*}} \ar[r]^{[1]} \ar[d]^{\cong} & \\
\EE^{\mathrm{red}}_{{*}/\fM} \ar[r] & \EE^{\mathrm{red}}_{{*}} \ar[r] & \TT_{\fM}|_{{*}} \ar[r]^{[1]} &
}
\end{equation}
for $*=\fH$ or $\scrM$. By \cite[Proposition A.1.(1)]{BrLe00}, $\varphi^{\mathrm{red}}_\scrM$ is a perfect obstruction theory compatible with $\varphi^{\mathrm{red}}_{\scrM/\fM}$, hence induces the same virtual cycle $[\scrM]^{\mathrm{red}}$.
\begin{lemma}\label{lem:cosection-descent}
The induced morphism $H^1(\EE^{\mathrm{red}}_{{\fH}/{\fM}}) \to H^1(\EE^{\mathrm{red}}_{{\fH}})$ is an isomorphism; in particular, both are isomorphic to $\cO_{{\fH}}$.
\end{lemma}
\begin{proof}
Since $\fM$ is smooth, we have $H^1(\TT_{\fM}) = 0$. Consider the induced morphism between long exact sequences
\[
\xymatrix{
H^{0}(\TT_{{\fH}}) \ar[r] \ar[d]^{\cong} & H^{0}(\TT_{\fM}|_{{\fH}}) \ar[r] \ar[d]^{\cong} & H^{1}(\TT_{{\fH}/\fM}) \ar[r] \ar[d] & H^{1}(\TT_{{\fH}}) \ar[r] \ar[d] & 0 \\
H^{0}(\EE^{\mathrm{red}}_{{\fH}}) \ar[r] & H^{0}(\TT_{\fM}|_{{\fH}}) \ar[r] & H^{1}(\EE^{\mathrm{red}}_{{\fH}/\fM}) \ar[r] & H^{1}(\EE^{\mathrm{red}}_{{\fH}}) \ar[r] & 0
}
\]
Since ${\fH} \to \fM$ is smooth, the two horizontal arrows on the left are both surjective. Thus $H^1(\EE^{\mathrm{red}}_{{\fH}/\fM}) \to H^1(\EE^{\mathrm{red}}_{{\fH}})$ is an isomorphism. By Theorem \ref{thm:reduction} (2), we have $H^1(\EE^{\mathrm{red}}_{{\fH}}) \cong \cO_{{\fH}}$.
\end{proof}
By Theorem \ref{thm:reduction}, we obtain a morphism of triangles
\[
\xymatrix{
\EE^{\mathrm{red}}_{\scrM/\fM} \ar[r] \ar[d]_{{\sigma^{\bullet,\mathrm{red}}_{\fM}}} & \EE^{\mathrm{red}}_{\scrM} \ar[r] \ar[d]_{\sigma^{\bullet,\mathrm{red}}} & \TT_{\scrM} \ar[r]^{[1]} \ar[d] & \\
\EE^{\mathrm{red}}_{{\fH}/{\fM}}|_{\scrM} \ar[r] & \EE^{\mathrm{red}}_{\fH}|_{\scrM} \ar[r] & \TT_{\fM}|_{\scrM} \ar[r]^{[1]} &
}
\]
Taking $H^1$ and applying Lemma \ref{lem:cosection-descent}, we have a commutative diagram
\begin{equation}\label{diag:cosections}
\xymatrix{
H^1(\EE^{\mathrm{red}}_{\scrM/\fM}) \ar@{->>}[r] \ar[d]_{\sigma^{\mathrm{red}}_{\fM}} & H^1(\EE^{\mathrm{red}}_{\scrM}) \ar[d]^{\sigma^{\mathrm{red}}} \\
\cO_{\scrM} \ar[r]^{=} & \cO_{\scrM}.
}
\end{equation}
Denote by $\scrM(\sigma^{\mathrm{red}}) \subset \scrM$ the closed substack along which the cosection $\sigma^{\mathrm{red}}$ degenerates, and write $\iota\colon \scrM(\sigma^{\mathrm{red}}) \hookrightarrow \scrM$ for the closed embedding. Let $[\scrM]_{\sigma^{\mathrm{red}}}$ be the cosection localized virtual cycle as in \cite{KiLi13}. We conclude that
\begin{theorem}\label{thm:generali-localized-cycle}
With the assumptions in Section \ref{ss:reduction-set-up} and further assuming that $\fM$ is smooth, we have
\begin{enumerate}
\item The cosection $\sigma^{\mathrm{red}}$ is surjective along $\Delta_{\scrM}$.
\item $\iota_*[\scrM]_{\sigma^{\mathrm{red}}} = [\scrM]^{\mathrm{red}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) follows from the surjectivity of $\sigma^{\mathrm{red}}_{\fM}$ along $\Delta_{\scrM}$ and \eqref{diag:cosections}. (2) follows from \cite[Theorem 1.1]{KiLi13}.
\end{proof}
\subsection{Proof of Theorem \ref{thm:reduction}}
\label{ss:proof-reduction}
By \eqref{diag:compatible-POT}, we obtain a commutative diagram of solid arrows
\begin{equation}\label{diag:compare-POTs}
\xymatrix{
\TT_{\scrM/\fM} \ar@/^1pc/[rrd] \ar@{-->}[rd]_{\varphi^{\mathrm{red}}_{\scrM/\fM}} \ar[dd] &&&& \\
& \EE^{\mathrm{red}}_{\scrM/\fM} \ar[dd] \ar[r] & \EE_{\scrM/\fM} \ar[dd] \ar[r] & \FF|_{\scrM} \ar[r]^{[1]} \ar@{=}[dd] & \\
\TT_{\fH/\fM}|_{\scrM} \ar@/^1pc/[rrd]|{\ \ \ \hole \ \ } \ar@{-->}[rd]_{\varphi^{\mathrm{red}}_{\fH/\fM}|_{\scrM}} &&&& \\
& \EE^{\mathrm{red}}_{\fH/\fM}|_{\scrM} \ar[r] & \EE_{\fH/\fM}|_{\scrM} \ar[r] & \FF|_{\scrM} \ar[r]^{[1]} &
}
\end{equation}
where the two horizontal lines are given by \eqref{equ:reduced-POT}, and the two solid curved arrows are given by \eqref{equ:canonical-POT}.
Since $\fH \to \fM$ is smooth of Deligne--Mumford type, $\TT_{\fH/\fM}$ is the relative tangent bundle $T_{\fH/\fM}$. Thus the composition $\TT_{\fH/\fM} \to \EE_{\fH/\fM} \to \FF|_{\fH}$ is trivial, which leads to the desired arrow $\varphi^{\mathrm{red}}_{\fH/\fM}$.
Similarly, the composition $\TT_{\scrM/\fM} \to \EE_{\scrM/\fM} \to \FF|_{\scrM}$ factors through $\TT_{\fH/\fM}|_{\scrM} \to \FF|_{\scrM}$ hence is also trivial, which leads to $\varphi^{\mathrm{red}}_{\scrM/\fM}$. This proves the factorization part in (1), and the commutative diagram in (2).
For the claim that these are perfect obstruction theories, observe that $\EE^{\mathrm{red}}_{\fH/\fM}$ and $\EE^{\mathrm{red}}_{\scrM/\fM}$ are a priori perfect of amplitude $[0,2]$, as $\FF$ is perfect of amplitude $[0,1]$. It remains to show that $H^2(\EE^{\mathrm{red}}_{\fH/\fM}) = 0$ and $H^{2}(\EE^{\mathrm{red}}_{\scrM/\fM})=0$. Taking the long exact sequence of the first triangle in \eqref{equ:reduced-POT}, we have
\[
H^1(\EE_{\fH/\fM}) \to H^1(\FF|_{\fH}) \to H^2(\EE^{\mathrm{red}}_{\fH/\fM}) \to 0.
\]
Since the left arrow is precisely $\cO_{\fM}(\Delta) \twoheadrightarrow \cok \epsilon$, we obtain $H^2(\EE^{\mathrm{red}}_{\fH/\fM}) = 0$.
Similarly, we have the long exact sequence
\[
H^1(\EE_{\scrM/\fM}) \to H^1(\FF|_{\scrM}) \to H^2(\EE^{\mathrm{red}}_{\scrM/\fM}) \to 0,
\]
where the left arrow is given by the composition
\[
H^1(\EE_{\scrM/\fM}) \stackrel{\sigma_{\fM}}{\to} H^1(\EE_{\fH/\fM}|_{\scrM}) \twoheadrightarrow H^1(\FF|_{\scrM}).
\]
Since $\FF|_{\scrM\setminus \Delta_{\scrM}} = 0$ and $\sigma_{\fM}$ is surjective along $\Delta_{\scrM}$, the above composition is surjective, hence $H^2(\EE^{\mathrm{red}}_{\scrM/\fM}) = 0$.
We next verify that $\varphi^{\mathrm{red}}_{\scrM/\fM}$ and $\varphi^{\mathrm{red}}_{\fH/\fM}$ are obstruction theories. Indeed, the factorization of (1) implies a surjection $H^0(\TT_{\scrM/\fM}) \twoheadrightarrow H^0(\EE^{\mathrm{red}}_{\scrM/\fM})$ and an injection $H^1(\TT_{\scrM/\fM}) \hookrightarrow H^1(\EE^{\mathrm{red}}_{\scrM/\fM})$. Since $\FF|_{\scrM}$ is perfect of amplitude $[0,1]$, the surjection $H^0(\TT_{\scrM/\fM}) \to H^0(\EE^{\mathrm{red}}_{\scrM/\fM})$ is also injective, hence an isomorphism. The case of $\varphi^{\mathrm{red}}_{\fH/\fM}$ can be proved similarly. This completes the proof of (1).
Observe that $H^0(\FF|_{\fH}) = 0$ since $\fH \to \fM$ is smooth. The first triangle in \eqref{equ:reduced-POT} implies an exact sequence
\[
0 \to H^1(\EE^{\mathrm{red}}_{\fH/\fM}) \to H^1(\EE_{\fH/\fM}) \to H^1(\FF|_{\fH}) \to 0.
\]
Using \eqref{equ:general-cosection} and the construction of \eqref{equ:reduction-map}, we obtain $H^1(\EE^{\mathrm{red}}_{\fH/\fM}) \cong \cO_{\fH}$.
Now \eqref{diag:compare-POTs} induces a morphism of long exact sequences
\[
\xymatrix{
0 \ar[r] & H^0(\FF|_{\scrM}) \ar[r] \ar[d]^{\cong} & H^1(\EE^{\mathrm{red}}_{\scrM/\fM}) \ar[r] \ar[d]^{\sigma^{\mathrm{red}}_{\fM}} & H^1(\EE_{\scrM/\fM}) \ar[r] \ar[d]^{\sigma_{\fM}} & H^1(\FF|_{\scrM}) \ar[r] \ar[d]^{\cong}& 0 \\
0 \ar[r] & H^0(\FF|_{\scrM}) \ar[r] & H^1(\EE^{\mathrm{red}}_{\fH/\fM}) \ar[r] & H^1(\EE_{\fH/\fM}) \ar[r] & H^1(\FF|_{\scrM}) \ar[r] & 0
}
\]
The surjectivity of $\sigma^{\mathrm{red}}_{\fM}$ along $\Delta_{\scrM}$ follows from the surjectivity of $\sigma_{\fM}$ along $\Delta_{\scrM}$. This finishes the proof of (2).
\subsection{The reduced boundary cycle}\label{ss:boundary-cycle}
The pull-backs
\[
\EE_{\Delta_{\scrM}/\Delta} := \EE_{\scrM/\fM}|_{\Delta_{\scrM}} \ \ \ \mbox{and} \ \ \ \EE_{\Delta_{\fH}/\Delta} := \EE_{\fH/\fM}|_{\Delta_{\fH}}
\]
define perfect obstruction theories of $\Delta_{\scrM} \to \Delta$ and $\Delta_{\fH} \to \Delta$ respectively. Consider the sequence of morphisms
\[
\EE_{\Delta_{\scrM}/\Delta} \to \EE_{\Delta_{\fH}/\Delta}|_{\Delta_{\scrM}} \to H^{1}(\EE_{\fH/\fM}|_{\Delta_{\scrM}})[-1] \to H^1(\FF)|_{\Delta_{\scrM}}[-1]
\]
where the last arrow is given by \eqref{equ:reduction-map}. Since
\begin{equation}\label{equ:F1}
H^1(\FF|_{\Delta}) = \cO_{\Delta}(\Delta),
\end{equation}
we obtain a triangle
\begin{equation}\label{equ:boundary-red-POT}
\EE^{\mathrm{red}}_{\Delta_{\scrM}/\Delta} \to \EE_{\Delta_{\scrM}/\Delta} \to \cO_{\Delta}(\Delta)|_{\Delta_{\scrM}}[-1] \stackrel{[1]}{\to}
\end{equation}
The two virtual cycles $[\scrM]^{\mathrm{vir}}$ and $[\scrM]^{\mathrm{red}}$ are related as follows.
\begin{theorem}\label{thm:boundary-cycle}
Notations and assumptions as above, we have
\begin{enumerate}
\item There is a canonical factorization of perfect obstruction theories
\[
\xymatrix{
\TT_{\Delta_{\scrM}/\Delta} \ar[rr]^{\varphi_{\Delta_{\scrM}/\Delta}} \ar[rd]_{\varphi^{\mathrm{red}}_{\Delta_{\scrM}/\Delta}} && \EE_{\Delta_{\scrM}/\Delta} \\
&\EE^{\mathrm{red}}_{\Delta_{\scrM}/\Delta} \ar[ru]&
}
\]
Denote by $[\Delta_{\scrM}]^{\mathrm{red}}$ the virtual cycle associated to $\varphi^{\mathrm{red}}_{\Delta_{\scrM}/\Delta}$, called the \emph{reduced boundary cycle}.
\item Suppose $\fM$ is smooth. Then we have a relation of virtual cycles
\[
[\scrM]^{\mathrm{vir}} = [\scrM]^{\mathrm{red}} + i_*[\Delta_{\scrM}]^{\mathrm{red}}
\]
where $i \colon \Delta_{\scrM} \to \scrM$ is the natural embedding.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof of Theorem \ref{thm:boundary-cycle} (1) is similar to that of Theorem \ref{thm:reduction} (1), and will be omitted. We next consider (2).
Recall that $\scrM(\sigma^{\mathrm{red}}) \subset \scrM$ is the locus where
$\sigma^{\mathrm{red}}$, hence $\sigma_{\fM}$, degenerates.
Replacing $\scrM$ by $\scrM\setminus \scrM(\sigma^{\mathrm{red}})$ we may
assume that $\sigma_{\fM}$ is everywhere surjective.
Since the cosection localized virtual cycle $[\scrM]_{\sigma^{\mathrm{red}}}$ is
represented by a Chow cycle supported on $\scrM(\sigma^{\mathrm{red}})$, which
is empty by our assumption, we see that
$[\scrM]_{\sigma^{\mathrm{red}}} = 0$.
By Theorem~\ref{thm:generali-localized-cycle}, it remains to show that
\begin{equation}\label{equ:canonical=boundary}
[\scrM] = i_*[\Delta_{\scrM}]^{\mathrm{red}}.
\end{equation}
To proceed, we consider the triangle
\begin{equation}\label{equ:t-red-POT}
\EE^{\tred}_{\scrM/\fM} \to \EE_{\scrM/\fM} \to \cO_{\fM}(\Delta)|_{\scrM}[-1] \stackrel{[1]}{\to}
\end{equation}
where the middle arrow is given by \eqref{equ:reduction-map} and \eqref{equ:F1}. Similar to the case of (1), we obtain a factorization of perfect obstruction theories
\[
\xymatrix{
\TT_{\scrM/\fM} \ar[rr]^{\varphi_{\scrM/\fM}} \ar[rd]_{\varphi^{\tred}_{\scrM/\fM}} && \EE_{\scrM/\fM} \\
&\EE^{\tred}_{\scrM/\fM} \ar[ru]&
}
\]
Let $[\scrM]^{\tred}$ be the virtual cycle corresponding to the perfect obstruction theory $\varphi^{\tred}_{\scrM/\fM}$. We call $[\scrM]^{\tred}$ the \emph{totally reduced virtual cycle}, to distinguish it from $[\scrM]^{\mathrm{red}}$. Comparing \eqref{equ:boundary-red-POT} and \eqref{equ:t-red-POT}, we have
$
\varphi^{\tred}_{\scrM/\fM}|_{\Delta_{\scrM}} = \varphi^{\mathrm{red}}_{\Delta_{\scrM}/\Delta},
$
hence
\[
i^{!} [\scrM]^{\tred}= [\Delta_{\scrM}]^{\mathrm{red}}.
\]
Since $\fM$ is smooth, as in \eqref{diag:relative-to-absolute}, we may
construct absolute perfect obstruction theories associated to
$\varphi_{\scrM/\fM}$ and $\varphi^{\tred}_{\scrM/\fM}$ respectively:
\[
\varphi_{\scrM}\colon \TT_{\scrM} \to \EE_{\scrM} \ \ \ \mbox{and} \ \ \ \varphi^{\tred}_{\scrM}\colon \TT_{\scrM} \to \EE^{\tred}_{\scrM}.
\]
By the same construction as in
Section~\ref{ss:general-absolut-theory}, the cosection $\sigma_{\fM}$
descends to an absolute cosection
$\sigma\colon H^1(\EE_{\scrM}) \to \cO_{\fM}(\Delta)|_{\scrM}$ which
is everywhere surjective.
Let $\fE_{\scrM}$ and $\fE^{\tred}_{\scrM}$ be the vector bundle stacks
of $\EE_{\scrM}$ and $\EE^{\tred}_{\scrM}$ respectively.
Then $\fE^{\tred}_{\scrM}$ is the kernel cone stack of
$\fE_{\scrM} \to \cO_{\fM}(\Delta)|_{\scrM}$ induced by $\sigma$.
Let $\fC_{\scrM}$ be the intrinsic normal cone of $\scrM$.
Unwinding the definition of cosection localized virtual cycle in
\cite[Definition 3.2]{KiLi13}, we have
\begin{equation}\label{equ:tred=bred}
[\scrM]_\sigma = i^{!} 0^!_{\fE^{\tred}_{\scrM}}[\fC_{\scrM}] = i^![\scrM]^{\tred} = [\Delta_{\scrM}]^{\mathrm{red}},
\end{equation}
where $[\scrM]_\sigma$ is the cosection localized virtual cycle
corresponding to $\sigma$.
Finally, \eqref{equ:canonical=boundary} follows from
$i_*[\scrM]_\sigma = [\scrM]^{\mathrm{vir}}$, see \cite[Theorem~1.1]{KiLi13}.
\end{proof}
\section{Introduction}
\label{sec:introduction}
The nature of dark matter (DM) remains one of the most pressing questions in modern cosmology and astrophysics; despite enormous theoretical and observational/experimental efforts, no definite DM candidate, or even paradigm for the dark sector, has been generally accepted. Direct probes of the dark sector, such as the direct detection experiments \cite{aprile2018dark, aguilar2017first, crisler2018sensei, angloher2017results} and collider searches \cite{boveia2018dark, Kahlhoefer:2017dnp}, have placed only limits on some of the interactions of dark particles. Cosmological and astrophysical observations have placed complementary constraints, such as those derived from the relic abundance requirement \cite{Steigman:2012nb}, and the need to address the core-cusp problem \cite{Moore:1999gc} in the DM galactic distribution. For this last problem, a popular approach has been to assume that the dark sector has appropriately strong, velocity-dependent self-interactions \cite{Spergel:1999mh}. An alternative idea~\footnote{The possibility that DM consists of ultra-light bosons that form a Bose-Einstein condensate on galactic scales has also been studied \cite{Boehmer:2007um, Alexander:2016glq} as a way of addressing the cusp problem.} is to assume that the DM is composed of fermions \cite{domcke2015dwarf, randall2017cores,destri2013fermionic}, and to ascribe the absence of a cusp to the exclusion principle; in this paper we investigate in some detail the viability of this last possibility.
Qualitatively speaking the possibility that the Pauli principle is responsible for the smooth DM profile at the galactic cores can be realized only for sufficiently light fermionic DM: only if the wavelength of such fermions is large enough can we expect the exclusion principle to be effective over distances typical of galaxies. This type of DM would be light; in fact, we will show below that the model provides reasonable results for masses $ \sim 50 \,\hbox{eV}$, consistent with qualitative arguments~\cite{domcke2015dwarf}. Such light DM candidates could not have been in thermal equilibrium during the big-bang nucleosynthesis and large scale structure formation epochs \cite{Berezhiani:1995am, Feng:2008mu, Dodelson:1993je, Shi:1998km, Dodelson:2005tp, Boyarsky:2018tvu, Dasgupta:2013zpn}. This can be achieved by assuming the DM fermions carry a conserved charge, under which all standard model (SM) particles are neutral, in which case there are no renormalizable couplings between the DM fermions and the SM~\footnote{There are, of course, non-renormalizable couplings, but these are proportional to inverse powers of some scale -- the scale of the (heavy) physics that mediates such interactions. We assume that such scale is sufficiently large to ensure absence of SM-DM equilibrium.}. In this situation most constraints are easily met, with the exception of the relic abundance, for which existing approaches \cite{Dasgupta:2013zpn} can be adapted. Alternatively (though this is less attractive), the relic abundance can be ascribed to some primordial abundance generated in the very early universe by a yet-unknown mechanism. In this paper, however, we concentrate on galactic dynamics -- cosmological considerations lie outside the scope of our investigation.
In the calculations below we obtain the DM distribution assuming only {\it(i)} hydrostatic equilibrium, {\it(ii)} non-interacting and isothermal DM, {\it(iii)} asymptotically flat rotation curves, and {\it(iv)} a given baryon density. More specifically, we do not make any assumptions about the shape of the DM distribution or its degree of degeneracy, which differs from the approach used in several related calculations that have appeared in the literature \cite{burkert1995structure, salucci2007, di2018phase,domcke2015dwarf}. One additional salient trait of this model is that it generally requires the presence of a super-massive black hole (SMBH) at the galactic center, though in special cases it can also accommodate galactic configurations without a SMBH.
An interesting argument found in the literature \cite{burkert1995structure, salucci2007, di2018phase,domcke2015dwarf}, based on the requirement that the assumed DM profile be consistent with the observational features (core size, velocity dispersion, etc.) or merely on the DM phase space distribution \cite{Tremaine:1979we}~\footnote{Other lower bounds can be derived from the relic density constraint \cite{randall2017cores}, which we do not consider here.}, leads to a lower-bound constraint on the mass of the DM candidate. Our calculations do not generate this type of constraint because we make no {\it a-priori} assumptions about the DM distribution; in fact, we obtain consistent values as low as $ \sim 20 $ eV (cf. Sect. \ref{sec:large}). In contrast, we do obtain an {\em upper} bound for the DM mass that depends on the asymptotic value of the rotation velocity and the mass of the SMBH (if no black hole is present the bound is trivial).
The rest of the paper is organized as follows. The equilibrium of the DM+baryon system is discussed in the next section; we then apply the results to spherically-symmetric configurations (Sec. \ref{sec:SSS}). In Sec. \ref{sec:applications} we compare the model predictions with observational data for specific galaxies and obtain the DM mass values consistent with these observations. Conclusions are presented in Sec. \ref{sec:conclusions}, while some details of the data we used are provided in the Appendix.
\section{Equilibrium equations}
\label{sec:equil.eq}
As indicated above, we will investigate the viability of a Fermi-Dirac gas as a galactic DM candidate; we will assume that the gas is in local equilibrium, and that its self-interactions can be neglected. Additionally, we also assume the gas is non-relativistic, which we will justify {\it a posteriori}. In this case the hydrostatic stability of a small volume of the DM gas requires
\beq
m n \nabla \Phi + \nabla P =0 \,,
\label{eq:P}
\eeq
where $m $ is the DM mass, $n$ the density of the gas, $P$ its pressure, and $ \Phi $ the gravitational potential. Using the standard thermodynamic relation $ n \, d\mu = dP - s \, dT $, where $ \mu $ is the chemical potential, $T$ the temperature and $s$ the entropy (volume) density of the gas, it follows that
\beq
\nabla(m \Phi + \mu) + \frac sn \nabla T=0 \,.
\eeq
We will assume that $T$ is constant throughout the gas, in which case
\beq
m \Phi + \mu = E_0= \mbox{constant}.
\label{eq:energy}
\eeq
The value of $E_0 $ will be discussed below.
Using \cref{eq:energy} in the Poisson equation for $ \Phi $ gives
\beq
\nabla^2 \mu = - \frac{4\pi m}{M_{\tt pl}^2} \left( \rho_B + m n \right)\,,
\label{eq:eom}
\eeq
where $ M_{\tt pl}$ denotes the Planck mass~\footnote{We work in units where $k_B= \hbar = c = 1 $, where $ k_B$ is Boltzmann's constant}, $ \rho_B $ is the baryon mass density, and $n$ the DM number density (as noted previously); explicitly
\beq
n = - \frac 2{\lambda^3} \mbox{Li}_{3/2}\left( -e^{\mu/T} \right)\, ;\quad \lambda = \sqrt{\frac{2\pi}{m T}}\,,
\label{eq: n}
\eeq
where Li denotes the standard polylogarithm function and $\lambda $ is the thermal wavelength; the factor of 2 is due to the spin degrees of freedom.
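In the two standard limits of the polylogarithm, \cref{eq: n} interpolates between a classical (Maxwell--Boltzmann) gas and a fully degenerate one; we quote both for orientation (the degenerate form follows from the leading Sommerfeld expansion):
\beq
n \simeq \frac2{\lambda^3}\, e^{\mu/T} \quad (e^{\mu/T} \ll 1)\,, \qquad
n \simeq \frac{8}{3\sqrt{\pi}\,\lambda^3} \left( \frac\mu T \right)^{3/2} = \frac{(2 m \mu)^{3/2}}{3\pi^2} \quad (\mu/T \gg 1)\,.
\eeq
We will see below that the first regime describes the halo far from the bulge, while the second becomes relevant only near the galactic center.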
Using standard expressions for the ideal Fermi gas the average DM velocity dispersion is given by
\beq
\sigma^2_{\text{\tiny DM}} = \inv3\vevof{v^2} = \frac P{m\, n} \,; \quad P = - \frac {2T}{\lambda^3} \mbox{Li}_{5/2}\left( -e^{\mu/T} \right)\,.
\label{eq:sigma}
\eeq
Within the model introduced above, the structure of the galaxy is determined by the solution to \cref{eq:eom} with appropriate boundary conditions. To do this our strategy will be to choose an analytic parameterization for $ \rho_B $ consistent with observations, and impose boundary conditions at large distances from the galactic center which lead to flat rotation curves; from this $ \mu({\bf r})$ can be obtained. The solution will depend on the parameters in $ \rho_B$, the DM mass $m$ and the asymptotic rotation velocity $\overline{\vrot}$.
The idea of constraining the DM mass using the phase space density evolution was first suggested by Tremaine and Gunn (TG)~\cite{Tremaine:1979we}. In their seminal approach the DM halo is assumed to be an isothermal classical ideal gas in hydrostatic equilibrium, with a phase space distribution of the form $ f({\bf p},{\bf r})= n(r) \exp[ - p^2/(2 m^2 \sigma^2) ]$, where $ n(r) = n_0/r^2 $. The exclusion principle then requires $ f(0,{\bf r}) < 1 $, which leads to the lower bound $m^4 \,{\raise-3pt\hbox{$\sim$}}\!\!\!\!\!{\raise2pt\hbox{$>$}}\, 0.004 M_{\tt pl}^2/(\sigma r^2)$. This bound then follows from a consistency requirement associated with the adopted form of $f$. The Milky Way dwarf spheroidal satellites, due to their high DM density, allow a simple and robust application of the TG bound (see e.g. \cite{boyarsky2009lower,randall2017cores}), obtaining, for example, $ m \,{\raise-3pt\hbox{$\sim$}}\!\!\!\!\!{\raise2pt\hbox{$>$}}\, 70$ eV using Fornax \cite{randall2017cores}, though uncertainty in the DM core radius limits somewhat the reliability of this bound~\footnote{A large core size cannot be ruled out \cite{di2018phase}, while relaxing the dependence of the DM halo core radius on the stellar component and marginalizing the unknown stellar velocity dispersion anisotropy lead to mass bounds as low as $20$ eV \cite{di2018phase} (though such large haloes are unrealistic and would be at odds with their lifetime due to dynamical friction effects within the Milky Way).}.
In contrast to these assumptions, we use the Fermi-Dirac distribution $ f({\bf p},{\bf r}) = \{ \exp [ p^2/(2m T) - \mu(r)/T ] + 1 \}^{-1}$ that: $i)$ automatically satisfies the exclusion principle constraint; $ii)$ does not factorize into a product of space and momentum functions; and $iii)$ leads to a singular $n$ only when a central SMBH is present. In our approach the DM density profile is determined by the baryon distribution by solving \cref{eq:eom}; we make no assumptions about the DM core radius or the DM distribution in general. In particular, the degree of degeneracy of the fermion distribution function follows from the behavior of $ \mu({\bf r}) $; we will see below that the DM approximates a classical Maxwell-Boltzmann gas far from the bulge and that its quantum nature only becomes important near the galactic center, leading to a core-like profile. Despite these differences, we observe that the values we obtain for $m$ (see below) are roughly consistent with the bounds based on the extended TG approach, especially for smaller dwarf galaxies with higher DM density.
The value of $ \mu $ at the origin will be of interest in interpreting the solutions to \cref{eq:eom}. If $ \mu({\bf r}\to0) \rightarrow + \infty $ then $ \Phi \rightarrow - \infty $, which, as we will show, corresponds to a point-like mass at the origin, a black hole \footnote{This scenario was recently considered in \cite{de2017warmbh} with completely different boundary conditions, without baryons, and with DM mass in the keV range.}. In these cases, the DM density exhibits a cusp at the origin, but for realistic parameters this cusp appears only in the immediate vicinity of the black hole. Outside this region the DM density has a core-like profile. Solutions for which $ \mu(0) $ is finite correspond to galaxies where no central black hole is present, and exhibit `pure' core-like DM densities. The remaining possibility, $ \Phi({\bf r}\to0)\rightarrow + \infty $, describes the unphysical situation of a repulsive point-like object.
\section{Spherically symmetric solutions}
\label{sec:SSS}
In what follows we adopt the simplifying assumption that all quantities depend only on $r = |{\bf r}|$; this is a reasonable assumption for ellipticals, but is problematic for spiral galaxies. We will comment on this when we apply our formalism to specific cases.
It proves convenient to define $ \overline u$ and $x$ by
\beq
x = \frac r A\,, \quad \frac{\overline u(x)} x = \frac\mu T\,; \quad A = \sqrt{\frac{T M_{\tt pl}^2 \lambda^3}{8\pi m^2}}\,,
\label{eq:ubdef}
\eeq
while the baryon density can be written in the form
\beq
\rho_B = \frac{M_{\tt B}}{\left(\frac43\pi a ^3 \right)} F(r/ a )\,,
\label{eq:B-density}
\eeq
where $M_{\tt B}$ is the total bulge mass and $ a $ denotes the scale radius which can be obtained from the effective radius using the explicit form of the baryonic profile function $F$; $ \rho_B $ will be negligible for $ r \gg a $. The normalization for $F$ is taken to be
\beq
\int_0^\infty dy\,y^2 F(y) = \inv3\,.
\eeq
With these definitions \cref{eq:eom} becomes (a prime denotes an $x$ derivative)
\beq
\overline u'' = x \mbox{Li}_{3/2}\left( - e^{\overline u/x} \right) -{\tt q}\, x F(x/X_B)\,, \qquad X_B = a /A\,,\quad {\tt q}=\frac{3M_{\tt B} \lambda^3}{8\pi m a ^3 } \,.
\label{eq:ueom}
\eeq
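To verify this change of variables, note that for spherically symmetric $\mu$ one has $\nabla^2\mu = \partial_r^2(r\mu)/r = T\,\overline u''/(A^2 x)$, while the definition of $A$ in \cref{eq:ubdef} gives $4\pi m A^2/(T M_{\tt pl}^2) = \lambda^3/(2m)$; inserting \cref{eq: n} and \cref{eq:B-density} into \cref{eq:eom} then reproduces \cref{eq:ueom}.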
For most of the examples we consider, $ X_B \lesssim 1 $.
Far from the galactic center $ \rho_B $ can be neglected and the gas density will be small enough so that $ P = n T $ and Li$_{3/2}(-z) \simeq -z $. In this region a `test' object in a circular orbit of radius $r$ will have velocity
$ v_{\tt rot}(r)$ determined by
\beq
v_{\tt rot}^2(r) = \frac{ M_{\tt tot}(r)}{M_{\tt pl}^2 r }\,,
\label{eq:vrot}
\eeq
where $ M_{\tt tot}$ is the total mass ($M_{\tt BH}$ + $M_{\tt B}$ + $M_{\tt DM}$) inside radius $r$. At large distances $v_{\tt rot}(r)$ will approach an $r$-independent value $ \overline{\vrot}$ provided $ M_{\tt tot}(r) \propto r$, which requires $ n \sim 1/r^2$ (since the dark component dominates in the asymptotic region). This then implies $ \overline u = x \ln(b/x^2) $ for some constant $b$; substituting in $ \overline u'' \simeq - x \exp(\overline u/x) $ gives $ b=2$:
\beq
\overline u \rightarrow x \ln \left( \frac2{x^2}\right)\,, \quad x \gg X_B.
\label{eq:uas}
\eeq
The numerical solutions approach the asymptotic expression in \cref{eq:uas} for $ x \,{\raise-3pt\hbox{$\sim$}}\!\!\!\!\!{\raise2pt\hbox{$>$}}\, 1$.
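Indeed, for $ \overline u = x \ln(b/x^2) $ one finds $ \overline u'' = -2/x $, while $ -x\, e^{\overline u/x} = -b/x $; the two agree precisely for $b=2$.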
Using the asymptotic expressions it follows that $ M_{\tt tot}(r) \simeq (16\pi A^2/\lambda^3)m r $, whence \cref{eq:vrot} gives
\beq
T = \frac{1}{2} m \overline{\vrot}^2 \,; \qquad \mbox{where}~~v_{\tt rot}(r)\, \stackrel{r \gg a}\longrightarrow \,\overline{\vrot}\,.
\label{eq:T-vrot}
\eeq
Comparing this with the expression \cref{eq:sigma} we find
\beq
\sigma^{}_{\text{\tiny DM}} = \frac{\overline{\vrot}}{\sqrt{2}} \,, \quad (r \gg a ) \,;
\eeq
it also follows that $ \lambda = \sqrt{4 \pi}/( m \overline{\vrot})$.
We solve \cref{eq:ueom} using \cref{eq:uas} and its $x$ derivative as boundary conditions. The solution~\footnote{For later convenience we explicitly display the dependence on the parameters $ X_B$ and ${\tt q}$.} $ \overline u(x;X_B,{\tt q})$ then ensures that the rotation curves are flat, and is consistent with the chosen baryon profile. Note that in general $ \overline u $ will not vanish at the origin, which implies the behavior
\beq
\Phi \, \stackrel {r \rightarrow 0 }\longrightarrow \, - \frac{A T}m \frac{u_0 }r\,, \quad u_0 = \overline u(0;X_B,{\tt q}) \,.
\label{eq:u_o}
\eeq
For $u_0>0 $ this corresponds to the field generated by a point mass
\beq
M_{\tt BH} = \frac{AT M_{\tt pl}^2}m\, u_0 = \left( \frac{\sqrt{\pi} \, \overline{\vrot}^3}8 \right)^{1/2} \frac{M_{\tt pl}^3}{m^2} u_0
\label{eq:mbh-u0}
\eeq
that we interpret as a black hole at the galactic center: in these cases the boundary conditions are consistent only if a black hole with this particular mass is present. For $u_0<0 $ the solution in \cref{eq:u_o} is unphysical, at least as far as classical non-relativistic configurations are concerned. These two regimes are separated by the curve $ u_0 =0 $ in the $ X_B - {\tt q}$ plane; solutions of this type correspond to galaxies without a central black hole.
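As an illustration of this procedure, the following minimal numerical sketch (ours, for illustration only --- not the code used to produce the results below; the Plummer profile choice, the parameter values $X_B$, ${\tt q}$ and the tolerances are placeholders) integrates \cref{eq:ueom} inward from the asymptotic regime \cref{eq:uas} and reads off $u_0$:
\begin{verbatim}
# Minimal sketch (illustrative only): integrate the dimensionless
# equilibrium equation for ubar(x) inward from the asymptotic regime
# and read off u0 = ubar(0).  Profile and parameters are placeholders.
import numpy as np
import mpmath as mp
from scipy.integrate import solve_ivp

def F_plummer(y):                      # F(y) = (1 + y^2)^(-5/2)
    return (1.0 + y * y) ** -2.5

def rhs(x, u, X_B, q):                 # u = (ubar, ubar')
    ubar, dubar = u
    li = float(mp.polylog(1.5, -mp.exp(ubar / x)))  # Li_{3/2}(-e^{ubar/x})
    return [dubar, x * li - q * x * F_plummer(x / X_B)]

def u0(X_B, q, x_far=25.0, x_near=1e-4):
    # boundary data from ubar -> x ln(2/x^2) at large x
    u_far = x_far * np.log(2.0 / x_far**2)
    du_far = np.log(2.0 / x_far**2) - 2.0
    sol = solve_ivp(rhs, (x_far, x_near), [u_far, du_far],
                    args=(X_B, q), rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]                # u0 > 0 signals a central black hole

print(u0(X_B=0.5, q=10.0))
\end{verbatim}
Integrating from large $x$ inward imposes the boundary conditions exactly as described above; the polylogarithm is evaluated with arbitrary-precision arithmetic since its argument grows rapidly as $x \to 0$ when $u_0 > 0$.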
For the examples that follow, the condition $ \overline u(0;X_B,{\tt q})= u_0 $ is equivalent (to a good approximation) to the simple relation~\footnote{This was obtained numerically, not derived rigorously from the properties of the solutions to \cref{eq:eom}.}
\beq
\ln X_B = \nu(u_0) \ln{\tt q} + c(u_0)\,,
\label{eq:scal.rel}
\eeq
where the functions $ \nu $ and $c$ depend on the form of $F$ in \cref{eq:B-density}, but are generally $\mathcal{O}(1) $. For the choices of $F$ below, and for $ u_0 $ not too close to zero, they can be approximated by algebraic functions:
\beq
c(u_0) \sim \bar c_1 \sqrt{\bar c_2 - u_0}\,, \qquad \nu(u_0 ) \sim -\bar\nu_1 - \bar\nu_2 u_0^2 + \bar\nu_3 u_0^3\,,
\label{eq:c.nu}
\eeq
where $ \bar c_{1,2},\,\bar\nu_{1,2,3} $ are positive and $ O(1) $; values for several choices of $F$ are provided in the next section, see table \ref{fit.params}. The errors in using these expressions are below $ 10\% $, so they are useful for $ u_0 \gg 0.1$. Unfortunately, many cases of interest correspond to $ u_0 \lesssim 0.1 $, so in most results below we will not use \cref{eq:scal.rel,eq:c.nu}, opting instead for a high-precision numerical calculation.
It is worth pointing out that once the boundary conditions at large $r$ are imposed, $u_0 $ is determined by $ X_B$ and ${\tt q}$; it is not a free parameter. Equivalently, $ M_{\tt BH}$ is determined by $m$ and $ \rho_B$; in particular, the presence (or absence) of a black hole and its mass are not additional assumptions, but instead follow naturally from the choice of DM mass and baryon density profile.
The relation \cref{eq:scal.rel} can be used to estimate the DM mass $m$ in terms of the galactic quantities $M_{\tt B},\, a$ and $M_{\tt BH}$. Since $ c(u_0) $ in \cref{eq:c.nu} should be real, a necessary condition for $m$ to be real as well is $ u_0 < \bar c_2 $. This leads to the requirement:
\beq
m^2 < \frac{\bar c_2}{(64/\pi)^{1/4}} \frac{\left( M_{\tt pl}^2 \overline{\vrot} \right)^{3/2}}{M_{\tt BH}}\stackrel{\bar c_2 =1.3}\sim \left( 180\, \hbox{eV} \right)^2 \frac{ \left( 10^3 \overline{\vrot} \right)^{3/2}}{M_{\tt BH}/\left(10^9 M_\odot\right)}\,;
\label{eq:con.con}
\eeq
for most of the specific examples studied below we find $ m \lesssim 100\,\hbox{eV}$ (see Sec. \ref{sec:applications}).
To get an estimate of the magnitudes involved: for $ m\sim 10\, \hbox{eV} $ and $ \overline{\vrot} \sim 300\,$km/s we find $A\sim 20\,$kpc and $ M_{\tt BH} \sim 10^{11}\, u_0\, M_\odot$, so that realistic situations correspond to small values of $u_0$, which automatically satisfy \cref{eq:con.con}.
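For reference, combining the definition of $A$ in \cref{eq:ubdef} with \cref{eq:T-vrot} and $ \lambda = \sqrt{4\pi}/(m\overline{\vrot})$ gives the closed form
\beq
A = \left( \frac{\pi}{4} \right)^{1/4} \frac{M_{\tt pl}}{m^2\, \overline{\vrot}^{1/2}}\,,
\eeq
which indeed yields $A \sim 20\,$kpc for the values just quoted.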
Since $ \overline{\vrot} \ll1 $ for all cases of interest, the gas temperature will be much smaller than the DM mass. In addition, $ \mu/m = \overline{\vrot}^2\, \overline u/(2x) $ (cf. \cref{eq:ubdef}), where we expect $ \overline u \sim \mathcal{O}(1) $ (see Sec. \ref{sec:applications}), so that $ \mu \ll m$ except perhaps in the immediate vicinity of the galactic center, and even then only when $ \overline u(0)\not=0$ (corresponding to $M_{\tt BH}\not=0$). From this it follows that in general the Fermi gas will be non-relativistic, as we assumed above.
We will define the halo (or virial) radius $ R_{\tt hal} $ by the condition $ m n(R_{\tt hal}) = 200 \times \rho_c $, where $ \rho_c \simeq 4.21 \times 10^{-47} \hbox{GeV}^4 $ is the critical density of the Universe. For all cases considered here the density will have reached its asymptotic form (corresponding to \cref{eq:uas}) at $ r = R_{\tt hal} $, so we find
\beq
R_{\tt hal} = \left(10^3 \overline{\vrot} \right)\times 240 \, \mbox{kpc} \,,
\label{eq:rhal}
\eeq
which depends only on $ \overline{\vrot} $; the galactic radius is then $\mathcal{O}(100\,\mbox{kpc})$.
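Explicitly, in the asymptotic region the DM mass density is $m\, n(r) = \overline{\vrot}^2 M_{\tt pl}^2/(4\pi r^2)$ (cf. \cref{eq:vrot}), so the defining condition $m\, n(R_{\tt hal}) = 200\,\rho_c$ gives
\beq
R_{\tt hal} = \frac{\overline{\vrot}\, M_{\tt pl}}{\sqrt{800\, \pi\, \rho_c}}\,,
\eeq
which reproduces \cref{eq:rhal}.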
Taking the zero of energy at infinity imposes the boundary condition $ \Phi(R_{\tt hal}) = - M_{\tt hal}/( M_{\tt pl}^2 R_{\tt hal}) $ so that, using \cref{eq:energy} and \cref{eq:uas},
\beq
E_0 = T \ln \left( \frac{2A^2}{R_{\tt hal}^2} \right) - \frac{M_{\tt hal}}{R_{\tt hal} M_{\tt pl}^2}\,; \quad M_{\tt hal} = M_{\tt B} + 4\pi m \int_0^{R_{\tt hal}} dr\, r^2 n(r) \,,
\eeq
whence $E_0$ is determined by the other parameters of the model.
\subsection{Sample calculation}
To illustrate the model presented above we consider a set of three hypothetical galaxies (cf. table \ref{examples}) for which we display some of the results derived from the calculations described above; the black-hole mass $ M_{\tt BH} $ is calculated using \cref{eq:mbh-u0}. In this section we will assume $ m = 50 $ eV and use the Plummer profile $ F(y) = (1 + y^{2})^{-5/2}$ (again, for illustration purposes); note that the solution is independent of $a$ when $ M_{\tt B}=0$.
\begin{table}
\caption{Sample galaxies}
$$
\begin{array}{|c|c|c||c|}
\hline
\text{Galaxy} & M_{\tt B}/M_\odot & a\, (\text{kpc})& M_{\tt BH}/M_\odot \cr
\hline
A & 0 & -- & 8.5 \times 10^9 \cr \hline
B & 2.55 \times 10^{10} & 2.5 & 5.4 \times 10^7 \cr \hline
C & 2.55 \times 10^{10} & 3.25 & 2.8 \times 10^9 \cr \hline
\end{array}
$$
\label{examples}
\end{table}
All these galaxies have a halo radius (cf. \cref{eq:rhal}) $ \sim 300 $kpc. The total mass density and circular velocity \cref{eq:vrot} are plotted in Fig. \ref{fig:example}. Galaxy $A$ shows a density profile with no evidence for a core, while a clear constant-density core develops in galaxies $B$ and $C$. Note that for the latter, the density increases again for $ r \lesssim 200 $ pc due to the relatively large central SMBH. Similarly, galaxy $B$ has a density increase only at very small radii, $ r \ll 100 $ pc, because of the smaller black hole at its galactic center. The circular velocity profile is steepest for $A$, less steep for $B$, and even less so for $C$.
\begin{figure}[ht]
$$
\includegraphics[width=2.9in]{Figures/example-density.pdf} \qquad
\includegraphics[width=2.9in]{Figures/example-velocity.pdf}
$$
\caption{\footnotesize Density (left) and circular velocity (right) for the sample galaxies in table \ref{examples}, the black, dark-gray and light-gray curves correspond, respectively, to galaxies $A$, $B$ and $C$ ($\rho_0 = 2 m /\lambda^3$).}
\label{fig:example}
\end{figure}
As shown by this exercise, the solution is very sensitive to the particular combination of size and galaxy mass ($a$ and $M_{\tt B}$). For example, a change by a factor of 50 is predicted in $ M_{\tt BH} $ due to a relatively small ($\sim 40$\%) change in $a$, leading ultimately to quite different density profiles. While this can be considered a feature of the model, which is anticipated to have large predictive power, given the uncertainties that plague current astronomical measurements one may refrain from over-interpreting the results at such level of detail.
It is interesting to note that the case $ M_{\tt B}=0 $ is universal, in the sense that the solution to \cref{eq:eom} with boundary conditions \cref{eq:uas} is unique and, in particular, has $ u_0 \simeq 1.49 $: within this model configurations without a smooth baryon density are consistent with flat rotation curves only if they contain a central SMBH with mass $\sim (6 \times 10^6/m_{\tt eV})^2 M_\odot $ (see \cref{eq:mbh-u0}), where $ m_{\tt eV} $ is the DM mass in eV units.
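As an order-of-magnitude sketch of this last relation (it assumes nothing beyond \cref{eq:mbh-u0} with $u_0 \simeq 1.49$, and the coefficient $6\times 10^6$ is rounded):
\begin{verbatim}
# SMBH mass required by the universal M_B = 0 solution,
# M_BH ~ (6e6 / m_eV)^2 M_sun (order of magnitude only).
def m_bh_solar(m_eV):
    return (6.0e6 / m_eV) ** 2

print("%.1e" % m_bh_solar(50.0))  # ~1.4e10 M_sun for m = 50 eV
\end{verbatim}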
\section{The TFDM model in specific galaxies}
\label{sec:applications}
Given a spherically-symmetric galaxy with a known baryon density profile and a given black hole mass, the results of Sec. \ref{sec:SSS} predict a DM mass $m$. It is then important to determine whether the {\em same} value of $m$ is obtained for different galaxies, as required for consistency. In this section we discuss this issue for a set of large galaxies (Sec. \ref{sec:large}) and then for a set of dwarf galaxies (Sec. \ref{sec:dwarf}). We note that we cannot expect a perfect agreement (that is, precisely the same $m$ in all cases), as we have ignored many of the details of the structure of the galaxies being considered (assuming, for example, spherical symmetry). We will be satisfied instead to see if the values of $m$ derived for each galaxy cluster around a specific range.
\subsection{Large galaxies with SMBH}
\label{sec:large}
We adopt in our model the following three commonly used stellar density profiles (cf. \cref{eq:B-density}) \cite{plummer1911problem,hernquist1990analytical,jaffe1983simple}.
\begin{align}
\label{eqn:profiles}
F(y) &=\frac{1}{(1 + y^{2})^{\frac{5}{2}}} \qquad \qquad \text{(Plummer)} \,, \nonumber\\[6pt]
F(y) &=\frac{2}{3y(1 + y)^{3}} \qquad \qquad \text{(Hernquist)} \,, \nonumber\\[6pt]
F(y) &=\frac{1}{3y^2(1 + y)^{2}} \qquad \qquad \text{(Jaffe)} \,,
\end{align}
for which the parameters in the scaling relations \cref{eq:scal.rel,eq:c.nu} are provided in table \ref{fit.params}. We use different profiles in order to gauge the effect of baryon distribution on the DM mass in the set of galaxies that we study.
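As a reference transcription (a minimal Python sketch), the three profiles together with a numerical check that they share the normalization $\int_0^\infty 3 y^2 F(y)\, dy = 1$, which we assume is the convention implicit in \cref{eq:B-density}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# The stellar profile functions F(y) of eqn:profiles, with
# y = r/a the radius in units of the scale length a.
def F_plummer(y):   return (1.0 + y**2) ** (-2.5)
def F_hernquist(y): return 2.0 / (3.0 * y * (1.0 + y) ** 3)
def F_jaffe(y):     return 1.0 / (3.0 * y**2 * (1.0 + y) ** 2)

for F in (F_plummer, F_hernquist, F_jaffe):
    val, _ = quad(lambda y: 3.0 * y**2 * F(y), 0.0, np.inf)
    print(F.__name__, round(val, 6))  # each integrates to 1.0
\end{verbatim}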
\input{Tables/fit_parameters.tex}
We collected a dataset from several sources \cite{hu2009black, kormendy2013coevolution, lelli2017one, graham2016normal,sofue1999central,sofue2009unified} for a total of 60 galaxies, spanning a large range of Hubble types, and each of them containing a SMBH at their galactic center; details on data selection are provided in appendix \ref{appendix:data}. Using the central values of $M_{\tt B},\, M_{\tt BH},\,\overline{\vrot}$, and $a$ provided in the above references, we calculate the DM mass $m$ for all the galaxies in this set~\footnote{To minimize inaccuracies, we do not use \cref{eq:scal.rel}, but find $m$ by solving $ \overline u(0;X_B,{\tt q})= u_0$ numerically.}. The results are shown in Table \ref{table:elliptical_galaxies} and Table \ref{table:spiral_galaxies} for elliptical and spiral galaxies, respectively.
\begin{table}
\scriptsize
\noindent\begin{minipage}{0.5\textwidth}
\caption{Elliptical Galaxies}
\label{table:elliptical_galaxies}
\input{Tables/Elliptical_table}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=2.3in]{Figures/Plummer_Elliptical.pdf}\\
\vspace{20pt}
\includegraphics[width=2.3in]{Figures/Hernquist_Elliptical.pdf}\\
\vspace{20pt}
\includegraphics[width=2.3in]{Figures/Jaffe_Elliptical.pdf}
\end{minipage}
\end{table}
\begin{table}
\scriptsize
\noindent\begin{minipage}{0.5\textwidth}
\caption{Spiral Galaxies}
\label{table:spiral_galaxies}
\input{Tables/Spiral_table}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=2.3in]{Figures/Plummer_Spiral.pdf}\\
\vspace{20pt}
\includegraphics[width=2.3in]{Figures/Hernquist_Spiral.pdf}\\
\vspace{20pt}
\includegraphics[width=2.3in]{Figures/Jaffe_Spiral.pdf}
\end{minipage}
\end{table}
For spiral galaxies, we find that the DM mass lies in the range $30-100$ eV, with a few outliers in the range $ \sim 100-150$ eV. For elliptical galaxies, $m$ has a tighter range, $10-60$ eV, for all three baryon profiles (excluding the one outlier, NGC 221). The average and standard deviation of the calculated DM mass for the two galaxy types and three baryonic profiles are listed in table \ref{mean_std}.
\input{Tables/Mean_Std}
It is important to note that the average value of $m$ for elliptical galaxies is lower than that for spiral galaxies. This is due, to a great extent, to having ignored the spiral mass in the above calculations: if we add the spiral mass to the bulge and increase the effective radius (while keeping all other parameters fixed), the value of $m$ decreases considerably. For example, in the Milky Way (spiral mass $5.17 \times 10^{10} M_{\odot} $, bulge mass $0.91 \times 10^{10} M_\odot$ \cite{licquia2015improved}), this shifts the DM mass from $51.8$ eV to $22.38$ eV for a change in effective radius from 0.7 kpc to 3 kpc (for the Hernquist profile). Even though adding the entire baryonic mass of the spiral to the bulge stellar mass by just increasing the bulge effective radius is probably a poor assumption, it can be expected that properly accounting for the disc structure would lead to a decrease in the mean value of $m$, closer to the result for elliptical galaxies. On the other hand, $\overline{\vrot}$ is not known for most bulge-dominated elliptical galaxies, so uncertainties in this parameter may shift the DM mass for ellipticals, but the resulting change would be less significant. Overall, it is remarkable that despite all its simplifying assumptions the model provides values of $m$ that lie within a relatively narrow range~\footnote{The case of fermionic DM for the Milky way considering most of the structural features of the galaxy has been studied \cite{barranco2018constraining}. However, they assume complete degeneracy at zero temperature and the mass range is obtained strictly from the constraints on the rotation curve.}.
\begin{figure}[ht]
$$
\includegraphics[width=2.3in]{Figures/chemical_potential.pdf}
\includegraphics[width=2.3in]{Figures/Pressure.pdf}
\includegraphics[width=2.4in]{Figures/model_vs_NFW.pdf}
$$
\caption{\footnotesize DM chemical potential (left) and $P/(n T)$ (middle) as functions of $r$ for the Milky Way (see \cref{eq: n,eq:sigma,eq:T-vrot}) for three baryon density profiles; the classical Maxwell-Boltzmann equation of state is shown in red. Right: comparison of the DM density profile for the model discussed here using the Hernquist profile with the (unnormalized) NFW profile \cite{sofue2012grand} to illustrate the presence of a core in the former. All graphs are for the Milky Way. }
\label{fig:mu-PnT}
\end{figure}
The histograms next to tables \ref{table:elliptical_galaxies} and \ref{table:spiral_galaxies} exhibit a few ``outliers'', for which the DM mass is in the $ \gtrsim 100 $ eV range, though this is dependent on the baryon profile used. For example, $m$ associated with NGC 2778 is $ \sim 75$ eV for the Plummer and Hernquist profiles, but $ \sim 100$ eV for the Jaffe profile, while $m$ for NGC 6068 and NGC 5576 exhibits the opposite behavior. The case of NGC 221 is unique in that it requires $ m \sim 200 $ eV, but it is also special in that it is the smallest galaxy in this set (with an effective radius of $40$ pc), and is categorized as a dwarf galaxy with a central black hole. A comparison of the bulge mass from two different sources ($\log(M_{\tt B})$ of 9.05 in \cite{kormendy2013coevolution} and 8.53 in \cite{hu2009black}) hints at larger uncertainties in the measurement of the stellar mass, which lead to a comparatively large value for the DM mass.
To further understand the spread of $m$ values we present in Fig. \ref{fig:dMB-MB} a plot of $m$ against $M_{\tt B}$ for the galaxies in our dataset, where we find that larger values of $m$ are associated with smaller, less massive galaxies. This correlation may indicate a defect in the DM model (which should produce similar values of $m$ for all galaxies, without the correlation shown in the figure), or it may indicate that the data we use underestimates $M_{\tt B}$ for smaller galaxies, and overestimates it for larger ones. To examine this last possibility we took from our dataset the values of $ \overline{\vrot}$ and $ a$ for each galaxy and then obtained the baryon mass that corresponds to a fixed choice of $m=50 $ eV. We denote this `derived' baryon mass by $ M'_{\tt B}$. In Fig. \ref{fig:dMB-MB} we also present a plot of $ M'_{\tt B} / M_{\tt B} $ vs $M_{\tt B}$, which shows that $ M'_{\tt B} \lesssim 3 M_{\tt B}$ for the spiral galaxies in our set, and $ M'_{\tt B} \lesssim 1.5 M_{\tt B}$ for the ellipticals, so that an $\mathcal{O}(1)$ shift in $\log M_{\tt B}$ can explain the fact that we do not obtain the same value of $m$ for these galaxies. We believe this argument is plausible: factors of order $\sim 2$-$3$ can easily be accommodated given the current systematic errors in the estimation of $M_{\tt B}$ associated with stellar evolution, reddening and the past star-formation history of each galaxy (see for instance \cite{Bell2001}). The viability of the dark matter model in this context therefore cannot be conclusively decided.
\begin{figure}
\bal
\includegraphics[width=2.3in]{Figures/Elliptical_bulge_DM.pdf} \qquad & \qquad \includegraphics[width=2.3in]{Figures/Elliptical_bulge_error.pdf}
\nonumber\\[6pt]
\includegraphics[width=2.3in]{Figures/Spiral_bulge_DM.pdf} \qquad & \qquad\includegraphics[width=2.3in]{Figures/Spiral_bulge_error.pdf}
\nonumber \end{align}
\caption{Left: scatter plot and linear fit illustrating the correlation between the obtained values of $m$ and $M_{\tt B}$ for elliptical (top) and spiral (bottom) galaxies. Right: relative shift in $M_{\tt B}$ needed to obtain a fixed value of $m$, chosen here as $50$ eV, for elliptical (top) and spiral (bottom) galaxies. NGC 221 is not included in the plots. All the results are for the Hernquist profile.}
\label{fig:dMB-MB}
\end{figure}
We now consider various aspects of the solutions to \cref{eq:eom}, using the Milky Way as an example. In Fig. \ref{fig:mu-PnT}, we show the chemical potential for three different baryon profiles. As expected, $\mu(r)$ diverges as $r$ approaches the galactic center, indicating the presence of a SMBH. We also examine the degree to which the gas is degenerate by plotting $ P/(n T) $. Far from the galactic center, the gas obeys the classical (dilute) Maxwell-Boltzmann relation $ P \simeq n T$ (red line in the figure), while close to the galactic center a significant deviation due to Fermi-Dirac statistics is observed, indicating strong degeneracy. In the right panel of Fig.~\ref{fig:mu-PnT} we compare the density profile obtained in the inner regions to the empirical solution found for the collisionless cold dark matter model, the NFW profile \citep{navarro1996structure}. At the centers of halos, the cold dark matter solution is characterized by a cuspy mass distribution, while our model favors shallower inner dark matter cores, with the exception of the region surrounding the central black hole.
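The qualitative behavior of the degeneracy measure in the middle panel can be illustrated for an ideal non-relativistic Fermi gas with a short numerical sketch (standard Fermi-Dirac integrals only; this does not solve \cref{eq:eom}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# P/(nT) = (2/3) F_{3/2}(eta) / F_{1/2}(eta), with eta = mu/T,
# for an ideal non-relativistic Fermi gas.
def fd_integrand(x, k, eta):
    t = x - eta  # overflow-safe form of x^k / (exp(x - eta) + 1)
    return x**k * np.exp(-t) / (1 + np.exp(-t)) if t > 0 \
        else x**k / (1 + np.exp(t))

def fermi_integral(k, eta):
    return quad(fd_integrand, 0, np.inf, args=(k, eta))[0]

def P_over_nT(eta):
    return (2/3) * fermi_integral(1.5, eta) / fermi_integral(0.5, eta)

for eta in (-10.0, 0.0, 20.0):
    print(eta, round(P_over_nT(eta), 3))  # -> ~1.0, ~1.13, ~8.1
\end{verbatim}
In the dilute limit ($\eta \to -\infty$) the ratio tends to 1 (the red line in the figure), while for strong degeneracy it grows linearly with $\eta$.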
\begin{figure}
\bal
\includegraphics[width=2.3in]{Figures/density_Hernquist.pdf} \qquad & \qquad \includegraphics[width=2.3in]{Figures/mf_hernquist.pdf} \nonumber\\[6pt]
\includegraphics[width=2.3in]{Figures/density_Jaffe.pdf} \qquad & \qquad\includegraphics[width=2.3in]{Figures/mf_Jaffe.pdf} \nonumber\\[6pt]
\includegraphics[width=2.3in]{Figures/density_Plummer.pdf} \qquad & \qquad \includegraphics[width=2.3in]{Figures/mf_Plummer.pdf}
\nonumber \end{align}
\caption{DM density (left column) and mass fraction (right column) for two spiral galaxies (Milky Way and N224) and two elliptical galaxies (N3379 and N4621) (we use $ \rho_0 = 2 m/\lambda^3$). }
\label{fig:density-mf}
\end{figure}
\begin{figure}[ht]
\bal
\includegraphics[width=2.4in]{Figures/milkyway_rot.pdf} \qquad & \qquad \includegraphics[width=2.4in]{Figures/N224_rot.pdf}\nonumber\\[6pt]
\includegraphics[width=2.4in]{Figures/N3079_rot.pdf} \qquad & \qquad \includegraphics[width=2.4in]{Figures/N4258_rot.pdf}
\nonumber
\end{align}
\caption{Circular velocity as a function of distance for four spiral galaxies: the Milky Way, NGC 224, NGC 3079 and NGC 4258. Dataset 1 (with no error bars) for all four galaxies is from \cite{sofue1999central}, whereas dataset 2 for the Milky Way is taken from \cite{sofue2009unified}. Dataset 3 for NGC 224 is obtained from \cite{braun2009wide}.}
\label{fig:rot}
\end{figure}
The mass densities for DM, and the fraction of the DM mass inside a given radius are shown in Fig. \ref{fig:density-mf}. By construction, the DM mass density exhibits the $ 1/r^2 $ behavior at large $r$ required for the observed flat rotation curves. It is also relatively flat inside the bulge except for the immediate vicinity of the origin where it spikes due to the accumulation of DM around the central black-hole ($\mu$ diverges as $ r \rightarrow 0 $, which allows for a higher density of DM particles to be accommodated in a smaller volume, leading to the observed increase in $ \rho $); though not obvious from the figure, this spike is significant only for $ r \lesssim 1 $ pc. Outside of the region immediately surrounding the black-hole the exclusion principle obeyed by our DM candidate does lead to a core-like behavior. The plot of the DM mass fraction shows that, except for a few kiloparsecs from the galactic center, galaxies are DM dominated.
In Fig. \ref{fig:rot} we plot the circular velocity as a function of distance from the galactic center for four spiral galaxies, the Milky Way, NGC 224 (M31 or Andromeda), NGC 3079 and NGC 4258, using the three different baryonic profiles. We also compare the model predictions with data obtained using CO, HI and H-alpha observations (elliptical galaxies are not included in the sample due to the lack of rotational curve data). The outer regions of the rotation curves are in good agreement with the data, as expected from our boundary conditions. The inner dynamics is best reproduced for NGC 3079, followed by the Milky Way, but not so effectively for NGC 224 and NGC 4258. This, again, can be attributed to the fact that our model does not include the disc structure, which contributes significantly to the circular-velocity dynamics, and that it assumes complete spherical symmetry for these galaxies. It is then remarkable that the overall qualitative features of the rotation curves for our model are a good fit to the available data.
The statistical errors in the above values for $m$ can be estimated using the scaling relation in \cref{eq:scal.rel}. Using the fact that $ u_0 $ is small for the examples being considered, and taking $ \nu(0)\sim -0.4,\, c(0) \sim 0.9$ (cf. table \ref{fit.params}), we find (at 3 standard deviations)
\beq
\frac{\delta m}{m} \sim 3 \times \left[\frac{1}{2} \frac{\delta a}{a} - \frac{\delta M_{\tt B}}{M_{\tt B}} + 2 \frac{\delta \overline{\vrot}}{\overline{\vrot}} \right]
\eeq
assuming that $ M_{\tt B} \propto a \sigma^2 $ \cite{kormendy2013coevolution}, using \cref{eq:sig-v-spiral,eq:sig-v-ell}, and taking $ \delta M_{\tt B}/M_{\tt B} \sim \delta \overline{\vrot}/\overline{\vrot} \sim 0.1 $ we find $ \delta m/m \sim 0.4$. This, however, does not include the systematic errors associated with our applying the spherically symmetric model to spiral galaxies, or systematic errors with the data itself; as noted earlier, we expect these errors to be considerably larger.
\subsection{Galaxies without SMBH}
\label{sec:dwarf}
Strong observational evidence suggests that almost all massive galaxies contain a supermassive black hole at their galactic center; most galaxies with no SMBH are small, dwarf galaxies. The best studied members of the latter category are the Milky Way dwarf spheroidal galaxies (dSphs) and because of this, they are the best suited candidates to test our model in the special case where $M_{\tt BH}=0 $. However, it is widely accepted that these dSphs are mostly dominated by dark matter, with mass-to-light ratios of $M/L_{V} \sim 10^{1-2}$ \cite{mateo1998dwarf}. Detailed studies of light fermionic DM in nearby dwarf spheroidal galaxies have already appeared in the literature \cite{domcke2015dwarf, randall2017cores,destri2013fermionic}, though the implementation of the Thomas-Fermi paradigm is different from the one being discussed here (cf. the discussion in Sect. \ref{sec:introduction} and at the end of Sec. \ref{sec:equil.eq}). The DM profile in our model is determined by the baryon distribution, and hence we do not consider these galaxies, due to their negligible baryonic content.
There is also the generally accepted picture that a majority of the dwarf galaxies have slowly rising rotation curves \cite{read2016understanding,swaters2009rotation}, so our assumption of flattened out circular velocities for the boundary conditions no longer holds~\footnote{It is possible to adapt our approach to these situations, but we will not pursue this here.}. Therefore we will here restrict ourselves to somewhat larger dwarf galaxies without central black holes, but with flat asymptotic rotation curves and also with an estimate of the baryonic mass. We choose a total of eight such dwarf galaxies (from the SPARC database \cite{lelli2016small}) based on their small bulge mass ($M_{\tt B} \lesssim 10^9 M_\odot$) and small asymptotic rotational velocity ($\overline{\vrot} \lesssim 100$ km/s)~\footnote{There were a few other galaxies in the data set that satisfied these two constraints, but for which we found no real solutions for the DM mass.}. Since we do not find a strong dependence on the baryonic profile function $F$, in this section we restrict ourselves to the case of the Plummer profile.
\begin{table}[h]
\caption{Dwarf galaxies}
\label{table:dwarf_galaxies}
\input{Tables/dwarf_table}
\end{table}
The values of $m$ for the eight dwarf galaxies are listed in table \ref{table:dwarf_galaxies}; the masses turn out to be on the higher end of the spectrum as compared to the galaxies with SMBHs in the previous section. This can be understood using the scaling relations \cref{eq:scal.rel,eq:c.nu}, which in this ($u_0 =0 $) case reduce to
\beq
\label{eq:dwarf}
0.412 \ln\left( \frac{M_{\tt B}}{10^9 M_\odot} \right)+ 0.352 \ln\left( \frac m{30 \hbox{eV}}\right) = 0.236 \ln\left( \frac a{2.5\text{kpc}}\right) + 0.736 \ln\left( \frac\overline{\vrot}{200 \text{km/s}} \right) + 1.493\,,
\eeq
where we used the fit parameters for the Plummer model listed in table \ref{fit.params}. For the eight galaxies considered here, taking the average values $\overline{\vrot} \sim 70$ km/s and $a \sim 0.5$ kpc, we obtain $\log (M_{\tt B}/M_{\odot}) = 9.17$ for a DM mass of 50 eV, which is not far from the data available for $M_{\tt B}$ (cf. \cite{lelli2016small}). Also, the farthest outlier in our data, UGC 8550, requires $\log (M_{\tt B}/M_{\odot})$ to be 9.16, as compared to the given value of 8.72. This difference is far smaller than in the case of galaxies with a SMBH: the dwarf galaxy NGC 221, with a similar DM mass for the same Plummer profile, requires a much larger shift in baryonic mass ($\log (M_{\tt B}/M_{\odot})$ of 9.61 as compared to 8.53 provided in the data). This might hint that the large systematic errors in the measurement of $M_{\tt B}$ are more impactful in the case of galaxies without a SMBH, causing a considerable shift in the DM mass. Denoting again by $M'_{\tt B}$ the total baryon mass when $m$ has the specific value of $50$ eV, we find that $ M'_{\tt B}/M_{\tt B}$ lies in the range $1.5-3$ for all eight dwarfs we studied. As for the case of large galaxies, it is currently impossible to exclude this possibility because of the large systematic errors in $ M_{\tt B}$.
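The numbers quoted above follow directly from \cref{eq:dwarf}; a minimal sketch inverting that relation for the baryon mass (Plummer fit parameters only):
\begin{verbatim}
import numpy as np

# Invert eq:dwarf for the baryon mass, given (m, a, vrot).
def logMB_solar(m_eV, a_kpc, vrot_km_s):
    rhs = (0.236 * np.log(a_kpc / 2.5)
           + 0.736 * np.log(vrot_km_s / 200.0) + 1.493)
    lnMB9 = (rhs - 0.352 * np.log(m_eV / 30.0)) / 0.412
    return 9.0 + lnMB9 / np.log(10.0)

print(round(logMB_solar(50.0, 0.5, 70.0), 2))  # -> 9.17, as quoted
\end{verbatim}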
It should be noted that some of these dwarf galaxies provide two real solutions for the DM mass. In such cases, only the smaller of the two values is included in table \ref{table:dwarf_galaxies}, because the larger mass solution (in the $\mathcal{O}(500\,\hbox{eV})$ range) does not lead to a core-like profile or match other observations (e.g. rotation curves).
\begin{figure}[ht]
$$
\begin{array}{lr}
\includegraphics[width=2.3in]{Figures/chemical_potential_dwarf.pdf} & \qquad
\includegraphics[width=2.3in]{Figures/Pressure_dwarf.pdf} \cr
\includegraphics[width=2.3in]{Figures/density_dwarf.pdf} & \qquad
\includegraphics[width=2.3in]{Figures/mf_dwarf.pdf} \cr
\includegraphics[width=2.3in]{Figures/NGC2915_rot.pdf} & \qquad
\includegraphics[width=2.3in]{Figures/DDO154_rot.pdf}
\end{array} $$
\caption{\footnotesize Properties of the solution to the TFDM equations for dwarf galaxies. Top row: chemical potential (left) and $P/(nT) $ (right) for the DM as a function of $r$ for 3 dwarf galaxies; middle row: DM density (left) and mass fraction (right) for the same galaxies; bottom row: rotation curve for NGC 2915 (left) and DDO 154 (right) with rotation data taken from \cite{lelli2016sparc}; also, $ \rho_0 = 2 m/\lambda^3$.}
\label{fig:dwarf_props}
\end{figure}
In Fig. \ref{fig:dwarf_props} we illustrate the solutions by plotting various model predictions for three dwarf galaxies, whose behavior away from the center is qualitatively similar to that of large galaxies with SMBHs. We note that the predicted dark matter profiles show a central constant-density core with core radii $r \sim 100$-$400$ pc (which is also the case for the other galaxies in our set). Of special interest are the rotation curves (bottom row of the figure): for DDO 154 and NGC 2915, the predicted behavior of $v_{\tt rot}(r)$ qualitatively matches the observations quite well, but the rise in the curve is somewhat steeper compared to the data. It is unclear whether these discrepancies are due to a shortcoming in the model itself or in the simplifying assumptions we adopted, or due to the specific baryonic profile (Plummer's) we use~\footnote{The match with observations does not improve if we use the Hernquist or Jaffe profiles.}.
\section{Conclusions}
\label{sec:conclusions}
In this paper we investigated the extent to which a DM model consisting of a single light fermion is consistent with the observed bulk properties of galaxies (effective radius, baryon mass and profile, etc.). To simplify the calculations we neglected possible fermion (non-gravitational) interactions, and assumed that the galaxies are well described by a spherically-symmetric configuration. We also assumed a fixed baryon distribution that affects the mechanical equilibrium of the system, but we neglected any thermal or dynamical effects of the baryons. The baryon profile, which is directly observable, together with the boundary conditions leading to flat rotation curves, completely determine the DM distribution in the system. This is in contrast with other publications which assume a DM profile {\it ab initio}.
For the set of galaxies we considered (which includes spirals, ellipticals and several dwarf galaxies) the model is consistent with the observational data, in the sense that the values of $m$ we obtain lie in a relatively narrow range. Admittedly, for the model to be convincing, the {\em same} value of $m$ should be obtained for all galaxies; but to test this would require a careful modeling of each galaxy, and solving the stability equation \cref{eq:eom} without the assumption of spherical symmetry -- which lies beyond the scope of this paper. A stringent test of the model would also require more accurate data with reduced systematic errors; it is unclear whether any of these effects leads to the $ m - M_{\tt B}$ correlation observed in Fig. \ref{fig:dMB-MB}. Given these uncertainties we limit ourselves to stating that the model is promising, but additional calculations and observations are necessary to fully determine its viability.
For galaxies with a SMBH we find that the preferred DM mass is $ \sim 40 $ eV, and that the DM distribution has a central core region where the fermions are strongly degenerate, with the degeneracy increasing as the central black-hole is approached. For galaxies without SMBHs the DM mass values we find are generally larger ($ \gtrsim 70 $ eV). Possible reasons for this discrepancy, as well as for the spread in the preferred values of $m$ within each galaxy class are discussed in sections \ref{sec:large} and \ref{sec:dwarf}. It is interesting to note that the lower bounds for $m$ obtained in \cite{randall2017cores, domcke2015dwarf,di2018phase} for the Milky Way dwarf spheroidal galaxies are in the range $20-100$ eV, which is consistent with our results for galaxies without a SMBH.
Interestingly, this model makes clear testable predictions that may be worth exploring in more detail. For instance, at fixed $m$ and asymptotic outer velocity, the profile is fully determined by the equilibrium reached between dark matter and baryons. This means that any detected difference in the {\it shapes} of the rotation curves measured in galaxies at fixed terminal rotation velocity \cite{Oman2015}, in particular for dark matter-dominated objects like dwarfs, should be accompanied by a significant difference in the baryonic mass distribution. Such correlation has already been shown to help alleviate the problem of rotation velocity diversity in the case of self-interactive dark matter \cite{Creasey2017}. Exploring the correlation between observed baryonic properties (mass, gas fractions, size) and the shape of the velocity profiles in single fermion dark matter case would also help assess the viability of this model.
Small deviations from spherical symmetry can be implemented using perturbation theory, which would be applicable to elliptical galaxies or for studying the effects of rotation. In contrast, a more accurate comparison of the model to spiral galaxies will require solving \cref{eq:eom} assuming cylindrical symmetry, and including in $ \rho_B$ bulge and spiral components. Also of interest would be a study of the dynamic stability of the system, that can be approached using standard techniques \cite{siegel1971}; in this case \cref{eq:eom} is replaced by the Euler equation and complemented by the DM and baryon current conservation constraints.
Finally, we wish to comment on the possible effects of exchange interactions. Inside an atom these effects are significant \cite{Bethe-Jackiw}, but in the present situation they can be neglected, since we assume the fermions experience only gravitational interactions. This, however, will change dramatically should fermion self-interactions be included, which can lead to a further reduction of the DM pile-up at the core.
\begin{acknowledgments}
The authors would like to thank Hai-bo Yu for interesting and useful comments. LVS acknowledges support from NASA through the HST Program AR-14582 and from the Hellman Foundation.
\end{acknowledgments}
\newcommand{\subsubsectionmod}[1]{
\refstepcounter{subsubsection}
\subsubsection*{\thesubsubsection ~~ #1}
}
\newcommand{\subsectionmod}[1]{
\refstepcounter{subsection}
\subsection*{\thesubsection ~~ #1}
}
\begin{document}
\begin{flushright}
\end{flushright}
\vspace{14mm}
\begin{center}
{\huge \bf{D1-D5-P superstrata in 5 and 6 dimensions:\\ separable wave equations and prepotentials}} \medskip
\vspace{13mm}
\centerline{{\bf Robert Walker$^{1}$}}
\bigskip
\bigskip
\vspace{1mm}
\centerline{$^1$\,Department of Physics and Astronomy,}
\centerline{University of Southern California,} \centerline{Los
Angeles, CA 90089-0484, USA}
\vspace{4mm}
{\small\upshape\ttfamily ~ walkerra @ usc.edu} \\
\vspace{10mm}
\begin{adjustwidth}{17mm}{17mm}
%
\begin{abstract}
\vspace{3mm}
\noindent
We construct the most general single-mode superstrata in 5 dimensions with ambipolar, two centered Gibbons Hawking bases, via dimensional reduction of superstrata in 6 dimensions. Previously, asymptotically $\text{AdS}_{3}\times \mathbb{S}^{2}$ 5-dimensional superstrata have been produced, giving microstate geometries of black strings in 5 dimensions. Our construction produces asymptotically $\text{AdS}_{2}\times \mathbb{S}^{3}$ geometries as well, the first instances of superstrata describing the microstate geometries of black holes in 5 dimensions. New examples of superstrata with separable massless wave equations in both 5 and 6 dimensions are uncovered. A $\mathbb{Z}_{2}$ symmetry which identifies distinct 6-dimensional superstrata when reduced to 5 dimensions is found. Finally we use the mathematical structure of the underlying hyper-K\"{a}hler bases to produce prepotentials for the superstrata fluxes in 5 dimensions and uplift them to apply in 6 dimensions as well.
\end{abstract}
\end{adjustwidth}
\end{center}
\thispagestyle{empty}
\newpage
\baselineskip=14pt
\parskip=2pt
\tableofcontents
\baselineskip=15pt
\parskip=3pt
\section{Introduction}
\label{Sect:Intro}
The microstate geometry program seeks to describe black hole entropy by explicitly constructing smooth horizonless geometries that approximate a given black hole \cite{Bena:2007kg}. These geometries are interpreted as microstates in the ensemble of states that give rise to the entropy via a Boltzmann-like state counting procedure\footnote{The foundational work of \cite{Strominger:1996sh} performed a state counting of this form in a regime where the black hole geometry is absent. The microstate geometry program seeks to describe such states in the regime where the gravitational geometry is manifest.}. This idea has been explored most fully in the D1-D5-P system of type IIB supergravity. There are two main research directions in this program: to produce new examples of microstate geometries, and to better understand those that we already have. This work was motivated by finding new examples of superstrata with separable massless wave equations (SMWEs), a property that has proven to be critical in recent analyses and critiques of the microstate geometry program \cite{Bena:2017upb,Raju:2018xue,Tyukov:2017uig,Heidmann:2019zws,Bena:2018bbd,Bena:2019azk}.
We focus on the microstate geometries that have come to be known as superstrata \cite{Heidmann:2019zws,Bena:2011uw,Bena:2015bea,Ceplak:2018pws}. These solutions have several key features that make them ideal for exploring the microstate geometry program:
\begin{itemize}
\item The geometry can be produced with either asymptotically flat, or asymptotically anti-de Sitter crossed with a sphere \cite{Bena:2017xbt} (possibly with orbifold singularities).
\item They can be tuned to produce arbitrarily long BTZ-like throats prior to smoothly capping off \cite{Tyukov:2017uig}.
\item It is known how to construct them in both 5 and 6 dimensions \cite{Bena:2017geu}.
\item There are families of solutions with the same asymptotic charges \cite{Heidmann:2019zws}.
\item Examples can be produced with greater coverage of the charges \cite{Bena:2016ypk}. For instance, earlier constructions such as in \cite{Bena:2005va,Berglund:2005vb,Bena:2006kb,Bena:2007qc} could only produce high angular momentum solutions; there is no such obstruction for superstrata.
\item Some examples are known to have SMWEs \cite{Bena:2017upb}; this allows the computation of properties such as energy gaps in the spectrum \cite{Tyukov:2017uig} or the investigation of scattering \cite{Heidmann:2019zws,Bena:2018bbd,Bena:2019azk}.
\item The dual CFT description \cite{Bena:2016agb} is well understood.
\end{itemize}
It is for these reasons that the superstrata have risen to prominence, with many recent investigations \cite{Bena:2019azk,Giusto:2019qig,Tian:2019ash,Bena:2018mpb,Bakhshaei:2018vux,Tormo:2019yus}.
The \textit{original} superstrata constructed in \cite{Bena:2015bea} have three important generalizations that need to be distinguished. To begin with, the original superstrata were generated solely by bosonic CFT operators; the work of \cite{Ceplak:2018pws} introduced fermionic operators to produce \textit{supercharged} superstrata. In \cite{Heidmann:2019zws} a superposition of the original and supercharged superstrata gave \textit{hybrid} superstrata; steps were also taken there towards constructing superpositions of solutions with multiple modes. Throughout this work we will refer to all of these solutions as superstrata, distinguish between the separate flavors (original, supercharged, hybrid) when required, and treat single and multi-mode solutions separately.
The defining feature of superstrata is that they allow fluctuations in the Maxwell fields along the periodic coordinates. In 6 dimensions the fluctuations are parametrized by three integers $(k,m,n)$ corresponding to Fourier modes for the three periodic coordinates $(v,\phi,\psi)$. The 6-dimensional superstrata can be expressed as a double circle fibration in the coordinates $(v,\psi)$. A natural $\text{SL}(2,\mathbb{Q})$ action, known as a spectral transformation \cite{Bena:2008wt,Niehoff:2013kia}, can be defined which mixes these circles. Any single-mode 6-dimensional superstratum will be cyclic in some combination of the $(v,\psi)$ circles, so a Kaluza-Klein reduction on this combination of circles is possible. In order to preserve the form of the BPS equations in 5 dimensions it is useful to use a spectral transformation redefining the $(v,\psi)$ coordinates so that the cyclic direction becomes exactly $v$.
In addition to the integers $(k,m,n)$, the $\text{SL}(2,\mathbb{Q})$ transformation introduces another 3 parameters, giving a total of 6 parameters. One of these parameters is used to ensure the reduction occurs on the $v$-circle. The remaining 5 parameters then show up in the 5-dimensional solutions as: 2 Fourier modes for the $(\phi,\psi)$ directions, the 2 Gibbons Hawking (GH) charges of the now two centered ambipolar GH base and 1 gauge degree of freedom. Thus the reduction produces the most general single-mode superstrata possible on an ambipolar two centered GH base in 5 dimensions. If the net GH charge vanishes the asymptotic geometry is $\text{AdS}_{3}\times \mathbb{S}^{2}$, such geometries were produced in \cite{Bena:2017geu} and correspond to microstate geometries for black strings. If the net GH charge is non-zero the asymptotic geometry is $\text{AdS}_{2}\times \mathbb{S}^{3}$ with a possible $\mathbb{Z}_{p}$ orbifolding of the $\mathbb{S}^{3}$, these are the microstate geometries of black holes, a new result.
We use spectral transformations and reductions to produce new examples of superstrata with SMWEs. In addition to the original $(1,0,n)$ family that were known to have SMWEs \cite{Bena:2017upb}, we show that the $(1,1,n)$ family do as well in 6 dimensions. Applying spectral transformations to these families we find that the two remaining spectral transformation parameters index families of distinct 6-dimensional superstrata with SMWEs. The parameters can be used to alter the complexity of the individual separated differential equations. In addition we show that the $(2,1,n)$ family has SMWEs in certain circumstances: in 6-dimensions the supercharged flavor have SMWEs \cite{Heidmann:2019zws}, in 5-dimensions both the supercharged and original flavors have SMWEs, while the hybrid flavor have SMWEs in 5-dimensions provided the momentum on the $\phi$-circle vanishes.
We show that the $(k,m,n)$ and $(k,k-m,n)$ superstrata in 6 dimensions reduce to the same solutions in 5 dimensions, hence there is a $\mathbb{Z}_{2}$ symmetry identifying 6-dimensional solutions after reduction. In addition it is also clear that multi-mode solutions will not reduce unless the multiple modes are parallel in the $(v,\psi)$ directions. Hence we reveal two mechanisms that may lead to a greater number of superstrata in 6 than 5 dimensions.
In \cite{Tyukov:2018ypq} it was shown how, in 5 dimensions, the superstrata fluxes can be derived from a scalar \textit{prepotential}. This prepotential program is of interest since it promises to simplify the process of finding BPS solutions to the D1-D5-P system by reducing parts (if not all) of it to functional analysis on 4-dimensional hyper-K\"{a}hler bases. We construct the prepotentials for our new 5-dimensional superstrata, and indicate how the reduction procedure can be inverted so that prepotentials can be used in 6 dimensions as well.
In section \ref{Sect:Sec2 superstrata intro} we give an overview of the superstrata solutions, including the BPS equations they solve and how they are constructed. The following sections then separate four related sets of original results:
\begin{itemize}
\item Section \ref{Sect:Sec3 relating 5D 6D superstrata} illustrates the relationship between single-mode superstrata in 6 and 5 dimensions using spectral transformations.
\item Section \ref{Sect:Sec4 relations amongst superstrata} shows that dimensional reduction of the $(k,m,n)$ and $(k,k-m,n)$ 6-dimensional superstrata leads to equivalent 5-dimensional superstrata. The special case of the $(1,0,n)$ and $(1,1,n)$ families is considered explicitly.
\item Section \ref{Sect:Sect5 Seperability} summarizes a non-exhaustive but systematic search for superstrata with SMWEs; we show how the $(2,1,n)$ family has greater separability properties in 5 than in 6 dimensions, and how spectral transformations can alter the form of the wave equations in 6 dimensions.
\item Section \ref{Sect 3.5: prepotentials} shows how prepotentials can be constructed for superstrata fluxes in both 5 and 6 dimensions, explicit examples are given.
\end{itemize}
Finally, a discussion of the significance of these results and possible directions for future investigation is given in section \ref{Sect:Sect6 Discussion}.
\section{Superstrata and their flavors in supergravity}
\label{Sect:Sec2 superstrata intro}
This section reviews the BPS equations in 6 dimensions and sketches how to construct the superstrata, more details may be found in \cite{Heidmann:2019zws,Bena:2015bea,Ceplak:2018pws,Bena:2017xbt}.
\subsection{BPS equations}
The superstrata and its flavors constructed in \cite{Heidmann:2019zws,Bena:2015bea,Ceplak:2018pws} are generally studied within 6-dimensional (0,1) supergravity obtained by compactifying type IIB supergravity with manifold structure $\mathcal{M}^{1,4}\times \mathbb{S}^{1}\times \mathcal{C}$ on $\mathcal{C}$. The compactification manifold $\mathcal{C}$ is required to be hyper-K\"{a}hler, thus it is taken to be either $\mathbb{T}^{4}$ or K3. The circle $ \mathbb{S}^{1}$ of radius $R$ is parametrized by the cyclic coordinate
\begin{align}
y \sim y+2 \pi R~.
\end{align}
The simplest models that give smooth superstrata involve coupling to two tensor multiplets. It is also possible \cite{Bena:2017geu}, in certain circumstances, to compactify the theory on a circle direction inside the $\mathcal{M}^{1,4}\times \mathbb{S}^{1}$; the theory then reduces to a 5-dimensional $\mathcal{N}=2$ supergravity coupled to three vector multiplets. This compactification is nothing more than a standard Kaluza-Klein reduction, which ensures the BPS equations in each dimension are related.
The 6-dimensional geometry can be written as
\begin{align}
ds_6^2 &= -\frac{2}{\sqrt{P}} \, (dv+\beta) \big(du + \omega + \tfrac{1}{2}\, F \, (dv+\beta)\big)
+ \sqrt{P} \, ds_4^2(\mathcal{B})~, \label{ds6} \\
&= \frac{1}{F\sqrt{P}}\left( (du+\omega)^{2}+FPV \,ds_{3}^{2}\right) -\frac{F}{\sqrt{P}}\left( dv+ \beta +\frac{1}{F}(du+\omega)\right)^{2} +\frac{\sqrt{P}}{V}\left(d\psi +A \right)^{2}~.\label{dsDoubleFiber}
\end{align}
where\footnote{Often the (perhaps) more canonical pair of light cone coordinates $u=\frac{1}{\sqrt{2}}(t-y)$ and $v= \frac{1}{\sqrt{2}}(t+y)$ are used instead of (\ref{uvDef}). It is shown in \cite{Bena:2017geu} how the two choices are related by a redefinition of the $(F,\omega,\Theta^{(I)})$. Here we use the $t=u$ identification since, when we compactify on $v$ to produce 5-dimensional solutions, $u=t$ will indeed be the time direction.}
\begin{align}
u= t ~, \qquad v=t+y~, \label{uvDef}
\end{align}
and the details of $(ds_{4}^{2}(\mathcal{B}),ds_{3}^{2})$ are discussed around (\ref{GHbase}). Supersymmetry requires all fields, such as the functions $(P,F)$, the one form $\beta$ and the $(Z_{I},\Theta^{(I)})$, to be independent of $u$. Working with $v$-independent $\beta$ and $ds_4^2(\mathcal{B})$ simplifies the BPS equations as well; demanding this ensures $ds_4^2(\mathcal{B})$ is hyper-K\"{a}hler and $d \beta$ is self dual on this base.
The form of the metric in (\ref{dsDoubleFiber}) is that of a double circle fibration over the $(v,\psi)$ circles; thus there is a natural $\text{SL}(2,\mathbb{Z})$ action redefining the $(v,\psi)$ coordinates amongst each other. This action may be used to ensure the fields and metric are independent of $v$; the 5-dimensional solution is then found by applying a Kaluza-Klein reduction on the $v$-circle. Completing this procedure and identifying
\begin{align}
F=-Z_{3} ~,\label{6to5data}
\end{align}
gives the 5-dimensional geometry
\begin{align}
ds_{5}^{2} &= \left(Z_{3}P \right)^{-\frac{2}{3}} (dt+\omega)^{2} + \left( Z_{3}P\right)^{\frac{1}{3}} \, ds_4^2(\mathcal{B}) ~.\label{ds5}
\end{align}
The full supergravity system has a set of Maxwell-like fields $(Z_{I},\Theta^{(I)})$ where $I\in \{1,2,4 \}$, in terms of which the field strengths of the vector/tensor multiplets can be written (see \cite{Bena:2017geu} for instance). These Maxwell-like fields and the data appearing in (\ref{ds6}) and (\ref{ds5}) are fixed by the BPS equations. In 6 dimensions the BPS equations split into a first layer:
\begin{align}
* D \dot{Z}_1 &= D \Theta^{(2)} \,,\quad D* D Z_1 = -\Theta^{(2)} \wedge d\beta\,,\quad \Theta^{(2)} =* \Theta^{(2)} ~, \label{eqZ1Theta2} \\
* D \dot{Z}_2 &= D \Theta^{(1)} \,,\quad D * D Z_2 = -\Theta^{(1)} \wedge d\beta\,,\quad \Theta^{(1)} =* \Theta^{(1)}~,\label{eqZ2Theta1} \\
* D \dot{Z}_4 &= D \Theta^{(4)} \,,\quad D * D Z_4 = - \Theta^{(4)}\wedge d\beta\,,\quad \Theta^{(4)}=* \Theta^{(4)} ~,\label{eqZ4Theta4}
\end{align}
as well as a second layer:
\begin{align}
(1+*)D\omega +F\,d\beta &= Z_{1}\Theta^{(1)} +Z_{2}\Theta^{(2)}-2Z_{4}\Theta^{(4)}~, \label{BPS6D1}\\
*D* \left( \dot{\omega}- \frac{1}{2}DF \right) &= \ddot{P}- \left(\dot{Z}_{1}\dot{Z}_{2}-\dot{Z}^{2}_{4} \right) - \frac{1}{2}* \left(\Theta^{(1)} \wedge \Theta^{(2)} - \Theta^{(4)} \wedge \Theta^{(4)} \right)~,\label{BPS6D2}
\end{align}
where $(d,*)$ are the exterior derivative and Hodge star operations on $ds_4^2(\mathcal{B})$, a dot denotes differentiation with respect to $v$ and
\begin{align}
D\Phi = d\Phi - \beta \wedge \dot{\Phi}~.
\end{align}
The function $P$ is fixed by
\begin{align}
P = Z_{1}Z_{2}-Z_{4}^{2}~.
\end{align}
If the data $(Z_{I},\Theta^{(I)},\beta,F,\omega)$ appearing above are independent of $v$ then the 6-dimensional BPS equations after defining
\begin{align}
d\beta = \Theta^{(3)}~,
\end{align}
reduce to the 5-dimensional BPS equations, with zeroth layer:
\begin{align}
\Theta^{(I)} =* \Theta^{(I)} ~, \qquad \Theta^{(3)} = *\Theta^{(3)}~, \label{5D Theta dual}
\end{align}
first layer:
\begin{align}
\nabla^{2} Z_{1} &= * \left(\Theta^{(2)}\wedge \Theta^{(3)} \right)~,\label{5D BPS Z1eq}\\
\nabla^{2} Z_{2} &= * \left(\Theta^{(1)}\wedge \Theta^{(3)} \right) ~,\\
\nabla^{2} Z_{3} &= * \left(\Theta^{(1)}\wedge \Theta^{(2)}-\Theta^{(4)}\wedge \Theta^{(4)} \right)~,\\
\nabla^{2} Z_{4} &= * \left(\Theta^{(3)}\wedge \Theta^{(4)} \right)~,\label{5D BPS Z4eq}
\end{align}
and second layer
\begin{align}
(1+*)dw = Z_{1}\Theta^{(1)}+Z_{2}\Theta^{(2)}+Z_{3}\Theta^{(3)} -2 Z_{4}\Theta^{(4)}~. \label{5DBPSfinal}
\end{align}
It is key to note that in order for this reduction to work all 6-dimensional fields including the $(Z_{I},\Theta^{(I)})$ must be independent of the $v$-circle that we reduce on. This will be critical in section \ref{SubSec: 6D5D relationship} where we illustrate the relationship between 6 and 5-dimensional superstrata. This is also the reason we need to introduce spectral transformations in section \ref{SubSec: spectral flow}, which will enable a transformation of any given single-mode 6-dimensional superstrata to remove all $v$-dependence before reducing to 5 dimensions.
\subsection{Gibbons Hawking bases}
The first step in finding solutions to the BPS equations (\ref{eqZ1Theta2})-(\ref{BPS6D2}) or (\ref{5D Theta dual})-(\ref{5DBPSfinal}) is to specify a hyper-K\"{a}hler base. The Gibbons Hawking (GH) geometries provide some of the simplest yet non-trivial examples of hyper-K\"{a}hler manifolds. They are constructed as
\begin{align}
ds_4^2(\mathcal{B}) = \frac{1}{V}\left( d\psi +A \right)^{2} +V \, ds_{3}^{2}~, \qquad \nabla_{3}^{2}V=0 ~, \qquad *_{3}d_{3}V=d_{3}A~, \label{GHbase}
\end{align}
where $\psi \in [0,4\pi)$ is the GH fiber, $ ds_{3}^{2}$ is the flat metric of $\mathbb{R}^{3}$ and operations with a subscript $_3$ refer to this flat base\footnote{In this paper we use $(\nabla^{2},*,d)$ to refer to the Laplace-Beltrami operator, Hodge star and exterior derivative on the entire 4-dimensional GH base of (\ref{GHbase}).}. Introducing Cartesian coordinates $(y^{1},y^{2},y^{3})$ on $ ds_{3}^{2}$, the $V$ appearing in (\ref{GHbase}) are then given by
\begin{align}
V(\vec{y}) = \sum_{i=1}^{N} \frac{q_{i}}{\left| \vec{y}-\vec{y}_{i} \right|}~,
\end{align}
where the $q_{i}\in \mathbb{Z}$ are known as the GH charges; they are centered at the $\vec{y}_{i}$, and $N$ labels the total number of charges.
For computations it is convenient to introduce spherical bipolar coordinates $(r,\theta,\phi)$ on the flat $\mathbb{R}^{3}$ defined by
\begin{align}
y_{1}+iy_{2} = \frac{r}{4}\sqrt{r^{2}+a^{2}}\sin 2\theta \, e^{i\phi} \qquad \text{and} \qquad y_{3} = \frac{1}{8}(2r^{2}+a^{2})\cos 2\theta~,
\end{align}
where $r\in [0,\infty)$, $\theta \in [0,\pi/2)$ and $\phi \in [0,2\pi)$. Defining
\begin{align}
V &= \frac{4}{\Lambda} ~, \qquad\qquad\qquad~~\, A = \frac{(a^{2}+2r^{2})\cos 2\theta -a^{2}}{2\Lambda } \, d\phi~, \\
\Sigma &= r^{2}+a^{2}\cos^{2}\theta~, \qquad \Lambda= r^{2}+a^{2}\sin^{2}\theta~, \label{SigmaLambda}
\end{align}
the GH metric then becomes
\begin{align}
ds_{4}^{2}(\mathcal{B}) = \frac{1}{V}(d\psi+A)^{2} + \frac{V}{16} \left( 4\Sigma \Lambda \left( \frac{dr^{2}}{a^{2}+r^{2}}+d\theta^{2} \right) + r^{2}(r^{2}+a^{2})\sin^{2}2\theta \, d\phi^{2} \right)~. \label{GHsbcoords}
\end{align}
These coordinates are adapted to the superstrata since the $(Z_{I},\Theta^{(I)})$ are sourced on the locus $\Sigma=0$.
In model building it is important to understand that the $\Theta^{(I)}$ are supported on 2 cycles in the geometry. In the standard construction of 6-dimensional superstrata the base is taken to be flat $\mathbb{R}^{4}$. The non-trivial 2 cycles are then provided by the pinching off of the $v$-circle. In 5 dimensions the non-trivial 2 cycles are contained entirely in the GH base, they are furnished by the pinching off of the $\psi$-circle where $V$ diverges at the GH points. Thus the 5-dimensional superstrata require GH bases with multiple centers. The simplest such geometries have two centers, which can be aligned with the $y^{3}$ direction and their separation parametrized by $a$
\begin{align}
\vec{y}_{\pm}=(0,0,\pm a^{2}/8)~.
\end{align}
This gives
\begin{align}
V=\frac{q_{-}}{r_{-}}+\frac{q_{+}}{r_{+}}~,
\end{align}
where
\begin{align}
r_{-}=\left| \vec{y}- \vec{y}_{-}\right|=\frac{\Sigma}{4}\qquad \text{and} \qquad r_{+}=\left| \vec{y}- \vec{y}_{+}\right|=\frac{\Lambda}{4}~,
\end{align}
and $(q_{-},q_{+})$ are the GH charges.
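A quick numerical check of the identifications $r_{-}=\Sigma/4$ and $r_{+}=\Lambda/4$ (a self-contained Python sketch sampling random points):
\begin{verbatim}
import numpy as np

# Check that the two GH centres at y3 = -a^2/8 and y3 = +a^2/8
# satisfy r_- = Sigma/4 and r_+ = Lambda/4 in the spherical
# bipolar coordinates (r, theta, phi).
rng = np.random.default_rng(0)
a = 1.3
for _ in range(5):
    r, th = rng.uniform(0.1, 5.0), rng.uniform(0.05, np.pi/2 - 0.05)
    rho = 0.25*r*np.sqrt(r**2 + a**2)*np.sin(2*th)  # sqrt(y1^2+y2^2)
    y3  = (2*r**2 + a**2)*np.cos(2*th)/8.0
    Sigma = r**2 + (a*np.cos(th))**2
    Lam   = r**2 + (a*np.sin(th))**2
    rm = np.hypot(rho, y3 + a**2/8)   # distance to y_-
    rp = np.hypot(rho, y3 - a**2/8)   # distance to y_+
    assert np.isclose(rm, Sigma/4) and np.isclose(rp, Lam/4)
print("r_- = Sigma/4 and r_+ = Lambda/4 verified")
\end{verbatim}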
Naively the $\Theta^{(I)}$ should be the cohomological duals to the second homology on the GH base. Thus it would seem rather pathological to allow $q_{i}$ with varying signs, since the two cycles would be destroyed by the behavior at zeros\footnote{The full 5-dimensional geometry in (\ref{ds5}) is regular at these points, due to the behavior of $Z_{3}P$.} of $V$. However, the key to constructing the superstrata is to allow exactly these types of bases, which have come to be known as \textit{ambipolar}\footnote{Ambipolar geometries can be characterized as those geometries that possess domains where the signature is $(+,+,+,+)$ and domains where it is $(-,-,-,-)$, the surfaces where it flips are known as ambipolar surfaces.}. The $\Theta^{(I)}$ fluxes that are then used have a far richer structure than those that can be built out of just the fluxes dual to the two cycles of non-ambipolar bases. Currently a complete understanding of the fluxes that can be constructed on ambipolar bases is missing; it is hoped the prepotential results of section \ref{Sect 3.5: prepotentials} will be helpful for future investigations in this direction, constructing new fluxes and microstate geometries from them.
For future reference we note that frames on (\ref{GHsbcoords}) can be erected as
\begin{align}
e_{1}=\sqrt{\frac{\Sigma}{r^{2}+a^{2}}}\, dr , ~~ e_{2}=\sqrt{\Sigma}\, d\theta, ~~ e_{3}=\frac{1}{2}\sqrt{a^{2}+r^{2}}\sin\theta \, \left(d\psi-d\phi \right), ~~ e_{4}= \frac{1}{2} r\cos\theta \, \left(d\psi+d\phi \right)~,
\end{align}
in terms of which a basis for self dual forms is
\begin{align}
\Omega^{(1)}&= \frac{1}{\Sigma \sqrt{r^{2}+a^{2}}\cos \theta}\left( e_{1}\wedge e_{2}+e_{3}\wedge e_{4}\right)~,\label{Omega1} \\
\Omega^{(2)}&= \frac{1}{\sqrt{\Sigma} \sqrt{r^{2}+a^{2}}\cos \theta}\left( e_{1}\wedge e_{4}+e_{2}\wedge e_{3}\right)~,\label{Omega2} \\
\Omega^{(3)}&= \frac{1}{\sqrt{\Sigma} r\sin \theta}\left( e_{1}\wedge e_{3}-e_{2}\wedge e_{4}\right)~,\label{Omega3}
\end{align}
and the canonical complex structure is
\begin{align}
J = \frac{1}{\sqrt{\Sigma} r\sin \theta}\left( e_{1}\wedge e_{3}-e_{2}\wedge e_{4}\right) ~. \label{J3}
\end{align}
\subsection{Original, supercharged and hybrid superstrata in 6-dimensions}
Since the hybrid superstrata encompass both the original and the supercharged flavors they provide a convenient way to study the properties of both at once. Hence we will work with the hybrid flavor and fix the relevant parameters to highlight the original or supercharged results where necessary.
To construct/summarize the $(Z_{I},\Theta^{(I)})$ that solve the 6-dimensional BPS equations (\ref{eqZ1Theta2})-(\ref{BPS6D2}) on the flat $\mathbb{R}^{4}$ base it is convenient to introduce the mode functions
\begin{align}
v_{k,m,n} & \equiv (m+n)\frac{v}{R}-\frac{k}{2}\phi+\frac{1}{2}(k-2 m)\psi~, \label{moding} \\
\Delta_{k,m,n} & \equiv \left( \frac{a}{\sqrt{r^{2}+a^{2}}}\right)^{k} \left( \frac{r}{\sqrt{r^{2}+a^{2}}}\right)^{n}\cos^{m}\theta \sin^{k-m}\theta~, \label{DeltaDef}
\end{align}
where $(k,m,n)$ are non negative integers indexing Fourier modes on $(v,\phi,\psi)$. The $(Z_{I},\Theta^{(I)})$ will depend non-trivially on these modes. However, a key feature of superstrata is that the mode dependence cancels out in the metrics (\ref{ds6}) and (\ref{ds5}) due to a process known as \textit{coiffuring} (\ref{coiff}). It is also convenient to introduce the functions and forms
\begin{align}
z_{k,m,n} &\equiv \sqrt{2}R \frac{\Delta_{k,m,n}}{\Sigma} \cos v_{k,m,n} ~,\\
\vartheta_{k,m,n} & \equiv -\sqrt{2} \Delta_{k,m,n} \left[ \left( (m+n)r\sin\theta+n\left( \frac{m}{k}-1 \right) \frac{\Sigma}{r\sin \theta} \right) \Omega^{(1)}\sin v_{k,m,n} \right. \label{thetatilde} \\
&\qquad\qquad\qquad\qquad\qquad +\left. \left( m \left(\frac{n}{k}+1 \right)\Omega^{(2)} + \left( \frac{m}{k}-1\right)n \Omega^{(3)}\right)\cos v_{k,m,n}\right]~,\notag \\
\varphi_{k,m,n} & \equiv \sqrt{2}\Delta_{k,m,n} \left[\frac{\Sigma}{r\sin\theta} \Omega^{(1)} \sin v_{k,m,n} + \left(\Omega^{(2)}+\Omega^{(3)}\cos v_{k,m,n} \right)\right]~. \label{thetahat}
\end{align}
The superstrata $Z_{I}$ can then be succinctly summarized as
\begin{align}
Z_{1} &= \frac{Q_{1}}{\Sigma} + b_{1} \frac{R}{\sqrt{2} Q_{5}}z_{2k,2m,2n}~,\label{Z1} \\
Z_{2}&= \frac{Q_{5}}{\Sigma}~, \label{Z2}\\
Z_{4} &= b_{4} z_{k,m,n}~, \label{Z4}
\end{align}
and the $\Theta^{(I)}$ as
\begin{align}
\Theta^{(1)} &= 0 ~, \label{Theta1}\\
\Theta^{(2)} &= \frac{R}{\sqrt{2} Q_{5}} \left( b_{1} \vartheta_{2k,2m,2n} + c_{2} \varphi_{2k,2m,2n} \right) ~, \label{Theta2}\\
\Theta^{(4)} &= b_{4}\vartheta_{k,m,n} +c_{4} \varphi_{k,m,n} ~, \label{Theta4}
\end{align}
where $(b_{1},b_{4},c_{2},c_{4})$ are constants.
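For concreteness, a direct transcription of (\ref{moding}), (\ref{DeltaDef}) and (\ref{Z1})--(\ref{Z4}) (a minimal Python sketch; the coiffuring constraint $b_1 = b_4^2$ of (\ref{coiff}) below is imposed):
\begin{verbatim}
import numpy as np

def v_mode(k, m, n, v, phi, psi, R):      # Eq. (moding)
    return (m + n)*v/R - 0.5*k*phi + 0.5*(k - 2*m)*psi

def Delta(k, m, n, r, theta, a=1.0):      # Eq. (DeltaDef)
    s = np.sqrt(r**2 + a**2)
    return (a/s)**k * (r/s)**n * np.cos(theta)**m * np.sin(theta)**(k-m)

def z_mode(k, m, n, r, theta, v, phi, psi, R, a=1.0):
    Sigma = r**2 + (a*np.cos(theta))**2
    return (np.sqrt(2)*R*Delta(k, m, n, r, theta, a)/Sigma
            * np.cos(v_mode(k, m, n, v, phi, psi, R)))

def Z_fields(k, m, n, b4, Q1, Q5, r, theta, v, phi, psi, R, a=1.0):
    Sigma = r**2 + (a*np.cos(theta))**2
    b1 = b4**2                            # coiffuring, Eq. (coiff)
    Z1 = (Q1/Sigma + b1*R/(np.sqrt(2)*Q5)
          * z_mode(2*k, 2*m, 2*n, r, theta, v, phi, psi, R, a))
    Z2 = Q5/Sigma
    Z4 = b4*z_mode(k, m, n, r, theta, v, phi, psi, R, a)
    return Z1, Z2, Z4
\end{verbatim}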
It is straightforward to check that equations (\ref{Z1})-(\ref{Theta4}) solve the BPS first layer equations (\ref{eqZ1Theta2})-(\ref{eqZ4Theta4}) with
\begin{align}
\beta = -\frac{R a^{2}}{2 \Sigma}\left(d\phi +\cos 2\theta \, d\psi \right)~.
\end{align}
Difficulties arise when trying to solve the BPS second layer equations (\ref{BPS6D1}) and (\ref{BPS6D2}), due to the quadratic sources. A process known as coiffuring was developed to deal with these difficulties \cite{Bena:2014rea}, ensuring regularity of the gauge fields and geometry. Coiffuring requires
\begin{align}
b_{1}=b_{4}^{2} \qquad \text{and} \qquad c_{2}=2 b_{4}c_{4}. \label{coiff}
\end{align}
The solution for $(\omega,F)$ in equations (\ref{BPS6D1}) and (\ref{BPS6D2}) is involved but algorithmic; we now summarize the solution method (full details can be found in \cite{Bena:2017xbt}).
One first breaks $(\omega,F)$ into pieces that depend on the mode (\ref{moding}) and those that don't
\begin{align}
\omega &= \omega_{0} + \mu_{k,m,n}\,d\psi+\zeta_{k,m,n}\, d\phi \\
F &= 0 + F_{k,m,n}
\end{align}
where $(\mu_{k,m,n},\zeta_{k,m,n},F_{k,m,n})$ are only functions of $(r,\theta)$ by virtue of the coiffuring (\ref{coiff}) and
\begin{align}
\omega_{0}=\frac{R a^{2}}{2\Sigma} \left( \cos 2\theta ~ d\phi +d\psi\right)~. \label{omega0}
\end{align}
Substituting these expressions into the BPS equations (\ref{BPS6D1}) and (\ref{BPS6D2}) gives three independent equations. In principle there should be four, three coming from decomposing (\ref{BPS6D1}) into its self dual pieces and one from (\ref{BPS6D2}), but one of the self dual pieces turns out to vanish identically. Taking combinations of these three equations shows that $\mu_{k,m,n}$ and $F_{k,m,n}$ both satisfy Laplace-type equations with non-trivial sources; in each case the problem can be reduced to summing solutions of the DE
\begin{align}
\nabla^{2} \mathcal{F}_{2k,2m,2n} = \frac{\Delta_{2k,2m,2n}}{(r^{2}+a^{2})\Sigma \cos^{2}\theta } ~.
\end{align}
The solution of this DE is given by
\begin{align*}
\mathcal{F}_{2k,2m,2n}= - \sum_{j_{1},j_{2},j_{3}=0}^{j_{1}+j_{2}+j_{3}\leq k+n-1} {
j_{1}+j_{2}+j_{3} \choose j_{1},j_{2},j_{3} } \frac{{
k+n-j_{1}-j_{2}-j_{3}-1 \choose k-m-j_{1},m-j_{2}-1,n-j_{3} }^{2}}{{
k+n-1 \choose k-m,m-1,n
}^{2}} \frac{\Delta_{2(k-j_{1}-j_{2}-1),2(m-j_{2}-1),2(n-j_{3})}}{4(k+n)^{2}(r^{2}+a^{2})}
\end{align*}
where
\begin{align}
{
j_{1}+j_{2}+j_{3} \choose j_{1},j_{2},j_{3}
} = \frac{(j_{1}+j_{2}+j_{3})!}{j_{1}!j_{2}!j_{3}!}~.
\end{align}
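A direct implementation of this triple sum is straightforward (a sketch; terms with a negative multinomial index are taken to vanish, consistent with the combinatorial origin of the formula):
\begin{verbatim}
import numpy as np
from math import factorial

def trinom(j1, j2, j3):
    # (j1+j2+j3)! / (j1! j2! j3!), zero for any negative index
    if min(j1, j2, j3) < 0:
        return 0
    return factorial(j1+j2+j3) // (
        factorial(j1)*factorial(j2)*factorial(j3))

def Delta(K, M, N, r, th, a=1.0):  # Delta_{K,M,N} of (DeltaDef)
    s = np.sqrt(r**2 + a**2)
    return (a/s)**K * (r/s)**N * np.cos(th)**M * np.sin(th)**(K-M)

def Fcal(k, m, n, r, th, a=1.0):
    # particular solution F_{2k,2m,2n}(r, theta) as the triple sum
    norm = trinom(k-m, m-1, n)**2
    if norm == 0:   # m = 0: this term only enters (F, mu) with a
        return 0.0  # vanishing prefactor
    tot = 0.0
    for j1 in range(k+n):
        for j2 in range(k+n-j1):
            for j3 in range(k+n-j1-j2):
                c = trinom(j1, j2, j3)*trinom(k-m-j1, m-j2-1, n-j3)**2
                if c:
                    tot += c*Delta(2*(k-j1-j2-1), 2*(m-j2-1),
                                   2*(n-j3), r, th, a)/norm
    return -tot / (4.0*(k+n)**2 * (r**2 + a**2))
\end{verbatim}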
The solutions for $(F_{k,m,n},\mu_{k,m,n})$ can be summarized as
\begin{align}
F_{k,m,n}&= 4 \left[ \left(\frac{m(k+n)}{k}b_{4}-c_{4} \right)^{2} \mathcal{F}_{2k,2m,2n}+ \left(\frac{n(k-m)}{k}b_{4}+c_{4} \right)^{2}\mathcal{F}_{2k,2m+2,2n-2}\right] ~, \\
\mu_{k,m,n}&= R \left[ \left( \frac{(k-m)(k+n)}{k}b_{4}+c_{4}\right)^{2}\mathcal{F}_{2k,2m+2,2n} + \left(\frac{mn}{k}b_{4}-c_{4} \right)^{2} \mathcal{F}_{2k,2m,2n-2} -\frac{b_{4}^{2}\,\Delta_{2k,2m,2n}}{4\Sigma} \right] \notag\\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -R\frac{r^{2}+a^{2}\sin^{2}\theta}{4\Sigma} F_{k,m,n} + \frac{R B^{2}}{4\Sigma} ~, \label{mu eq}
\end{align}
where the term proportional to the constant $B^{2}$ is an arbitrary (for now) homogeneous term.
Solving for the $\zeta_{k,m,n}$ requires integrating two first order DEs of the form
\begin{align}
\partial_{r} \zeta_{k,m,n} &= S_{r}\left(r,\theta,F_{k,m,n},\partial_{r}\mu_{k,m,n},\partial_{\theta}\mu_{k,m,n} \right) ~,\\
\partial_{\theta} \zeta_{k,m,n} &= S_{\theta}\left(r,\theta,F_{k,m,n},\partial_{r}\mu_{k,m,n},\partial_{\theta}\mu_{k,m,n} \right) ~,
\end{align}
where $ S_{r}$ and $ S_{\theta}$ are functionals of the given arguments only. Unfortunately the full solution is not known in closed form, but for a given $(k,m,n)$ it is straightforward to perform the integration. For specific sub-families it is possible to solve these equations in closed form. For instance this has been done for the original flavor in the $(1,0,n)$ and $(2,1,n)$ families \cite{Bena:2017upb} and the $(k,0,1)$ family \cite{Bena:2018mpb}. In section \ref{SubSec: 11n} we will present the $(\mu_{1,1,n},\zeta_{1,1,n},F_{1,1,n})$ of the $(1,1,n)$ original superstrata in closed form as well.
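Schematically, the integration can be organized as in the following sketch (illustrative only: the sources $S_{r}$, $S_{\theta}$ must be assembled case by case from $(F_{k,m,n},\mu_{k,m,n})$, and are supplied here as a black-box callable):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Integrate d(zeta)/dr = S_r at fixed theta; S_r(r, theta) is the
# user-supplied source built from F_{k,m,n} and mu_{k,m,n}.
def integrate_zeta(S_r, theta, r_grid, zeta0=0.0):
    sol = solve_ivp(lambda r, z: [S_r(r, theta)],
                    (r_grid[0], r_grid[-1]), [zeta0],
                    t_eval=r_grid, rtol=1e-10, atol=1e-12)
    return sol.y[0]

# Consistency of the pair of first-order equations requires the
# closure condition d(S_r)/dtheta = d(S_theta)/dr, which is worth
# checking numerically on a grid before integrating.
\end{verbatim}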
The constant $B$ introduced in (\ref{mu eq}) is used to ensure regularity at $(r=0,\theta=0)$ by fixing $\mu_{k,m,n}(r=0,\theta=0)=0$; this is done by setting
\begin{align}
B^{2} = \frac{b_{4}^{2}+ \frac{k^{2}}{mn(k-m)(k+n)}c_{4}^{2} }{{k \choose m} {k+n-1 \choose n}} = b^{2}+c^{2}
\end{align}
where
\begin{align}
b= \frac{b_{4}}{\sqrt{{k \choose m} {k+n-1 \choose n}}} \qquad \text{and} \qquad c= \frac{kc_{4}}{\sqrt{mn(k-m)(k+n){k \choose m} {k+n-1 \choose n}}}~.
\end{align}
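As a quick check, squaring and adding these definitions reproduces the expression for $B^{2}$ above:
\begin{align}
b^{2}+c^{2} = \frac{1}{{k \choose m} {k+n-1 \choose n}}\left(b_{4}^{2}+ \frac{k^{2}}{mn(k-m)(k+n)}c_{4}^{2}\right) = B^{2}~.
\end{align}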
Demanding regularity at $(r=0,\theta=\pi/2)$ also fixes
\begin{align}
\frac{Q_{1}Q_{5}}{2R^{2}} = a^{2}+ \frac{b^{2}+c^{2}}{2}~.
\end{align}
The beauty of the hybrid solutions is now evident when one considers the conserved charges
\begin{align}
J_{R}&= \frac{R}{\sqrt{2}} \left(a^{2}+\frac{m}{k}(b^{2}+c^{2}) \right)~, \qquad J_{L}=\frac{R}{\sqrt{2}}a^{2}~, \qquad Q_{P}= \frac{m+n}{2k}(b^{2}+c^{2})~, \qquad Q_{1,5}~.
\end{align}
If one were restricted to just the original flavor ($c=0$) or the supercharged flavor ($b=0$), then superstrata with different $(k,m,n)$ and $b$ or $c$ would possess different asymptotic charges. However, with the hybrid flavor we can define
\begin{align}
b\equiv B \cos \alpha ~, \qquad c\equiv B \sin \alpha~,
\end{align}
where $\alpha \in [0,2\pi)$ and $B>0$. The parameter $\alpha$ then parametrizes a continuous family of superstrata solutions with identical asymptotic charges.
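Indeed, substituting $b^{2}+c^{2}=B^{2}$ into the charges above gives
\begin{align}
J_{R}= \frac{R}{\sqrt{2}} \left(a^{2}+\frac{m}{k}B^{2} \right)~, \qquad Q_{P}= \frac{m+n}{2k}B^{2}~,
\end{align}
which, like $J_{L}$ and $Q_{1,5}$, are manifestly independent of $\alpha$.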
Finally, we need to consider the restrictions on the integers $(k,m,n)$. For the original superstrata the constraints are $1\leq k$, $0\leq m \leq k$ and $1\leq n$, while for the supercharged and hybrid solutions the requirements are $1\leq m \leq k-1$ and $1\leq n$. Thus when we consider the $(1,0,n)$ and $(1,1,n)$ families in sections \ref{Sect:Sec4 relations amongst superstrata} and \ref{Sect:Sect5 Seperability} we are necessarily looking at the original flavor, but when we look at the $(2,1,n)$ family we consider all three flavors.
\section{Relating superstrata in 5 and 6 dimensions}
\label{Sect:Sec3 relating 5D 6D superstrata}
This section shows how spectral transformations can be used to turn any given 6-dimensional single-mode superstrata into a form in which it is independent of $v$. The transformation corresponds to a coordinate redefinition among $(v,\psi)$, followed by a lattice re-identification. It has a non-trivial effect on the $ds_{4}(\mathcal{B})$ base, turning a flat $\mathbb{R}^{4}$ into an ambipolar two centered GH space. We also discuss how this reduction procedure fails for multi-mode solutions with non-parallel modes.
\subsection{Summary of spectral transformations} \label{SubSec: spectral flow}
Spectral transformations as applied to superstrata were studied in detail in \cite{Bena:2017geu}. The basic idea is that the 6-dimensional metric (\ref{dsDoubleFiber}) is a double circle fibration in the $(v,\psi)$ coordinates, so it is possible to impose coordinate redefinitions that mix the two coordinates into new angular coordinates $(\hat{v},\hat{\psi})$ as
\begin{align}
\frac{\hat{v}}{R} = \mathbf{a} \frac{v}{R}+\mathbf{b} \psi \qquad \text{and} \qquad \hat{\psi} = \mathbf{c} \frac{v}{R}+\mathbf{d}\psi~, \label{SpecTrans}
\end{align}
with re-identified periodicities
\begin{align}
\hat{v} \cong \hat{v}+2 \pi R \qquad \text{and} \qquad \hat{\psi} \cong \hat{\psi}+4\pi ~.
\end{align}
The parameters $(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d})$ are required to form an element of SL$(2,\mathbb{Q})$, i.e. $\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d} \in \mathbb{Q}$ constrained by $\mathbf{a}\mathbf{d}-\mathbf{b}\mathbf{c}=1$. The group is taken to be SL$(2,\mathbb{Q})$ rather than SL$(2,\mathbb{R})$ so that the coordinate periodicity re-identifications are well defined. The re-identification may modify the presence/absence of orbifold singularities, and so the spectral transformation is not necessarily a diffeomorphism, but it is closely related to one.
The coordinate transformation (\ref{SpecTrans}) alters the GH base, so the BPS equations (\ref{eqZ1Theta2})-(\ref{BPS6D2}) are modified. The rules for how the data $(V,A,\beta,F,\omega,Z_{I},\Theta^{(I)})$ transform under spectral flow to maintain a BPS solution were derived in \cite{Bena:2008wt} and further refined in \cite{Bena:2017geu}. In order to summarize the transformations it is convenient to introduce the auxiliary data $(K_{3},\xi,\mu,\varpi,\nu)$ defined implicitly by
\begin{align}
\beta= \frac{K_{3}}{V}(d\psi +A)+\xi~, \qquad \omega = \mu (d\psi +A) + \varpi~,\qquad P= K_{3}\left(\frac{K_{3}}{V}\nu+\mu \right)~. \label{betaomegaPdef}
\end{align}
The transformations for $(V,A,\beta,F,\omega,Z_{I})$ are then given by
\begin{align}
\widehat{V} = \mathbf{d} V - \frac{\mathbf{c}}{R} K_{3}~, \qquad \widehat{K}_{3} = -\mathbf{b}R V+\mathbf{a}K_{3}~, \qquad \hat{\xi} = \mathbf{a}\xi +\mathbf{b}R A~, \qquad \widehat{A}=\frac{\mathbf{c}}{R}\xi+\mathbf{d}A~, \label{SpecVars1}
\end{align}
\begin{align}
\widehat{\varpi}=\varpi~, \qquad \hat{\nu} = \frac{\mathbf{a}\nu K_{3}^{2}+\mathbf{b} R\mu V^{2}}{\widehat{K}_{3}^{2}}~, \qquad \hat{\mu} = \frac{\frac{\mathbf{c}}{R}\nu K_{3}^{2}+\mathbf{d}\mu V^{2}}{\widehat{V}^{2}}~, \qquad \widehat{F} = \frac{\widehat{V}}{V}F -2 \frac{\mathbf{c}}{R} \mu - \frac{\mathbf{c}^{2}}{R^{2}} \frac{P}{\widehat{V}}~, \label{SpecVars2}
\end{align}
and
\begin{align}
\widehat{Z}_{I} = \frac{V}{\widehat{V}} Z_{I}~. \label{SpecZs}
\end{align}
The $\Theta^{(I)}$ transformations are more involved. It is useful to introduce $\lambda^{(I)}$ defined by
\begin{align}
\Theta^{(I)} = (1+*)\left[(d\psi +A)\wedge \lambda^{(I)} \right]~,
\end{align}
as well as the covariant derivative\footnote{Note that $d_{3}$ is the exterior derivative with respect to the 3D base in (\ref{GHbase}) which is invariant under spectral flow.}
\begin{align}
\mathcal{D} = d_{3} - A \partial_{\psi}-\xi \partial_{v}
\end{align}
which is conveniently invariant under spectral transformation, i.e. $\mathcal{D}=\widehat{\mathcal{D}}$ with
\begin{align}
\widehat{\mathcal{D}}=d_{3} - \widehat{A} \partial_{\hat{\psi}}-\hat{\xi} \partial_{\hat{v}}~.
\end{align}
Using these definitions the $\lambda^{(I)}$ transformations can be written as
\begin{align}
\hat{\lambda}^{(1)} = \lambda^{(1)} - \frac{\mathbf{c}}{R} \widehat{\mathcal{D}}\left( \frac{\widehat{Z}_{2}}{V}\right)~, \qquad \hat{\lambda}^{(2)} = \lambda^{(2)} - \frac{\mathbf{c}}{R} \widehat{\mathcal{D}}\left( \frac{\widehat{Z}_{1}}{V}\right)~, \qquad \hat{\lambda}^{(4)} = \lambda^{(4)} - \frac{\mathbf{c}}{R} \widehat{\mathcal{D}}\left( \frac{\widehat{Z}_{4}}{V}\right)~, \label{Speclambdas}
\end{align}
and the $\widehat{\Theta}^{(I)}$ take the form
\begin{align}
\widehat{\Theta}^{(I)} = (1+\hat{*})\left[(d\hat{\psi} +\widehat{A})\wedge \hat{\lambda}^{(I)} \right]~.\label{SpecThetas}
\end{align}
In 5 dimensions a subset of the spectral transformations are gauge transformations; these correspond to keeping $\psi$ fixed while shifting $v$ by a multiple of $\psi$:
\begin{align}
\frac{\hat{v}}{R} = \frac{v}{R}+\mathbf{b} \psi \qquad \text{and} \qquad \hat{\psi} = \psi~. \label{GaugeTrans}
\end{align}
Such a transformation, when implemented in equations (\ref{SpecVars1})-(\ref{SpecZs}) and (\ref{Speclambdas})-(\ref{SpecThetas}), leaves the physical data $(Z_{I},Z_{3},\Theta^{(I)},\Theta^{(3)},\mu,\varpi)$ invariant. A discussion of the general form of these gauge transformations requires a decomposition of the physical data into a set of harmonic functions \cite{Bena:2017geu,Bena:2008wt}. It is important to recognize that this is only a gauge transformation in the 5-dimensional setting; in 6 dimensions the metric (\ref{ds6}) depends explicitly on $\beta$ and, although $\widehat{\Theta}^{(3)}=\Theta^{(3)}$, $\beta$ transforms as
\begin{align}
\widehat{\beta} = \beta - \mathbf{b} R\, d\psi~.
\end{align}
Thus the transformation (\ref{GaugeTrans}) is physically relevant in 6 dimensions but not in 5 dimensions\footnote{This is why the massless wave equations considered in section \ref{Sect:Sect5 Seperability} depend on $\mathbf{a}$ only in terms with $p$ coefficients}.
\subsection{6D $\iff$ 5D solutions for single-mode superstrata} \label{SubSec: 6D5D relationship}
In \cite{Bena:2017geu} it was noted that if a 6-dimensional superstrata is independent of $v$ it is simple to reduce the solution to a 5-dimensional solution. However, it was not fully appreciated that there always exists a transformation of the form (\ref{SpecTrans}) that makes $v_{k,m,n}$ of (\ref{moding}) independent of $\hat{v}$. This means that for any single-mode superstrata there exists a spectral transformation after which it can be reduced to a 5-dimensional solution. The trade-off one makes is that the flow turns the flat $\mathbb{R}^{4}$ base on which the superstrata were first constructed into an ambipolar two centered GH base. This could be anticipated since in 5 dimensions the only non-trivial topology capable of supporting non-singular fluxes is that of the GH base, whereas the 6-dimensional solutions with a flat base exploit the topology of the $v$ fiber to support non-singular fluxes\footnote{It is this topological dependence on the $v$-fiber that is at the heart of why it is difficult to generalize the results of \cite{Tyukov:2018ypq} to 6 dimensions and find prepotentials for the fluxes.}.
The asymptotic geometry in 5 dimensions will depend on the net GH charge. If it is zero, as for the 5-dimensional examples in \cite{Bena:2017geu}, then it is $\text{AdS}_{3}\times \mathbb{S}^{2}$, but if the net GH charge is $q\neq 0$, it will be asymptotically $\text{AdS}_{2}\times \mathbb{S}^{3}/\mathbb{Z}_{q}$. The former is appropriate for the microstate geometries of black strings and the latter for those of black holes in 5 dimensions. Since our construction produces both types, we have found the first examples of superstrata that describe the microstates of black holes in 5 dimensions.
To find the spectral transformations (\ref{SpecTrans}) that transform (\ref{moding}) to be $\hat{v}$ independent, it is useful to look at just the parts of the mode (\ref{moding}) that are altered by the spectral transformation (\ref{SpecTrans}), so we define
\begin{align}
\chi_{k,m,n} = (m+n)\frac{v}{R}+\frac{1}{2}(k-2m)\psi~.
\end{align}
Applying the transformation (\ref{SpecTrans}) leads to the new mode dependence \begin{align}
\hat{\chi}_{k,m,n} = \begin{pmatrix}
m+n & \frac{k-2m}{2}
\end{pmatrix} \begin{pmatrix}
\mathbf{d} & -\mathbf{b} \\ -\mathbf{c} & \mathbf{a}
\end{pmatrix} \begin{pmatrix}
\frac{\hat{v}}{R} \\ \hat{\psi}
\end{pmatrix}~.
\end{align}
Demanding $\hat{v}$ independence and fixing $\mathbf{a}\mathbf{d}-\mathbf{b}\mathbf{c}=1$ ensures
\begin{align}
\mathbf{c} = \frac{2(m+n)}{\mathbf{e}} \qquad \text{and} \qquad \mathbf{d}=\frac{k-2m}{\mathbf{e}}~, \label{SpecificSF}
\end{align}
with the new mode dependence
\begin{align}
\hat{v}_{k,m,n} = \frac{1}{2}(\mathbf{e}\hat{\psi} - k\phi)~,
\end{align}
where we have defined
\begin{align}
\frac{\mathbf{e}}{2}=\mathbf{a} \left(\frac{k-2m}{2} \right) -\mathbf{b}(m+n)~. \label{edef}
\end{align}
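As a consistency check, with $(\mathbf{c},\mathbf{d})$ given by (\ref{SpecificSF}) the unit-determinant condition follows directly from (\ref{edef}):
\begin{align}
\mathbf{a}\mathbf{d}-\mathbf{b}\mathbf{c} = \frac{\mathbf{a}(k-2m)-2\mathbf{b}(m+n)}{\mathbf{e}} = \frac{2}{\mathbf{e}}\, \frac{\mathbf{e}}{2}=1~.
\end{align}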
We see that in order to have the correct periodicity in $\hat{\psi}$, it must be that $\mathbf{e}\in \mathbb{Z}$.
Using the standard $(\beta,V)$ of the 6-dimensional superstrata
\begin{align}
\beta = -\frac{R a^{2}}{2 \Sigma}\left(d\phi +\cos 2\theta \, d\psi \right)~, \qquad V = \frac{1}{r_{+}}~, \label{beta and V}
\end{align}
and a spectral transformation of the form (\ref{SpecVars1}) constrained by (\ref{SpecificSF}) and (\ref{edef}) then leads to a two centered ambipolar GH base with
\begin{align}
\widehat{V} = \frac{q_{-}}{r_{-}} + \frac{q_{+}}{r_{+}}\qquad \text{with} \qquad q_{-} = -\frac{m+n}{\mathbf{e}} \qquad \text{and} \qquad q_{+} = \frac{k-m+n}{\mathbf{e}}~. \label{GHchargesFlowed}
\end{align}
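Note in particular that the net GH charge is
\begin{align}
q_{-}+q_{+} = \frac{k-2m}{\mathbf{e}} = \mathbf{d}~,
\end{align}
so modes with $k=2m$ flow to bases with zero net charge, and hence to asymptotically $\text{AdS}_{3}\times \mathbb{S}^{2}$ solutions, while modes with $k\neq 2m$ give the asymptotically $\text{AdS}_{2}\times \mathbb{S}^{3}/\mathbb{Z}_{q}$ black hole microstates discussed above.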
It is now clear that the integers $(\mathbf{e},k)$ control the mode of the flowed solution and $(m,n)$ control the GH charges. If we make the choice to trade $\mathbf{b}$ for $\mathbf{e}$ then there is one remaining degree of freedom, $\mathbf{a}$. This parameter implements a gauge transformation of the form (\ref{GaugeTrans}), as can be seen directly by considering two spectral flows differing by a choice of $\mathbf{a}$. Consider the two transformations
\begin{align*}
\begin{pmatrix}
\hat{v}_{1}/R \\ \hat{\psi}_{1}
\end{pmatrix} = \begin{pmatrix}\mathbf{a}_{1} & \frac{\mathbf{a}_{1}(k-2m)-\mathbf{e}}{2(m+n)}\\ \frac{2(m+n)}{\mathbf{e}} & \frac{k-2m}{\mathbf{e}}\end{pmatrix} \begin{pmatrix}
v/R \\ \psi
\end{pmatrix}~, \qquad \begin{pmatrix}
\hat{v}_{2}/R \\ \hat{\psi}_{2}
\end{pmatrix} = \begin{pmatrix}\mathbf{a}_{2} & \frac{\mathbf{a}_{2}(k-2m)-\mathbf{e}}{2(m+n)}\\ \frac{2(m+n)}{\mathbf{e}} & \frac{k-2m}{\mathbf{e}}\end{pmatrix} \begin{pmatrix}
v/R \\ \psi
\end{pmatrix}~,
\end{align*}
computing the differences in the transformed coordinates gives
\begin{align}
\frac{\hat{v}_{2}}{R} = \frac{\hat{v}_{1}}{R} + \frac{\mathbf{e}(\mathbf{a}_{2}-\mathbf{a}_{1})}{2(m+n)} \hat{\psi}_1 \qquad \text{and} \qquad \hat{\psi}_{2} = \hat{\psi}_{1}~,
\end{align}
which is exactly a gauge transformation of the form (\ref{GaugeTrans}) between $(\hat{v}_{1},\hat{\psi}_{1})$ and $(\hat{v}_{2},\hat{\psi}_{2})$. Hence the role of $\mathbf{a}$ when $(\mathbf{b},\mathbf{c},\mathbf{d})$ are restricted by (\ref{SpecificSF}) and (\ref{edef}) is to implement a gauge transformation in 5-dimensions. We can now enumerate the meaningful degrees of freedom remaining in the 5-dimensional solutions $(\mathbf{e},k,m,n)$, which by (\ref{GHchargesFlowed}) are equivalent to $(\mathbf{e},k,q_{-},q_{+})$. Thus for a given 5-dimensional mode and GH charges there always exists a 6-dimensional superstrata and a spectral transformation that leads to a 5D superstrata with these properties.
Since $(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}) \in \text{SL}(2,\mathbb{Q})$ this process is invertible: given a BPS solution on a two centered ambipolar GH base in 5 dimensions, it can be transformed into a BPS solution on a flat $\mathbb{R}^{4}$ base in 6 dimensions. Thus the identification between 6D and 5D single-mode superstrata is in fact one-to-one. Given a 5-dimensional solution with a two centered ambipolar GH base it is possible to invert (\ref{6to5data}) to uplift to 6 dimensions, then spectral flow so that the base becomes flat $\mathbb{R}^{4}$. We summarize this result by writing 6D$\iff$5D for single-mode superstrata.
\subsection{6D $\iff$ 5D solutions for multi-mode superstrata?} \label{SubSec: non reduction}
Multi-mode superstrata are solutions that superpose multiple single-mode superstrata. These solutions involve $(Z_{I},\Theta^{(I)})$ that depend on multiple modes of the form (\ref{moding}), the simplest case being when one considers just two modes labeled by $(k_{1},m_{1},n_{1})$ and $(k_{2},m_{2},n_{2})$. For instance in \cite{Heidmann:2019zws} the $\Theta^{(4)}$ introduced for a simple two-mode solution is
\begin{align}
\Theta^{(4)} = b_{4}\vartheta_{k_{1},m_{1},n_{1}}+b_{5}\vartheta_{k_{2},m_{2},n_{2}} +c_{4}\varphi_{k_{1},m_{1},n_{1}} +c_{5}\varphi_{k_{2},m_{2},n_{2}}~,
\end{align}
where the constants $(b_{4},b_{5},c_{4},c_{5})$ are generically non-zero. The obstruction to reduction is now clear: this flux depends on both $(v_{k_{1},m_{1},n_{1}},v_{k_{2},m_{2},n_{2}})$, and a spectral transformation will only remove the $v$-dependence if the modes are \textit{parallel}, i.e. $(m_{1}+n_{1},k_{1}-2m_{1})\propto (m_{2}+n_{2},k_{2}-2m_{2})$. Generalization to more modes is immediate.
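For instance, the two-mode solution with $(k_{1},m_{1},n_{1})=(1,0,1)$ and $(k_{2},m_{2},n_{2})=(2,0,2)$ has $(m_{1}+n_{1},k_{1}-2m_{1})=(1,1)$ and $(m_{2}+n_{2},k_{2}-2m_{2})=(2,2)$, which are parallel, and so can still be made $\hat{v}$-independent, whereas $(1,0,1)$ together with $(2,1,1)$ gives $(1,1)$ and $(2,0)$, which are not proportional, and no spectral transformation removes the $v$-dependence.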
\section{Relations amongst superstrata families}
\label{Sect:Sec4 relations amongst superstrata}
This section presents several relationships amongst superstrata families that were either not known or not highlighted in the current literature. In particular, it is shown that after spectral transformation and reduction to 5 dimensions the $(k,m,n)$ and $(k,k-m,n)$ families are equivalent.
\subsection{Equivalence of 5-dimensional solutions related by signs} \label{Sub sec: signs}
Some simple observations about the structure of the 5D BPS equations (\ref{5D BPS Z1eq})-(\ref{5DBPSfinal}) can be made by summarizing the data upon which they depend,
\begin{align}
(Z_{I},Z_{3},\Theta^{(I)},\Theta^{(3)},\omega)~,
\end{align}
and altering some signs. A couple of ``new'' solutions can be found by defining new data $(\widetilde{Z}_{I},\widetilde{Z}_{3},\widetilde{\Theta}^{(I)},\widetilde{\Theta}^{(3)},\widetilde{\omega})$ via either of the following:
\begin{align}
(\widetilde{Z}_{I},\widetilde{Z}_{3},\widetilde{\Theta}^{(I)},\widetilde{\Theta}^{(3)},\widetilde{\omega})&= (-Z_{I},Z_{3},-\Theta^{(I)},\Theta^{(3)},\omega)~, \label{Trans1}\\
(\widetilde{Z}_{I},\widetilde{Z}_{3},\widetilde{\Theta}^{(I)},\widetilde{\Theta}^{(3)},\widetilde{\omega})&= (Z_{I},Z_{3},-\Theta^{(I)},-\Theta^{(3)},-\omega) \label{Trans2}~.
\end{align}
The first of these transformations (\ref{Trans1}) corresponds to a trivial redefinition
\begin{align}
(\widetilde{Q}_{1},\widetilde{Q}_{5},\widetilde{b}_{4},\widetilde{c}_{4}) = (-Q_{1},-Q_{5},-b_{4},-c_{4})~.
\end{align}
The second transformation (\ref{Trans2}) is more subtle: looking back at the 5-dimensional geometry (\ref{ds5}), we see that if one also reverses time, $\tilde{t}=-t$, then the geometry is unchanged. If one considers that the $Z_{I}$ control the electric charge and the $\Theta^{(I)}$ the magnetic charge, then these two solutions are indeed just identified by time reversal, and thus equivalent.
If we look closely at the spectral transformations of section \ref{SubSec: spectral flow} we discover a third transformation. Consider the spectral flow that redefines $(\hat{v},\hat{\psi})=(-v,-\psi)$ using the SL$(2,\mathbb{Q})$ transformation
\begin{align}
(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}) = (-1,0,0,-1) \label{Trans3}~.
\end{align}
Under this transformation
\begin{align}
(\widehat{Z}_{I},\widehat{Z}_{3},\widehat{\Theta}^{(I)},\widehat{\Theta}^{(3)},\widehat{\omega})= (-Z_{I},-Z_{3},-\Theta^{(I)},-\Theta^{(3)},\omega)\qquad \text{and} \qquad \widehat{ds}_{4}^{2}(\mathcal{B}) = -ds_{4}^{2}(\mathcal{B})~,
\end{align}
where it is understood that if there is any functional dependence on $\psi$ in the data it must be replaced by $\hat{\psi}=-\psi$. This relabeling does not obviously lead to a new solution of the BPS equations, but since the spectral transformation that produces it requires no alterations of the identifications on the $(v,\psi)$ circles, we conclude it is identical to the solution before the transformation was performed.
\subsection{Relating the $(k,m,n)$ and $(k,k-m,n)$ superstrata}
Based on the results of the previous subsection it is worth considering whether any of the 6-dimensional superstrata, when reduced to 5 dimensions, lead to the same solution. Consider two families $(k_{1},m_{1},n_{1})$ and $(k_{2},m_{2},n_{2})$: if they are to possess the same mode dependence after spectral flow, the same $\mathbf{e}$ must be used in each flow and $k_{1}=k_{2}\equiv k$ must be fixed. Consider the situation when the two 5-dimensional GH bases are related by
\begin{align}
-q_{-(k,m_{1},n_{1})}=q_{+(k,m_{2},n_{2})} \qquad \text{and}\qquad -q_{-(k,m_{2},n_{2})}=q_{+(k,m_{1},n_{1})} ~,
\end{align}
which from (\ref{GHchargesFlowed}) fixes
\begin{align}
m_{2}=k-m_{1} \qquad \text{and} \qquad n_{1} =n_{2}~.
\end{align}
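Explicitly, (\ref{GHchargesFlowed}) turns these two conditions into $m_{1}+n_{1}=k-m_{2}+n_{2}$ and $m_{2}+n_{2}=k-m_{1}+n_{1}$; adding them gives $m_{1}+m_{2}=k$, while subtracting them gives $n_{1}=n_{2}$.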
The $(Z_{I},\beta,ds_{4}^{2}(\mathcal{B}))$ for the $(k,m,n)$ family after spectral transformation to remove $\hat{v}$ dependence are given by:
\begin{align}
Z_{1} &= \frac{2\mathbf{e}}{\Upsilon_{k,m,n}}\left(Q_{1} + \frac{b_{4}^{2}R^{2}}{Q_{5}} \Delta_{2k,2m,2n}\cos \hat{v}_{2k,2m,2n} \right) \label{ZZZZ1} ~,\\
Z_{2}&= \frac{2\mathbf{e}Q_{5}}{\Upsilon_{k,m,n}}~, \\
Z_{4}&= \frac{2\sqrt{2}\mathbf{e}R}{\Upsilon_{k,m,n}} \Delta_{k,m,n} \cos \hat{v}_{k,m,n} ~,\\
\beta &= \frac{\mathbf{e}R\left[(\mathbf{e}-\mathbf{a}k+2\mathbf{a}m )(a^{2}+2r^{2})+a^{2}(\mathbf{e}-\mathbf{a}k-2\mathbf{a}n)\cos 2\theta\right]}{2(m+n)\Upsilon_{k,m,n}} \, d\psi -\frac{a^{2}\mathbf{e}R}{\Upsilon_{k,m,n}} \, d\phi~,
\end{align}
where
\begin{align}
\Upsilon_{k,m,n}= (k-2m)(a^{2}+2r^{2})+a^{2}(k+2n)\cos 2\theta~.
\end{align}
The GH base is of the form (\ref{GHbase}) with $V$ given by the $\widehat{V}$ of (\ref{GHchargesFlowed}) and
\begin{align}
A&=\frac{8 r^2 \left(a^2+r^2\right) (k-2 m)\cos 2 \theta -2 a^2 \left(a^2+2 r^2\right) (k+2 n) \sin ^2 2 \theta }{8\mathbf{e}\Sigma \Lambda} d\phi ~. \label{AAAAA1}
\end{align}
Applying the transformations $m\to k-m$, $\theta\to\frac{\pi}{2}-\theta$, $\phi\to - \phi$, $\psi\to-\psi$, $\mathbf{a}\to\frac{1}{m+n}(\mathbf{e}-\mathbf{a}(k-m+n))$ to the data outlined above and labeling the transformed quantities by tildes gives
\begin{align}
(\widetilde{Z}_{1},\widetilde{Z}_{2},\widetilde{Z}_{4},\tilde{\beta})=(-Z_{1},-Z_{2},-Z_{4},\beta) \qquad \text{and} \qquad \widetilde{ds}_{4}^{2}(\mathcal{B})=-ds_{4}^{2}(\mathcal{B})~.
\end{align}
The form of the BPS equations then fixes $(\Theta^{(I)},F,\omega)$ and implies the full identification
\begin{align}
(\widetilde{Z}_{I},\widetilde{Z}_{3},\widetilde{\Theta}^{(I)},\widetilde{\Theta}^{(3)},\widetilde{\omega})&=(-Z_{I},-Z_{3},\Theta^{(I)},\Theta^{(3)},-\omega)~, \notag \\
\widetilde{ds}_{4}^{2}(\mathcal{B})&= -ds_{4}^{2}(\mathcal{B})~. \label{trans6}
\end{align}
Referring to section \ref{Sub sec: signs} we see that this corresponds to a spectral transformation of the form (\ref{Trans3}) followed by a transformation of the form (\ref{Trans2}); thus the $(k,m,n)$ and $(k,k-m,n)$ families are equivalent when reduced to 5 dimensions.
\subsection{The $(1,0,n)$ and $(1,1,n)$ original superstrata} \label{SubSec: 11n}
The relationship of the previous subsection can be explicitly demonstrated for the $(1,0,n)$ and $(1,1,n)$ original superstrata families. The $(F,\omega)$ for the $(1,0,n)$ family were already known in closed form \cite{Bena:2016ypk}, while it is possible to compute the closed form for the $(1,1,n)$ family:
\begin{align}
F_{1,0,n} &= \frac{b^{2}}{a^{2}}(\Gamma^{n}-1)~, \qquad ~~ \omega_{1,0,n} = \omega_{0}+ \frac{b^{4}}{b_{4}^{2}} \frac{R}{2\Sigma} \left((\Gamma^{n}-1)\sin^{2}\theta \right)(d\phi - d\psi)~, \label{10n Sol}\\
F_{1,1,n}&= \frac{b^{2}}{a^{2}} (\Gamma^{n+1}-1) ~, \qquad \omega_{1,1,n} = \omega_{0} + \frac{b^{4}}{b_{4}^{2}}\frac{R}{2\Sigma} \left[ \Gamma^{n+1}\cos^{2}\theta \,(d\phi+d\psi)-\sin^{2}\theta \, (d\phi-d\psi) \right]~. \label{11n Sol}
\end{align}
It is interesting that the $F$ and $\omega$ prior to spectral flow are not related in an obvious way.
\section{Separability of wave equations in 5 and 6 dimensions}
\label{Sect:Sect5 Seperability}
This section studies the massless wave equations for various superstrata in depth. The results of a search for superstrata families with separable massless wave equations (SMWEs) in either 5 or 6 dimensions are summarized.
\subsection{General structure of wave equation for axially symmetric BPS solutions} \label{SubSec: wave equation structure}
For superstrata defined on a flat $\mathbb{R}^{4}$ or two centered GH base, the data appearing in (\ref{ds6}) and (\ref{ds5}) are axially symmetric and so have the functional dependence
\begin{align}
V=V(r,\theta)~, \qquad P=P(r,\theta)~, \qquad F=-Z_{3}=F(r,\theta)~, \qquad A=A_{\phi}(r,\theta)\, d\phi~,\label{rest1}
\end{align}
\begin{align}
\omega= \omega_{\phi}(r,\theta) \, d\phi + \omega_{\psi}(r,\theta)~d\psi \qquad \text{and} \qquad \beta =\beta_{\phi}(r,\theta)\, d\phi + \beta_{\psi}(r,\theta)\, d\psi~. \label{rest2}
\end{align}
The massive wave equation is given by
\begin{align}
\frac{1}{\sqrt{-g}} \partial_{\mu} \left( \sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi \right) = M^{2} \Phi~, \label{MassiveWE}
\end{align}
where $g$ denotes the determinant of $g_{\mu\nu}$, which is either (\ref{ds6}) or (\ref{ds5}) depending on whether we work in 6 or 5 dimensions. In 6 dimensions we utilize the periodicity of the $(v,\phi,\psi)$ coordinates and the independence of $u$, so we look for solutions of the separable form
\begin{align}
\Phi = K(r) S(\theta) e^{i\left(\frac{w}{R} u +\frac{p}{R}v+q_{1}\phi +q_{2}\psi \right)}~, \label{PhiAnsatz}
\end{align}
where $(w,p,q_{1},q_{2})$ are constants; the 5-dimensional form is obtained by setting $p=0$. The massive wave equation (\ref{MassiveWE}) can then be written as
\begin{align}
\frac{1}{rK}\partial_{r}\left(r(a^{2}+r^{2})\partial_{r}K \right) + \frac{1}{S\sin 2\theta} \partial_{\theta} \left( \sin 2\theta \, \partial_{\theta} S \right) + G^{(i)}_{1}(r,\theta) = M^{2} G^{(i)}_{2}(r,\theta)~, \label{WEgen}
\end{align}
where $i\in \{5,6\}$ indexes the 5 or 6-dimensional version. Direct computation gives
\begin{align}
G_{2}^{(6)}=\frac{\Sigma \Lambda}{4} V \sqrt{P} \qquad \text{and} \qquad G_{2}^{(5)} = \frac{\Sigma \Lambda}{4} V \left(-F P \right)^{1/3}~.
\end{align}
Looking at the form of $\Sigma\Lambda$ in (\ref{SigmaLambda}), it is obvious that these terms destroy separability. However, when $M=0$ separability will depend solely on the form of $G_{1}^{(i)}(r,\theta)$. In 6 dimensions
\begin{align}
G_{1}^{(6)} &= -\frac{\Lambda \Sigma}{R^{2}} \left\lbrace \frac{4\Gamma}{r^{4}}\left[q_{1}R-p \beta_{\phi}-w \omega_{\phi}+A_{\phi}(-q_{2}R +p \beta_{\psi}+w \omega_{\psi}) \right]^{2} \right. \\
& \qquad\qquad\qquad\qquad \left. + \frac{V}{4} \left[w(-2p+w F)P+V(-q_{2}R+p \beta_{\psi}+w \omega_{\psi})^{2} \right] \right\rbrace ~.\label{6DG}
\end{align}
It is convenient to expand $G_{1}^{(6)}(r,\theta)$ in the form
\begin{align}
G_{1}^{(6)}(r,\theta) =\frac{1}{2} \sum_{x_{1},x_{2}\in \mathcal{S}}x_{1}x_{2} G_{x_{1}x_{2}}(r,\theta) ~,\label{Gscheme}
\end{align}
where $\mathcal{S}=\left\lbrace w,p,q_{1},q_{2} \right\rbrace$ and the $G_{x_{1}x_{2}}$ are functions of $(r,\theta)$. For future reference the forms of the $G_{x_{1}x_{2}}(r,\theta)$ are summarized in a table in appendix \ref{App: 1}, where we have introduced the function
\begin{align}
\Gamma = \frac{r^{2}}{a^{2}+r^{2}}~.
\end{align}
The convenience of this decomposition is that simply dropping the terms proportional to $p$ and setting $F=-Z_{3}$ gives the 5-dimensional result:
\begin{align}
G_{1}^{(5)}(r,\theta)=\left. G_{1}^{(6)}(r,\theta) \right|_{p= 0,F= -Z_{3}}~. \label{G6Dto5D}
\end{align}
This makes the tables appearing in the appendices useful: the full tables give the 6-dimensional result, while omitting the last four rows gives the 5-dimensional result.
The massless wave equations will be separable if every $G_{x_{1}x_{2}}(r,\theta)$ term splits into a sum of a function of $r$ and a function of $\theta$ alone. Given (\ref{G6Dto5D}) there is also the possibility for the 6-dimensional wave equation to be non-separable whilst the 5-dimensional one is, if non-separable terms only appear in terms with a factor of $p$. Section \ref{SubSec: 21n sep} shows that this occurs for the $(2,1,n)$ original family.
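To make the criterion concrete: if $G_{1}^{(i)}(r,\theta)=g_{r}(r)+g_{\theta}(\theta)$, then the $M=0$ version of (\ref{WEgen}) splits into two ODEs linked only by a separation constant $\lambda$,
\begin{align}
\frac{1}{rK}\partial_{r}\left(r(a^{2}+r^{2})\partial_{r}K \right) + g_{r}(r) = \lambda = -\frac{1}{S\sin 2\theta} \partial_{\theta} \left( \sin 2\theta \, \partial_{\theta} S \right) - g_{\theta}(\theta)~,
\end{align}
which can be solved independently for $K(r)$ and $S(\theta)$.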
\subsection{Separability of $(1,0,n)$ and $(1,1,n)$ original superstrata} \label{SubSec: 10n 11n wave}
In \cite{Bena:2017upb} it was shown that the $(1,0,n)$ family has SMWEs in 6 dimensions. Since spectral transformations leave $(r,\theta)$ inert, performing such a transformation should not affect the separability of (\ref{Gscheme}). We performed the spectral transformation procedure of section \ref{Sect:Sec3 relating 5D 6D superstrata} on both the $(1,0,n)$ and $(1,1,n)$ families, thus removing any $v$-dependence in the solutions. The resulting $G_{x_{1}x_{2}}$ for the wave equations of these families are summarized in appendix \ref{App: 2}, table \ref{10n table}. The 5-dimensional result is again given by applying (\ref{G6Dto5D}) and dropping the last 4 rows of the table.
Table \ref{10n table} possesses some interesting features:
\begin{itemize}
\item In addition to the $(1,0,n)$ family having SMWEs in 6 dimensions, the $(1,1,n)$ family has SMWEs as well.
\item Both families have SMWEs in 5 dimensions.
\item The remaining 6-dimensional spectral transformation parameters $(\mathbf{a},\mathbf{e})$ alter the form of the wave equations substantially, whilst maintaining separability. It is possible to set either
\begin{align}
\mathbf{a}(1+2n)-\mathbf{e}=0 \qquad \text{or} \qquad \mathbf{a}-\mathbf{e}=0~,
\end{align}
and simplify either the $r$ or $\theta$ dependent parts of the wave equation.
\item Redefining $\tilde{\theta}=\frac{\pi}{2}-\theta$ for one of the families, we see that the 5-dimensional wave equations become identical, as was required by the identification of these solutions in section \ref{SubSec: 11n}.
\item The spectral transformation parameter $\mathbf{a}$ does not appear in any of the 5-dimensional terms since it is a gauge transformation (see section \ref{SubSec: 6D5D relationship}), but it does appear in the 6-dimensional terms.
\end{itemize}
\subsection{Separability of $(2,1,n)$ superstrata} \label{SubSec: 21n sep}
In \cite{Bena:2017upb} it was shown how the 6-dimensional $(2,1,n)$ original superstrata family has SMWEs so long as the momentum on the GH fiber direction $\psi$ vanishes\footnote{In \cite{Bena:2017upb} it was not acknowledged explicitly that this was the GH fiber direction; the choice of coordinates there obscured this fact.}. Using the spectral transformations of section \ref{SubSec: 6D5D relationship} this becomes a constraint requiring the momentum on the $v$-circle to vanish; the 5-dimensional reduction should thus have SMWEs. In contrast, the supercharged $(2,1,n)$ family already has SMWEs in 6 dimensions \cite{Heidmann:2019zws}.
To aid in the presentation of the wave equations for the $(2,1,n)$ family we schematically break up the $G_{x_{1}x_{2}}(r,\theta)$ of (\ref{WEgen}) into pieces distinguished by their dependence on $b$ or $c$:
\begin{align}
G_{x_{1}x_{2}}= G^{(0)}_{x_{1}x_{2}}+G^{(b)}_{x_{1}x_{2}}+G^{(c)}_{x_{1}x_{2}}+G^{(bc)}_{x_{1}x_{2}}~,
\end{align}
where we define
\begin{align}
G^{(0)}_{x_{1}x_{2}}&=\left.G_{x_{1}x_{2}}\right|_{b=0,c=0}~,\qquad\qquad G^{(b)}_{x_{1}x_{2}}=\left.G_{x_{1}x_{2}}\right|_{c=0}-G^{(0)}_{x_{1}x_{2}}~,\\
\qquad G^{(c)}_{x_{1}x_{2}}&=\left.G_{x_{1}x_{2}}\right|_{b=0}-G^{(0)}_{x_{1}x_{2}} ~,\qquad\, G^{(bc)}_{x_{1}x_{2}}=G_{x_{1}x_{2}}-\left(G^{(0)}_{x_{1}x_{2}}+G^{(b)}_{x_{1}x_{2}}+G^{(c)}_{x_{1}x_{2}} \right)~.
\end{align}
Thus the original superstrata result is given by the $(G^{(0)}_{x_{1}x_{2}},G^{(b)}_{x_{1}x_{2}})$ terms, the supercharged result by the $(G^{(0)}_{x_{1}x_{2}},G^{(c)}_{x_{1}x_{2}})$ terms, and the hybrid result by the full $G_{x_{1}x_{2}}$.
The results of the wave equation analysis are presented in appendix \ref{App 3}, table \ref{21n table}, which has several interesting features:
\begin{itemize}
\item As highlighted in \cite{Heidmann:2019zws}, the supercharged $(2,1,n)$ family has SMWEs in 6 dimensions.
\item The original $(2,1,n)$ family fails to have SMWEs in 6 dimensions due to the term
\begin{align}
G_{pw}^{(b)}(r,\theta)=\frac{b^2 \Gamma (2 \mathbf{a} (n+1)-\mathbf{e}) \left(a^2 \Gamma ^{n+1}+2 r^2\right)}{2 (n+1) r^4}+\frac{a^2 b^2 \mathbf{e} \Gamma ^{n+2} \cos 2 \theta }{r^4}~.
\end{align}
\item Both the original and supercharged flavors have SMWEs in 5 dimensions.
\item Unlike the $(1,0,n)$ and $(1,1,n)$ families, there is now only one obvious choice for fixing $(\mathbf{a},\mathbf{e})$ to simplify the 6-dimensional wave equations:
\begin{align}
2\mathbf{a}(n+1)-\mathbf{e}=0~.
\end{align}
\end{itemize}
There are two non-vanishing $G^{(bc)}_{x_{1}x_{2}}$ terms for the hybrid $(2,1,n)$ family. The first is
\begin{align}
G^{(bc)}_{pw}(r)=\frac{ \Gamma ^2 \mathbf{e} \left(\Gamma ^n \left(a^4 (n+1) (n+2)+2 a^2 (n+2) r^2+2 r^4\right)-2 \left(a^2+r^2\right)^2\right)}{a^2 \sqrt{n} (n+1) \sqrt{n+2} r^4}~,
\end{align}
which is removed in the 5-dimensional reduction. The second is
\begin{align}
G^{(bc)}_{wq_{1}}(r,\theta) = \left(\frac{2 b c \left(\Gamma ^{n+2} \left(a^4 (n+1) (n+2)+2 a^2 (n+2) r^2+2 r^4\right)-2 r^4\right)}{a^2 \sqrt{n (n+2)} r^4}\right) \cos 2\theta~,
\end{align}
which is non-separable, but is removed upon setting $q_{1}=0$. Thus the hybrid solutions do not have SMWEs in 6 dimensions, due to the $G^{(b)}_{pw}$ and $G^{(bc)}_{wq_{1}}$ terms, but they will have SMWEs in 5 dimensions if one sets $q_{1}=0$. This also implies that the null geodesics with no motion in the $\phi$-direction can be solved for analytically for the hybrid $(2,1,n)$ family in 5 dimensions.
\section{Prepotentials} \label{Sect 3.5: prepotentials}
The $(\Theta^{(I)},\Theta^{(3)})$ fluxes in 5 dimensions can be derived from prepotential functions $(\Phi^{(I)},\Phi^{(3)})$ on the GH base \cite{Tyukov:2018ypq}. Once the possible prepotentials are characterized and it is understood which moduli of the hyper-K\"{a}hler base the corresponding fluxes control, the $(Z_{I},Z_{3})$ are simply derived from the prepotentials without the need to solve any differential equations. In this section we summarize the prepotential construction, uplift the construction to 6 dimensions and compute the prepotentials for the superstrata fluxes (\ref{thetatilde}) and (\ref{thetahat}) with arbitrary $(k,m,n)$. Previously, the only known examples of prepotentials were for the original superstrata fluxes (\ref{thetatilde}) with $k=2m$ in 5 dimensions.
\subsection{Prepotentials in 5-dimensions} \label{SubSec: 5D prepotentials}
In \cite{Tyukov:2018ypq} it was shown that in 5-dimensions the $\Theta^{(I)}$ can be derived from harmonic functions on the GH base known as prepotentials $\Phi^{(I)}$. The construction is
\begin{align}
\Theta^{(I)} = d \left(\tensor{J}{_{\mu}^{\nu}}\partial_{\nu} \Phi^{(I)} \, dx^{\mu} \right)~,
\end{align}
where $J$ is the complex structure. For axisymmetric multi-centered GH bases the canonical complex structure is given by
\begin{align}
J=(d\psi +A) \wedge dy^{3} -V\, dy^{1}\wedge dy^{2}~,
\end{align}
where the GH charges are coincident on the $y^{3}$ axis and $x^{\mu}\in (\psi,r,\theta,\phi)$.
The $(Z_{I},Z_{3})$ that solve the BPS equations (\ref{5D BPS Z1eq})-(\ref{5D BPS Z4eq}) can in principle be found without solving any differential equations. Given any harmonic $(1,1)$ form $\Theta$, a perturbation of a Ricci-flat K\"{a}hler manifold with metric $g_{\mu\nu}$ such that it stays Ricci-flat and K\"{a}hler is given by
\begin{align}
\delta g_{\mu\nu} = \frac{1}{2}\left(\tensor{J}{_\mu^\rho}\Theta_{\rho\nu} +\tensor{J}{_\nu^\rho}\Theta_{\rho\mu} \right)~.
\end{align}
If there is a family of Ricci-flat K\"{a}hler manifolds with parameter $a$, such as the two centered GH bases with (\ref{GHchargesFlowed}), then one might expect $\partial_{a}g_{\mu\nu}=\delta g_{\mu\nu}$. This is true modulo an infinitesimal coordinate change $x^{\mu}\to x^{\mu}+Y^{\mu}_{(a)}$. This vector field $Y^{\mu}_{(a)}$ can be fixed by introducing the covariant derivative
\begin{align}
\mathcal{D}_{a} \equiv \partial_{a} + \mathcal{L}_{Y_{(a)}}
\end{align}
and demanding
\begin{align}
\mathcal{D}_{a}J=\Theta~, \qquad \mathcal{D}_{a}g_{\mu\nu} =\frac{1}{2}\left(\tensor{J}{_\mu^\rho}\Theta_{\rho\nu} +\tensor{J}{_\nu^\rho}\Theta_{\rho\mu} \right)~,
\end{align}
where $ \mathcal{L}_{Y_{(a)}}$ is the Lie derivative with respect to the vector field $Y^{\mu}_{(a)}$.
If there is another harmonic form $\widehat{\Theta}$ with prepotential $\widehat{\Phi}$, i.e. $\widehat{\Theta} = d \left(\tensor{J}{_{\mu}^{\nu}}\partial_{\nu} \widehat{\Phi} \, dx^{\mu} \right)$, then
\begin{align}
\nabla^{2} \left( \mathcal{D}_{a}\widehat{\Phi}\right) = \star \left( \Theta \wedge \widehat{\Theta}\right)~. \label{Zprepotentials}
\end{align}
In principle this gives the $Z_{I}$ that solve (\ref{5D BPS Z1eq})-(\ref{5D BPS Z4eq}) directly from the $\Phi^{(I)}$. However, in practice it is only known how to construct $\mathcal{D}_{a}$ for $\Theta$ of the form $\Theta=d\beta$ as appears in (\ref{beta and V}). It is necessary to understand what modulus of the GH base the $\Theta$ controls; for $d\beta$ it is known to be the spacing between the GH charges parametrized by $a$. The moduli controlled by the superstrata fluxes (\ref{Theta1})-(\ref{Theta4}) after 5-dimensional reduction are unknown.
\subsection{Prepotentials in 6-dimensions} \label{SubSec: 6D prepotentials}
To motivate/derive the results of \cite{Tyukov:2018ypq} it was important that the $\Theta^{(I)}$ were supported solely by the homology of a hyper-K\"{a}hler base. For generic 6-dimensional superstrata this is certainly not the case, since the canonical GH base used is just flat $\mathbb{R}^{4}$; indeed, the non-trivial homology is due to the pinching off of the $v$-circle with this base. However, the spectral transformations of section \ref{Sect:Sec3 relating 5D 6D superstrata} can be used to remove the $v$-dependence of the $\Theta^{(I)}$, ensuring they are supported solely on the homology of the GH base again, even in 6 dimensions. Thus in order to derive the prepotentials for (\ref{thetatilde}) and (\ref{thetahat}) the appropriate spectral flow must be made. Using hats to represent quantities after spectral flow we define
\begin{align}
\widehat{J}_{k,m,n}=(d\hat{\psi} +\hat{A}) \wedge dy^{3} -\widehat{V}\, dy^{1}\wedge dy^{2}~, \qquad \widehat{d}_{k,m,n} \Phi = (\partial_{\hat{\psi}}\Phi) \, d\hat{\psi} + d_{3} \Phi~.
\end{align}
With $(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d})$ satisfying (\ref{SpecificSF}) and (\ref{edef}) direct computation gives
\begin{align}
\widehat{J}_{k,m,n} = \frac{r \cos 2\theta}{2} d\hat{\psi} \wedge dr + \frac{(a^{2}+2r^{2})\sin 2\theta}{4} d\theta \wedge d\hat{\psi} + \frac{(k-2m)r}{2\mathbf{e}}d\phi \wedge d r +\frac{a^{2}(k+2n)\sin 2\theta}{4 \mathbf{e}} d\phi \wedge d\theta ~.
\end{align}
Noting the form of equations (\ref{Speclambdas}), the $\widehat{\Theta}^{(I)}$ will be of the form
\begin{align}
\widehat{\Theta}^{(1)} &= Q_{5} \kappa ~, \\
\widehat{\Theta}^{(2)} &=Q_{1} \kappa +\frac{R}{\sqrt{2} Q_{5}}\left( b_{1} \widehat{\vartheta}_{2k,2m,2n} + c_{2} \widehat{\varphi}_{2k,2m,2n} \right)~,\\
\widehat{\Theta}^{(4)} &= b_{4} \widehat{\vartheta}_{k,m,n} + c_{4} \widehat{\varphi}_{k,m,n}~,
\end{align}
where $(\widehat{\vartheta}_{k,m,n},\widehat{\varphi}_{k,m,n})$ are the spectral flowed versions of $(\vartheta_{k,m,n},\varphi_{k,m,n})$ and computation gives
\begin{align*}
\kappa &= \frac{8 a^2 (m+n) (k-m+n) }{R \left(\left(a^2+2 r^2\right) (k-2 m)+a^2 \cos (2 \theta ) (k+2 n)\right)^2}\widehat{\mathcal{J}}_{k,m,n}~, \\
\widehat{\mathcal{J}}_{k,m,n}&= \frac{r \cos 2\theta}{2} d\hat{\psi} \wedge dr - \frac{(a^{2}+2r^{2})\sin 2\theta}{4} d\theta \wedge d\hat{\psi} + \frac{(k-2m)r}{2\mathbf{e}}d\phi \wedge d r -\frac{a^{2}(k+2n)\sin 2\theta}{4 \mathbf{e}} d\phi \wedge d\theta ~.
\end{align*}
Written in terms of a self-dual two-form basis, the expressions for the $(\widehat{\vartheta}_{k,m,n},\widehat{\varphi}_{k,m,n})$ are more complicated than those of (\ref{thetatilde}) and (\ref{thetahat}) prior to the flow. However, using the prepotential prescriptions
\begin{align}
\widehat{\vartheta}_{k,m,n} = \widehat{d}_{k,m,n} \left( (\widehat{J}_{k,m,n})_{\mu}^{~~\nu}\partial_{\nu} \widehat{\Phi}^{(\vartheta)} dx^{\mu}\right) \qquad \text{and} \qquad \widehat{\varphi}_{k,m,n} = \widehat{d}_{k,m,n} \left( (\widehat{J}_{k,m,n})_{\mu}^{~~\nu}\partial_{\nu} \widehat{\Phi}^{(\varphi)} dx^{\mu}\right)~,
\end{align}
they can be summarized as
\begin{align}
\widehat{\Phi}^{(\vartheta)} &=C_{1} \frac{\cos \hat{v}_{k,m,n}}{\Delta_{k,m,n}} - \frac{\Delta_{k,m,n}\cos \hat{v}_{k,m,n} }{\sqrt{2}a^{2}k\mathbf{e}} \left[ \left(a^2+r^2\right) (k-2 m) \, _2F_1\left(1,1-k;n+1;-\frac{r^2}{a^2}\right) \right. \notag \\
& \qquad\qquad \qquad \qquad \qquad \qquad + a^{2}\left(m+n-(k+2n) \, _2F_1\left(1,k+1;m+1;\cos ^2 \theta \right) \, \sin^{2}\theta\right) \Big]~, \\
\widehat{\Phi}^{(\varphi)} &= C_{2} \frac{\cos \hat{v}_{k,m,n}}{\Delta_{k,m,n}} \notag\\
& \qquad -\frac{\Delta_{k,m,n}\cos \hat{v}_{k,m,n}}{\sqrt{2}a^{2}(k-m)(k+n)mn \mathbf{e}} \left[ m(k-m)(k+2n)(a^{2}+r^{2}) \, _2F_1\left(1,1-k;n+1;-\frac{r^2}{a^2}\right) \right. \notag \\
& \qquad \qquad + a^{2}n \left( m(m+n)+(k-2m)(k+n)\, _2F_1\left(1,k+1;m+1;\cos ^2\theta \right) \sin^{2}\theta \right) \Big]~,
\end{align}
where $C_{1}$ and $C_{2}$ are constants. These prepotentials can be used in the canonical flat $\mathbb{R}^{4}$ base by performing the required inverse spectral transformation. This will also work for multi-mode solutions, but different flows will be needed for individual modes.
The $\widehat{\Phi}^{(\vartheta)}$ prepotentials are for the original superstrata fluxes, while the $\widehat{\Phi}^{(\varphi)}$ correspond to the supercharged potentials. Previously only the $k=2m$ original superstrata prepotentials were known. By extending to all $(k,m,n)$, as well as the supercharged case, we see that the structure is far richer. For instance, it was previously unknown that harmonic prepotentials on two centered GH bases could be constructed using $ _2F_1\left(1,1-k;n+1;-\frac{r^2}{a^2}\right)$. Such terms appear in the supercharged flavor exactly when $k=2m$, as well as in the original flavor when $k\neq 2m$.
It is hoped that a mathematical framework can be developed, based on functional analysis on ambipolar GH bases, that might allow one to construct all prepotentials relevant for superstrata. Additionally, it would be extremely useful to determine the moduli the corresponding fluxes control and integrate them to produce new hyper-K\"{a}hler bases. It is possible the prepotentials displayed above could aid in this program.
\section{Discussion, conclusion and outlook}
\label{Sect:Sect6 Discussion}
It was shown how to transform any single-mode superstrata in 6 dimensions to become independent of the $v$ coordinate using a spectral transformation. This alters the base from flat $\mathbb{R}^{4}$ to an ambipolar 2 centered GH base, trading the three mode numbers $(k,m,n)$ for the new mode numbers $(\mathbf{e},k)$ and GH charges $(q_{-},q_{+})$. Once this flow has been made it is straightforward to reduce to a 5-dimensional solution; in fact the $(\mathbf{e},k,q_{-},q_{+})$ are sufficient to parametrize the most general two centered 5-dimensional superstrata. These 5-dimensional solutions include both asymptotically $\text{AdS}_{2}\times \mathbb{S}^{3}$ and $\text{AdS}_{3}\times \mathbb{S}^{2}$ examples, corresponding to microstate geometries of black holes and black strings respectively. Examples of the black string microstate geometries had been considered in \cite{Bena:2017geu}, while the black hole microstate geometries are new and correspond to having non-zero net GH charge.
The dimensional reduction will fail if one cannot find a spectral transformation that removes the dependence of the data $(Z_{I},Z_{3},\Theta^{(I)},\Theta^{(3)},F,\omega)$ on $v$. This occurs for multi-mode superstrata, unless the distinct modes are arranged to be parallel. Additionally, it was shown that the $(k,m,n)$ and $(k,k-m,n)$ superstrata both reduce to the same 5-dimensional solutions. There should be more states in the 6-dimensional setting, as the added dimension allows for a larger event horizon of the black hole being approximated. Thus it is interesting that there are both 6-dimensional solutions that do not reduce to 5 dimensions and distinct 6-dimensional solutions that reduce to the same 5-dimensional solution.
A search for superstrata with SMWEs was conducted. Previously it was known that the original $(1,0,n)$ and supercharged $(2,1,n)$ superstrata in 6 dimensions had SMWEs. These families were spectrally transformed into the form appropriate for dimensional reduction; the remaining spectral transformation parameters $(\mathbf{a},\mathbf{e})$ then indexed families of distinct 6-dimensional 2 centered superstrata with different SMWEs. The 5-dimensional reductions of these solutions necessarily also have SMWEs. The original $(2,1,n)$ superstrata solutions were known to be non-separable in 6 dimensions; we showed that in 5 dimensions the obstruction is removed and SMWEs are produced. If the momentum around the $\phi$-circle vanishes, the hybrid $(2,1,n)$ family also has SMWEs in 5 dimensions. We also showed that the $(1,1,n)$ original superstrata have SMWEs in 6 dimensions.
Separability of a massless wave equation implies the existence of a conformal Killing tensor. Since the 5-dimensional geometries are independent of $(t,\phi,\psi)$, there are enough conserved quantities in our examples to solve for the null geodesics analytically. It might be possible to learn more about the fate of infalling objects using these geodesics, since objects released sufficiently far away from the bottom of the throat will be approximately following a null geodesic by the time they reach the bottom. It may also be possible to construct Green's functions for these massless wave equations and study wave scattering in these geometries; investigations of this type have already been conducted for the $(1,0,n)$ family in 6 dimensions \cite{Heidmann:2019zws,Bena:2018bbd,Bena:2019azk}.
The ability to transform solutions so that the fluxes $\Theta^{(I)}$ are independent of $v$ is useful in its own right. Microstate geometries in general use a phenomenon known as dissolving charges in fluxes to avoid having singular sources. To exploit this phenomenon the fluxes need to thread non-trivial cycles in the geometry. In 5 dimensions the only non-trivial topology is that of the GH base, and so one can bring the full arsenal of tools developed for hyper-K\"{a}hler manifolds to the study of the $\Theta^{(I)}$, and by association through the BPS equations the rest of the data $(Z_{I},Z_{3},\Theta^{(3)},F,\omega)$. Using such transformations we showed how prepotentials can be constructed for the fluxes in 6 dimensions and explicitly constructed them for all $(k,m,n)$.
The work of \cite{Tyukov:2018ypq} raises the question of whether it is possible to uncover a mathematical framework on hyper-K\"{a}hler manifolds that gives insight into BPS solutions. There it was shown how the $(\Theta^{(I)},\Theta^{(3)})$ fluxes are derived from prepotential functions and control moduli of the base, which allows one to construct the $(Z_{I},Z_{3})$ analytically without solving any differential equations. The open questions are: can one determine the moduli the $\Theta^{(I)}$ control? What are the new hyper-K\"{a}hler bases these moduli parametrize? And can another principle be found such that $\omega$ is obtained without solving the final BPS equation? By demonstrating that the same tools can be used in 6 dimensions we have provided another setting in which these questions might be answered. Additionally, the prepotentials we constructed for general $(k,m,n)$ are richer in form than those known previously \cite{Tyukov:2018ypq}; perhaps they may shed light on some of these questions.
It is hoped that the results presented here inform and motivate future study of the superstrata solutions; their rich structure promises to further the microstate geometry program and our understanding of black hole physics.
\section*{Acknowledgements}
I would like to thank Nicholas Warner, Pierre Heidmann and Felipe Rosso for valuable discussions and feedback on drafts of this work. The IPhT, CEA-Saclay provided accommodation and a stimulating environment during the genesis and completion of this project. This work was funded in part by the US Department of Energy under the grant DE-SC0011687 and by the ERC Grant 787320 - QBH Structure.
\section{Introduction}
Liquid atomization is an important physical process in a wide variety of applications ranging from manufacturing (including 3D printing) to drug delivery and fuel sprays.
The process of liquid breakup has a strong dependence on the Weber number which relates the inertial force to the surface tension.
As many atomization applications occur in low Mach number flow regimes, significant numerical modeling effort has focused on incompressible schemes~\cite{Gorokhovski2008}. \reva{State of the art secondary atomization modeling in the compressible flow regime has largely focused on the early stages of the breakup process and/or higher Weber numbers where the effects of surface tension are assumed to be negligible and are not considered \cite{Meng2018,Liu2018,Xiang2017}.}
Meanwhile, technical challenges involving supersonic combustion ramjets (scramjets) have identified a need for greater understanding of the penetration, mixing, and atomization of liquid jets injected into high-speed compressible crossflows~\cite{Lee2015}.
Liquid jet atomization consists of primary and secondary breakup.
The former consists of the bulk liquid transforming into smaller jets, sheets, and droplets.
Secondary breakup consists of liquid droplets or ligaments undergoing further deformation and breakup and has generally been classified into vibrational, bag, multi-mode (or bag-and-stamen), sheet-thinning, and catastrophic regimes according to the Weber number~\cite{Guildenbecher2009,Pilch1987,Hsiang1992,Faeth1995}. \reva{However, Theofanous et al.~\cite{Theofanous2004} examined droplet breakup in highly rarefied supersonic flow conditions and instead proposed classification of the breakup into two primary criticalities, Rayleigh-Taylor piercing (RTP) and shear-induced entrainment (SIE). The defining feature of RTP is the penetration of the droplet by the gas while SIE is demarcated by a breakup process involving a peeling of the outer surface of the droplet~\cite{Theofanous2012}. As noted by Guildenbecher et al.~\cite{Guildenbecher2009}, this departure from the traditional breakup morphology suggests more investigation of the topic is needed. Moreover, several researchers have pointed out a dependence of the breakup behavior on the density ratio~\cite{Jalaal2014,Han2001} which is important in the context of high speed flows with varying post-shock gas densities and significant compressibility effects.}
Simulating the entire atomization process requires extremely high resolution due to the multiscale nature of the features involved.
This is especially problematic at high Reynolds and Weber numbers, where resolving the boundary layer on the droplet surface becomes difficult and large numbers of small droplets can be generated.
Subgrid droplet models can relax the computational complexity and have been used to simulate liquid jet injection in supersonic crossflows~\cite{Kim2012,Liu2016}.
However they generally utilize steady-state empirical relations for the drag coefficient of solid spherical particles as a function of the particle Reynolds number to calculate drop trajectories~\cite{crowe2011multiphase}.
To better understand the behavior of deforming droplets in crossflows and the secondary atomization process in general, various experimental and numerical studies have been performed and were recently reviewed by Guildenbecher et al~\cite{Guildenbecher2009}.
With respect to the drag coefficient, Kim et al.~\cite{Kim1998} found that the effects of the initial relative velocity and large relative acceleration or deceleration are significant when predicting rectilinear motion of spherical particles in crossflows.
Experiments by Temkin and Mehta~\cite{temkin1982} showed that the unsteady drag is always larger in decelerating or smaller in accelerating flows than the steady state value.
Wadhwa et al.~\cite{Wadhwa2007} coupled a compressible gas phase solver with an incompressible liquid phase solver and found for axisymmetric conditions the droplet Weber number affects the drag coefficient of a drop traveling at high speeds and placed in quiescent air.
Finally, the unsteady nature of the flow as well as the scales (both temporal and spatial) involved in droplet breakup means experimentally measuring the local drop and ambient flow fields during secondary atomization is incredibly challenging~\cite{Guildenbecher2009}.
Therefore, numerical simulations are a valuable tool for providing important physical insight in such conditions.
While some experimental~\cite{Theofanous2004} and numerical~\cite{Chang2013} investigations exist on the interface dynamics and breakup behavior of liquid droplets at a handful of supersonic flow conditions and Weber numbers, the secondary atomization process across a diverse range of physical conditions has not yet been investigated thoroughly.
Experimental investigation of liquid columns (as opposed to spherical droplets) allows for easier visualization of the wave structures~\cite{Igra2001,Sembian2016}, although difficulties remain in visualizing the later stages of the breakup process.
The deformation behavior of two-dimensional liquid columns has also been found to follow similar trends to that of three-dimensional spherical droplets~\cite{Igra2002,Igra2010}.
Numerous researchers have simulated the two-dimensional shock-column interaction, commonly as a test case for compressible multicomponent flow solvers~\cite{Igra2010,Meng2014,Shukla2010,Shukla2014,Terashima2009,Terashima2010,Chen2008,Nonomura2014}.
Notable examples include the work of Terashima and Tryggvason~\cite{Terashima2009} who simulated the entire evolution of the column breakup, while Meng and Colonius~\cite{Meng2014} and Chen~\cite{Chen2008} examined the sheet-thinning process and evaluated column trajectories and drag coefficients.
However, such studies focused on the early stages of breakup and neglected the effects of both surface tension and molecular viscosity.
As a result, questions remain as to the breakup process of a liquid column when accounting for molecular viscosity and surface tension effects and especially in the context of supersonic flows.
Fortunately, the cylindrical geometry of the water column can be efficiently modeled using a two-dimensional domain providing faster turnaround times compared to full three-dimensional simulations.
This allows a wider range of physical conditions to be efficiently examined; for similar reasons, axisymmetric domains and/or lower gas-liquid density ratios have been employed in incompressible studies~\cite{Strotos2016,Han1999,Han2001}.
Garrick et al.~\cite{Garrick2016} performed a preliminary study of secondary atomization without molecular viscosity effects and while using a non-conservative interface sharpening scheme.
Several simulations of water column-shock interactions were performed including an $M_s=1.47$ shock with comparisons to experiment and an $M_s=3$ shock with and without surface tension.
These simulations considered the early stages of breakup and successfully highlighted the effects of surface tension on the dynamics of the gas-liquid interface.
The dependence of the breakup behavior on the Weber number for $\mathrm{We}=5-100$ was also examined with an array of $M_s=1.39$ ($M=0.5$ crossflow) shock-column simulations.
The liquid-gas density ratio was set to $\rho_l/\rho_g=10$ to reduce computational effort.
Garrick et al.~\cite{Garrick2016a} extended the numerical method to account for molecular viscosity and non-uniform grids and replaced the non-conservative interface sharpening scheme with a conservative reconstruction based interface sharpening scheme.
That approach was then applied to simulate primary and secondary atomization in high speed crossflow.
The present work applies the same approach to a wider range of secondary atomization conditions for a two-dimensional liquid column with a high density ($\rho_l=1000 \mbox{ kg}/\mbox{m}^3$).
This should provide a first order estimate of the three-dimensional behavior but with the benefit of a significantly reduced computational cost.
\reva{To gain a better understanding of the secondary atomization process in high speed flows, the present work simulates shock-column interactions at various Weber and incident shock Mach numbers to examine the combined effects of surface tension and compressibility on the breakup process across a broad range of physical conditions.
This involves detailed two-dimensional simulations of column breakup in high speed compressible flows while accounting for capillary and viscous forces and utilizing an interface sharpening scheme to maintain the fluid immiscibility condition and prevent unphysical numerical smearing of the interface.
Particular focus is placed on the breakup process and the drag coefficient of the droplets over time.}
The two-dimensional nature of the study is motivated by the focus on a broad range of physical conditions which would be otherwise cost prohibitive to simulate in three dimensions.
This follows prior studies which utilized two-dimensional or axisymmetric domains (see~\cite{Meng2014,Han1999,Han2001,Chen2008,Chen2008a,Igra2001a}) and is also motivated by experimental observations of qualitatively similar breakup characteristics for two-dimensional liquid columns and three-dimensional spherical droplets~\cite{Igra2001,Igra2002}.
The paper is organized as follows.
Section~\ref{sec:modeling3} describes the mathematical model and non-dimensionalization.
Section~\ref{sec:num3} describes the numerical approach while the problem statement is reviewed in Section~\ref{sec:problem3}.
\reva{Section~\ref{sec:results3} presents a two-dimensional investigation of the breakup process and drag coefficient of a liquid column across a range of Weber and incident shock Mach numbers.}
\revc{This is followed with a three-dimensional droplet breakup simulation in Section~\ref{sec:3dsim} and conclusions in Section~\ref{sec:conclusions3}.}
\section{Mathematical model}\label{sec:modeling3}
The present work utilizes the approach of Garrick et al.~\cite{Garrick2016,Garrick2016a} for solving the flowfield.
A non-dimensional form of the quasi-conservative five equation model of Allaire~\cite{Allaire2002} is employed with capillary and molecular viscosity terms.
As such, the compressible multicomponent Navier-Stokes equations govern the flowfield~\cite{Perigaud2005}:
\begin{subequations}
\begin{align}
&\frac{\partial \rho_1 \phi_1}{\partial t} + \nabla \cdot (\rho_1 \phi_1 \mathbf{u}) = 0, \label{eqn:finala} \\
&\frac{\partial \rho_2 \phi_2}{\partial t} + \nabla \cdot (\rho_2 \phi_2 \mathbf{u}) = 0, \\
&\frac{\partial \rho \mathbf{u}}{\partial t} + \nabla \cdot (\rho \mathbf{u} \mathbf{u} + p \tilde{I}) = \frac{1}{\mathrm{Re_a}} \nabla \cdot \boldsymbol{\tau} + \frac{1}{\mathrm{We_a}} \kappa \nabla \phi_1, \\
&\frac{\partial E}{\partial t} + \nabla \cdot \left( ( E + p) \mathbf{u} \right) = \frac{1}{\mathrm{Re_a}} \nabla \cdot (\boldsymbol{\tau} \cdot \mathbf{u}) + \frac{1}{\mathrm{We_a}} \kappa \nabla \phi_1 \cdot \mathbf{u}, \\
&\frac{\partial \phi_1}{\partial t} + \mathbf{u} \cdot \nabla \phi_1 = 0, \label{eqn:finale}
\end{align}
\end{subequations}
where $\rho_1 \phi_1$, $\rho_2 \phi_2$, and $\rho$ are the liquid, gas, and total densities, $\mathbf{u}=(u,v)^T$ is the velocity, $\phi_1$ is the liquid volume fraction, $p$ is the pressure, $\mathrm{We_a}$ and $\mathrm{Re_a}$ are the acoustic Weber and Reynolds numbers, respectively, $\kappa$ is the interface curvature, and $E$ is the total energy
\begin{equation}
E = \rho e + \frac{1}{2}\rho \mathbf{u} \cdot \mathbf{u}
\end{equation}
where $e$ is the specific internal energy.
\revb{The model is non-dimensionalized using the rules in Table~\ref{tab:nondim2} where primes indicate dimensional quantities and the subscript `$0$' refers to a chosen reference state. The dimensional distance $l^\prime_0$ is chosen as the droplet diameter.
This results in the viscous and capillary forces being scaled by acoustic Reynolds and Weber numbers:
\begin{align}
\mathrm{Re_a} &=\frac{\rho^\prime_0 a^\prime_0 l^\prime_0}{\mu^\prime_0} \\
\mathrm{We_a} &=\frac{\rho^\prime_0 a^{\prime 2}_0 l^\prime_0}{\sigma^\prime_0}
\end{align}
where $\mu^\prime_0$ and $\sigma^\prime_0$ are the reference dimensional viscosity and surface tension coefficients, respectively.}
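For concreteness, these scalings are simple to evaluate. The following minimal sketch (in Python) illustrates the relationship between $\mathrm{We_a}$, $\mathrm{Re_a}$, and the column diameter; the property values for water and air are nominal assumptions used for illustration, not values taken from the simulations:
\begin{verbatim}
# Minimal sketch: acoustic Reynolds and Weber numbers for a water
# column in air. All property values are illustrative assumptions.
rho0   = 1.2     # reference gas density [kg/m^3]
a0     = 340.0   # reference sonic speed [m/s]
mu0    = 1.8e-5  # reference (gas) viscosity [Pa s]
sigma0 = 0.072   # water-air surface tension [N/m]

def Re_a(d):  # acoustic Reynolds number for diameter d [m]
    return rho0 * a0 * d / mu0

def We_a(d):  # acoustic Weber number for diameter d [m]
    return rho0 * a0**2 * d / sigma0

def diameter_from_We_a(We):  # diameter implied by a chosen We_a
    return We * sigma0 / (rho0 * a0**2)

for We in (1, 10, 100, 1000):
    d = diameter_from_We_a(We)
    print(f"We_a={We:5d}: d={d*1e6:8.2f} um, Re_a={Re_a(d):9.1f}")
\end{verbatim}
For fixed fluid properties both numbers scale linearly with the diameter, which is why, in the dimensional sense, each acoustic Weber number corresponds to a different column size.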
The viscous stress tensor $\boldsymbol{\tau}$ is given with the non-dimensional mixture viscosity $\mu$:
\begin{equation}
\boldsymbol{\tau} = 2 \mu \left( \mathbf{D} - \frac{1}{3} (\nabla \cdot \mathbf{u}) \mathbf{I} \right)\label{eqn:tau}
\end{equation}
where $\mathbf{D}$ is the deformation rate tensor
\begin{equation}
\mathbf{D} = \frac{1}{2} \left( \nabla \mathbf{u} + ( \nabla \mathbf{u} )^T \right).
\end{equation}
The fluid components are considered immiscible and the liquid and gas volume fraction functions ($\phi_1$ and $\phi_2$ respectively) are used to capture the fluid interface.
Mass is discretely conserved for each phase via individual mass conservation equations.
Surface tension is implemented as a volume force as in the CSF model~\cite{Alamos1992} with terms in both the momentum and energy equations~\cite{Perigaud2005}.
While a conservative form of the \reva{surface tension term} exists~\cite{Gueyffier1999}, the present model utilizes the non-conservative form which enables flexible treatment of the curvature term $\kappa$ and its accuracy.
\begin{table}\centering
\caption{Non-dimensional rules used in the model.\label{tab:nondim2}}
\begin{tabular}{cc}
Parameter & Rule \\
\hline
Position & $x=x^\prime/l^\prime_0$ \\
Time & $t=t^\prime a^\prime_0/l^\prime_0$ \\
Velocity & $u=u^\prime/a^\prime_0$ \\
Density & $\rho=\rho^\prime/\rho'_0$ \\
Pressure & $p=p^\prime/\rho^\prime_0 a^{\prime 2}_0$ \\
Total Energy & $E=E^\prime/\rho^\prime_0 a^{\prime 2}_0$ \\
Curvature & $\kappa=\kappa^\prime l^\prime_0$ \\
Surface tension coefficient & $\sigma = \frac{1}{\mathrm{We_a}} = \frac{\sigma^\prime_0}{\rho^\prime_0 a_0^{\prime 2} l^\prime_0}$ \\
Viscosity & $\mu = \frac{1}{\mathrm{Re_a}} = \frac{\mu^\prime_0}{\rho^\prime_0 a_0^\prime l^\prime_0}$ \\
\hline
\end{tabular}
\end{table}
\subsection{Equation of state and mixture rules}
To close the model, the stiffened gas equation of state (EOS)~\cite{stiffenedgas} is employed to model both the gas and liquid phases. \revb{The stiffened gas equation of state utilizes fitting parameters $\gamma$ and $\pi_\infty$ to recreate the sonic speed in various materials based on experimental measurements. In the case of air, $\gamma=1.4$ is the specific heat ratio, $\pi_\infty=0$, and the stiffened gas equation of state simplifies to the ideal gas law. For a given simulation containing a liquid ($1$) and gas ($2$), the stiffened gas equation of state fitting parameters are computed at every point within the domain as a function of the volume fraction:
\begin{equation}
\Gamma = \frac{1}{\gamma-1} = \frac{\phi_2}{\gamma_2 - 1} + \frac{\phi_1}{\gamma_1-1}\label{eqn:gam3}
\end{equation}
and
\begin{equation}
\Pi = \frac{\gamma \pi_\infty}{\gamma-1} = \frac{\phi_2 \gamma_2 \pi_{\infty,2}}{\gamma_2 - 1} + \frac{\phi_1 \gamma_1 \pi_{\infty,1}}{\gamma_1-1},\label{eqn:pi3}
\end{equation}
where $\gamma_1$, $\gamma_2$, $\pi_{\infty,1}$, and $\pi_{\infty,2}$ are the specific stiffened gas EOS fitting parameters for the liquid ($1$) and gas ($2$). Using the mixture quantities $\Gamma$ and $\Pi$ the total energy becomes
\begin{equation}
E = \Gamma p + \Pi + \frac{1}{2}\rho \mathbf{u} \cdot \mathbf{u}.
\end{equation}
The speed of sound is given by
\begin{equation}
c = \sqrt{ \frac{ \gamma (p + \pi_\infty) }{\rho} }
\end{equation}
where the stiffened gas EOS fitting parameters $\gamma$ and $\pi_\infty$ are computed using the mixture quantities in Eqs.~\ref{eqn:gam3} and~\ref{eqn:pi3}.}
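These mixture rules translate directly into code. The following minimal sketch evaluates $\Gamma$, $\Pi$, the total energy, and the sonic speed for a given volume fraction; the water fitting parameters shown are commonly quoted literature values and should be treated as assumptions here:
\begin{verbatim}
# Minimal sketch of the stiffened gas EOS mixture rules and the
# resulting total energy and sonic speed. The water parameters are
# commonly used literature values and are assumptions here.
gamma1, pi1 = 6.12, 3.43e8  # liquid (water) fit [-, Pa], assumed
gamma2, pi2 = 1.4, 0.0      # gas (air): reduces to the ideal gas law

def mixture_eos(phi1, p, rho, q2):
    """phi1: liquid volume fraction; p: pressure; rho: density;
    q2: u.u (squared speed). Returns (E, c)."""
    phi2 = 1.0 - phi1
    Gamma = phi2/(gamma2 - 1.0) + phi1/(gamma1 - 1.0)
    Pi = (phi2*gamma2*pi2/(gamma2 - 1.0)
          + phi1*gamma1*pi1/(gamma1 - 1.0))
    gamma = 1.0 + 1.0/Gamma            # from Gamma = 1/(gamma - 1)
    pi_inf = Pi*(gamma - 1.0)/gamma    # from Pi = gamma*pi_inf/(gamma-1)
    E = Gamma*p + Pi + 0.5*rho*q2      # total energy
    c = (gamma*(p + pi_inf)/rho)**0.5  # sonic speed
    return E, c

print(mixture_eos(1.0, 101325.0, 1000.0, 0.0)[1])  # ~1450 m/s in water
\end{verbatim}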
Similar to Coralic and Colonius~\cite{Coralic2014}, the mixture viscosity is determined following Perigaud and Saurel~\cite{Perigaud2005} but written in non-dimensional form for use in Eq.~\ref{eqn:tau}:
\begin{align}
\mu &= \frac{\mu^\prime_1}{ \mu^\prime_0}\phi_1 + \frac{\mu^\prime_2}{ \mu^\prime_0} \phi_2 \notag\\
&= N\phi_1 + \phi_2
\end{align}
where the liquid ($1$) and gas ($2$) viscosities are assumed to remain constant with the gas viscosity used as the reference state $\mu^\prime_0$.
As a result, $\mu^\prime_2/ \mu^\prime_0=1$ and $N=\mu^\prime_1/ \mu^\prime_0$ becomes the liquid to gas viscosity ratio.
\section{Numerical method}\label{sec:num3}
The model (Eqs.~\ref{eqn:finala}-\ref{eqn:finale}) is discretized using a finite volume method on a non-uniform two-dimensional Cartesian grid.
The convective fluxes are upwinded using the Harten-Lax-van Leer-Contact (HLLC) approximate Riemann solver originally developed by Toro et al.~\cite{Toro1994,Toro2009} with modifications for surface tension by Garrick et al.~\cite{Garrick2016}.
Following the approach of Johnsen and Colonius~\cite{Johnsen2006}, oscillation free advection of material interfaces is ensured with adaptations to the HLLC for a quasi-conservative form of the volume fraction transport equation.
Viscous terms are implemented following Coralic and Colonius~\cite{Coralic2014}.
Spatial reconstruction to cell faces is performed on the primitive variables using the second order MUSCL scheme with the minmod limiter.
The fluid immiscibility condition is maintained using the $\rho$-THINC interface sharpening procedure~\cite{Garrick2016a} for reconstructing the phasic densities and volume fraction within the interface.
The conserved variables are integrated in time using an explicit third order TVD Runge-Kutta scheme~\cite{Gottlieb1996}.
Interface curvature is calculated via the interface normals ($\kappa=-\nabla \cdot \mathbf{n}$) which are determined using the smoothed interface function of Shukla et al.~\cite{Shukla2010} and second order central differences.
A full description of the numerical method employed and the results of standard validation cases can be found in the work of Garrick et al.~\cite{Garrick2016,Garrick2016a}.
\subsection{Water column attached domain}
Additional computational efficiency is gained by translating the static domain with the $x$ component of the column center of mass.
This requires appropriate modifications to the fluxes via a simplified arbitrary Lagrangian Eulerian (ALE) formulation~\cite{Luo2004}.
The liquid center of mass (and thus the moving grid) velocity $u_c$ is determined via~\cite{Meng2014}
\begin{equation}
u_c = \frac{\int \rho_1\phi_1 u dV}{\int \rho_1 \phi_1 dV}.\label{eqn:uc}
\end{equation}
The individual control volumes remain static, however, the overall computational domain translates downstream such that the liquid center of mass remains approximately centered throughout the simulation.
\subsection{Drag coefficient}\label{sec:drag}
In the present study the drag coefficient of the liquid is computed following the approach of Meng and Colonius~\cite{Meng2014}:
\begin{equation}
C_d = \frac{m a_c}{\frac{1}{2} \rho_g ( u_g - u_c)^2 d_0} \label{eqn:cd}
\end{equation}
where $d_0$ is the undeformed diameter of the column, $\rho_g$ and $u_g$ are the initial post-shock gas conditions, and $u_c$ is the center of mass velocity given by Eq.~\ref{eqn:uc}.
The acceleration is then computed using finite differences in time~\cite{Meng2014}:
\begin{equation}
a_c = \frac{d}{dt} \frac{\int \rho_1\phi_1 u dV}{\int \rho_1 \phi_1 dV}.\label{eqn:ac}
\end{equation}
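A minimal sketch of this diagnostic is given below; it assumes a uniform grid and NumPy finite differences and is an illustration rather than the authors' implementation:
\begin{verbatim}
import numpy as np

# Minimal sketch of Eqs. (uc), (ac), and (cd): liquid center-of-mass
# velocity, its time derivative, and the resulting drag coefficient.
def center_of_mass_velocity(rho1_phi1, u, dV):
    # u_c = int(rho1 phi1 u dV) / int(rho1 phi1 dV)
    return np.sum(rho1_phi1 * u * dV) / np.sum(rho1_phi1 * dV)

def drag_coefficient(uc_hist, t_hist, m_liq, rho_g, u_g, d0):
    uc = np.asarray(uc_hist, dtype=float)
    a_c = np.gradient(uc, np.asarray(t_hist, dtype=float))
    return m_liq * a_c / (0.5 * rho_g * (u_g - uc)**2 * d0)
\end{verbatim}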
\section{Problem statement}\label{sec:problem3}
Standard benchmark cases to verify and validate the shock and interface capturing scheme and the implementation of surface tension were performed by Garrick et al.~\cite{Garrick2016,Garrick2016a}.
For the present simulations, the initial conditions are depicted in Figure~\ref{fig:domain} and correspond to a liquid column ($\rho_l=1000 \mbox{ kg}/\mbox{m}^3$) in air ($\rho_g=1.2 \mbox{ kg}/\mbox{m}^3$) at ambient pressure ($p=101325\mbox{ Pa}$).
The column has unity non-dimensional diameter and is centered at the origin.
Dirichlet and extrapolation conditions are enforced on the upstream and remaining boundaries respectively.
The domain consists of a block of uniform cells in the vicinity of the column corresponding to a resolution of 120 points across the initial column diameter.
Grid stretching to the boundary results in an overall domain of $1579\times1589$ cells.
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{figures/Figure1}
\caption{\label{fig:domain}Initial layout of the two-dimensional computational domain.
The liquid column has a unity non-dimensional diameter and is centered at the origin.}
\end{figure}
Simulations are performed for incident shock Mach numbers of $M_s=1.47$, $M_s=2$, $M_s=2.5$, and $M_s=3$. \revb{The incident shock wave is traveling at a speed defined by the incident shock Mach number toward the liquid column, which is stationary in ambient air conditions. The Mach number of the induced crossflow for each simulation is determined by first employing the normal shock relations to compute the Mach number and local speed of sound in the gas behind the incident shock. The crossflow Mach number is the ratio of the post-shock (crossflow) gas velocity in the shock moving reference frame to the post-shock speed of sound.}
Passage of these incident shocks over the liquid column induces a crossflow with corresponding Mach numbers of $M=0.58$, $M=0.96$, $M=1.2$, and $M=1.36$, respectively, \revb{which range from subsonic to supersonic speeds. These initial conditions are analogous to experimental shock tube setups whereby pressurized gas is released from a driver section into a driven section such that a shock wave develops and travels down the tube to produce a uniform step change in velocity over droplets inserted into the driven section~\cite{Guildenbecher2009}.}
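The quoted crossflow Mach numbers follow directly from the moving normal-shock relations just described; the following minimal sketch for a calorically perfect gas ($\gamma=1.4$) reproduces the values $M=0.58$ through $M=1.36$:
\begin{verbatim}
# Minimal sketch: crossflow Mach number induced by a normal shock
# moving at Mach Ms into still air (calorically perfect, gamma=1.4).
g = 1.4

def crossflow_mach(Ms):
    rho_ratio = (g + 1.0)*Ms**2 / ((g - 1.0)*Ms**2 + 2.0)
    p_ratio = 1.0 + 2.0*g/(g + 1.0)*(Ms**2 - 1.0)
    a_ratio = (p_ratio/rho_ratio)**0.5  # post-shock sonic speed / a1
    u = Ms*(1.0 - 1.0/rho_ratio)        # lab-frame gas speed / a1
    return u / a_ratio

for Ms in (1.47, 2.0, 2.5, 3.0):
    print(f"Ms={Ms}: M={crossflow_mach(Ms):.2f}")  # 0.58 0.96 1.20 1.36
\end{verbatim}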
The \reva{surface tension term} in the momentum and energy conservation equations is scaled by the acoustic Weber number.
To examine the breakup behavior for a range of physical conditions, simulations with $\mathrm{We_a}=1, 5, 10, 20, 50, 100, \mbox{ and } 1000$ were performed for each incident shock speed.
In addition, the breakup behaviors for $\mathrm{We_a}=0.05 \mbox{ and } 0.2$ are considered for the $M_s=3$ incident shock speed.
The acoustic Reynolds number was held constant with a value of $\mathrm{Re_a}=1000$ and a liquid to gas viscosity ratio of $N=\mu_l/\mu_g=45$.
In the dimensional sense and for a given surface tension coefficient, each acoustic Weber number represents a different column diameter.
Of particular interest is the difference in breakup behavior in subsonic versus supersonic crossflow across the range of Weber numbers.
To quantify the strength of the \reva{surface tension} for each simulation, several Weber numbers are described.
These are the acoustic, crossflow, and effective Weber numbers.
The acoustic Weber number is given in terms of the reference quantities used to non-dimensionalize the system:
\begin{equation}
\mathrm{We_a} =\frac{\rho^\prime_0 a^{\prime 2}_0 d_0^\prime}{\sigma^\prime_0}.
\end{equation}
Meanwhile the crossflow Weber number $\mathrm{We_c}$ is computed using the post-shock crossflow conditions:
\begin{equation}
\mathrm{We_c} =\mathrm{We_a} \rho u^2 \label{eqn:wep}
\end{equation}
where $u$ is the non-dimensional streamwise flow speed and $\rho$ is the non-dimensional density behind the incident shockwave.
The crossflow Reynolds number is similarly estimated by scaling the acoustic Reynolds number by the initial post-shock conditions to give $\mathrm{Re}_{1.47}=1430$, $\mathrm{Re}_{2}=4000$, $\mathrm{Re}_{2.5}=7000$, and $\mathrm{Re}_{3}=10290$ for the $M_s=1.47, 2, 2.5$ and $M_s=3$ cases, respectively.
Based on the crossflow Reynolds and Weber numbers, these simulations correspond to Ohnesorge numbers ranging from 0.001 to 0.045.
Finally, all simulation times are scaled into their respective non-dimensional characteristic times given by~\cite{Nicholls1969}:
\begin{equation}
t^*= \frac{t u}{D \sqrt{\epsilon}}
\end{equation}
where $u$ is the crossflow velocity and $\epsilon$ is the liquid to gas density ratio using the post-shock conditions. \revb{ The presence of the density ratio in this equation indicates some dependence of the breakup behavior on the local density ratio which varies for each incident shock Mach number as the post-shock gas density varies depending on the strength of the incident shock. In addition, for the simulations with supersonic crossflow a bow shock is generated in front of the liquid column, further compressing the gas. As a result the local gas-liquid density ratio varies considerably for each incident shock Mach number.
One approach to quantify the compressibility effects is the computation of an effective Weber number which considers the local flow conditions that occur behind the bow shock for the simulations with a supersonic crossflow. This effective Weber number can be computed using the crossflow Mach and Weber numbers and the velocity and density normal shock relations~\cite{Xiao2016}:
\begin{equation}
\mathrm{We_{eff}} = \frac{2+(\gamma-1)M^2}{(\gamma+1)M^2}\mathrm{We_c}.\label{eqn:weeff}
\end{equation}
}
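These definitions can be combined into a short calculation. The sketch below reproduces the tabulated crossflow and effective Weber numbers; doing so appears to require a non-dimensional ambient gas density of 1.2 (i.e., a $1\mbox{ kg}/\mbox{m}^3$ reference density), which is an inference from the tables rather than a stated choice:
\begin{verbatim}
# Minimal sketch: crossflow Weber number (Eq. wep) and effective
# Weber number (Eq. weeff) behind a moving shock; gamma = 1.4. The
# ambient non-dimensional density of 1.2 is an inference.
g = 1.4

def webers(We_a, Ms, rho_amb=1.2):
    rho_ratio = (g + 1.0)*Ms**2 / ((g - 1.0)*Ms**2 + 2.0)
    p_ratio = 1.0 + 2.0*g/(g + 1.0)*(Ms**2 - 1.0)
    u = Ms*(1.0 - 1.0/rho_ratio)              # crossflow speed / a0
    M = u / (p_ratio/rho_ratio)**0.5          # crossflow Mach number
    We_c = We_a * rho_amb * rho_ratio * u**2  # Eq. (wep)
    f = (2.0 + (g - 1.0)*M**2)/((g + 1.0)*M**2) if M > 1.0 else 1.0
    return We_c, f*We_c                       # (We_c, We_eff)

print(webers(100, 3.0))  # ~(2286, 1414), cf. the M_s = 3 table
\end{verbatim}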
\section{Results and discussion}\label{sec:results3}
\subsection{Validation}
\subsubsection{Grid resolution study and drag uncertainty estimation}
Grid convergence studies on the shock and interface capturing behavior of the scheme were performed by Garrick et al.~\cite{Garrick2016,Garrick2016a}.
In the present work, the effect of grid resolution on the breakup behavior and drag coefficient is examined with several simulations of the $M_s=3$, $\mathrm{We_a}=100$ (crossflow $\mathrm{We}\approx2300$) shock-column interaction with grid resolutions in the vicinity of the column of $D/60$, $D/120$, $D/240$, and $D/480$.
In lieu of performing an exhaustive grid resolution study at each shock speed and Weber number to be tested, it is assumed that relatively similar behavior trends will apply for the range of conditions in the production runs to follow.
\revb{First it is important to highlight the limitations of the present simulations. As noted by Jain et al.~\cite{Jain2015}, liquid breakup is ultimately a molecular process and without multiscale modeling the breakup will be initiated by the grid resolution. As noted by Meng and Colonius~\cite{Meng2018} in their recent paper, this means grid convergence of the breakup behavior is impossible to achieve in a traditional sense. With regard to the viscous effects, direct numerical simulations that resolve the boundary layer on the liquid surface are impractical without highly specialized solvers capable of both significant adaptive mesh refinement and additional body-fitted structured conformal meshes which can achieve effective grid resolutions of up to $D/4000$~\cite{Chang2013}. For these reasons, recent studies of secondary atomization in this flow regime have tended to consider flow conditions where viscous and surface tension effects can be safely neglected~\cite{Meng2018,Liu2018,Xiang2017}. Therefore, while both viscous and surface tension effects are included in the simulations presented here, it should be acknowledged that these effects will be under-resolved to some degree. However, the goal is partly to determine to what degree the physics involved in secondary atomization can be captured despite this limitation. }
First, the drag coefficient is examined in Figure~\ref{fig:cdgridreso}.
Note that the drag (Eq.~\ref{eqn:cd}) is determined by integrating the acceleration of the total liquid mass in the domain (Eq.~\ref{eqn:ac}), so as liquid mass is separated and swept downstream it will have a corresponding effect on the drag coefficient.
This is particularly noticeable in Figure~\ref{fig:cdgridreso}, where the drag coefficients separate around $t^*=1$; they remain reasonably correlated until approximately $t^*=2$, at which point they diverge.
The deformation and breakup behavior of the different simulations is shown in Figure~\ref{fig:gridresoMs3}, which depicts a time history of the gas-liquid interface (i.e., the $\phi_1=0.5$ iso-line) throughout the simulations, where each row corresponds to a different solution time and each column to a different grid resolution.
As with the drag coefficient, the early stages ($t^*<1$) of the deformation process do not vary significantly across the grid resolutions tested.
For $1< t^*<2$ more fine scale ligament and droplet features are observed in the finer grid resolutions but the general behavior remains similar in the three simulations.
The minor differences in the location and trajectory of the smaller droplet particles impact the computed drag coefficient and explain the previously discussed separation of the coefficients in Figure~\ref{fig:cdgridreso} for $t^*>1$.
For $t^*>2$ the general behavior consists of the flow ``piercing" through the center of the droplet.
This piercing is initiated sooner at the finer grid resolutions (or delayed on coarser grids) but the general breakup behavior is qualitatively similar in all three simulations, albeit with significantly more small droplets captured on the finest grid.
\revb{Finally, an additional $D/120$ simulation was performed with a domain twice as large and produced nearly identical results to the original $D/120$ simulation, verifying the domain size was not impacting the results.}
\begin{figure}\centering
\subfigure[Drag coefficient over time for different grid resolutions.]{\includegraphics[width=0.45\textwidth]{figures/Converge}}
\subfigure[\reva{Mean and standard deviation (SD) of drag coefficient from all grid resolutions at each time point as an estimate of drag coefficient uncertainty over time.}]{\includegraphics[width=0.45\textwidth]{figures/Mean}}
\caption{\label{fig:cdgridreso}$M_s=3$, $\mathrm{We_a}=100$ drag coefficient at different grid resolutions (left) and an estimate of the uncertainty in the drag coefficient over time (right).}
\end{figure}
\begin{figure}\centering
\includegraphics[width=0.85\textwidth,right]{figures/Figure3}
\caption{\label{fig:gridresoMs3}$M_s=3$, $\mathrm{We_a}=100$ breakup behavior at $D/60$ (left), $D/120$ (center), and $D/240$ (right) grid resolutions.}
\end{figure}
These results can be broken into several useful groups based on the observed behavior of the drag coefficient and breakup characteristics.
For $t^*<1$ the results converge and should provide a reasonable estimate of the drag coefficient and droplet deformation.
From $1\leq t^*< 2$ there is some uncertainty in the breakup behavior in terms of the presence and trajectory of smaller droplet clouds; however, the general behavior remains the same, and since the drag coefficients correlate reasonably across the grid resolutions they should provide at least a first order estimate.
For $t^*\geq 2$ there is significantly more uncertainty in the drag coefficients which begin to diverge across the grid resolutions, however, the general breakup behavior is still observed at all three resolutions.
\subsection{Deformation and breakup behavior}\label{sec:breakup}
The effect of Weber number on the deformation and breakup characteristics of the liquid column is investigated for each shock speed \revb{using a grid resolution of $D/120$.}
Time histories of the gas-liquid interface (i.e., the $\phi_1=0.5$ iso-line) are shown in the corresponding figures, where the Weber numbers are depicted at the bottom of each figure.
Each row depicts a different solution time and each column a different Weber number.
The characteristic time $t^*$ for each row of images is depicted on the left side of each figure.
In all cases the crossflow is traveling from left to right.
\subsubsection{$M_s=1.47$}
Figure~\ref{fig:Ms147} depicts the results for the $M_s=1.47$ simulations.
For this Mach number, the crossflow Weber numbers correspond closely to the acoustic Weber numbers.\revb{ For this shock strength the local gas-liquid density ratio using the initial post-shock gas conditions is $\rho_l/\rho_g\approx460$.}
The observed breakup characteristics exhibit reasonable qualitative agreement with the different regimes observed in subsonic experiments for $\mathrm{Oh<0.1}$.
The regimes are listed in Table~\ref{tab:regimes} where the transition Weber numbers are approximate partly due to the continuous nature of the breakup process and the arbitrary choice for specific transition points~\cite{Guildenbecher2009}.
As a result, different researchers have reported slight variations on the transition between different regimes~\cite{Pilch1987}, however, the order in which they appear remains the same~\cite{Jain2015}.
At lower Weber numbers (Figure~\ref{fig:Ms147}(a) and (b)) a vibrational type mode is observed where the \reva{surface tension is} large enough for the column to remain intact and oscillate as an ellipse.
\begin{table}\centering
\caption{Breakup regimes and transition Weber number as given by~\cite{Guildenbecher2009}.\label{tab:regimes}}
\begin{tabular}{cr}
\hline
Vibrational & $0 < \mathrm{We} < \sim 11$ \\
Bag & $\sim 11 < \mathrm{We} < \sim 35$ \\
Multimode & $\sim 35 < \mathrm{We} < \sim 80$ \\
Sheet-thinning & $\sim 80 < \mathrm{We} < \sim 350$ \\
Catastrophic & $\mathrm{We} > \sim 350$
\end{tabular}
\end{table}
Figure~\ref{fig:Ms147}(c) depicts various stages of what appears to be a bag breakup process.
Generally, this regime is characterized by the growth of a bag structure in which the center of the drop is blown downstream while remaining attached to an outer rim.
\begin{figure}\centering
\includegraphics[width=0.95\textwidth,right]{figures/Figure4}
\begin{tabular*}{\textwidth}{ll @{\extracolsep{\fill}} rrrrrrrr}
& (a) & (b) & (c) & (d)& (e)& (f)& (g) & \quad \\
$\mathrm{We_a}$ & 1 & 5 & 10 & 20& 50& 100& 1000 & \quad \\
$\mathrm{We_c}$ & 0.9 & 4.7 & 9.4 & 19& 47& 94& 941 & \quad \\
\end{tabular*}
\caption{\label{fig:Ms147}$M_s=1.47$ deformation and breakup behavior.}
\end{figure}
In the bag-and-stamen/multi-mode regime, the center of the droplet is driven downstream more slowly than the rim leading to the creation of a bag/plume structure~\cite{Dai2001}.
Similar features are observed in the present liquid column simulations as depicted in Figure~\ref{fig:Ms147}(d) and (e). Figure~\ref{fig:Ms147}(d) depicts the formation of this bag-and-stamen type structure at a slightly lower Weber number (20) compared to the breakup regimes observed for incompressible flow characterized in Table~\ref{tab:regimes}. However in the present compressible flow simulations, a small standing shock is observed downstream of the liquid column. A similar standing shock feature has been observed in prior numerical results without surface tension at this flow speed~\cite{Meng2014,Terashima2009}.
The pressure disturbance caused by the presence of the standing shocks could contribute to the growth of the bag-and-stamen structure. Figure~\ref{fig:Ms147}(e) is characterized by a substantial plume/bag-and-stamen structure forming around $t^*=2.3$ before its subsequent rupture into numerous small droplets.
Finally, the breakup characteristics in Figure~\ref{fig:Ms147}(g) correlate well with the so-called catastrophic regime where the drop surface is corrugated by large amplitude waves resulting in a large number of smaller droplets and ligaments~\cite{Guildenbecher2009}.
\subsubsection{$M_s=2$}
Figure~\ref{fig:Ms2} depicts the breakup behavior for the $M_s=2$ simulations.
The post-shock conditions are in the transonic regime with a crossflow Mach number of $M=0.96$. \revb{ For this shock strength the local gas-liquid density ratio using the initial post-shock gas conditions is $\rho_l/\rho_g\approx312$.}
Across the range of Weber numbers the breakup behavior is very similar to the slower $M_s=1.47$ case even as late as $t^*=2$.
However, at later times the general breakup behavior begins to noticeably deviate from the lower Mach number case, especially with respect to the overall size of the ligament structures, which were observed to stretch considerably further in the $M_s=1.47$ simulations.
As the $M_s=2$ shock induces a faster crossflow than the $M_s=1.47$ case, the crossflow Weber number corresponding to each acoustic Weber number is slightly higher.
Figure~\ref{fig:Ms2}(b) depicts a bag-and-stamen type breakup structure with the outer rim of the column being swept downstream faster than the center of the column, resulting in the formation of several ligament structures.
Figures~\ref{fig:Ms2}(c)-(e) depict a unique multimode type of asymmetric breakup culminating in the collapse of the droplet into a largely coherent ligament structure, although an increasing number of smaller droplets are generated during this process at the higher Weber numbers.
This noticeably asymmetric behavior appears to originate from small asymmetries which appear earlier during the deformation process, i.e.,
in Figures~\ref{fig:Ms2}(c)-(e) at $t^*=1.98, 2.47$.
Finally, a catastrophic type breakup is observed at the highest Weber numbers in Figures~\ref{fig:Ms2}(f) and (g).
\begin{figure}\centering
\includegraphics[width=0.95\textwidth,right]{figures/Figure6}
\begin{tabular*}{\textwidth}{cc @{\extracolsep{\fill}} rrrrrrrr}
& (a) \hspace*{5mm} & (b) & (c) & (d)& (e)& (f)& (g) \\
$\mathrm{We_a}$ & 1 \hspace*{5mm} & 5 & 10 & 20& 50& 100& 1000 \\
$\mathrm{We_c}$ & 5 \hspace*{5mm}& 25 & 50 & 100& 250& 500& 5000 \\
\end{tabular*}
\caption{\label{fig:Ms2}$M_s=2$ deformation and breakup behavior.}
\end{figure}
\subsubsection{$M_s=2.5$}
Figure~\ref{fig:Ms250} depicts the breakup behavior for the $M_s=2.5$ simulations.
The higher incident shock speed means the post-shock conditions consist of a supersonic flow.\revb{ For this shock strength the local gas-liquid density ratio using the initial post-shock gas conditions is $\rho_l/\rho_g\approx250$.}
As a result, the estimated crossflow Weber number is much higher for each acoustic Weber number compared to the corresponding $M_s=1.47$ and $M_s=2$ simulations.
With the presence of supersonic flow and an associated bow shock appearing in front of the droplet, the effective post-shock Weber number is computed using Eq.~\ref{eqn:weeff} to provide a metric comparable with the subsonic simulations. \revb{Using the same approach to compute an effective gas-liquid density ratio accounting for the bow shock gives $\rho_l/\rho_g\approx187$.}
The breakup behavior is generally similar to the $M_s=2$ simulations with a vibrational type mode observed in Figure~\ref{fig:Ms250}(a), multimode type behavior in Figures~\ref{fig:Ms250}(b)-(d) and catastrophic type breakup in Figures~\ref{fig:Ms250}(e)-(g).
Similarly to the $M_s=2$ simulations, a feature of this catastrophic breakup behavior is the generation of a ``channel" whereby the liquid column is pierced in the center into two separate chunks.
\begin{figure}\centering
\includegraphics[width=0.95\textwidth,right]{figures/Figure9}
\begin{tabular*}{\textwidth}{cc @{\extracolsep{\fill}} rrrrrrr}
\hspace*{3mm} & (a) \hspace*{3mm} & (b) \hspace*{3mm} & (c) \hspace*{3mm} & (d) \hspace*{3mm}& (e) \hspace*{3mm}& (f) \hspace*{3mm}& (g) \\
\hspace*{3mm} $\mathrm{We_a}$ & 1 \hspace*{3mm} & 5 \hspace*{3mm} & 10 \hspace*{3mm} & 20 \hspace*{3mm}& 50 \hspace*{3mm}& 100 \hspace*{3mm}& 1000 \\
\hspace*{3mm} $\mathrm{We_c}$ & 12\hspace*{3mm} & 61 \hspace*{3mm} & 123 \hspace*{3mm} & 245 \hspace*{3mm}& 613 \hspace*{3mm}& 1225 \hspace*{3mm}& 12250 \\
\hspace*{3mm} $\mathrm{We_{eff}}$ & 9.2 \hspace*{3mm}& 46 \hspace*{3mm} & 92 \hspace*{3mm} & 183 \hspace*{3mm}& 458 \hspace*{3mm}& 917 \hspace*{3mm} & 9167 \\
\end{tabular*}
\caption{\label{fig:Ms250}$M_s=2.5$ deformation and breakup behavior.}
\end{figure}
\subsubsection{$M_s=3$}
Theofanous et al.~\cite{Theofanous2004} performed experiments of aerobreakup of spherical liquid droplets in $M=3$ crossflows.
They observed ``piercing" ($44<\mathrm{We}<10^3$) and ``stripping" ($\sim 10^3<\mathrm{We}$) breakup regimes.
Figure~\ref{fig:Ms3} depicts the breakup behavior for the present simulations, which consider an $M_s=3$ shock speed that results in a considerably slower $M=1.36$ crossflow compared to the experiments of Theofanous et al.
The range of breakup features depicted in Figure~\ref{fig:Ms3}, with estimated effective Weber numbers varying from approximately 0.7 in Figure~\ref{fig:Ms3}(a) to 1400 in Figure~\ref{fig:Ms3}(g), appears to qualitatively match descriptions of the experimentally observed breakup regimes despite the disparity in crossflow speeds and flow dimensionality.
As with the previous simulations, the higher crossflow speed in the $M_s=3$ case results in significantly higher crossflow Weber numbers for each acoustic Weber number.
As a result, a significant number of small droplets are generated even at relatively low acoustic Weber numbers such as Figure~\ref{fig:Ms3}(e) and in the early stages of Figures~\ref{fig:Ms3}(f)-(g).
Catastrophic breakup is observed in the later stages of Figures~\ref{fig:Ms3}(f)-(g).
As in the $M_s=2$ and $M_s=2.5$ simulations, this catastrophic breakup is characterized by a channel which forms in the liquid column, splitting it into two.
This general behavior is similar to that observed experimentally for a water drop in a shock tube by Waldman et al.~\cite{Waldman1972}. They described the breakup process as an initially continuous stripping of liquid from the droplet surface, followed by a growth in the amplitude of surface waves which leads to the final disintegration of the droplet.
This description appears qualitatively similar to the time history of breakup depicted in Figures~\ref{fig:Ms3}(f)-(g).
\revb{ For this shock strength the local gas-liquid density ratio using the initial post-shock gas conditions is $\rho_l/\rho_g\approx216$. The effective gas-liquid density ratio accounting for the bow-shock gives $\rho_l/\rho_g\approx134$.}
\begin{figure}
\includegraphics[width=.95\textwidth,right]{figures/Figure11_2}
\begin{tabular*}{\textwidth}{cc @{\extracolsep{\fill}} rrrrrrr}
& (a) \hspace*{5mm} & (b) \hspace*{5mm} & (c) \hspace*{5mm} & (d) \hspace*{5mm}& (e) \hspace*{5mm}& (f) \hspace*{5mm}& (g) \\
$\mathrm{We_a}$ & 0.05 \hspace*{5mm} & 0.2 \hspace*{5mm} & 1 \hspace*{5mm} & 5 \hspace*{5mm}& 10 \hspace*{5mm}& 50 \hspace*{5mm}& 100 \\
$\mathrm{We_c}$ & 1.1 \hspace*{5mm} & 4.6 \hspace*{5mm} &22.9 \hspace*{5mm}& 114 \hspace*{5mm}& 229 \hspace*{5mm}& 1143 \hspace*{5mm}& 2286 \\
$\mathrm{We_{eff}}$ & 0.71 \hspace*{5mm} &2.8 \hspace*{5mm}&14.1 \hspace*{5mm}& 71 \hspace*{5mm}& 141 \hspace*{5mm}& 707 \hspace*{5mm} & 1414 \\
\end{tabular*}
\caption{\label{fig:Ms3}$M_s=3$ deformation and breakup behavior.}
\end{figure}
\subsection{Drag coefficient}\label{sec:dragcoef}
Figure~\ref{fig:cde} depicts comparisons of the early stages of the drag coefficient with prior numerical results of Meng and Colonius~\cite{Meng2014}, Chen~\cite{Chen2008}, and Terashima and Tryggvason~\cite{Terashima2009}.
The drag coefficient was computed following the approach of Meng and Colonius~\cite{Meng2014} as discussed in Section~\ref{sec:drag}.
Good agreement is obtained with the data of~\cite{Meng2014}; disparities in the other results can likely be attributed to the use of a different approach to calculate the drag coefficient, where drift data (and not the averaged fluid velocity) is used to estimate the column acceleration.
Further discussion of different approaches for computing the drag coefficient can be found in~\cite{Igra2002} and~\cite{Meng2014}.
\begin{figure}\centering
\subfigure[$M_s=1.47$]{\includegraphics[width=0.45\textwidth]{figures/Figure13a}}
\subfigure[$M_s=2.5$]{\includegraphics[width=0.45\textwidth]{figures/Figure13b}}
\caption{\label{fig:cde}Drag coefficient comparison during the early stages for $M_s=1.47$ (left) and $M_s=2.5$ (right) compared to Meng and Colonius~\cite{Meng2014}, Chen~\cite{Chen2008}, and Terashima and Tryggvason~\cite{Terashima2009}.}
\end{figure}
Figure~\ref{fig:cd} depicts the drag coefficient at the later stages of the simulations with comparisons to Meng and Colonius~\cite{Meng2014}.
An extra simulation was also performed to provide a reference point for a stationary, rigid cylinder in crossflow, for which the drag coefficient is known.
This was approximated with a high liquid density ($\rho_l=10,000\mbox{ kg}/\mbox{m}^3$) case with $\mathrm{We_a}=1$.
Note that even under these conditions, some deformation of the high density liquid does occur.
Generally for $1000 < \mathrm{Re} < 3\times 10^5$, the drag coefficient of a cylinder is known to be approximately unity~\cite{Anderson}.
This value is plotted as a solid blue line in Figure~\ref{fig:cd1} and agrees well with the present subsonic simulation with a crossflow Reynolds number of 1430.
Meanwhile from Gowen and Perkins~\cite{Gowen1963} the drag coefficient of a stationary cylinder in a $M=1.2$ crossflow (i.e.
the crossflow for $M_s=2.5$) is approximately 1.64 and is plotted as a solid blue line in Figure~\ref{fig:cd3} for reference.
This value reasonably predicts the minimum drag coefficient value for the $\rho_l=10,000\mbox{ kg}/\mbox{m}^3$ simulation, which occurs around $t^*=0.3-0.4$ in Figure~\ref{fig:cd3}.
\begin{figure}\centering
\subfigure[$M_s=1.47$\label{fig:cd1}]{\includegraphics[width=0.45\textwidth]{figures/Figure14a}} \subfigure[$M_s=2$\label{fig:cd2}]{\includegraphics[width=0.45\textwidth]{figures/Figure14b}}\\
\subfigure[$M_s=2.5$\label{fig:cd3}]{\includegraphics[width=0.45\textwidth]{figures/Figure14c}}
\subfigure[$M_s=3$\label{fig:cd4}]{\includegraphics[width=0.45\textwidth]{figures/Figure14d}}
\caption{\label{fig:cd}Drag coefficient comparison at the later stages.
The $M_s=1.47$ (a) and $M_s=2.5$ (c) cases include comparisons to Meng and Colonius~\cite{Meng2014}.
The solid blue line depicts approximate equivalent drag coefficients of a solid circular cylinder with $C_d\approx 1$ in (a) and $C_d\approx 1.64$ in (c).}
\end{figure}
\revb{While the general trend is similar, overall the drag coefficient exhibits less unsteady variation compared to the results of~\cite{Meng2014}. The inclusion of surface tension and especially interface sharpening employed in the current simulations reduces the amount of liquid material stripped from the interface where it would otherwise enter the highly chaotic wake region and contribute to unsteady liquid acceleration measurements. }
Generally, lower drag coefficients are observed with lower Weber numbers for each shock Mach number except $M_s=3$ which shows less relative variation between the drag coefficients at Weber numbers in the range of 1 to 100 in Figure~\ref{fig:cd4}.
Significant differences in the drag as a function of the Weber number are observed in the $M_s=1.47$ and $M_s=2$ cases in Figures~\ref{fig:cd1} and~\ref{fig:cd2}.
Less variation is observed between the higher Weber numbers for the $M_s=2.5$ and $M_s=3$ cases depicted in Figure~\ref{fig:cd3} and~\ref{fig:cd4}.
Gowen and Perkins also noted there was almost no observed variation in the drag coefficient as a function of Reynolds number in the supersonic flow regime for a solid circular cylinder~\cite{Gowen1963}.
They stated that the suction pressures on the downstream side of the cylinder contribute a large part of the total drag in subsonic flows but as a percentage of the total drag this contribution rapidly decreases as the Mach number increases.
Supporting the experimental observations of Temkin and Mehta~\cite{temkin1982}, the unsteady drag is found to be larger in the decelerating relative flows of the liquid columns compared to that of the rigid stationary column.
The coefficients are observed to be twice as large or more compared to the rigid case for all shock Mach numbers.
Interestingly, comparing the present supersonic cases to the subsonic cases shows that at higher Mach numbers there is significantly less variation in the drag coefficient as a function of the Weber number for the liquid columns.
Upon first inspection, this is perhaps surprising, as Section~\ref{sec:breakup} demonstrated a broad range of breakup behaviors at each Mach number as a function of the Weber number, and the drag is computed as an integration over the acceleration of the total liquid volume as it undergoes breakup.
However, an examination of the breakup behaviors for the supersonic cases in Figures~\ref{fig:Ms250} and \ref{fig:Ms3} appears to show a similar deformed diameter progression for the Weber number 1-100 cases within the respective Mach numbers.
To explore this, an effective diameter of the deformed drop was computed and the results are presented in Figure~\ref{fig:deff}.
This value is computed as the total projected length of the liquid on an $x$-normal plane, where the liquid is defined by $\phi_1>0.5$.
Comparing the calculated effective diameters, the effective diameter is observed to follow a trend similar to that of the drag.
Significant differences are seen in the effective diameter of the subsonic cases while less variation is observed in the supersonic cases at higher Weber numbers.
This suggests the similarities in drag are a product of a similar effective diameter throughout the breakup process, even if the breakup itself differs.
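A minimal sketch of this effective-diameter diagnostic, assuming a uniform grid and a NumPy volume-fraction array (an illustration, not the authors' code), is:
\begin{verbatim}
import numpy as np

# Minimal sketch: effective deformed diameter as the total projected
# length of liquid (phi_1 > 0.5) on an x-normal plane.
def effective_diameter(phi1, dy):
    """phi1: 2-D array indexed [ix, iy]; dy: uniform cell height."""
    liquid_rows = np.any(phi1 > 0.5, axis=0)  # rows containing liquid
    return np.count_nonzero(liquid_rows) * dy
\end{verbatim}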
\begin{figure}\centering
\subfigure[$M_s=1.47$\label{fig:deff1}]{\includegraphics[width=0.45\textwidth]{figures/Figure15a}}
\subfigure[$M_s=2$\label{fig:deff2}]{\includegraphics[width=0.45\textwidth]{figures/Figure15b}}\\
\subfigure[$M_s=2.5$\label{fig:deff3}]{\includegraphics[width=0.45\textwidth]{figures/Figure15c}}
\subfigure[$M_s=3$\label{fig:deff4}]{\includegraphics[width=0.45\textwidth]{figures/Figure15d}}
\caption{\label{fig:deff} Effective deformed diameter comparison between acoustic Weber number at the four crossflow velocities.}
\end{figure}
\begin{figure}\centering
\subfigure[$M_s=1.47$\label{fig:cdp1}]{\includegraphics[width=0.45\textwidth]{figures/Figure16a}}
\subfigure[$M_s=2$\label{fig:cdp2}]{\includegraphics[width=0.45\textwidth]{figures/Figure16b}}\\
\subfigure[$M_s=2.5$\label{fig:cdp3}]{\includegraphics[width=0.45\textwidth]{figures/Figure16c}}
\subfigure[$M_s=3$\label{fig:cdp4}]{\includegraphics[width=0.45\textwidth]{figures/Figure16d}}
\caption{\label{fig:cdp}Drag coefficient comparison using the effective diameter for calculation.}
\end{figure}
Figure~\ref{fig:cdp} depicts the drag coefficient computed again using Eq.~\ref{eqn:cd} but with the time dependent effective diameter used in place of the undeformed diameter term $d_0$. As noted by Meng and Colonius \cite{Meng2014}, the computed drag coefficients can largely be assumed constant regardless of shock speed during the early stages of breakup when accounting for the effective deforming diameter of the droplets. Interestingly, the present simulations show that this assumption is still relatively reasonable during the mid and later stages of breakup, even when accounting for the effects of surface tension across a wide range of Weber numbers. This is a notable result given the wide range of breakup behaviors observed in the present simulations. These results are especially relevant at supersonic speeds, where less variation of the drag coefficient is observed as a function of Weber number.
\section{Three-dimensional simulation of droplet breakup}\label{sec:3dsim}
\revc{
A three-dimensional simulation of droplet breakup was performed. The objective is to further validate the ability of the numerical method to predict three-dimensional droplet breakup behavior and to provide a point of comparison to the two-dimensional liquid column breakup simulations.
The flow conditions were set to match the experimental conditions in Figure 33 of Theofanous et al.~\cite{Theofanous2012}. Specifically, the simulation consists of a water droplet impacted by a shockwave with post-shock crossflow conditions of $M=0.32$, $\mathrm{Re}_{g}=2.2 \times 10^4$, $\mathrm{We}= 7.8 \times 10^2$, and $\mathrm{Oh}=2.4 \times 10^{-3}$.
Given the computational complexity of such a three-dimensional simulation, the grid resolution in the vicinity of the droplet was set to a relatively coarse $D/80$ and symmetry boundary conditions were employed at the centerline such that the computational domain consisted of only a quarter of the overall droplet.
Figure~\ref{fig:3dsim} shows the progression of the droplet deformation and breakup. The present resolution is inadequate to capture the fine scale features of the breakup process; however, the overall droplet shape evolution over time reasonably agrees with the experimental behavior shown in the video supplementing Figure 33 of Theofanous et al.~\cite{Theofanous2012} (see the supplementary multimedia material of \cite{Theofanous2012}).
}
\begin{figure}\centering
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0005.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0010.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0015.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0020.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0025.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0030.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0035.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0040.png}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{figures/output-0045.png}}
\caption{\label{fig:3dsim} Snapshots from the three-dimensional droplet breakup simulation corresponding to the experiment from Figure 33 of Theofanous et al.~\cite{Theofanous2012}. The simulation consists of a water droplet impacted by a shockwave with freestream flow conditions $M=0.32$, $\mathrm{Re}_{g}=2.2 \times 10^4$, $\mathrm{We}= 7.8 \times 10^2$, and $\mathrm{Oh}=2.4 \times 10^{-3}$. Note the crossflow is moving from right to left.}
\end{figure}
\section{Conclusion}\label{sec:conclusions3}
Numerical experiments are performed of $M_s=1.47, 2, 2.5$ and $M_s=3$ shockwaves interacting with liquid columns at various Weber numbers.
The simulations account for the effects of compressibility, molecular viscosity, and surface tension.
The shockwaves induce a crossflow leading to aerobreakup of the liquid column.
A diverse range of complex interface dynamics and breakup modes are observed with good correlation to experimentally observed behavior across the range of Weber numbers tested.
During the early stages of the breakup process (i.e., deformation), similar behavior is observed across the range of Mach numbers tested.
However, at later times the breakup behavior varies significantly depending on both the Mach and Weber numbers.
Additionally, lower Weber numbers result in lower observed drag coefficients for the liquid columns.
Depending on the Weber number, the drag coefficients are still approximately two to three times those observed for a rigid column.
As a function of the Weber number, significantly less variation in the drag coefficient and qualitative flow features is observed as the Mach number increases.
In addition, when utilizing a deformed diameter in the drag coefficient calculation the results show significantly reduced variation between Weber numbers across all Mach numbers.
This has implications for subgrid atomization models which determine droplet trajectories based on estimated particle drag coefficients.
\revc{A three-dimensional simulation, while under-resolved, displays reasonable agreement with the corresponding experimental breakup behavior, highlighting the potential of the numerical approach for future investigations. }
\section{Acknowledgments}
This work is supported by Taitech, Inc.
under sub-contracts TS15-16-02-004 and TS16-16-61-004 (primary contract FA8650-14-D-2316).
The computational resources in this paper are partially supported by the HPC@ISU equipment at Iowa State University, some of which has been purchased through funding provided by NSF under MRI grant number CNS 1229081 and CRI grant number 1205413. This work has been approved for unlimited release: LA-UR-19-25304.
\section{Introduction}
Enumeration of lozenge tilings of a region on a triangular lattice has been studied for many decades. In particular, people are interested in regions whose number of lozenge tilings is expressed as a simple product formula. One such region is a hexagonal region with a triangular hole in the center. Many works have been done on this topic by Ciucu [2], Ciucu et al. [6], and Okada and Krattenthaler [10]. Later, Rosengren [11] found a formula for a weighted enumeration of lozenge tilings of a hexagon with an arbitrary triangular hole. He pointed out that the ratio between the numbers of lozenge tilings of two such regions whose holes have symmetric positions with respect to the center has a nice product formula. In this paper, we give a conceptual explanation of the symmetry, which enables us to generalize the result to hexagons with arbitrary collinear triangular holes. In his paper, Ciucu [3] defined a new structure, called a fern, which is an arbitrary string of triangles of alternating orientations that touch at corners and are lined up along a common axis. He considered a hexagon with a fern removed from its center and proved that the ratio of the number of lozenge tilings of two such regions is given by a simple product formula. Later, Ciucu [5] also proved that the same kind of ratio for centrally symmetric lozenge tilings has a simple product formula. In particular, he pointed out that for hexagons with a fern removed from the center, the ratio of centrally symmetric lozenge tilings is the square root of the ratio of the total number of tilings. Ciucu also conjectured in [5] (see also [4]) that this square root phenomenon holds more generally, when any finite number of collinear ferns are removed in a centrally symmetric way. In the current paper, we prove Ciucu's conjecture, and we extend it further.
\section{Statement of Main Results}
Any hexagon on a triangular lattice has the property that the difference between the lengths of opposite sides is the same for all three pairs. Thus, we can assume that the side lengths of the hexagon are \textit{a, b+k, c, a+k, b, c+k} in clockwise order, where \textit{a} is the length of the top side. Also, without loss of generality, we can assume that \textit{k} is non-negative and that the southeastern side of the hexagon (of length \textit{c}) is at least as long as the southwestern side (of length \textit{b}). Note that this hexagonal region has \textit{k} more up-pointing unit-triangles than down-pointing unit-triangles. Since every lozenge consists of one up-pointing unit-triangle and one down-pointing unit-triangle, for the region to be completely tiled by lozenges we have to remove \textit{k} more up-pointing unit-triangles than down-pointing unit-triangles from the hexagon. There are many ways to do that, but let us consider the following case.
We say that a set of triangles on a triangular lattice is \textit{collinear} or \textit{lined up} if the horizontal sides of all the triangles are on the same line.
Now, let us consider any horizontal line passing through the hexagon. Suppose the line is the \textit{l}-th horizontal line from the bottom side of the hexagon. Note that the length of the horizontal line depends on \textit{l}: we denote the length of the line by \textit{L(l)}. Then we have $L(l)=a+k-l+\min(b,l)+\min(c,l)$.
For any subsets $X=\{x_1, ..., x_{m+k}\}$ and $Y=\{y_1, ..., y_m\}$ of $[L(l)]:=\{1,2,...,L(l)\}$, let $H_{a,b,c}^{k,l}(X:Y)$ be the region obtained from the hexagon of side lengths $a$, $b+k$, $c$, $a+k$, $b$, $c+k$ in clockwise order from the top by removing the up-pointing unit-triangles whose horizontal sides are labeled by the elements of $X=\{x_1, x_2, ..., x_{m+k}\}$, and the down-pointing unit-triangles whose horizontal sides are labeled by the elements of $Y=\{y_1, y_2, ..., y_m\}$, on the \textit{l}-th horizontal line from the bottom, where the labeling on the horizontal line is $1,2,...,\textit{L(l)}$ from \textbf{left to right}. We call the horizontal line the \textit{baseline} of the removed triangles. Similarly, let $\overline{H}_{a,b,c}^{k,l}(X:Y)$ be the same kind of region, except that the labeling on the horizontal line is $1$, $2$, ..., $L(l)$ from \textbf{right to left}. Also, for any region \textit{R} on a triangular lattice, let \textit{M(R)} be the number of lozenge tilings of the region. The first theorem expresses the ratio of the numbers of lozenge tilings of two such regions as a simple product formula.
\begin{thm}
Let a, b, c, k, l, m be any non-negative integers such that $b \leq c$, $0 \leq l \leq b+c$ and $m \leq \min(b, l, b+c-l)$. Also, let $X=\{x_1, x_2, ..., x_{m+k}\}$ and $Y=\{y_1, y_2, ..., y_m\}$ be subsets of $\lbrack L(l)\rbrack=\{1, 2, ..., L(l)\}$. Then
\begin{equation}
\begin{aligned}
&\frac{M({H_{a,b,c}^{k,l}(X:Y)})}{M({\overline{H}_{a,b,c}^{k,b+c-l}(X:Y)})}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\\
&\cdot\frac{\prod_{i=1}^{m+k}(x_i-b+\max(b, l))_{(b-l)}\cdot(a+k+\min(b, l)+1-x_i)_{(c-l)}}{\prod_{j=1}^{m}(y_j-b+\max(b, l))_{(b-l)}\cdot(a+k+\min(b, l)+1-y_j)_{(c-l)}}
\end{aligned}
\end{equation}
\end{thm}
where the hyperfactorial \textit{H}(\textit{n}) is defined by
\begin{equation}
H(n):=0!1!\cdot\cdot\cdot(n-1)!
\end{equation}
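For a concrete check, the right-hand side of the theorem above is easy to evaluate numerically. The following minimal sketch assumes that $(x)_n$ denotes the rising factorial $x(x+1)\cdots(x+n-1)$ and that $l \leq b$, so that both Pochhammer exponents are non-negative; the example parameters are hypothetical:
\begin{verbatim}
from math import factorial, prod

def H(n):        # hyperfactorial H(n) = 0!1!...(n-1)!
    return prod(factorial(i) for i in range(n))

def poch(x, n):  # rising factorial (x)_n, n >= 0
    return prod(x + i for i in range(n))

def ratio(a, b, c, k, l, X, Y):  # right-hand side of the theorem
    pre = H(k + l)*H(b + c - l) / (H(l)*H(b + c + k - l))
    f = lambda z: (poch(z - b + max(b, l), b - l)
                   * poch(a + k + min(b, l) + 1 - z, c - l))
    return pre * prod(f(x) for x in X) / prod(f(y) for y in Y)

print(ratio(4, 3, 7, 2, 3, [2, 4, 6, 9], [4, 8]))
\end{verbatim}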
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{1.jpg}
\caption{Two regions $H_{4,3,7}^{2,4}(\{2,4,6,9\}:\{4,8\})$(left) and $H_{4,3,7}^{2,6}(\{2,4,6,9\}:\{4,8\})$(right)}
\end{figure}
To state the next results, we need to recall a result of Cohn, Larsen and Propp [8], which is a lozenge tiling interpretation of a classical result of Gelfand and Tsetlin [9]. Recall that $\Delta(S):=\prod_{s_1<s_2, s_1,s_2\in S}{(s_2-s_1)}$ and $\Delta(S,T):=\prod_{s \in S ,t \in T}{|t-s|}$ for any finite sets S and T.
\begin{prop}
For any non-negative integers $m, n$ and any subset $S=\{s_1, s_2,...,s_n\} \subset [m+n]:=\{1, 2,..., m+n\}$, let $T_{m,n}(S)$ be the region on a triangular lattice obtained from the trapezoid of side lengths $m$, $n$, $m+n$, $n$ clockwise from the top by removing the up-pointing unit-triangles whose bottom sides are labeled by the elements of the set $S=\{s_1, s_2,...,s_n\}$, where the bottom side of the trapezoid is labeled by $1, 2, ..., m+n$ from left to right. Then
\begin{equation}
M(T_{m,n}(S))=\frac{\Delta(S)}{\Delta([n])}=\frac{\Delta(S)}{H(n)}
\end{equation}
\end{prop}
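This product formula is straightforward to evaluate; a minimal sketch follows:
\begin{verbatim}
from math import factorial, prod
from itertools import combinations

# Minimal sketch of the proposition: M(T_{m,n}(S)) = Delta(S)/H(n).
def num_tilings(n, S):
    delta = prod(t - s for s, t in combinations(sorted(S), 2))
    return delta // prod(factorial(i) for i in range(n))

print(num_tilings(2, {1, 3}))  # T_{1,2}({1,3}) has 2 lozenge tilings
\end{verbatim}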
For any finite subset of integers $S=\{s_1, s_2,..., s_n\}$, whose elements are written in increasing order, let $T(S)$ be the region obtained by translating the region $T_{s_n-(s_1-1)-n,n}(s_1-(s_1-1), s_2-(s_1-1),...,s_n-(s_1-1))$ by $(s_1-1)$ units to the right, and set $s(S):=M(T(S))=\frac{\Delta(S)}{H(n)}$. A region on a triangular lattice is called \textit{balanced} if it contains the same number of up-pointing unit-triangles and down-pointing unit-triangles. Geometrically, $T(S)$ is the balanced region that can be obtained from a trapezoid of bottom length $(s_n-s_1+1)$ by deleting the up-pointing unit-triangles whose labels are $s_1$, $s_2$,..., $s_n$ on the bottom, where the bottom line is labeled by $s_1$, $(s_1+1)$,..., $(s_n-1)$, $s_n$ (see Figure 2.2).
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{2.jpg}
\caption{A region T(\{-3, 0, 1, 3, 7\})}
\end{figure}
In his paper, Ciucu [3] defined a new structure, called a \textit{fern}, which is an arbitrary string of triangles of alternating orientations that touch at corners and are lined up. For non-negative integers $a_1$,...,$a_k$, a \textit{fern} $F(a_1,...,a_k)$ is a string of \textit{k} lattice triangles lined up along a horizontal lattice line, touching at their vertices, alternately oriented up and down and having sizes $a_1$,...,$a_k$ from left to right (with the leftmost oriented up). We call the horizontal lattice line the \textit{baseline} of the fern.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{3.jpg}
\caption{Budded fern $F(2,1,-1,-1,1,-1:2,0,1,0,1)$ (top left) and its baseline representation (top right), corresponding budded bowtie (bottom left) and its baseline representation (bottom right). A red point is a turning point}
\label{fig:my_label}
\end{figure}
Now, let us give additional structure to a fern by adding \textit{buds} (triangles) on the baseline; we call this new structure a \textit{budded fern}. To label this new structure, we remove all unit-triangles from the budded fern except the unit-triangles whose horizontal sides are on the baseline. We call the result the \textit{baseline representation} of the budded fern. Then we count the numbers of consecutive up-pointing unit-triangles, down-pointing unit-triangles, and vertical unit-lozenges on the baseline. When we count these numbers, an up-pointing (or down-pointing) unit-triangle which is contained in a vertical unit-lozenge is not considered as an up-pointing (or down-pointing) unit-triangle. If an up-pointing unit-triangle and a down-pointing unit-triangle are adjacent, we regard them as having $0$ vertical lozenges between them. Now, we line up these numbers from left to right, and attach a minus sign to the numbers that represent numbers of down-pointing unit-triangles. Then, by allowing $a^k_1=0$ and $a^k_{r_k}=0$, we get a sequence of integers $a^k_1$, $w^k_1$, $a^k_2$, $w^k_2$,..., $a^k_{r_k-1}$, $w^k_{r_k-1}$, $a^k_{r_k}$, where $a^k_i$ represents a (signed) number of consecutive up-pointing (or down-pointing) unit-triangles, and $w^k_i$ represents a number of consecutive vertical unit-lozenges. Let $A_k$ denote the sequence $(a^k_1, a^k_2,..., a^k_{r_k})$ and $W_k$ the sequence $(w^k_1, w^k_2,..., w^k_{r_k-1})$. Then we denote the original budded fern by $F(A_k:W_k)$, and its baseline representation by $F_{br}(A_k:W_k)$. Let $L^k_i$ be the leftmost vertex of the $a^k_i$ consecutive triangles, and $R^k_i$ the rightmost vertex of those consecutive triangles. Also, let $I^k:=\{i\in[r_k]|a^k_i> 0\}$, $J^k:=\{i\in[r_k]|a^k_i< 0\}$, $p_k:=\sum_{i\in I^k} a^k_i$ and $n_k:=-\sum_{i\in J^k} a^k_i$.
From this budded fern, we construct a corresponding \textit{budded bowtie} as follows. From the baseline representation of the budded fern, we move up-pointing unit-triangles to the left and down-pointing unit-triangles to the right along the baseline, keeping the vertical lozenges fixed. We call the right vertex of the rightmost up-pointing unit-triangle which is not part of a vertical lozenge the \textit{turning point} of the new structure and denote it by $T^k$. Then we put vertical lozenges between consecutive up-pointing (or down-pointing) unit-triangles as much as possible. In this way we get a bowtie (possibly a slipped bowtie) with some triangles attached. We call this a \textit{budded bowtie} and denote it and its baseline representation by $B(A_k:W_k)$ and $B_{br}(A_k:W_k)$, respectively. Also, let $u_k$ be the smallest positive integer such that $p_k \leq |a^k_1|+|a^k_2|+...+|a^k_{u_k}|$, and let $v_k \in [a^k_{u_k}]$ be the positive integer such that $|a^k_1|+|a^k_2|+...+|a^k_{u_k-1}|+v_k=p_k$.
When we refer to a budded fern $F(A_k:W_k)$, we implicitly equip it with the corresponding sequences $A_k$, $W_k$, sets $I^k$, $J^k$, indices $r_k$, $p_k$, $n_k$, $u_k$, $v_k$ and vertices $L_1^k, L_2^k,..., L_{r_k}^k$, $R_1^k, R_2^k,..., R_{r_k}^k$, $T^k$.
Now, let $H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ be the region obtained from the hexagon of side lengths $a$, $b+k$, $c$, $a+k$, $b$, $c+k$ (in clockwise order from the top) by removing the budded ferns $F(A_1:W_1),..., F(A_t:W_t)$ on the $l$-th horizontal line from the bottom so that the distance between the leftmost vertex on the horizontal line and the leftmost vertex of $F(A_1:W_1)$ is $m_1$, the distance between the rightmost vertex on the horizontal line and the rightmost vertex of $F(A_t:W_t)$ is $m_{t+1}$, and the distance between two adjacent budded ferns $F(A_i:W_i)$ and $F(A_{i+1}:W_{i+1})$ is $m_{i+1}$ for all $i\in[t-1]$. We can similarly define the region $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$. In this region, we label the unit segments of the $l$-th horizontal line from the bottom by 1,2,...,$L(l)$ from left to right. Let $X^1$ be the set of labels whose corresponding segment is a side of an up-pointing unit-triangular hole, but not a side of a down-pointing unit-triangular hole. Similarly, let $X^2$ be the set of labels whose corresponding segment is a side of a down-pointing unit-triangular hole, but not a side of an up-pointing unit-triangular hole, and let $W$ be the set of labels whose corresponding segment is a side of both an up-pointing and a down-pointing unit-triangular hole.
Similarly, let $H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ be the region obtained from the hexagon of side lengths $a$, $b+k$, $c$, $a+k$, $b$, $c+k$ (in clockwise order from the top) by removing the budded bowties $B(A_1:W_1),..., B(A_t:W_t)$ from the $l$-th horizontal line, where the positions of the removed budded bowties on the horizontal line are exactly the same as the positions of the corresponding budded ferns. Again, we can consider the region $H_{a,b,c}^{k,l}(B_{br}(A_1:W_1),..., B_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ and define sets $Y^1$ and $Y^2$ from this region as we defined the sets $X^1$ and $X^2$ from $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$. Note that we have $X^1\cup X^2=Y^1\cup Y^2$.
For any point and any line on the triangular lattice, let the distance between the point and the line be the shortest length of a lattice path from the point to the extension of the line. In particular, for any lattice point $E$ in a hexagon, let $d_{NW}(E)$ be the distance between the point $E$ and the northwestern side of the hexagon. Similarly, we define $d_{SW}(E)$, $d_{NE}(E)$ and $d_{SE}(E)$ to be the distances between the point $E$ and the southwestern, northeastern and southeastern sides of the hexagon, respectively. The next theorem expresses the ratio of the numbers of lozenge tilings of the two regions as a simple product formula.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{4.jpg}
\caption{An example of regions (from left to right): $H_{12,8,15}^{3,13}(F(2,-1,1:1,1), F(-1,1,2,-1:0,1,0):3,6,2)$ and $H_{12,8,15}^{3,13}(B(2,-1,1 : 1,1), B(-1,1,2,-1 : 0,1,0) : 3, 6, 2)$}
\end{figure}
\begin{thm}
Let $a, b, c, k, l, m_1,...,m_{t+1}$ be any non-negative integers and $F(A_1:W_1)$,..., $F(A_t:W_t)$ be any budded ferns. Let $p:=\sum_{i=1}^{t}{p_i}$, $n:=\sum_{i=1}^{t}{n_i}$, $w:=\sum_{i=1}^{t}{\sum_{j=1}^{r_i-1}w_j^i}$ and $m:=\sum_{i=1}^{t+1}m_i$. Suppose the indices satisfy the following conditions: 1) $p=n+k$, 2) $p+n+w+m=L(l)$, 3) $n+w\leq \min(b,l,b+c-l)$. Then we have
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1), F(A_2:W_2),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1), B(A_2:W_2),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\\
&
\begin{aligned}
\cdot\prod_{i=1}^{t}\Bigg[&\frac{H(d_{SW}(T^i))H(d_{NW}(L^i_{u_i}))H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}{H(d_{SW}(L^i_{u_i}))H(d_{NW}(T^i))H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}\\
&\cdot\prod_{j < u_i, j \in J_i}\frac{H(d_{SW}(R^i_j))H(d_{NW}(L^i_j))H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}{H(d_{SW}(L^i_j))H(d_{NW}(R^i_j))H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))} \\
&\cdot\prod_{j \geq u_i, j \in I_i}\frac{H(d_{SW}(L^i_j))H(d_{NW}(R^i_j))H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SW}(R^i_j))H(d_{NW}(L^i_j))H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))} \Bigg]
\end{aligned}
\end{aligned}
\end{equation}
\end{thm}
Now, let us consider the case when the region $H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ is centrally symmetric, that is, invariant under $180^{\circ}$ rotation with respect to the center of the hexagon. For this, the region should satisfy the following conditions:\\
(1) $b$ and $c$ have the same parity\\
(2) $k=0$ and $l=\frac{b+c}{2}$\\
(3) $m_s=m_{t+2-s}$ for all $s\in [t+1]$ and $r_i=r_{t+1-i}$ for all $i\in [t]$\\
(4) $a^i_{j}=-a^{t+1-i}_{r_i+1-j}$ and $w^i_u=w^{t+1-i}_{r_i-u}$ for all $i\in [t]$, $j\in [r_i]$, $u\in [r_i-1]$
When these conditions hold, the region $H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ and the corresponding region $H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ are centrally symmetric, so we can compare their numbers of centrally symmetric lozenge tilings. Let $M_\odot(G)$ be the number of centrally symmetric lozenge tilings of a region $G$ on the triangular lattice. The last theorem expresses the ratio of the numbers of centrally symmetric lozenge tilings of the two regions as a simple product formula.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{5.jpg}
\caption{Centrally symmetric regions (from left to right) $H_{10,11,13}^{0,12}(F(-1,1,-2,0:1,0,1), F(0,2,-1,1:1,0,1):2,5,2)$ and $H_{10,11,13}^{0,12}(B(-1,1,-2,0:1,0,1), B(0,2,-1,1:1,0,1):2,5,2)$}
\end{figure}
\begin{thm}
Let $a,b,c, m_1,..., m_{t+1}$ be any non-negative integers and $F(A_1:W_1),..., F(A_t:W_t)$ be any budded ferns that satisfy the four conditions stated above. Let $p:=\sum_{i=1}^{t}{p_i}$, $n:=\sum_{i=1}^{t}{n_i}$, $w:=\sum_{i=1}^{t}{\sum_{j=1}^{r_i-1}w_j^i}$ and $m:=\sum_{i=1}^{t+1}m_i$. Suppose the indices satisfy the following additional conditions:\\
1) $p+n+w+m=a+b$\\
2) $p+w=n+w\leq b$. Then we have
\begin{equation}
\begin{aligned}
&\frac{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\sqrt{\frac{M(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}}\\
&=\frac{s(X^1)}{s(Y^1)}\cdot\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}\\
&\cdot\prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}\prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
\end{thm}
\section{Proof of the main results}
A region on the triangular lattice is called \textit{balanced} if it contains the same number of up-pointing and down-pointing unit-triangles. Let us recall a useful result which is implicit in the work of Ciucu [1] (see also Ciucu and Lai [7]).
\begin{lem}
(Region-splitting Lemma). Let $R$ be a balanced region on the triangular lattice. Assume that a subregion $S$ of $R$ satisfies the following two conditions:\\
(1) (Separating Condition) There is only one type of unit-triangle (either up-pointing or down-pointing) running along each side of the border between $S$ and $R-S$.\\
(2) (Balancing Condition) $S$ is balanced.
Then
\begin{equation}
M(R)=M(S)M(R-S)
\end{equation}
\end{lem}
To prove the theorems in this paper, we need to simplify expressions that involve $\Delta$. For this purpose, let us recall a property of $\Delta$:
Let $X=\{x+1,x+2,...,x+m\}$ and $Y=\{y+1,y+2,...,y+n\}$ be two sets of consecutive integers such that $x+m<y+1$. Then
\begin{equation}
\begin{aligned}
\Delta(X,Y)=\prod_{i=1}^{m}(y-x-m+i)_n&=\prod_{i=1}^{m}\frac{(y-x+n-m+i-1)!}{(y-x-m+i-1)!}\\
&=\frac{H(y-x-m)H(y-x+n)}{H(y-x)H(y-x+n-m)}
\end{aligned}
\end{equation}
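As a quick sanity check of this identity (a small worked example of our own, not from the references), take $X=\{1,2\}$ (so $x=0$, $m=2$) and $Y=\{4,5\}$ (so $y=3$, $n=2$). A direct computation gives
\begin{equation*}
\Delta(X,Y)=(4-1)(5-1)(4-2)(5-2)=72,
\end{equation*}
while the right-hand side gives $\frac{H(1)H(5)}{H(3)H(3)}=\frac{1\cdot 288}{2\cdot 2}=72$, since $H(1)=1$, $H(3)=0!\,1!\,2!=2$ and $H(5)=0!\,1!\,2!\,3!\,4!=288$.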
The crucial idea of this paper is the following:\\
Each of our three main results involves the ratio of the numbers of tilings of two regions. For each of these two regions, we partition the set of lozenge tilings according to the positions of the vertical lozenges that are bisected by the baseline. The partition classes obtained for the numerator and the denominator are naturally paired up. Then, using Proposition 2.2. and Lemma 3.1., we verify that the ratio of the numbers of tilings in the corresponding partition classes does not depend on the choice of the partition class (i.e., it is the same for all classes of the partition).\\
\textit{Proof of Theorem 2.1.} Let us first consider the case when $b < l \leq c$.\\
From any lozenge tiling of $H_{a,b,c}^{k,l}(X:Y)$, we generate a pair of lozenge tilings of two trapezoidal regions with some dents on top (or bottom). If we focus on the lozenges below the baseline, they form a pentagonal region that has $b$ down-pointing unit-triangle dents on top. Among the $b$ dents, $m$ of them come from the region $H_{a,b,c}^{k,l}(X:Y)$ itself, namely the down-pointing unit-triangles whose bases are labeled by $y_1$, $y_2$,..., $y_m$, and the remaining $(b-m)$ of them are down-pointing unit-triangles whose base labels come from $[L(l)]\setminus (X\cup Y)=[a+b+k]\setminus (X\cup Y)$. Let $Z:=\{z_1, z_2,..., z_{b-m}\} \subset [L(l)]\setminus (X\cup Y)$ be the set of labels of the bases of the remaining $(b-m)$ dents, and let $B:=\{-|b-l|+1, -|b-l|+2,..., -1, 0\}$. Then one easily sees that there is a natural bijection between the set of lozenge tilings of the pentagonal region having $b$ down-pointing unit-triangle dents on top and the set of lozenge tilings of the region $T(B \cup Z\cup Y)$. So from a lozenge tiling of $H_{a,b,c}^{k,l}(X:Y)$, we generate a lozenge tiling of the region $T(B \cup Z\cup Y)$.
\begin{figure}
\centering
\includegraphics[width=11cm]{6.jpg}
\caption{Correspondence between a lozenge tiling and a pair of trapezoid regions with dents (when $b<l\leq c$)}
\end{figure}
Now we return to the lozenge tiling of $H_{a,b,c}^{k,l}(X:Y)$ and focus on the lozenges above the baseline. Again, they form a pentagonal region that has $(b+k)$ up-pointing unit-triangle dents on the bottom. Among the $(b+k)$ dents, $(m+k)$ of them come from the region $H_{a,b,c}^{k,l}(X:Y)$ itself, namely the up-pointing unit-triangles whose bases are labeled by $x_1$, $x_2$,..., $x_{m+k}$, and the remaining $(b-m)$ of them are up-pointing unit-triangles whose labels form the set $Z$. Let $C:=\{L(l)+1,L(l)+2,...,L(l)+|c-l|\}$. The same observation allows us to see that there is a bijection between the set of lozenge tilings of the pentagonal region having $(b+k)$ up-pointing unit-triangle dents on the bottom and the set of lozenge tilings of the region $T(Z\cup X\cup C)$. Thus, we generate a lozenge tiling of the region $T(Z\cup X\cup C)$ from a lozenge tiling of $H_{a,b,c}^{k,l}(X:Y)$.
Hence, from a lozenge tiling of $H_{a,b,c}^{k,l}(X:Y)$, we generate a pair of lozenge tilings of the regions $T(B \cup Z\cup Y)$ and $T(Z\cup X\cup C)$, and this correspondence is reversible (see Figure 3.1.). Now, we partition the set of lozenge tilings of the region $H_{a,b,c}^{k,l}(X:Y)$ according to the set $Z:=\{z_1, z_2,..., z_{b-m}\} \subset [L(l)]\setminus (X\cup Y)$ of labels of the positions of the vertical lozenges on the baseline. The number of lozenge tilings of the region $H_{a,b,c}^{k,l}(X:Y)$ with $(b-m)$ vertical lozenges on the baseline whose position labels form the set $Z$ is $M(H_{a,b,c}^{k,l}(Z\cup X:Z\cup Y))$. Also, by Lemma 3.1., $M(H_{a,b,c}^{k,l}(Z\cup X:Z\cup Y))$ is just the product of the numbers of lozenge tilings of the two pentagonal regions with unit-triangular dents on top (or bottom). However, these numbers are the same as the numbers of lozenge tilings of the regions $T(B \cup Z\cup Y)$ and $T(Z\cup X\cup C)$, respectively. Thus, we have
\begin{equation}
\begin{aligned}
&M(H_{a,b,c}^{k,l}(X:Y))\\
&=\sum_{Z=\{z_1, z_2,..., z_{b-m}\} \subseteq [L(l)]\setminus (X\cup Y)}M(H_{a,b,c}^{k,l}(X\cup Z:Y\cup Z))\\
&=\sum_{Z \subseteq [L(l)]\setminus (X\cup Y)}M(T(X\cup{Z}\cup{C}))M(T(B\cup{Y}\cup{Z}))\\
&=\sum_{Z \subseteq [L(l)]\setminus (X\cup Y)}s(X\cup{Z}\cup{C})s(B\cup{Y}\cup{Z})\\
&=\sum_{Z\subseteq [L(l)]\setminus({{X}\cup{Y}})}{\frac{\Delta(X\cup{Z}\cup{C})}{H(b+c+k-l)}}\cdot{\frac{\Delta(B\cup{Y}\cup{Z})}{H(l)}}\\
&=\frac{1}{H(l)\cdot H(b+c+k-l)}\cdot\sum_{Z\subseteq[L(l)]\setminus({{X}\cup{Y}})}{\Delta(X\cup{Z}\cup{C})\cdot\Delta(B\cup{Y}\cup{Z})}
\end{aligned}
\end{equation}
A lozenge tiling of ${\overline{H}_{a,b,c}^{k,b+c-l}(x_1,x_2,...,x_{m+k}:y_1,y_2,...,y_m)}$ can be analyzed in a similar way, and we can express its number of lozenge tilings as follows:
\begin{equation}
\begin{aligned}
&M({\overline{H}_{a,b,c}^{k,b+c-l}(X:Y)})\\
&=\frac{1}{H(k+l)\cdot H(b+c-l)}\cdot\sum_{Z\subseteq[L(l)]\setminus({{X}\cup{Y}})}{\Delta(B\cup{X}\cup{Z})\cdot\Delta(Y\cup{Z}\cup{C})}
\end{aligned}
\end{equation}
where the sum is taken over all $(b-m)$-element subsets $Z\subseteq[L(l)]\setminus({{X}\cup{Y}})$.\\
However, for any $(b-m)$-element subset $Z\subseteq[L(l)]\setminus({{X}\cup{Y}})$,
\begin{equation}
\begin{aligned}
\frac{\Delta(X\cup{Z}\cup{C})\cdot\Delta(B\cup{Y}\cup{Z})}{\Delta(B\cup{X}\cup{Z})\cdot\Delta(Y\cup{Z}\cup{C})}&=\frac{\Delta(X)\Delta(Z)\Delta(C)\Delta(X,Z)\Delta(X,C)\Delta(Z,C)}{\Delta(B)\Delta(X)\Delta(Z)\Delta(B,X)\Delta(B,Z)\Delta(X,Z)}\\
&~~~\cdot\frac{\Delta(B)\Delta(Y)\Delta(Z)\Delta(B,Y)\Delta(B,Z)\Delta(Y,Z)}{\Delta(Y)\Delta(Z)\Delta(C)\Delta(Y,Z)\Delta(Y,C)\Delta(Z,C)}\\
&=\frac{\Delta(X,C)\Delta(B,Y)}{\Delta(B,X)\Delta(Y,C)}
\end{aligned}
\end{equation}
Note that this ratio does not depend on the choice of the set $Z$. Hence, by combining (3.3), (3.4) and (3.5), we have
\begin{equation}
\begin{aligned}
&\frac{M({H_{a,b,c}^{k,l}(X:Y)})}{M({\overline{H}_{a,b,c}^{k,b+c-l}(X:Y)})}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\sum_{Z\subseteq[a+b+k]\setminus({{X}\cup{Y}})}{\Delta(X\cup{Z}\cup{C})\cdot\Delta(B\cup{Y}\cup{Z})}}{\sum_{Z\subseteq[a+b+k]\setminus({{X}\cup{Y}})}{\Delta(B\cup{X}\cup{Z})\cdot\Delta(Y\cup{Z}\cup{C})}}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\Delta(X,C)\Delta(B,Y)}{\Delta(B,X)\Delta(Y,C)}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\prod_{i=1}^{m+k}{(a+b+k+1-x_i)_{(c-l)}}\cdot\prod_{j=1}^{m}{(y_j)_{(l-b)}}}{\prod_{i=1}^{m+k}{(x_i)_{(l-b)}}\cdot\prod_{j=1}^{m}{(a+b+k+1-y_j)_{(c-l)}}}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\prod_{i=1}^{m+k}(x_i+l-b)_{(b-l)}(a+b+k+1-x_i)_{(c-l)}}{\prod_{j=1}^{m}(y_j+l-b)_{(b-l)}(a+b+k+1-y_j)_{(c-l)}}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\\
&~~~\cdot\frac{\prod_{i=1}^{m+k}(x_i-b+\max(b, l))_{(b-l)}(a+k+\min(b, l)+1-x_i)_{(c-l)}}{\prod_{j=1}^{m}(y_j-b+\max(b,l))_{(b-l)}(a+k+\min(b, l)+1-y_j)_{(c-l)}}
\end{aligned}
\end{equation}
Now let us consider the case when $l \leq b$.\\
A similar observation shows that $M(H_{a,b,c}^{k,l}(X:Y))$ can be written as a sum of $M(H_{a,b,c}^{k,l}(X\cup Z:Y\cup Z))$, where $Z=\{z_1, z_2,...,z_{l-m}\}\subset[L(l)]\setminus(X\cup Y)=[a+k+l]\setminus(X\cup Y)$ represents the set of labels of the positions of the vertical lozenges on the baseline. Also, by Lemma 3.1. and the same argument as in the previous case, $M(H_{a,b,c}^{k,l}(X\cup Z:Y\cup Z))$ is equal to the product of $M(T(B\cup{X}\cup{Z}\cup{C}))$ and $M(T(Y\cup{Z}))$, where $B=\{-|b-l|+1, -|b-l|+2,..., -1, 0\}$ and $C=\{L(l)+1, L(l)+2,..., L(l)+|c-l|\}$. Hence
\begin{figure}
\centering
\includegraphics[width=11cm]{7.jpg}
\caption{Correspondence between a lozenge tiling and a pair of trapezoid regions with dents (when $l\leq b$)}
\end{figure}
\begin{equation}
M(H_{a,b,c}^{k,l}(Z\cup X:Z\cup Y))=\frac{\Delta(B\cup{X}\cup{Z}\cup{C})}{H(b+c+k-l)}\cdot\frac{\Delta(Y\cup{Z})}{H(l)}
\end{equation}
If we sum over every $(l-m)$-element set $Z\subseteq[L(l)]\setminus({{X}\cup{Y}})$, then we obtain a representation of the number of lozenge tilings of $H_{a,b,c}^{k,l}(X:Y)$ as follows:
\begin{equation}
\begin{aligned}
&M(H_{a,b,c}^{k,l}(X:Y))\\
&=\frac{1}{H(l)\cdot H(b+c+k-l)}\cdot\sum_{Z\subseteq[L(l)]\setminus({{X}\cup{Y}})}{\Delta(B\cup{X}\cup{Z}\cup{C})\cdot\Delta(Y\cup{Z})}
\end{aligned}
\end{equation}
By the same observation, we can represent the number of lozenge tilings of the region $\overline{H}_{a,b,c}^{k,b+c-l}(X:Y)$ as follows:
\begin{equation}
\begin{aligned}
&M(\overline{H}_{a,b,c}^{k,b+c-l}(X:Y))\\
&=\frac{1}{H(k+l)\cdot H(b+c-l)}\cdot\sum_{Z\subseteq[L(l)]\setminus({{X}\cup{Y}})}{\Delta(X\cup{Z})\cdot\Delta(B\cup{Y}\cup{Z}\cup{C})}
\end{aligned}
\end{equation}
Now, we examine the ratio $\frac{\Delta(B\cup{X}\cup{Z}\cup{C})\cdot\Delta(Y\cup{Z})}{\Delta(X\cup{Z})\cdot\Delta(B\cup{Y}\cup{Z}\cup{C})}$ for any $(l-m)$-element subset $Z\subseteq[L(l)]\setminus({{X}\cup{Y}})$:
\begin{equation}
\begin{aligned}
&\frac{\Delta(B\cup{X}\cup{Z}\cup{C})\cdot\Delta(Y\cup{Z})}{\Delta(X\cup{Z})\cdot\Delta(B\cup{Y}\cup{Z}\cup{C})}\\
&=\frac{\Delta(B)\Delta(X)\Delta(Z)\Delta(C)\Delta(B,X)\Delta(B,Z)\Delta(B,C)\Delta(X,Z)\Delta(X,C)\Delta(Z,C)}{\Delta(X)\Delta(Z)\Delta(X,Z)}\\
&\cdot\frac{\Delta(Y)\Delta(Z)\Delta(Y,Z)}{\Delta(B)\Delta(Y)\Delta(Z)\Delta(C)\Delta(B,Y)\Delta(B,Z)\Delta(B,C)\Delta(Y,Z)\Delta(Y,C)\Delta(Z,C)}\\
&=\frac{\Delta(B,X)\Delta(X,C)}{\Delta(B,Y)\Delta(Y,C)}\\
&=\frac{\prod_{i=1}^{m+k}{(x_i)_{(b-l)}\cdot(a+b+k+1-x_i)}_{(c-l)}}{\prod_{j=1}^{m}{(y_j)_{(b-l)}\cdot(a+b+k+1-y_j)_{(c-l)}}}
\end{aligned}
\end{equation}
Note that this ratio does not depend on the choice of the set $Z$. Hence, by combining (3.8), (3.9) and (3.10), we have
\begin{equation}
\begin{aligned}
&\frac{M({H_{a,b,c}^{k,l}(X:Y)})}{M({\overline{H}_{a,b,c}^{k,b+c-l}(X:Y)})}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\sum_{Z\subseteq[a+k+l]\setminus({{X}\cup{Y}})}{\Delta(B\cup{X}\cup{Z}\cup{C})\cdot\Delta(Y\cup{Z})}}{\sum_{Z\subseteq[a+k+l]\setminus({{X}\cup{Y}})}{\Delta(X\cup{Z})\cdot\Delta(B\cup{Y}\cup{Z}\cup{C})}}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\cdot\frac{\prod_{i=1}^{m+k}{(x_i)_{(b-l)}\cdot(a+b+k+1-x_i)}_{(c-l)}}{\prod_{j=1}^{m}{(y_j)_{(b-l)}\cdot(a+b+k+1-y_j)_{(c-l)}}}\\
&=\frac{H(k+l)H(b+c-l)}{H(l)H(b+c+k-l)}\\
&~~~\cdot\frac{\prod_{i=1}^{m+k}{(x_i-b+\max(b, l))_{(b-l)}\cdot(a+k+\max(b, l)+1-x_i)}_{(c-l)}}{\prod_{j=1}^{m}{(y_j-b+\max(b, l))_{(b-l)}\cdot(a+k+\max(b, l)+1-y_j)_{(c-l)}}}
\end{aligned}
\end{equation}
The case when $c < l \leq b+c$ can be proved similarly to the case when $l \leq b$. Hence, the theorem is proved. $\square$ \\
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{8.jpg}
\caption{}
\end{figure}
\textit{Proof of Theorem 2.3}. Again, let us consider the case when $b < l \leq c$ first. If we compare the two regions $H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, ..., m_{t+1})$ and $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1,..., m_{t+1})$, they differ by equilateral triangles with a zig-zag horizontal boundary. However, in any lozenge tiling of the region $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$, those parts are forced to be tiled by vertical lozenges (see Figure 3.3). Hence, the two regions have the same number of lozenge tilings. Similarly, the two regions $H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1,..., m_{t+1})$ and $H_{a,b,c}^{k,l}(B_{br}(A_1:W_1),..., B_{br}(A_t:W_t) : m_1,..., m_{t+1})$ have the same number of lozenge tilings. Thus we have
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1,..., m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1,..., m_{t+1}))}\\
&=\frac{M(H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1,..., m_{t+1}))}{M(H_{a,b,c}^{k,l}(B_{br}(A_1:W_1),..., B_{br}(A_t:W_t) : m_1,..., m_{t+1}))}
\end{aligned}
\end{equation}
For $i\in[t]$, $j\in [r_i]$, let $X^i_j=\{d_{NW}(L^i_j)+1, d_{NW}(L^i_j)+2,..., d_{NW}(R^i_j) (=d_{NW}(L^i_j)+a^i_j)\}$, $V_i=\{d_{NW}(L^i_{u_i})+1, d_{NW}(L^i_{u_i})+2,..., d_{NW}(T^i) (=d_{NW}(L^i_{u_i})+v_i)\}$ and $\overline{V_i}=X^i_{u_i}\setminus V_i=\{d_{NW}(T^i)+1,..., d_{NW}(R^i_{u_i})\}$. Then $X^1=\cup_{i=1}^{t}\cup_{j\in I_i}X^i_j$, $X^2=\cup_{i=1}^{t}\cup_{j\in J_i}X^i_j$, $Y^1=\cup_{i=1}^{t}((\cup_{j=1}^{u_i-1}X^i_j)\cup V_i)$ and $Y^2=\cup_{i=1}^{t}(\overline{V_i}\cup (\cup_{j=u_i+1}^{r_i}X^i_j))$.
By the same observation as in Theorem 2.1., the lozenge tilings of the hexagonal region $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ can be partitioned according to the $(b-n-w)$ vertical unit-lozenges that are bisected by the $l$-th horizontal line. Let $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1} : z_1,..., z_{b-n-w})$ be the region obtained from $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ by removing the $(b-n-w)$ unit-lozenges that are bisected by the segments on the $l$-th horizontal line whose labels are the elements of a set $Z=\{z_1,z_2,...,z_{b-n-w}\}$. Then, by the same argument as in the proof of Theorem 2.1., we have
\begin{equation}
\begin{aligned}
&M(H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1,..., m_{t+1} : z_1,..., z_{b-n-w}))\\
&=s(Z\cup{X^1}\cup W\cup C)\cdot s(B\cup Z\cup{X^2}\cup W)\\
&=\frac{\Delta(Z\cup{X^1}\cup W\cup C)}{H(b+c+k-l)}\cdot\frac{\Delta(B\cup Z\cup{X^2}\cup W)}{H(l)}
\end{aligned}
\end{equation}
where $B=\{-|b-l|+1,..., -1, 0\}$ and $C=\{L(l)+1, L(l)+2,..., L(l)+|c-l|\}$.
If we sum over every $(b-n-w)$-element set $Z\subset[L(l)]\setminus(X^1\cup X^2\cup W)$, then we obtain a representation of the number of lozenge tilings of $H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1,..., m_{t+1})$ as follows:
\begin{equation}
\begin{aligned}
&M(H_{a,b,c}^{k,l}(F_{br}(A_1:W_1),..., F_{br}(A_t:W_t) : m_1,..., m_{t+1}))\\
&=\sum_{Z}\frac{\Delta(Z\cup{X^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{X^2}\cup W)}{H(l)\cdot H(b+c+k-l)}\\
&=\frac{\sum_{Z}\Delta(Z\cup{X^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{X^2}\cup W)}{H(l)\cdot H(b+c+k-l)}\\
\end{aligned}
\end{equation}
Similarly, the number of lozenge tilings of
$H_{a,b,c}^{k,l}(B_{br}(A_1:W_1),..., B_{br}(A_t:W_t) : m_1,..., m_{t+1})$ can be expressed as follows:
\begin{equation}
\begin{aligned}
&M(H_{a,b,c}^{k,l}(B_{br}(A_1:W_1),..., B_{br}(A_t:W_t) : m_1,..., m_{t+1}))\\
&=\sum_{Z}\frac{\Delta(Z\cup{Y^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{Y^2}\cup W)}{H(l)\cdot H(b+c+k-l)}\\
&=\frac{\sum_{Z}\Delta(Z\cup{Y^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{Y^2}\cup W)}{H(l)\cdot H(b+c+k-l)}\\
\end{aligned}
\end{equation}
Now, let us examine the ratio $\frac{\Delta(Z\cup{X^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{X^2}\cup W)}{\Delta(Z\cup{Y^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{Y^2}\cup W)}$ for any set $Z\subset[L(l)]\setminus(X^1\cup X^2\cup W)$ with $(b-n-w)$ elements:
\begin{equation}
\begin{aligned}
&\frac{\Delta(Z\cup{X^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{X^2}\cup W)}{\Delta(Z\cup{Y^1}\cup W\cup C)\cdot\Delta(B\cup Z\cup{Y^2}\cup W)}\\
&=\frac{\Delta(Z)\Delta(X^1)\Delta(W)\Delta(C)\Delta(Z,X^1)\Delta(Z,W)\Delta(Z,C)\Delta(X^1,W)\Delta(X^1,C)\Delta(W,C)}{\Delta(Z)\Delta(Y^1)\Delta(W)\Delta(C)\Delta(Z,Y^1)\Delta(Z,W)\Delta(Z,C)\Delta(Y^1,W)\Delta(Y^1,C)\Delta(W,C)}\\
&\cdot\frac{\Delta(B)\Delta(Z)\Delta(X^2)\Delta(W)\Delta(B,Z)\Delta(B,X^2)\Delta(B,W)\Delta(Z,X^2)\Delta(Z,W)\Delta(X^2,W)}{\Delta(B)\Delta(Z)\Delta(Y^2)\Delta(W)\Delta(B,Z)\Delta(B,Y^2)\Delta(B,W)\Delta(Z,Y^2)\Delta(Z,W)\Delta(Y^2,W)}\\
&=\frac{\Delta(X^1)\Delta(X^2)\Delta(X^1,C)\Delta(B,X^2)}{\Delta(Y^1)\Delta(Y^2)\Delta(Y^1,C)\Delta(B,Y^2)}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\cdot\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}
\end{aligned}
\end{equation}
In the above simplification, we used the fact that $X^1 \cup X^2=Y^1 \cup Y^2$, which implies $\Delta(Z,X^1)\Delta(Z,X^2)=\Delta(Z,Y^1)\Delta(Z,Y^2)$ and $\Delta(X^1,W)\Delta(X^2,W)=\Delta(Y^1,W)\Delta(Y^2,W)$. Note that the result does not depend on the choice of the set $Z$. Hence, by (3.14), (3.15), (3.16) and (3.17), we have
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\cdot\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}
\end{aligned}
\end{equation}
Since $X^1=\cup_{i=1}^{t}\cup_{j\in I_i}X^i_j$ and $Y^1=\cup_{i=1}^{t}((\cup_{j=1}^{u_i-1}X^i_j)\cup V_i)$,
\begin{equation}
\begin{aligned}
\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}&=\frac{\prod_{i=1}^{t}\prod_{j\in I_i}\Delta(X^i_j, C)}{\prod_{i=1}^{t}((\prod_{j=1}^{u_i-1}\Delta(X^i_j, C))\cdot\Delta(V_i, C))}\\
&=\prod_{i=1}^{t}\Bigg[\frac{1}{\Delta(V_i, C)}\prod_{j<u_i, j\in J_i}\frac{1}{\Delta(X^i_j, C)}\prod_{j\geq u_i, j\in I_i}\Delta(X^i_j, C)\Bigg]
\end{aligned}
\end{equation}
However, by (3.2), we have
\begin{equation}
\begin{aligned}
\Delta(V_i, C)&=\frac{H(L(l)-d_{NW}(L^i_{u_i})-v_i)H(L(l)-d_{NW}(L^i_{u_i})+|c-l|)}{H(L(l)-d_{NW}(L^i_{u_i}))H(L(l)-d_{NW}(L^i_{u_i})+|c-l|-v_i)}\\
&=\frac{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
\Delta(X^i_j, C)&=\frac{H(L(l)-d_{NW}(L^i_j)-a^i_j)H(L(l)-d_{NW}(L^i_j)+|c-l|)}{H(L(l)-d_{NW}(L^i_j))H(L(l)-d_{NW}(L^i_j)+|c-l|-a^i_j)}\\
&=\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}
\end{aligned}
\end{equation}
Hence, by (3.18), (3.19) and (3.20), we have
\begin{equation}
\begin{aligned}
\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}=&\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}\\
&\cdot\prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}\prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
Also, since $X^2=\cup_{i=1}^{t}\cup_{j\in J_i}X^i_j$ and $Y^2=\cup_{i=1}^{t}(\overline{V_i} \cup (\cup_{j=u_i+1}^{r_i}X^i_j))$,
\begin{equation}
\begin{aligned}
\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}&=\frac{\prod_{i=1}^{t}\prod_{j\in J_i}\Delta(B, X^i_j)}{\prod_{i=1}^{t}(\Delta(B, \overline{V_i})\cdot\prod_{j=u_i+1}^{r_i}\Delta(B, X^i_j))}\\
&=\prod_{i=1}^{t}\Bigg[\frac{\Delta(B, X^i_{u_i})}{\Delta(B, \overline{V_i})}\prod_{j<u_i, j\in J_i}\Delta(B, X^i_j)\prod_{j\geq u_i, j\in I_i}\frac{1}{\Delta(B, X^i_j)}\Bigg]\\
&=\prod_{i=1}^{t}\Bigg[\Delta(B, V_i)\prod_{j<u_i, j\in J_i}\Delta(B, X^i_j)\prod_{j\geq u_i, j\in I_i}\frac{1}{\Delta(B, X^i_j)}\Bigg]
\end{aligned}
\end{equation}
Again, by (3.2), we have
\begin{equation}
\begin{aligned}
\Delta(B, V_i)&=\frac{H(d_{NW}(L^i_{u_i}))H(d_{NW}(L^i_{u_i})+|l-b|+v_i)}{H(d_{NW}(L^i_{u_i})+|l-b|)H(d_{NW}(L^i_{u_i})+v_i)}\\
&=\frac{H(d_{NW}(L^i_{u_i}))H(d_{SW}(T^i))}{H(d_{NW}(T^i))H(d_{SW}(L^i_{u_i}))}
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
\Delta(B, X^i_j)&=\frac{H(d_{NW}(L^i_j))H(d_{NW}(L^i_j)+|l-b|+a^i_j)}{H(d_{NW}(L^i_j)+|l-b|)H(d_{NW}(L^i_j)+a^i_j)}\\
&=\frac{H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}{H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}
\end{aligned}
\end{equation}
Hence, by (3.22), (3.23) and (3.24), we have
\begin{equation}
\begin{aligned}
\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}=&\prod_{i=1}^{t}\Bigg[\frac{H(d_{NW}(L^i_{u_i}))H(d_{SW}(T^i))}{H(d_{NW}(T^i))H(d_{SW}(L^i_{u_i}))}\\
&\cdot\prod_{j<u_i, j\in J_i}\frac{H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}{H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}\prod_{j\geq u_i, j\in I_i}\frac{H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}{H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
Thus, by (3.17), (3.21) and (3.25),
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\\
&\cdot\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))H(d_{NW}(L^i_{u_i}))H(d_{SW}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))H(d_{NW}(T^i))H(d_{SW}(L^i_{u_i}))}\\
&\cdot \prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}\\
&\cdot \prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
Now, let us consider the case when $l \leq b$.\\
For $i\in[t]$, $j\in [r_i]$, let $X^i_j=\{d_{SW}(L^i_j)+1, d_{SW}(L^i_j)+2,..., d_{SW}(R^i_j) (=d_{SW}(L^i_j)+a^i_j)\}$, $V_i=\{d_{SW}(L^i_{u_i})+1, d_{SW}(L^i_{u_i})+2,..., d_{SW}(T^i) (=d_{SW}(L^i_{u_i})+v_i)\}$ and $\overline{V_i}=X^i_{u_i}\setminus V_i=\{d_{SW}(T^i)+1,..., d_{SW}(R^i_{u_i})\}$. Then $X^1=\cup_{i=1}^{t}\cup_{j\in I_i}X^i_j$, $X^2=\cup_{i=1}^{t}\cup_{j\in J_i}X^i_j$, $Y^1=\cup_{i=1}^{t}((\cup_{j=1}^{u_i-1}X^i_j)\cup V_i)$ and $Y^2=\cup_{i=1}^{t}(\overline{V_i}\cup (\cup_{j=u_i+1}^{r_i}X^i_j))$.\\
By the same argument, the ratio can be expressed as follows:
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\cdot\frac{\Delta(B,X^1)}{\Delta(B,Y^1)}
\end{aligned}
\end{equation}
where $B=\{-|b-l|+1,..., -1, 0\}$ and $C=\{L(l)+1, L(l)+2,..., L(l)+|c-l|\}$.
However, we know that
\begin{equation}
\begin{aligned}
\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}&=\frac{\prod_{i=1}^{t}\prod_{j\in I_i}\Delta(X^i_j, C)}{\prod_{i=1}^{t}((\prod_{j=1}^{u_i-1}\Delta(X^i_j, C))\cdot\Delta(V_i, C))}\\
&=\prod_{i=1}^{t}\Bigg[\frac{1}{\Delta(V_i, C)}\prod_{j<u_i, j\in J_i}\frac{1}{\Delta(X^i_j, C)}\prod_{j\geq u_i, j\in I_i}\Delta(X^i_j, C)\Bigg]\\
\end{aligned}
\end{equation}
Also, we have
\begin{equation}
\begin{aligned}
\frac{\Delta(B,X^1)}{\Delta(B,Y^1)}&=\frac{\prod_{i=1}^{t}\prod_{j\in I_i}\Delta(B, X^i_j)}{\prod_{i=1}^{t}((\prod_{j=1}^{u_i-1}\Delta(B, X^i_j))\cdot\Delta(B, V_i))}\\
&=\prod_{i=1}^{t}\Bigg[\frac{1}{\Delta(B, V_i)}\prod_{j<u_i, j\in J_i}\Delta(B, X^i_j)\prod_{j\geq u_i, j\in I_i}\frac{1}{\Delta(B, X^i_j)}\Bigg]\\
\end{aligned}
\end{equation}
However, by (3.2), we have
\begin{equation}
\begin{aligned}
\Delta(B, X^i_j)&=\frac{H(d_{SW}(L^i_j))H(d_{SW}(L^i_j)+|b-l|+a^i_j)}{H(d_{SW}(L^i_j)+|b-l|)H(d_{SW}(L^i_j)+a^i_j)}\\
&=\frac{H(d_{SW}(L^i_j))H(d_{NW}(R^i_j))}{H(d_{SW}(R^i_j))H(d_{NW}(L^i_j))}\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\Delta(X^i_j, C)&=\frac{H(L(l)-d_{SW}(L^i_j)-a^i_j)H(L(l)-d_{SW}(L^i_j)+|c-l|)}{H(L(l)-d_{SW}(L^i_j))H(L(l)-d_{SW}(L^i_j)+|c-l|-a^i_j)}\\
&=\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}\\
\end{aligned}
\end{equation}
and similarly
\begin{equation}
\begin{aligned}
\Delta(B, V_i)&=\frac{H(d_{SW}(L^i_{u_i}))H(d_{NW}(T^i))}{H(d_{SW}(T^i))H(d_{NW}(L^i_{u_i}))}, \quad \Delta(V_i, C)=\frac{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}
\end{aligned}
\end{equation}
Thus, by (3.27)-(3.32), we have
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{k,l}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{k,l}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\\
&\cdot\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))H(d_{NW}(L^i_{u_i}))H(d_{SW}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))H(d_{NW}(T^i))H(d_{SW}(L^i_{u_i}))}\\
&\cdot \prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}\\
&\cdot \prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))H(d_{NW}(R^i_j))H(d_{SW}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))H(d_{NW}(L^i_j))H(d_{SW}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
The case when $c < l$ can be proved similarly to the case when $l \leq b$. Hence, the theorem is proved. $\square$\\
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{9.jpg}
\caption{Centrally symmetric lozenge tiling of the region $H_{10,11,13}^{0,12}(F(-1,1,-2,0:1,0,1), F(0,2,-1,1:1,0,1) : 2, 5, 2)$}
\end{figure}
\textit{Proof of Theorem 2.4}.
We use the same notation as in the proof of Theorem 2.3. As in the previous proofs, we label the baseline by $1, 2,..., L(\frac{b+c}{2})$ from left to right.
Note that in this case, the sets $X^1$, $X^2$, $Y^1$, $Y^2$ and $W$ satisfy $X^2=\{L(\frac{b+c}{2})+1-x|x\in X^1\}$, $Y^2=\{L(\frac{b+c}{2})+1-y|y\in Y^1\}$ and $W=\{L(\frac{b+c}{2})+1-w|w\in W\}$ because the region is centrally symmetric. The crucial observation is that \textbf{a centrally symmetric lozenge tiling of the region is uniquely determined by the lozenges below (or above) the horizontal line (see Figure 3.4)}.
Hence, by combining this observation with the same argument that we used in the proofs of the previous theorems, we have
\begin{equation}
\begin{aligned}
&M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))\\
&=\sum_{Z}\frac{\Delta(Z\cup X^1 \cup W \cup C)}{H(\frac{b+c}{2})}\\
&=\frac{\sum_{Z}\Delta(Z\cup X^1\cup W \cup C)}{H(\frac{b+c}{2})}
\end{aligned}
\end{equation}
where the sum is taken over all $(b-n-w)$-element sets $Z\subset[L(\frac{b+c}{2})]\setminus(X^1\cup X^2\cup W)$ that satisfy $Z=\{L(\frac{b+c}{2})+1-z|z\in Z\}$.
Similarly, the number of centrally symmetric lozenge tilings of the region $H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1})$ can be written as follows:
\begin{equation}
\begin{aligned}
&M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))\\
&=\sum_{Z}\frac{\Delta(Z\cup Y^1\cup W \cup C)}{H(\frac{b+c}{2})}\\
&=\frac{\sum_{Z}\Delta(Z\cup Y^1\cup W \cup C)}{H(\frac{b+c}{2})}
\end{aligned}
\end{equation}
Again, the sum is taken over all $(b-n-w)$-element sets $Z\subset[L(\frac{b+c}{2})]\setminus(X^1\cup X^2\cup W)$ that satisfy $Z=\{L(\frac{b+c}{2})+1-z|z\in Z\}$.
For such $Z$, we have
\begin{equation}
\begin{aligned}
\Delta(Z, X^2)&=\prod_{z\in Z, x_2\in X^2}|z-x_2|\\
&=\prod_{z\in Z, x_1\in X^1}|(L(\frac{b+c}{2})+1-z)-(L(\frac{b+c}{2})+1-x_1)|\\
&=\prod_{z\in Z, x_1\in X^1}|x_1-z|\\
&=\Delta(Z, X^1)
\end{aligned}
\end{equation}
Similarly, we also have $\Delta(Z, Y^2)=\Delta(Z, Y^1)$.
Hence we have
\begin{equation}
\begin{aligned}
\Delta(Z, X^1)=\sqrt{\Delta(Z, X^1)\Delta(Z, X^2)}&=\sqrt{\Delta(Z, X^1\cup X^2)}\\
&=\sqrt{\Delta(Z, Y^1\cup Y^2)}\\
&=\Delta(Z, Y^1)
\end{aligned}
\end{equation}
By the same reasoning, we have $\Delta(X^1, W)=\Delta(Y^1, W)$.
Now, we examine the ratio $\frac{\Delta(Z\cup X^1\cup W \cup C)}{\Delta(Z\cup Y^1\cup W \cup C)}$ for any such set $Z$:
\begin{equation}
\begin{aligned}
&\frac{\Delta(Z\cup X^1\cup W \cup C)}{\Delta(Z\cup Y^1\cup W \cup C)}\\
&=\frac{\Delta(Z)\Delta(X^1)\Delta(W)\Delta(C)}{\Delta(Z)\Delta(Y^1)\Delta(W)\Delta(C)}\\
&\cdot\frac{\Delta(Z,X^1)\Delta(Z,W)\Delta(Z,C)\Delta(X^1,W)\Delta(X^1,C)\Delta(W,C)}{\Delta(Z,Y^1)\Delta(Z,W)\Delta(Z,C)\Delta(Y^1,W)\Delta(Y^1,C)\Delta(W,C)}\\
&=\frac{s(X^1)}{s(Y^1)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}
\end{aligned}
\end{equation}
Since this ratio does not depend on the choice of the set $Z$, by (3.35), (3.36) and (3.40), we have
\begin{equation}
\begin{aligned}
&\frac{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)}{s(Y^1)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}
\end{aligned}
\end{equation}
However, as we have seen in the proof of Theorem 2.3.,
\begin{equation}
\begin{aligned}
&\frac{M(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\cdot\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}
\end{aligned}
\end{equation}
Since our region is centrally symmetric (and $p=n$, because $k=0$), we have
\begin{equation}
\begin{aligned}
s(X^1)&=\frac{1}{H(p)}\prod_{x<y, x,y\in X^1}(y-x)\\
&=\frac{1}{H(n)}\prod_{x<y, x,y\in X^1}((L(\frac{b+c}{2})+1-x)-(L(\frac{b+c}{2})+1-y))\\
&=\frac{1}{H(n)}\prod_{y'<x', x',y'\in X^2}(x'-y')\\
&=s(X^2)
\end{aligned}
\end{equation}
Similarly, $s(Y^1)=s(Y^2)$, $\Delta(B,X^2)=\Delta(X^1,C)$ and $\Delta(B,Y^2)=\Delta(Y^1,C)$.
Hence we have
\begin{equation}
\begin{aligned}
&\frac{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\frac{s(X^1)}{s(Y^1)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\\
&=\sqrt{\frac{s(X^1)s(X^2)}{s(Y^1)s(Y^2)}\cdot\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}\cdot\frac{\Delta(B,X^2)}{\Delta(B,Y^2)}}\\
&=\sqrt{\frac{M(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}}
\end{aligned}
\end{equation}
Also, by (3.21),
\begin{equation}
\begin{aligned}
\frac{\Delta(X^1,C)}{\Delta(Y^1,C)}=&\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}\\
&\cdot\prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}\prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
Hence, by (3.43) and (3.44), we have
\begin{equation}
\begin{aligned}
&\frac{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M_\odot(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}\\
&=\sqrt{\frac{M(H_{a,b,c}^{0,\frac{b+c}{2}}(F(A_1:W_1),..., F(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}{M(H_{a,b,c}^{0,\frac{b+c}{2}}(B(A_1:W_1),..., B(A_t:W_t) : m_1, m_2,..., m_t, m_{t+1}))}}\\
&=\frac{s(X^1)}{s(Y^1)}\cdot\prod_{i=1}^{t}\Bigg[\frac{H(d_{SE}(L^i_{u_i}))H(d_{NE}(T^i))}{H(d_{SE}(T^i))H(d_{NE}(L^i_{u_i}))}\\
&\cdot\prod_{j<u_i, j\in J_i}\frac{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}\prod_{j\geq u_i, j\in I_i}\frac{H(d_{SE}(R^i_j))H(d_{NE}(L^i_j))}{H(d_{SE}(L^i_j))H(d_{NE}(R^i_j))}\Bigg]
\end{aligned}
\end{equation}
\section{Acknowledgement}
The author would like to thank his advisor, Professor Mihai Ciucu, for his encouragement and useful discussions. The geometric interpretation of the terms in the formulas, which unifies the results, is due to him. The author also thanks Jeff Taylor for installing software and for frequent helpful assistance.
\section{Introduction}\label{sec:introduction}
\subsection{Motivation}
In recent years, Software-Defined Networking (SDN) has received great attention from both the research community and the industry. For example,
Google has already implemented an SDN architecture, Google B4, on its data centers \cite{jain2013b4}.
By separating the network control and the data plane, SDN overcomes several limitations of the traditional networks such as static configuration, non-scalability and low efficiency.
Due to the logical centralization of the control plane and the programmability of the configuration for the data plane, SDN provides a global view of network resources that enhances the performance and flexibility of the underlying networks.
\begin{figure}[!t]
\centering
\includegraphics[width=5in]{figure1.png}
\caption{The overview of attacks on SDN architecture}
\label{fig:attacks}
\end{figure}
However, applying SDN to networks also introduces new security issues, which have been stressed in recent years among SDN researchers \cite{pisharody2017brew, zhang2017secure, chung2013nice, bonola2015streamon}.
The overview of attacks on the SDN architecture is shown in Fig. \ref{fig:attacks}.
Most of these security issues exist in the application plane and the control plane.
On one hand, without authentication, applications may inject malicious configurations into network devices at will, which could reduce network availability and reliability and even lead to a network breakdown. Note that application flows are network configurations sent by applications and are managed by controllers, which install network configurations into switches. Loss of traceability and accountability of application flows may cause trouble for network debugging.
In SDN, tracing and auditing application flows and network states can help monitor and replay network states or debug a broken-down network, and network behaviour information can also be applied to recognize attack patterns \cite{Kim2004A}.
On the other hand, the control plane also exposes network-wide resources to all applications, which opens a door for malicious applications.
Moreover, adopting a single controller may result in a single-point failure which can become an attractive target for DoS attacks.
Finally, the lack of an authenticated controller-switch communication channel could lead to more severe security threats when the configuration-complex TLS protocol (Transport Layer Security, TLS) is not adopted. Adversaries can launch man-in-the-middle and eavesdropping attacks by intercepting all packets between controllers and switches \cite{benton2013openflow, cui2016fingerprinting}.
In addition, malicious switches can also launch spoofing attacks by faking identities (e.g., IP addresses) which may lead to DoS/DDoS attacks \cite{wu2016low, merlo2014denial}.
In order to address the above security problems, many proposals have been made in recent years \cite{Tantar2014Cognition,zaalouk2014orchsec,porras2012security,hinrichs2008expressing,son2013model,khurshid2012veriflow,handigol2012debugger,ballard2010extensible,
tootoonchian2010hyperflow,koponen2010onix,phemius2014disco,monaco2013applying,ferguson2013participatory,
matsumoto2014fleet,Wen2013Towards,Nayak2009Resonance,Peng2009Improved,Santos2014Decentralizing}.
Each of these proposals targets a specific security issue, but to the best of our knowledge, no attempts have been made to address all the common security issues simultaneously with a monolithic architecture. Specifically, existing works that provide application flow authentication or flow security constraints extend an individual secure module on a single controller, rather than providing a monolithic secure module in a multi-controller environment. Additionally, existing role-based access control schemes on network-wide resources are not fine-grained enough. Moreover, the optional TLS protocol and other cryptography-based authentication protocols presented in \cite{Peng2009Improved,Santos2014Decentralizing} require multiple interactions (also called multiple passes) to build a controller-switch communication channel. In a network with a physically decentralized control plane, simply combining those existing schemes fails to effectively solve all the common security issues simultaneously, because all secure modules must work seamlessly among multiple controllers.
\subsection{Contributions}
In the paper, we present a Blockchain-based monolithic secure mechanism to effectively address multiple common security issues in SDNs.
In particular, the paper makes the following contributions.
$\bullet$ Our mechanism decentralizes the control plane into multiple controllers while maintaining consensus among all controllers on network-wide resources.
$\bullet$ To overcome the weakness of lacking traceability and accountability of application flows, all flows and network behaviours are recorded on Blockchain so that the network states can be easily replayed for auditing and debugging.
$\bullet$ By assembling a lightweight and practical Attribute-Based Encryption (ABE) scheme, the access permissions of each application on network resources are defined and enforced to avoid resource abuse.
$\bullet$ The effective authentication protocol HMQV (one-pass) is combined with Blockchain to protect the communication channel between the controller and switch against active attacks.
\section{Related Work}
A variety of security measures directed against various security threats among different planes of the SDN architecture have been proposed. In terms of the application plane, FRESCO\cite{Shin2013FRESCO} is a security development framework compatible with OpenFlow for SDN applications. Cognition\cite{Tantar2014Cognition} was proposed to enforce the security of applications by defining cognitive functions. OrchSec\cite{zaalouk2014orchsec}, an architecture considering the advantages of network visibility and centralized control provided by SDN, was introduced to develop security applications. FortNOX\cite{porras2012security} extended the NOX controller to provide security constraints on flow rules and a role-based authentication scheme for SDN applications. FSL\cite{hinrichs2008expressing} presented a security authentication framework for flow-based network policies. Similarly, Son {et al}\onedot{ } proposed Flover\cite{son2013model} and Khurshid {et al}\onedot{ } presented VeriFlow\cite{khurshid2012veriflow} to verify dynamic flow policies. With the requirements to audit and track network processes, Handigol {et al}\onedot{ }\cite{handigol2012debugger} studied a network event debugger enabling network managers to track the root cause of a network bug. OpenSAFE\cite{ballard2010extensible} was proposed to support security auditing and Flow Examination to analyze network traffic and filter network packets. On the control plane, several frameworks with a decentralized control plane for OpenFlow were presented. HyperFlow\cite{tootoonchian2010hyperflow} was built on a distributed file system to realize network event distribution among multiple controllers. Onix\cite{koponen2010onix} implemented a physically distributed but logically centralized control platform to avoid threats brought by a single controller. Other SDN control frameworks such as ONOS\cite{krishnaswamy2013onos}, DISCO\cite{phemius2014disco}, yanc \cite{monaco2013applying}, PANE\cite{ferguson2013participatory}, and Fleet\cite{matsumoto2014fleet} also support distributed network logic. In order to secure network-wide resources, some security schemes \cite{Wen2013Towards, Nayak2009Resonance} were devoted to providing access control mechanisms to protect resources from unconcerned SDN applications. As for the controller-switch channel, Transport Layer Security (TLS) was adopted in the OpenFlow specification. However, it became optional due to the insufferable drawbacks of TLS. Apart from authenticated controller-switch communication, there were excellent security measures and systems, such as FRESCO \cite{DBLP2013FRESCO}, FloodGuard \cite{WangXG15FloodGuard}, AVANT-GUARD \cite{ShinYPG13AVANT-GUARD}, FLOWGUARD \cite{Hu2014FLOWGUARD}, SE-Floodlight\cite{floodlightcontroller}, SoftFirewall\cite{koerner2014oftables}, CPRecovery \cite{suh2014building} and so on \cite{ShinXHG16}.
\section{Organization}
The rest of the paper is organized as follows. In Section \ref{sec:preliminaries}, we provide a quick overview of Blockchain, Attribute-Based Encryption, and the HOMQV protocol. Then, we describe
our security requirements on OpenFlow/SDN in Section \ref{sec:secRequire}. In Section \ref{sec:design}, we present the design of the Blockchain-based monolithic module. We analyze the security of the construction in Section \ref{sec:analyze}, and in Section \ref{sec:implement} we present a prototype implementation of our mechanism. Lastly, we conclude the paper in Section \ref{sec:conclusion}.
\section{Preliminaries}\label{sec:preliminaries}
In this section, we give a brief introduction about Blockchain, Attribute-based encryption and HOMQV protocol.
\textbf{Blockchain} originated from Bitcoin and has become an emerging technology as a decentralized, shared, immutable database \cite{nakamoto2008bitcoin, croman2016scaling, kogias2016enhancing, aitzhan2016security, swan2015blockchain, cota2017racoon++}.
Data in a Blockchain is stored in blocks which are maintained as a chain.
Each block of the Blockchain contains a timestamp and a reference, i.e., the hash, of the previous block.
Blockchain is maintained in a peer-to-peer network.
The majority of Blockchain network nodes run a consensus protocol to achieve an agreement to generate a new block.
Meanwhile, the data in the new block, also called \emph{transactions}, are confirmed by the consensus protocol.
Consensus protocols in the Blockchain setting can be implemented by several different agreement methods, such as POW-based (Proof of Work), BFT-based (Byzantine fault-tolerant) and POS-based (Proof of Stake).
We call them Blockchain protocols.
In the paper, we focus on the BFT-based Blockchain protocol \cite{duan2014hbft}.
This kind of protocol promises instant consensus\cite{lamport1982byzantine}.
In the meantime, Blockchains based on this kind of protocol balance scalability and performance well, where scalability means the number of participants and performance includes throughput and latency.
Vukolic {et al}\onedot{ } \cite{vukolic2015quest} showed that BFT-based Blockchains behave excellently in network performance and guarantee instant consensus.
It was also shown that BFT-based Blockchains possess the ability to support a large number of clients.
By applying BFT-based Blockchains to our module, controllers in SDN play the role of clients of the Blockchain, which demonstrates excellent network scalability of controllers.
On the other hand, there are two ways to write data on the Blockchain: \emph{transactions} and \emph{smart contracts}.
A smart contract is a program that automatically executes the operations pre-defined in the contract and outputs values as evidence that can be verified on the Blockchain.
A smart contract usually provides an external interactive interface, and the interaction can be verified cryptographically, so that the smart contract is executed in strict accordance with the predefined logic.
In our context, we build security protocols as smart contracts that are automatically executed when the predefined conditions are triggered on the Blockchain.
Owing to the immutably recorded transactions and results of executed protocols, the Blockchain allows the network to trace any record linked to a specific time point.
Eventually, the Blockchain endows the network-wide data with valuable features such as reliability, non-repudiation, traceability and auditability, which match our security requirements.
Thus, this paper focuses on BFT-based Blockchains adapted to our security goals to construct a Blockchain-based secure module which strengthens the security of SDNs.
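To make the chaining concrete, the following minimal sketch (illustrative Python with hypothetical field names; not our actual implementation) shows how each block carries a timestamp and the hash of its predecessor, so that tampering with an early block invalidates every later reference:
\begin{verbatim}
import hashlib
import json
import time

def block_hash(block):
    # Hash the block contents via a deterministic JSON serialization.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev_hash, transactions):
    # A block holds a timestamp, the previous hash and the recorded data.
    return {"timestamp": time.time(),
            "prev_hash": prev_hash,
            "transactions": transactions}

genesis = new_block("0" * 64, [])
chain = [genesis,
         new_block(block_hash(genesis),
                   [{"controller": "c1", "event": "flow_install"}])]

# Chain integrity: each block must reference the hash of the previous one.
assert chain[1]["prev_hash"] == block_hash(chain[0])
\end{verbatim}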
\textbf{Attribute-based encryption} (ABE) enables fine-grained access control for encrypted data. In an ABE system, encrypted resources are labeled with a set of descriptive attributes, while a specific access structure is associated with the private key of an access user. This determines which encrypted resources can be decrypted by the access user: those whose attribute sets satisfy the access structure of the private key\cite{goyal2006attribute,bethencourt2007ciphertext, yao2015lightweight, guo2014cp, liu2015traceable, jung2015control}.
We employ a lightweight ABE scheme, which possesses execution efficiency and low communication costs\cite{yao2015lightweight}, to meet our security and efficiency requirements for access control. As for the framework of ABE, the following four algorithms are introduced; they will be used as black boxes (an interface sketch is given after the list).
\begin{itemize}
\item $(PK,MK)$${\sf =Setup}(\kappa)$: Taking input of the security parameter $\kappa$, this algorithm outputs the public parameters $PK$ and a master key $MK$.
\item $E$${\sf=Encryption}(m, attr, PK)$: This is a randomized algorithm that takes as input a message $m$, a set of attributes $attr$, and the public parameters $PK$. It outputs the ciphertext $E$.
\item $D$${\sf =KeyGeneration}(\textbf{A}, MK, PK)$: This is a randomized algorithm that takes as input an access structure $\textbf{A}$, the master key $MK$ and the public parameters $PK$. It outputs a decryption key $D$.
\item $M$${\sf =Decryption}(E,D)$: Taking as input the ciphertext $E$ that was encrypted under the set $attr$ of attributes, the decryption key $D$ for access control structure $\textbf{A}$ and the public parameters $PK$, it outputs the message $M$ if $attr \in \textbf{A}$.
\end{itemize}
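The sketch below fixes the intended call flow of these four black-box algorithms (an abstract interface only; a concrete scheme requires pairing-based cryptography, and all names here are hypothetical):
\begin{verbatim}
from abc import ABC, abstractmethod

class ABEScheme(ABC):
    # The four ABE algorithms, used as black boxes in the text.
    @abstractmethod
    def setup(self, kappa): ...                      # -> (PK, MK)
    @abstractmethod
    def encrypt(self, m, attrs, PK): ...             # -> ciphertext E
    @abstractmethod
    def keygen(self, access_structure, MK, PK): ...  # -> key D
    @abstractmethod
    def decrypt(self, E, D): ...                     # -> m, if satisfied

# Intended call flow for access control on network-wide resources:
#   PK, MK = scheme.setup(128)
#   E = scheme.encrypt(resource, {"role:fw", "scope:slice1"}, PK)
#   D = scheme.keygen(policy, MK, PK)   # issued per application
#   resource = scheme.decrypt(E, D)     # succeeds only if policy is met
\end{verbatim}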
\textbf{One-pass HMQV protocol (HOMQV)} is a high-performance securely authenticated protocol which combines security, flexibility and efficiency \cite{krawczyk2005hmqv}. Its security has been proved in \cite{halevi2011one}.
Specifically, it uses a cyclic group $G$ of prime order $q$ generated by a given generator $g$. In the initial step, there are two communication parties Alice ($ID_{Alice}$) and Bob ($ID_{Bob}$) with the long-term keys $A=g^a$ and $B=g^b$, respectively. Before the key exchange, Bob first checks the key $A$ sent by Alice, verifying that $A\in G'$ (if not, he aborts). Then Bob randomly chooses $y\in_{R} Z_q$, computes $Y=g^y$ and sends it to Alice. Bob also computes a session key $H(\sigma,ID_{Alice},ID_{Bob},Y)$ where $\sigma=A^{(y+eb)}$ and $e=H'(Y,ID_{Alice})$. When receiving $Y$ and $ID_{Bob}$, Alice checks that $Y$ and Bob's public key are in $G'$ (if not, she aborts) and then computes the session key $H(\sigma',ID_{Alice},ID_{Bob},Y)$ where $\sigma'=(YB^e)^a$. Finally, Alice and Bob share the same session key because $\sigma=\sigma'$, and the triple $(ID_{Alice},ID_{Bob},Y)$ is regarded as the session id.
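The key derivation above can be checked with a short script. The sketch below (toy group parameters for illustration only; a deployment would use a standardized large prime-order group) verifies that both parties compute the same key, since $\sigma=A^{y+eb}=g^{a(y+eb)}=(YB^e)^a=\sigma'$:
\begin{verbatim}
import hashlib
import secrets

# Toy parameters (hypothetical, NOT secure): safe prime p = 2q + 1,
# with g generating the order-q subgroup of squares mod p.
q = 1019
p = 2 * q + 1   # 2039
g = 4           # 2^2, a generator of the order-q subgroup

def h_to_zq(*parts):
    # Sketch of H': hash byte strings into Z_q.
    digest = hashlib.sha256(b"|".join(parts)).digest()
    return int.from_bytes(digest, "big") % q

def session_key(sigma, id_a, id_b, Y):
    # Session key H(sigma, ID_Alice, ID_Bob, Y).
    data = b"|".join([str(sigma).encode(), id_a, id_b, str(Y).encode()])
    return hashlib.sha256(data).hexdigest()

id_alice, id_bob = b"Alice", b"Bob"
a = secrets.randbelow(q - 1) + 1; A = pow(g, a, p)  # Alice: A = g^a
b = secrets.randbelow(q - 1) + 1; B = pow(g, b, p)  # Bob:   B = g^b

# Bob (sender, one pass): ephemeral y, send Y = g^y, derive the key.
y = secrets.randbelow(q - 1) + 1
Y = pow(g, y, p)
e = h_to_zq(str(Y).encode(), id_alice)
sigma = pow(A, (y + e * b) % q, p)                  # A^(y + e*b)
key_bob = session_key(sigma, id_alice, id_bob, Y)

# Alice (receiver): derive the same key from Y and Bob's public key.
e = h_to_zq(str(Y).encode(), id_alice)
sigma_prime = pow(Y * pow(B, e, p) % p, a, p)       # (Y * B^e)^a
key_alice = session_key(sigma_prime, id_alice, id_bob, Y)

assert key_bob == key_alice   # sigma = g^(a(y + e*b)) = sigma'
\end{verbatim}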
However, when applying the basic HOMQV protocol to controller-switch communication, controllers as receivers fail to resist replay attacks, since the protocol has only one pass. In addition, the basic protocol needs a certificate authority to update the long-term keys of the two parties. Fortunately, these two security issues can be overcome by using the Blockchain, which enables a security-strengthened protocol for authenticated controller-switch communication.
\section{Security Requirements}\label{sec:secRequire}
In this section, we list security requirements which should be achieved.
\textbf{Application flows authentication.} The high programmability and configurability of network devices in OpenFlow/SDN force us to pay more attention to the security of the application plane.
Applications (e.g., traffic engineering) provide a variety of management services for the network by configuring application flows, but they also introduce new threats to the network.
For example, a malicious or compromised application may inject malicious application flows into network devices, thereby leading to dramatic consequences.
Therefore, authenticating application flows from legitimate applications and verifying the authenticity of application flows are significant for the configurable OpenFlow/SDN.
\textbf{Application flows tracing and accounting.} Traceability and accountability of application flows can assist operators to troubleshoot the network once a network device breaks down or suffers from abnormal network behaviours.
On the other hand, in flow arbitration scenarios, a flow arbitration system with the duty to arbitrate conflicting flows needs to identify which flows are sent by which applications\cite{porras2015securing}, so traceability and accountability of application flows are urgently needed.
\textbf{Network behaviours auditing.} An audit system provides periodic auditing of network behaviours (network events associated with the resulting network states), which helps to enforce the stability and strengthen the security of an OpenFlow/SDN-based network.
By analyzing the auditing results, which link network events with the respective network states at a given time, operators can adjust the network management and enhance network performance in the future.
Furthermore, relying on the audit system, attack pattern recognition can also be supported to resist future network attacks.
\textbf{Secure access control on network-wide resources.} Network-wide resources on the control plane face potential threats because they are exposed to all applications.
For instance, Hartman {et al}\onedot{ } [35] presented a kind of network security application serving firewalls or intrusion detection that can access the network resources of the firewall.
A malicious application may abuse the resources by utilizing an instance of this kind of application to bypass the firewall.
Consequently, it is necessary to construct a secure access control mechanism which is customized to applications according to their categories and the network scope they are supposed to contribute to.
\textbf{Decentralization of control plane.} A single controller is not feasible.
Obviously, a single point of failure may occur, and scalability is limited because a single controller must bear the load of network flows from various applications while managing a large number of devices.
A distributed control plane, by contrast, improves the flexibility and resilience of the network, e.g., each controller is responsible for a network slice with a certain number of devices.
In the meantime, a distributed control plane must still sustain the logically centralized network view, which is one of the key features of SDN.
Distributed controllers include Onix\cite{koponen2010onix}, HyperFlow\cite{belter2014programmable}, HP VAN SDN[10], ONOS\cite{krishnaswamy2013onos}, DISCO\cite{phemius2014disco}, yanc\cite{monaco2013applying}, PANE\cite{ferguson2013participatory}, and Fleet\cite{matsumoto2014fleet}, but few of them can maintain a consistent view of the global network resources.
\textbf{Controller-switch communication authentication.}
The communication channel between controllers and switches suffers from active attacks in SDN networks, such as man-in-the-middle attacks, replay attacks and spoofing attacks.
The original OpenFlow specification defined TLS\cite{dierks2008transport} for the controller-switch communication.
However, later versions made TLS optional due to its high configuration complexity and communication cost\cite{wasserman2013security,son2013model}. This leaves room for security threats such as malicious flow insertion and flow modification when a controller installs a flow on a switch\cite{benton2013openflow}.
Thus, a lightweight authentication protocol for any device connecting to a controller is indispensable.
\section{Concrete Design}\label{sec:design}
\begin{figure}[!t]
\centering
\includegraphics[width=5in]{figure2.png}
\caption{The new architecture of SDN with the appended Blockchain layer}
\label{fig:overview}
\end{figure}
In this section, we give a concrete design of the Blockchain-based monolithic
secure mechanism, as shown in Fig. \ref{fig:overview}, and introduce it from four aspects: Blockchain layer, entities building, transactions building and protocols building.
Additionally, we denote the cryptographic primitives, an asymmetric encryption algorithm and a digital signature algorithm, by $\textsf{AE}$ and $\textsf{DS}$, respectively.
The $\textsf{AE}$ algorithm is specified by $\textsf{(KeyGen, Enc, Dec)}$, the key generation, encryption and decryption algorithms, and the $\textsf{DS}$ algorithm by $\textsf{(KeyGen, Sig, Ver)}$, the key generation, signing and verification algorithms. A public/private key pair in these algorithms is denoted by $(PK, SK)$.
\subsection{Blockchain layer}
Blockchain is used as a packaged, underlying component.
It provides resource-recording and resource-sharing functionalities among the multiple controllers on the control plane.
Resource-recording means that the Blockchain records the network resources of each controller; resource-sharing means that all recorded resources (mainly network events) are shared among all controllers, thereby maintaining the same network view.
We utilize an existing, stable Blockchain platform to meet our requirements rather than building a new Blockchain.
The original consensus protocol of the applied Blockchain is left unchanged, which guarantees the reliability of our new network architecture.
The reason is that many Blockchain-based applications, in order to obtain new functionalities, create a new Blockchain product (e.g., by using a variant consensus protocol), which introduces potential threats such as chain forks and the resulting disastrous losses.
Thus, the mechanism adopts a well-examined, stable Blockchain as the underlying layer of the Blockchain layer.
The Blockchain records all application flows and all network events together with the respective network states; these data are represented as raw transactions.
In addition to transactions, we build smart contracts to implement security protocols that further satisfy the security requirements (e.g., alerting the failure of a controller in time). The timestamping and trustworthiness of the Blockchain ensure the real-time reliability of all recorded application flows and of the time series of network-wide views at any point during operation.
On the other hand, the multiple controllers, regarded as clients participating in the underlying Blockchain, undertake to record network data (from the application plane and from the device plane) as raw transactions on the Blockchain.
The controllers' motivation to manage the network, to an extent, keeps the underlying Blockchain live.
Meanwhile, all applications are obliged to provide network flows (or policies) through the control plane for the network devices (e.g., OpenFlow/SDN switches) on the data plane.
OpenFlow/SDN switches also participate by sending messages (or events) to controllers or providing their resources to controllers.
For example, a switch queries its registered controller when it receives an incoming packet that it fails to forward. These motivations of the participating entities keep the underlying Blockchain meaningfully employed.
However, the choice of consensus protocol for the trustworthy underlying Blockchain must be considered carefully.
We desire confirmed transactions (network resources) that are never reverted, i.e., the Blockchain should exhibit no forks, or forks should occur only with negligible probability,
and we must maintain consistency of those transactions (network resources) among all controllers.
We expect the underlying Blockchain layer to reach consensus with negligible latency and without temporary forks.
These two requirements imply the property of consensus finality proposed by Vukolic\cite{vukolic2015quest}.
Consensus finality states that once a valid block has been appended to the Blockchain at some point in time, it is never abandoned from the blockchain. \cite{vukolic2015quest} also claimed and proved that any BFT-based Blockchain satisfies consensus finality while supporting excellent network performance and thousands of clients, which is significant for applying it to SDN.
\cite{vukolic2016eventually} indicated that practical systems (e.g., the Ripple network \cite{schwartz2014ripple} or OpenBlockchain \cite{OpenBlockchain}) implement the transformation from the eventually consistent PoW consensus to the instantly consistent BFT consensus.
Moreover, the number of transactions processed per second also matters.
It reflects the throughput the Blockchain can sustain, which determines the network performance of our module built on top of it (e.g., the throughput of the control plane on network events, which are network-wide resources).
Fortunately, BFT-based Blockchain protocols enjoy excellent throughput\cite{vukolic2015quest}.
Therefore, Blockchains based on BFT protocols, such as the Ripple network \cite{schwartz2014ripple}, are adopted to implement the Blockchain layer in the new SDN architecture.
\subsection{Entities building}
In our context, we define entities who are actively participating in SDN including applications (i.e., $APP$), controllers (i.e., $CON$) and switches (i.e., $SWITCH$).
Formally, we give expressions to describe the entities in the form of multi-tuples as follows:
\begin{small}
\begin{align*}
APP & =(\textrm{$ID_{app}$}, \textrm{$PK_{app}$}, \textrm{$SK_{app}$}, \textrm{$category$}) \\
CON & =(\textrm{$ID_{contr}$}, \textrm{$PK_{contr}$}, \textrm{$SK_{contr}$}, \textrm{$Slice$})\\
SWITCH & =(\textrm{$ID_{switch}$}, \textrm{$PK_{switch}$}, \textrm{$SK_{switch}$}, \textrm{$Slice$})
\end{align*}
\end{small}Specifically, a unique identifier $ID$ represents the identity of an entity.
An application can use its unique application package name as its identity $ID_{app}$.
Controllers and switches use their unique IP addresses or Media Access Control (MAC) addresses as $ID_{contr}$ and $ID_{switch}$, respectively.
A switch solves a preset cryptographic puzzle to validate the IP address it uses to register with the network; a minimal sketch is given below.
This limits a switch's ability to claim IP addresses in the network, which resists spoofing attacks.
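We do not fix the concrete puzzle; a hashcash-style instance such as the following Java sketch would suffice, where the switch must find a nonce whose hash together with the claimed IP address has a prescribed number of leading zero bits (the difficulty and encoding are illustrative assumptions):
\begin{verbatim}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical instance of the address-registration puzzle: find a nonce
// such that SHA-256(IP || nonce) starts with d zero bits.
public class AddressPuzzle {
    static boolean hasLeadingZeroBits(byte[] h, int d) {
        for (int i = 0; i < d; i++)
            if ((h[i / 8] >> (7 - i % 8) & 1) != 0) return false;
        return true;
    }
    public static void main(String[] args) throws Exception {
        String ip = "10.0.0.7";                 // claimed IP address (example)
        int difficulty = 16;                    // leading zero bits required
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (long nonce = 0; ; nonce++) {
            byte[] h = md.digest((ip + "|" + nonce).getBytes(StandardCharsets.UTF_8));
            if (hasLeadingZeroBits(h, difficulty)) {
                System.out.println("register " + ip + " with proof nonce " + nonce);
                break;
            }
        }
    }
}
\end{verbatim}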
The tuple $Slice$ represents the network slice that a controller or a switch belongs to (the SDN network is composed of several network slices).
Additionally, we employ an asymmetric key generation algorithm, $(\textrm{$PK$}, \textrm{$SK$}) \leftarrow\textsf{KeyGen}$, to create a public/private key pair for an entity. Following the autonomous key-selection mechanism common in the Blockchain context, each entity generates its keys as it wishes via the $\textsf{KeyGen}$ algorithm.
The tuple $category$ in the application expression is defined according to the use cases of most SDN applications.
We also define application flows originating from application entities and network events created by switch entities. The entities building flows and creating events are responsible for generating the respective identities.
\begin{small}
\begin{align*}
flow & =(\textrm{$ID_{flow}$}, \textrm{$content$}, \textrm{$PK_{app}$}, \textrm{$ID_{contr}$},\textrm{$ID_{switch}$}) \\
event & =(\textrm{$ID_{event}$}, \textrm{$event$}, \textrm{$PK_{switch}$}, \textrm{$ID_{contr}$}, \textrm{$ID_{switch}$})
\end{align*}
\end{small}The defined tuples for flows and events also indicate where they come from and where they are destined. When a flow is sent by an App, the App attaches its signature \textrm{$Sign_{flow}$} over the flow, computed with its private key:
\begin{small}
\begin{align*}
\textrm{$Sign_{flow}$} &= \textrm{$\textsf{DS.Sig}$$(SK_{app},ID_{flow}||ID_{contr}||content)$}
\end{align*}
\end{small}and the same process is applied to an event.
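For illustration, $Sign_{flow}$ and its verification can be instantiated, e.g., with ECDSA from the Java Cryptography Architecture; the identifiers and flow content below are placeholders:
\begin{verbatim}
import java.nio.charset.StandardCharsets;
import java.security.*;

// Sketch of Sign_flow = DS.Sig(SK_app, ID_flow || ID_contr || content),
// using ECDSA as a stand-in for the abstract DS primitive.
public class FlowSigning {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair appKeys = kpg.generateKeyPair();   // (PK_app, SK_app)

        byte[] msg = ("flow-42" + "||" + "10.0.0.1" + "||" + "drop tcp:23")
                .getBytes(StandardCharsets.UTF_8);

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(appKeys.getPrivate());
        signer.update(msg);
        byte[] signFlow = signer.sign();           // attached to the flow

        // Controller side: DS.Ver(PK_app, Sign_flow)
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(appKeys.getPublic());
        verifier.update(msg);
        System.out.println("flow accepted: " + verifier.verify(signFlow));
    }
}
\end{verbatim}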
\subsection{Transactions building}From the moment SDN entities join the network and throughout their activity, their entire histories are built into the metadata of transactions on the Blockchain.
The data fall into three classes: transactions of registered entities, transactions of application flows and transactions of network events.
Transactions of registered entities not only attest to the existence of entities in SDN but also record the connection relationships among them.
At first, each controller managing the SDN provides the registration information described in its entity tuple.
This information is then turned into transactions $T_{contr}$,
\begin{small}
\begin{align*}
T_{contr} &=(\textrm{$ID_{T_{contr}}$}, \textrm{$ID_{contr}$}, \textrm{$PK_{contr}$}, \textrm{$Slice$}, \textrm{$\textsf{DS.Sig}$$(SK_{contr},$}\\
&\textrm{$ID_{contr}||Slice)$})
\end{align*}
\end{small} in which the binding of $ID_{contr}$ and $Slice$ can be checked via $\textsf{DS.Ver}$$(PK_{contr}, \textsf{DS.Sig}$$(SK_{contr}, ID_{contr}||Slice))$, confirming that $ID_{contr}$ in network slice $Slice$ is linked with $PK_{contr}$ and has been registered on the Blockchain layer.
An application connects with a controller to provide network flows for the switches attached to that controller. Thus, the entity information of the application and the relationship stating that the application connects with the controller are recorded as $T_{app}$ and $T_{app-contr}$.
\begin{small}
\begin{align*}
T_{app} &=(\textrm{$ID_{T_{app}}$}, \textrm{$ID_{app}$}, \textrm{$PK_{app}$}, \textrm{$category$}, \textrm{$ID_{contr}$}, \textrm{$Sign_{app}$})\\
\textrm{$Sign_{app}$} &= \textrm{$\textsf{DS.Sig}$$(SK_{app}$}, \textrm{$ID_{app}||category||ID_{contr})$}\\
T_{app-contr} &=(\textrm{$ID_{T_{app-contr}}$}, \textrm{$ID_{T_{app}}$}, \textrm{$ID_{T_{contr}}$})
\end{align*}
\end{small}From transaction $T_{app}$, only the legitimate application owning the private key matching the recorded public key can produce a signature that passes $\textsf{DS.Ver}$$(PK_{app}$, $\textsf{DS.Sig}$$(SK_{app},ID_{app}||category||ID_{contr}))$ and is therefore accepted by the controller.
Additionally, transactions $T_{switch}$ and $T_{contr-switch}$ are generated once the controller-switch communication is built.
\begin{small}
\begin{align*}
&T_{switch}=(\textrm{$ID_{T_{switch}}$}, \textrm{$ID_{switch}$}, \textrm{$PK_{switch}$}, \textrm{$Slice$},
\textrm{$ID_{contr}$}, \textrm{$Com$})\\
&T_{contr-switch}=(\textrm{$ID_{T_{contr-switch}}$}, \textrm{$ID_{T_{contr}}$}, \textrm{$ID_{T_{switch}}$})
\end{align*}
\end{small}Note that a controller and a switch build an authenticated communication channel using the HOMQV protocol, which has two security problems.
First, the original HOMQV protocol fails to resist replay attacks; second, its sender needs a third party to update its long-term key
(the long-term key is the key a switch uses to construct a session key with a targeted controller).
We emphasize that both security issues can be overcome with the help of the Blockchain.
The transaction $T_{switch}$ contains the information ($ID_{switch}$ and $PK_{switch}$) of a switch that launches a communication request to a targeted controller following the HOMQV protocol.
Suppose a compromised switch launches a replay attack against a controller by repeatedly sending its information.
By auditing $T_{switch}$ on the Blockchain, the controller can refuse a replayed request whenever the replayed identity ($ID_{switch}$ and $PK_{switch}$) is already included in a $T_{switch}$.
This works because the Blockchain serves as a timestamping database that stores the communication histories of the two protocol parties.
On the other hand, in the transaction $T_{switch}$, the tuple $Com$
\begin{small}
\begin{align*}
Com= \textrm{$\textsf{AE.Enc}(PK_{contr}, nonce, \textsf{DS.Sig}(SK_{switch}, nonce))$}
\end{align*}
\end{small} is a commitment encrypted under the public key of the controller this switch connects with, which helps the controller confirm the identity of the switch after its long-term key has been updated.
The commitment includes a nonce that the switch selects when the first connection is built.
Specifically, when the same but key-updated switch reconnects with the controller, the controller audits the transaction $T_{switch}$ on the Blockchain.
It extracts the encrypted tuple from $T_{switch}$ and issues an identity-verification challenge to the switch. The switch answers with the nonce, and the controller verifies that it matches the nonce recovered from the logged connection transaction. If it does, the verification succeeds and the controller proceeds to share a new session key with the switch's updated key.
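The following Java sketch shows one possible instantiation of $Com$ and of the controller-side check, with RSA-OAEP standing in for $\textsf{AE}$ and ECDSA for $\textsf{DS}$ (both stand-ins are our illustrative assumptions):
\begin{verbatim}
import javax.crypto.Cipher;
import java.security.*;

// Sketch of Com = AE.Enc(PK_contr, nonce || DS.Sig(SK_switch, nonce))
// and of the later challenge check by the controller.
public class CommitmentSketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(2048);
        KeyPair contr = rsa.generateKeyPair();            // (PK_contr, SK_contr)
        KeyPairGenerator ec = KeyPairGenerator.getInstance("EC");
        ec.initialize(256);
        KeyPair sw = ec.generateKeyPair();                // (PK_switch, SK_switch)

        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);

        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(sw.getPrivate()); s.update(nonce);
        byte[] sig = s.sign();

        byte[] plain = new byte[nonce.length + sig.length];
        System.arraycopy(nonce, 0, plain, 0, nonce.length);
        System.arraycopy(sig, 0, plain, nonce.length, sig.length);

        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, contr.getPublic());
        byte[] com = enc.doFinal(plain);                  // stored inside T_switch

        // Later: the controller decrypts Com and checks nonce and signature.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, contr.getPrivate());
        byte[] rec = dec.doFinal(com);
        Signature v = Signature.getInstance("SHA256withECDSA");
        v.initVerify(sw.getPublic());
        v.update(rec, 0, 16);                             // recovered nonce
        byte[] recSig = java.util.Arrays.copyOfRange(rec, 16, rec.length);
        System.out.println("challenge passes: " + v.verify(recSig));
    }
}
\end{verbatim}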
A transaction of an application flow, represented by $T_{flow-afore}$ and $T_{flow-after}$, includes the identity of the flow, the flow content, the identifier of the originating application, the identifier of the targeted controller and a signature by the originating application with its private key. $T_{flow-afore}$ and $T_{flow-after}$ together witness that an application flow is injected into the network by a specific application, passes through a related controller and is ultimately installed on a specific switch. $T_{flow-afore}$ records the leg from the application to a controller, and $T_{flow-after}$ records the leg from the controller to a specific device.
\begin{small}
\begin{align*}
T_{flow-afore} &=(\textrm{$ID_{T_{flow-afore}}$}, \textrm{$ID_{flow}$}, \textrm{$ID_{contr}$}, \textrm{$PK_{app}$},
\textrm{$content$},\\
&\textrm{$Sign_{flow}$})\\
\textrm{$Sign_{flow}$} &= \textrm{$\textsf{DS.Sig}$$(SK_{app},ID_{flow}||ID_{contr}||content)$}\\
T_{flow-after} &=(\textrm{$ID_{T_{flow-after}}$}, \textrm{$ID_{flow}$}, \textrm{$ID_{contr}$}, \textrm{$ID_{switch}$}, \textrm{$state$})\\
T_{flow} &=(\textrm{$ID_{T_{flow}}$}, \textrm{$ID_{T_{flow-afore}}$}, \textrm{$ID_{T_{flow-after}}$})
\end{align*}
\end{small}Before generating $T_{flow-afore}$, controllers need to authenticate all application flows. A malicious flow is filtered out because controllers audit the application creating the flow and verify its identity against the transaction $T_{app}$. The transaction $T_{flow-after}$ is recorded once the flow has been installed into the flow table of a switch; it contains the respective resource states $state$ sent by the switch, whose description is omitted here because the process resembles the generation of events by switches described next.
Eventually, the last class of transactions, transactions of network events, records the network events provided by switches once the authenticated communication links have been built. The transactions $T_{event}$ mainly capture the dynamic states triggered by the respective application events (included in application flows) as well as \texttt{Packet\_in} messages.
\begin{small}
\begin{align*}
T_{event} &=(\textrm{$ID_{T_{event}}$}, \textrm{$ID_{event}$}, \textrm{$PK_{switch}$}, \textrm{$ID_{contr}$}, \textrm{$ID_{switch}$}, \textrm{$event$})
\end{align*}
\end{small}Note that an authenticated communication link enables the two parties to share a session key, and any message between them is protected with this session key. However, the messages (network events) included in a generated transaction $T_{event}$ on the Blockchain are stored unencrypted. These messages are considered authenticated because they come from a trustworthy controller-switch channel following the HOMQV protocol.
\subsection{Protocols building}\label{subsec:protocolBuilding}
Relying on the timestamped network records on the Blockchain, we define the protocols necessary to enforce security. In the protocols, a lightweight and efficient ABE scheme and the HOMQV authentication protocol are employed. With the help of the defined protocols, our security goals are achieved. We explain how to realize the security goals based on the Blockchain and how to combine the Blockchain with the ABE scheme and the HOMQV protocol.
\textbf{Security enhancement on the control plane} We define protocols of detection and authentication for newly arriving application flows based on the existing transactions on the Blockchain. First, \texttt{AuthFlowProtocol} implements the authentication of application flows. In the protocol, a controller first verifies, via $IsRightFlow( )$, the identity of the application creating the flow based on the transactions $T_{app}$, $T_{app-contr}$, $T_{contr}$ and $T_{contr-switch}$. Then, if this step is valid, it verifies the flow content with the application's public key via $verifyFlow(PK_{app})$.
\begin{algorithm}[h]
\caption{\texttt{AuthFlowProtocol}: Check whether the flow is related to a registered application and controller using the records $T_{app}$, $T_{app-contr}$, $T_{contr}$ and $T_{contr-switch}$ on the Blockchain. If the relationship records exist, the protocol continues and uses $PK_{app}$ to verify the signature. If the signature verifies, the flow was signed by the application and its content has not been modified.}
\LinesNumbered
\textbf{procedure}\\
\textbf{call} \texttt{FlowReplyDetectionProtocol}\;
\If{ $IsRightFlow()$ $\equiv$ true}
{
\If{ $verifyFlow(PK_{app})$ $\equiv$ true}
{
$generateT_{flow-afore}(flow)$\;
}
\textbf{end if}\;
}
\textbf{end if}\;
\textbf{end procedure}
\end{algorithm}
Similarly, \texttt{FlowReplyDetectionProtocol} detects replayed flows by auditing the identifiers of flows against the transactions $T_{flow-afore}$. It is called by \texttt{AuthFlowProtocol} as a sub-protocol before the logic that verifies the identity of a flow's application. When the flow is installed on a switch by some controller and the associated network states are recorded, the transaction $T_{flow-after}$ is generated.
\begin{algorithm}
\caption{\texttt{FlowReplyDetectionProtocol}: Check $ID_{flow}$ and determine whether it already exists. If it exists, the protocol uses $PK_{app}$ to retrieve $T_{app}$; if $T_{app}$ exists, the protocol decreases the reputation value of the application as punishment and returns false. If it is a new flow, return true.}
\LinesNumbered
\textbf{procedure}\\
\If{ $checkID_{flow}()$ $\equiv$ true}
{
\If{ $getT_{app}(PK_{app})$ $\equiv$ true}
{
$reduceReputation(T_{app}.ID_{app})$ \;
\textbf{return} false\;
}
\textbf{end if}\;
}
\textbf{end if}\;
\textbf{return} true\;
\textbf{end procedure}
\end{algorithm}
Following the same mechanism, the protocol \texttt{AuditNetworkProtocol} provides traceability of network behaviours by auditing the transactions $T_{flow-afore}$, $T_{flow-after}$ and $T_{event}$.
In particular, it traces an application flow after it has been injected into the network and traces the cause of a network event.
This works because the logged network data (transactions) not only record the network flows sent among SDN entities but also document all network behaviour that has arisen.
If the running network suffers attacks, the attack processes are likewise logged as transactions.
Moreover, with these logged transactions of attack trajectories, future attacks launched on the network can be recognized, i.e., attack pattern recognition.
The functionality of the protocol is configured flexibly by network managers according to their troubleshooting needs.
To support notifications for applications after a flow-arbitration process finishes, we define the protocol \texttt{FlowArbitrationLossNotifyProtocol}.
Through this protocol, an application whose flow loses arbitration receives a notification.
It audits the network records that show which one of the conflicting flows targeting the same switch is adopted. Specifically, it checks the latest records in which a flow has been written as transactions $T_{flow-after}$ and $T_{flow-afore}$. After the arbitration of conflicting flows finishes, the controller sends a notification to the application that generated the rejected flow.
\begin{algorithm}
\caption{\texttt{FlowArbitrationLossNotifyProtocol}: Send a notification to an application which is out of arbitration.}
\LinesNumbered
\textbf{procedure}\\
$getNewBlock( )$\;
$ID_{flow}$ $\leftarrow$ $auditT_{flow-after}( )$\;
$ID_{T_{flow-afore}}$ $\leftarrow$ $auditT_{flow-afore}(ID_{flow})$\;
$PK_{app}$ $\leftarrow$ $auditT_{flow-afore}(ID_{T_{flow-afore}})$\;
$ID_{app}$ $\leftarrow$ $auditT_{app}(PK_{app})$\;
$NotifySDN\_APP(ID_{app})$\;
\textbf{end procedure}
\end{algorithm}
To ensure a stable real-time response for switches, the controllers they link with must stay active. Thus, \texttt{ControllerFailedNotifyProtocol} is defined to notify a switch when its directly connected controller has failed. It rests on the assumption that if some controller no longer participates or goes off-line, the transactions in the latest several blocks will not show any network behaviour of that controller. That is, by auditing the transactions of the latest several blocks, a failed controller can be identified with reasonable confidence. Based on this idea, we define a protocol that executes automatically once a controller shows no liveness on the Blockchain for a period of time. The protocol periodically checks whether all controllers are active by reading the records of network behaviours whenever the latest block is created.
\begin{algorithm}
\caption{\texttt{ControllerFailedNotifyProtocol}: It is responsible for sending a notification to the switches connected to a controller when that controller has failed.}
\LinesNumbered
\textbf{procedure}\\
$(T_{event}, T_{flow-after}) \leftarrow getLastSixBlocks( )$\;
\If{ $auditAliveOfController$($T_{event}, T_{flow-after}$) $\equiv$ null}
{
\textbf{continue}\;
}
\textbf{end if}\;
\If{ $auditAliveOfController$($T_{event}, T_{flow-after}$) $\not\equiv$ null}
{
$[ID_{T_{contr}}] \leftarrow getFailedControllers( )$\;
\For{ $ID_{T_{contr}}$ \textbf{in} $[ID_{T_{contr}}]$}
{
$[ID_{T_{switch}}]$ $\leftarrow$ $auditT_{contr-switch}(ID_{T_{contr}})$\;
$[ID_{switch}] \leftarrow getSwitches([ID_{T_{switch}}])$\;
\For{ $ID_{switch}$ \textbf{in} $[ID_{switch}]$}
{
$NotifySwitch(ID_{switch})$\;
}
\textbf{end for}\;
}
\textbf{end for}\;
}
\textbf{end if}\;
\textbf{end procedure}
\end{algorithm}
To stress it once again, we adopt multiple controllers on the control plane while maintaining a consistent network view of resources by using the Blockchain as a shared resource channel. Since all controllers record all network events and collect network resources from the devices connected to them, the network-wide resources become public whenever the underlying Blockchain announces a new block.
Note that the latest transactions are deemed valid and accepted consistently under the consensus mechanism of the underlying Blockchain, specifically the aforementioned BFT-based Blockchain. Therefore, through the creation of reliable new blocks following the underlying consensus protocol, all controllers reach consensus on the whole network view.
\begin{figure}[!t]
\centering
\includegraphics[width=5in]{figure3.png}
\caption{The access control on the network-wide topology resources}
\label{fig:access}
\end{figure}
\textbf{Secure access control on network-wide resources}
\begin{algorithm}
\caption{\texttt{AccessControlProtocol}: It provides attribute-based access control on network-wide resources for SDN Apps.}
\LinesNumbered
\textbf{procedure}\\
${\sf (PK,MK)}$ $\leftarrow$ \textbf{ABE}.${\sf Setup}$()\;
$getLatestTransactions$( )\;
$[T_{app}, T_{app-contr}]$ $\leftarrow$ $getTappAndTapp\_contr$( )\;
\For{ $T_{app}$ \textbf{in} $[T_{app}]$}
{
\For{ $T_{app-contr}$ \textbf{in} $[T_{app-contr}]$}
{
$[T_{contr}]$ $\leftarrow$ $getTapp\_contr$($T_{app-contr}$)\;
}
\textbf{end for}\;
$[Attributes]$ $\leftarrow$ $buildAttrForApp$($T_{app}, [T_{contr}]$)\;
}
\textbf{end for}\;
${\sf E}$ $\leftarrow$ \textbf{ABE}.${\sf Encryption}$(${\sf TD}$, Attributes,${\sf PK}$)\;
${\sf D}$ $\leftarrow$ \textbf{ABE}.${\sf KeyGeneration}$(${\sf AC}$,${\sf PK}$, ${\sf MK}$)\;
${\sf M}$ $\leftarrow$ \textbf{ABE}.${\sf Decryption}$(${\sf D}$,${\sf E}$)\;
\textbf{end procedure}
\end{algorithm}
\begin{algorithm}
\caption{\texttt{AuditAuthenRequestProtocol}: It helps to resist replay attacks of connection requests from a switch.
If the switch has been connected before, it sends three parameters (PK, Cipher, ID\_of\_request);
otherwise, it sends two parameters (PK, Cipher).}
\LinesNumbered
\textbf{procedure}\\
\If{ $checkNumOfParam( )$ $\equiv$ $2$}
{
$[T_{switch}]$ $\leftarrow$ $getT_{switch}$( )\;
$checkResult$ $\leftarrow$ $checkIsReplyRequest$($PK_{switch}$, $C_{AE.Enc}(ID_{switch})$, $[T_{switch}]$)\;
\If{ $checkResult$ $\equiv$ $false$}
{
$T_{switch}$ $\leftarrow$ $buildTransForSwitch$($PK_{switch}$, $C_{AE.Enc}(ID_{switch})$, $ID_{contr}$)\;
$ID_{T_{switch}}$ $\leftarrow$ $getFromT_{switch}$($T_{switch}$)\;
$executeHOMQV( )$\;
}
\textbf{end if}\;
\If{ $checkResult$ $\equiv$ $true$}
{
$reduceReputation(PK_{switch})$ \;
\textbf{return} false\;
}
\textbf{end if}\;
}
\textbf{end if}\;
\If{ $checkNumOfParam( )$ $\equiv$ $3$}
{
\textbf{Call} \texttt{SwitchChallengeProtocol}\;
\If{ \texttt{SwitchChallengeProtocol} $\equiv$ $true$}
{
$executeHOMQV( )$\;
}
\textbf{end if}\;
}
\textbf{return} true\;
\textbf{end procedure}
\end{algorithm}
\begin{algorithm}
\caption{\texttt{SwitchChallengeProtocol}: An honest key-updated switch intending to rebuild a connection resends a new cipher $Com_{new} = \textsf{AE.Enc}(PK_{contr}, nonce_{new}, \textsf{DS.Sig}(SK_{switch}, nonce_{new}))$ containing a new nonce; this nonce is required to equal the nonce sent by the switch in the last authenticated connection.}
\LinesNumbered
\textbf{procedure}\\
\textcolor[rgb]{0.50,0.51,0.53}{//get the last tuple of Tswitch, that is a cipher.}\\
$[T_{switch}]$ $\leftarrow$ $getT_{switch}$( )\;
$Com$ $\leftarrow$ $T_{switch}.tuple[T_{switch}.length-1]$\;
$(nonce, Signature)$ $\leftarrow$ $AE.Dec_{SK_{contr}}$($Com$)\;
$(nonce_{new}, Signature_{new})$ $\leftarrow$ $AE.Dec_{SK_{contr}}$($Com_{new}$)\;
\If{ $nonce_{new}$ $\equiv$ $nonce$ and
$\textsf{DS.Ver}(PK_{switch}, Signature)$ $\equiv$ $\textsf{DS.Ver}(PK_{switch}, Signature_{new})$}
{
\textbf{return} true\;
\textbf{end if}\;
}
\textbf{end if}\;
\textbf{return} false\;
\textbf{end procedure}
\end{algorithm}We apply ABE scheme to achieve secure access control on the network-wide resources.
Controllers manage the resources with fine-grained access by encrypting each resource with a set of related attributes.
Each individual SDN App keeps a private key associated with an access structure (${\sf AC}$) consisting of a set of attributes and their relations (AND/OR).
Each attribute set is composed of App identities, App functionalities plus the relationships between the Apps and the controllers.
For example, a monitoring App is assigned with an attribute set that includes its identity, its function (i.e., monitoring) and two controller identities it has connected ($Attributes$=$\{ID_{app}, ``monitoring", ID_{contr_{1}}, ID_{contr_{2}}\}$).
In particular, \cite{kreutz2015software} concludes application functionalities of majority of SDN Apps can be classified into the following five functions: traffic engineering; mobility and wireless; measurement and monitoring; security and dependability and data center networking.
After being encrypted, different kinds of network resources are only public to the proper Apps whose access structure related to its private key is satisfied with the set of attributes used to encrypt the kind of resources.
For example, as shown in Fig. \ref{fig:access}, the network resources are network-wide topology diagrams (${\sf TD}$) maintained by controllers. Utilizing fine-grained access control, we encrypt different topology diagrams, and different Apps access only the diagrams (the topology of some devices rather than all devices) they can decrypt with their private keys. Specifically, an example of access control over the topology-diagram resources is defined by the protocol \texttt{AccessControlProtocol}.
In this example, a traffic engineering App ${\sf app1}$ obtains the topology resources of devices as shown in Fig. \ref{fig:access}. We assume ${\sf app1}$ has registered with the controller ${\sf contr1}$, and ${\sf contr1}$ manages the switches in ${\sf slice1}$ and ${\sf slice2}$, which means ${\sf app1}$ may access the topology diagrams in ${\sf slice1}$ and ${\sf slice2}$. Note that the relationships among ${\sf app1}$, ${\sf contr1}$ and the switches are recorded as registration transactions that ${\sf contr1}$ can read on the Blockchain. In short, we need an access policy for an App with the traffic engineering functionality that provides services for the switches in ${\sf slice1}$ and ${\sf slice2}$. Accordingly, an access structure over the attribute set belonging to ${\sf app1}$ (i.e., $Attribute$ = $\{ID_{app1}, ``traffic\ engineering", ID_{contr1}\}$) is constructed by ${\sf contr1}$. Then, ${\sf contr1}$ executes the \textbf{ABE}.${\sf KeyGeneration}$ algorithm with this access structure to generate a private key ${\sf D}$ for ${\sf app1}$. Lastly, ${\sf app1}$ uses ${\sf D}$ to execute the \textbf{ABE}.${\sf Decryption}$ algorithm and obtain the topology diagrams.
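For clarity, the following Java sketch captures only the AND/OR attribute-satisfaction logic at the heart of this access control; the actual encryption and decryption of the lightweight ABE scheme \cite{yao2015lightweight} are abstracted away (the sketch requires Java 16+ for records):
\begin{verbatim}
import java.util.Set;

// Evaluate an App's access structure against the attribute set a controller
// used to label a resource; the attribute names mirror the example above.
public class AttributeCheck {
    interface Node { boolean satisfiedBy(Set<String> attrs); }
    record Leaf(String attr) implements Node {
        public boolean satisfiedBy(Set<String> a) { return a.contains(attr); }
    }
    record And(Node l, Node r) implements Node {
        public boolean satisfiedBy(Set<String> a) {
            return l.satisfiedBy(a) && r.satisfiedBy(a);
        }
    }
    record Or(Node l, Node r) implements Node {
        public boolean satisfiedBy(Set<String> a) {
            return l.satisfiedBy(a) || r.satisfiedBy(a);
        }
    }

    public static void main(String[] args) {
        // AC for app1: ID_app1 AND "traffic engineering" AND ID_contr1
        Node ac = new And(new Leaf("ID_app1"),
                  new And(new Leaf("traffic engineering"), new Leaf("ID_contr1")));
        Set<String> resourceAttrs =
                Set.of("ID_app1", "traffic engineering", "ID_contr1");
        System.out.println("app1 may decrypt: " + ac.satisfiedBy(resourceAttrs));
    }
}
\end{verbatim}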
\textbf{Authenticated controller-switch communication} The authenticated controller-switch communication is implemented with the HOMQV protocol. In the context of the Blockchain, we define two protocols that remedy the aforementioned security issues of HOMQV. On one hand, employed directly, the HOMQV protocol cannot guarantee that a controller resists replay attacks launched by a switch preparing to connect; \texttt{AuditAuthenRequestProtocol} is defined to overcome this issue effectively.
On the other hand, the long-term key used in the key exchange to derive a session key may be self-updated whenever the switch pleases. In that case, the switch needs to rebuild an authenticated channel with the controller it connected to last time. The protocol \texttt{SwitchChallengeProtocol} is defined so that a key-updated switch can prove its prior connection with a controller in the SDN and refresh that connection.
\section{Security Analysis}\label{sec:analyze}
In this section, we discuss the security properties of our mechanism: authentication for application flows, replay attack detection for flows, notification of failed controllers to switches, secure access control for network-wide resources and authentication for controller-switch connections. We first state five pieces of security knowledge guaranteed by the components used in our mechanism.
\textbf{1). The underlying Blockchain layer is healthy.} Our mechanism applies a well-examined, stable Blockchain as the underlying layer of the control plane \cite{schwartz2014ripple, OpenBlockchain}. Thus, it is reasonable to believe the underlying Blockchain layer is healthy, in the sense that its recorded data are immutable and never abandoned.
\textbf{2). The lightweight ABE scheme is provably secure.} The lightweight ABE scheme is provably secure in the attribute-based selective-set model under the ECDDH assumption, as demonstrated in \cite{yao2015lightweight}.
\textbf{3). The HOMQV protocol is a secure one-pass key-exchange protocol in the random oracle model under the Gap-Diffie-Hellman (GDH) assumption.} The work \cite{halevi2011one} provides a formal analysis of the protocol's security. Specifically, assuming the hardness of the Gap Diffie-Hellman problem, it proves that the HOMQV protocol is secure and guarantees the sender's forward secrecy as well as resilience to the compromise of ephemeral data.
\textbf{4). The asymmetric encryption algorithm used is provably secure.} Our mechanism uses the classic public-key cryptosystem of Cramer et al. \cite{cramer1998practical}, which is provably secure against adaptive chosen-ciphertext attack under standard intractability assumptions.
\textbf{5). The digital signature algorithm used satisfies \emph{strong existential unforgeability}.} A public-key signature algorithm such as the Schnorr scheme \cite{schnorr1991efficient} satisfies the security notion that an adversary cannot output a new message-signature pair ($m^*$, $\sigma^*$) with a different $\sigma^*$ even after querying signatures on the message $m^*$.
\begin{figure}[!t]
\centering
\includegraphics[width=5in]{figure4.png}
\caption{Transaction auditing graph. Note that a circle in dashed line represents a starting point in an auditing process, and directed edges in green lines, blue lines and purple lines are respectively related to the auditing process for authentication for application flows, replay attack detection for flows and notification of failed controllers for switches.}
\label{fig:auditing}
\end{figure}
Based on the security knowledge above, we present our security analysis with the help of Fig. \ref{fig:auditing}.
\textbf{Authentication for application flows.} The protocol \texttt{AuthFlowProtocol} authenticates an application flow by checking whether it comes from a legitimate App that has registered with the network and connects to some controller managing switches in a network slice. A flow (its identity and content) is signed by an App with its secret key and verified with the App's public key. The network accepts only legitimate flows and abandons any abnormal flow that fails verification.
$IsRightFlow()$ in the protocol determines whether the
App creating the flow is legitimate based on the transactions $T_{app}$, $T_{app-contr}$, $T_{contr}$ and $T_{contr-switch}$.
Then, the flow signature is verified with the application's public key to decide whether the flow content has been modified, via
$verifyFlow(PK_{app})$. This process is shown by the green circles and green directed edges in Fig. \ref{fig:auditing}. It starts from $PK_{app}$ in the $flow$ and locates the transaction $T_{app}$ in which this $PK_{app}$ appears. With $ID_{contr}$, it indexes the transaction $T_{contr}$, and then via the relationship transaction $T_{contr-switch}$ the transaction $T_{switch}$ is located. If this process goes through, the flow comes from a legitimate App; if the transaction at any step does not exist, the flow is rejected. Then, the signature $Sign_{flow}$ is verified with the App's $PK_{app}$ using the verification algorithm $\textsf{DS.Ver}$$(PK_{app}, Sign_{flow})$.
\textbf{Replay attack detection for application flows.}
The protocol \texttt{FlowReplyDetectionProtocol} detects replayed flows based on the logged records $T_{flow-afore}$ on the Blockchain. When a newly arriving flow is received, its identity is checked by auditing the transactions $T_{flow-afore}$, as shown by the blue circles and the blue directed edge.
The flow is accepted if it has never been sent before. Otherwise, the flow is rejected and the App sending it is punished by locating the transaction $T_{app}$.
\textbf{Notification of failed controllers for switches.}
The protocol \texttt{ControllerFailedNotifyProtocol} notifies switches when the controllers they connect to break down. By auditing the records of network behaviours $T_{flow-after}$ and $T_{event}$ within the latest 6 blocks, controllers $ID_{contr}$ without any active response can be found. Then, based on the relationship transaction $T_{contr-switch}$, the switches $ID_{switch}$ connected to the failed controllers are notified, as shown by the purple circles and purple directed edges.
The remaining two security issues are analyzed as follows.
\textbf{Secure access control on network-wide resources.}
The protocol \texttt{AccessControlProtocol}, implemented with the ABE scheme of \cite{yao2015lightweight}, enables an App to access the respective resources when the attribute set of the App satisfies the access structure related to the encrypted resources. Given the security knowledge that this ABE scheme is provably secure, the access control mechanism is secure.
\textbf{Authentication for controller-switch connection.}
The protocol \texttt{AuditAuthenRequestProtocol}, based on the provably secure HOMQV protocol, implements the authenticated communication between controllers and switches. With the extra protocol \texttt{SwitchChallengeProtocol}, the two security issues of the original HOMQV protocol are resolved. The two protocols implemented on the Blockchain provide secure authentication enhancement for controllers and switches.
\section{Proof-of-concept Implementation}\label{sec:implement}
\begin{figure}[!t]
\centering
\includegraphics[width=5in]{figure5.png}
\caption{Schematic of our architecture prototype}
\label{fig:topology}
\end{figure}
As the Floodlight \cite{floodlightcontroller} project puts the world's largest SDN ecosystem into practice\cite{Ecosystem}, we build our mechanism on the Floodlight project to illustrate the utility of our security enhancement.
We build the Blockchain environment on Hyperledger Fabric $V1.0$\cite{Hyperledger}, an open-source Blockchain project. As shown in Fig. \ref{fig:topology}, we present a schematic of our architecture prototype and compare it with the original Floodlight architecture.
Focusing on the security goals, we attach the required Blockchain providers to the corresponding Floodlight application modules; the Blockchain providers are implemented around the Floodlight application modules using Aspect-Oriented Programming (AOP).
The Blockchain providers \texttt{TopologyBlockchainProvider} and \texttt{LinkBlockchainProvider} are attached to the primary modules \texttt{TopologyManager} and \texttt{LinkDiscovery}, respectively. \texttt{TopologyBlockchainProvider} collects the topology resources of the network via \texttt{TopologyManager}, while \texttt{LinkBlockchainProvider} monitors the status of the links in the network.
In the meantime, they communicate with the defined security protocol \texttt{AccessControlProtocol} on the Blockchain so that a customized access control mechanism for applications is provided.
The provider \texttt{ForwardBlockchainProvider} monitors the network packets forwarded among devices. It collects packet information and forwarding paths in preparation for flow-transaction and event-transaction generation.
\texttt{DeviceBlockchainProvider} tracks information on network devices, which is used to generate entity transactions.
\texttt{FlowBlockchainProvider} catches newly injected flows via \texttt{StaticFlowPusher}, in preparation for \texttt{FlowTransactionGenerator}.
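As an illustration of the weaving, the following AspectJ sketch attaches a provider to flow-pushing calls; the pointcut expression and the intercepted method are hypothetical, and the inner class merely stands in for the \texttt{FlowTransactionGenerator} module described next:
\begin{verbatim}
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

// Sketch of weaving FlowBlockchainProvider around StaticFlowPusher with
// AspectJ; only the AOP mechanism itself is standard AspectJ.
@Aspect
public class FlowBlockchainProviderAspect {

    @AfterReturning(
        pointcut = "execution(* net.floodlightcontroller.staticflowentry..*(..))",
        returning = "result")
    public void onFlowPushed(JoinPoint jp, Object result) {
        // Hand the intercepted flow data to the (decoupled) generator module.
        FlowTransactionGenerator.generate(jp.getSignature().getName(), jp.getArgs());
    }

    // Stand-in for the FlowTransactionGenerator module described below.
    static class FlowTransactionGenerator {
        static void generate(String method, Object[] args) {
            System.out.println("building T_flow for call: " + method);
        }
    }
}
\end{verbatim}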
Between the module-appended Floodlight project and the Blockchain, we construct a project decoupled from Floodlight that contains four modules: \texttt{EntityTransactionGenerator}, \texttt{FlowTransactionGenerator}, \texttt{EventTransactionGenerator} and \texttt{TopologyTransactionGenerator}, which encapsulate network data and generate transactions in the Blockchain context.
In addition, the security protocols of section \ref{subsec:protocolBuilding} are implemented as smart contracts, which are validated and secure on the Blockchain.
To connect with the Blockchain, an interface like \texttt{Web3j} is needed to write the defined transactions and security protocols into the Blockchain.
Note that Ethereum\cite{Ethereum}, another open-source Blockchain project, offers the \texttt{Web3j} library for integrating Java applications with Ethereum. For the Hyperledger Blockchain, an analogous third-party library is expected to integrate our Java application based on Floodlight. By calling such a third-party library as the middle interface between the Floodlight project and the Blockchain project, communication with the secure protocols on the Blockchain is achieved.
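For instance, a connection through \texttt{Web3j} can be as simple as the following sketch, where the endpoint URL is illustrative and, for Hyperledger Fabric, the analogous client SDK would play the same role:
\begin{verbatim}
import org.web3j.protocol.Web3j;
import org.web3j.protocol.http.HttpService;

// Minimal sketch of talking to an Ethereum-style Blockchain through Web3j.
public class ChainConnector {
    public static void main(String[] args) throws Exception {
        Web3j web3 = Web3j.build(new HttpService("http://localhost:8545"));
        String clientVersion =
                web3.web3ClientVersion().send().getWeb3ClientVersion();
        System.out.println("connected to: " + clientVersion);
    }
}
\end{verbatim}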
\section{Conclusion}\label{sec:conclusion}
SDN has become an emerging technology for enhancing network performance. With its widespread adoption, several security issues of SDN have been exposed and demand study. In this paper, we present a Blockchain-based monolithic secure mechanism for SDN. By utilizing the Blockchain to record all network flows and events and to implement secure protocols with smart contracts, the presented mechanism overcomes common security issues in SDN. In particular, the decentralized control plane tackles the problem of single-point failure and improves network scalability; application flows can be authenticated, traced and accounted for; network-wide resources are protected with an access control scheme; and authenticated communication channels are ensured between controllers and switches. Finally, the security analysis and an implementation prototype of our mechanism demonstrate the effectiveness of the security improvement for SDN.
\bibliographystyle{unsrt}
\def\Section#1#2{\section[#1]{#2}}
\def\frac {\partial}{\partial r} {\noindent {\it Proof.} }
\def\noindent {\it Remark} {\noindent {\it Remark} }
\def\nabla{\nabla}
\def\bar{\nabla}{\overline\nabla}
\def\ir#1{\mathbb R^{#1}}
\def\hh#1{\Bbb H^{#1}}
\def\ch#1{\Bbb {CH}^{#1}}
\def\cc#1{\Bbb C^{#1}}
\def\f#1#2{\frac{#1}{#2}}
\def\qq#1{\Bbb Q^{#1}}
\def\cp#1{\Bbb {CP}^{#1}}
\def\qp#1{\Bbb {QP}^{#1}}
\def\grs#1#2{\bold G_{#1,#2}}
\def\bb#1{\Bbb B^{#1}}
\def\dd#1#2{\frac {d\,#1}{d\,#2}}
\def\dt#1{\frac {d\,#1}{d\,t}}
\def\mc#1{\mathcal{#1}}
\def\frac {\partial}{\partial r}{\frac {\partial}{\partial r}}
\def\frac {\partial}{\partial \phi}{\frac {\partial}{\partial \phi}}
\def\pf#1{\frac{\partial}{\partial #1}}
\def\pd#1#2{\frac {\partial #1}{\partial #2}}
\def\ppd#1#2{\frac {\partial^2 #1}{\partial #2^2}}
\def\epw#1{\varepsilon_1\wedge\cdots\wedge \varepsilon_{#1}}
\def\tilde{\tilde}
\font\subjefont=cmti8 \font\nfont=cmr8
\def\inner#1#2#3#4{(e_{#1},\varepsilon_1)(e_{#2},\varepsilon_2)(\nu_{#3},\varepsilon_1)(\nu_{#4},\varepsilon_2)}
\def\second#1#2{h_{\alpha,i#1}h_{\beta,i#2}\langle e_{#1\alpha},A\rangle\langle e_{#2\beta},A\rangle}
\def\alpha{\alpha}
\def\beta{\beta}
\def\bold G_{2,2}^2{\bold G_{2,2}^2}
\def\text{Re }_{I\!V}{\text{Re }_{I\!V}}
\def\bold C_m^{n+m}{\bold C_m^{n+m}}
\def\bold G_{n,m}^m(\bold C){\bold G_{n,m}^m(\bold C)}
\def\p#1{\partial #1}
\def\pb#1{\bar\partial #1}
\def\delta{\delta}
\def\Delta{\Delta}
\def\eta{\eta}
\def\zeta{\zeta}
\def\varepsilon{\varepsilon}
\def\epsilon{\epsilon}
\def\Gamma{\Gamma}
\def\gamma{\gamma}
\def\kappa{\kappa}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\omega{\omega}
\def\Omega{\Omega}
\def\theta{\theta}
\def\Theta{\Theta}
\def\sigma{\sigma}
\def\Sigma{\Sigma}
\def\underline{\underline}
\def\wedge{\wedge}
\def\varsigma{\varsigma}
\def\text{Hess }{\mbox{Hess}}
\def\Bbb{R}{\Bbb{R}}
\def\Bbb{C}{\Bbb{C}}
\def\mbox{tr}{\mbox{tr}}
\def\Bbb{U}{\Bbb{U}}
\def\langle{\langle}
\def\rangle{\rangle}
\def\rightarrow{\rightarrow}
\defD\hskip -2.9mm \slash\ {D\hskip -2.9mm \slash\ }
\def\partial\hskip -2.6mm \slash\ {\partial\hskip -2.6mm \slash\ }
\def\bar{\nabla}{\bar{\nabla}}
\def\aint#1{-\hskip -4.5mm\int_{#1}}
\def\mbox{Vol}{\mbox{Vol}}
\def\overline{\overline}
\def\mathbf{\mathbf}
\def\Bbb{O}{\Bbb{O}}
\def\Bbb{H}{\Bbb{H}}
\def\text{Re }{\text{Re }}
\def\text{Im }{\text{Im }}
\def\mathbf{Id}{\mathbf{Id}}
\def\text{Arg}{\text{Arg}}
\def\text{Hess }{\text{Hess }}
\renewcommand{\subjclassname}{%
\textup{2000} Mathematics Subject Classification}
\subjclass[2010]{58E20,~53A10,~53C42.}
\begin{document}
\pagenumbering{Roman}\setcounter{page}{1}
\pagenumbering{arabic} \setcounter{page}{1}
\title[Dirichlet problem and minimal cones]{Recent progress on the Dirichlet problem for the minimal surface system and minimal cones}
\author{Yongsheng\ Zhang}
\address{
Tongji University \& Max Planck Institute for Mathematics at Bonn}\email{[email protected]}
\date{}
\thanks{Sponsored in part by NSFC (Grant No. 11601071),
and a Start-up Research Fund from Tongji University.}
\begin{abstract}
This is a very brief report on recent developments on the Dirichlet problem for the minimal surface system and on minimal cones in Euclidean spaces.
We shall mainly focus on two directions:
(1)
further systematic developments after Lawson-Osserman's paper \cite{l-o} on the Dirichlet problem for minimal graphs of high codimensions,
where aspects including non-existence, non-uniqueness and irregularity properties of solutions have been explored from different points of view;
(2) complexities and varieties of area-minimizing cones in high codimensions,
where we shall mention interesting history and exhibit some recent results which successfully furnish new families of minimizing cones of different types.
\end{abstract}
\maketitle
\Section{Introduction}{Introduction}\label{S1}
\subsection{Plateau problem}
The problem is to consider minimal surfaces spanning a given contour.
Roughly speaking, there are two settings depending on the desired minimality.
One is the ``minimizing'' setting, seeking global minimizers of the area functional under various boundary or topological constraints,
while the other is the ``minimal'' setting, seeking critical points.
The story of the problem traces back to J.-L. Lagrange,
who, in 1768, considered graphs of minimal area over some domain $D$ of $\mathbb R^2$.
A necessary condition is the Euler-Lagrange equation, for $z=z(x,y)$,
\begin{equation}\label{2dim}
(1+z_y^2)z_{xx}-2z_xz_yz_{xy}+(1+z_x^2)z_{yy}=0.
\end{equation}
From then on, the theory of minimal surfaces (surfaces with vanishing mean curvature)
soon launched an adventurous journey.
Many great mathematicians,
including Monge, J. Meusnier, A.-M. Legendre, S. Poisson, H. Scherk, E. Catalan, O. Bonnet, H. Schwarz, S. Lie and many others,
entered this field and made it flourish for more than a century.
Alongside these mathematical developments, the Belgian physicist J. Plateau performed a good number of intriguing experiments with soap films (not merely using wires)
and in \cite{P} gained some explanation of the phenomena of stability and instability,
i.e., whether or not small deformations of the film can decrease area.
By the laws of surface tension, an observable soap film bounded by a given simple closed curve is a stable minimal surface.
Thus Plateau provided physical solutions to the question in $\mathbb R^3$ in the minimal setting,
and the problem has been named after Plateau ever since.
However, rigorous mathematical arguments took more time.
In 1930, J. Douglas \cite{d} and T. Rad\'o \cite{r} independently answered the problem in $\mathbb R^3$
affirmatively in the minimizing setting.
General cases were subsequently studied, and a large portion
were solved thanks to Federer and Fleming's celebrated compactness theorems for normal currents and integral currents \cite{FF}
in greatly expanded territories.
\subsection{Dirichlet problem for minimal graphs of codimension one}
It can be seen that the Plateau problem (for minimal surfaces with given simple closed boundary curves)
actually goes beyond the scope of Lagrange's original question.
If $D$ is a bounded domain of $\mathbb R^{n+1}$ with $C^2$ boundary and $\phi:\p D\rightarrow {\mathbb R}^{1}$,
then the Dirichlet problem for the minimal surface equation asks for a solution $f: D\rightarrow {\mathbb R}^{1}$ satisfying
the following generalization of \eqref{2dim}
%
\begin{equation}\label{DP1}
(1+|\nabla f|^2)\triangle f- \sum_{i,j=1}^{n+1}f_i f_j f_{ij}=0
\end{equation}
%
and
$f|_{\p D}=\phi$.
Hence the Dirichlet problem can be regarded as a special kind of Plateau problem,
which searches for graph solutions
to graph boundary data.
For $n+1=2$ and convex $D$, the Dirichlet problem is solvable for any continuous boundary data; see
\cite{r2}.
In the general situation,
by the efforts of Jenkins-Serrin \cite{j-s} and later Bombieri-De Giorgi-Miranda \cite{b-d-m},
the Dirichlet problem turns out to be well posed (i.e., to have a unique solution) for every continuous boundary function if and only if
$\p D$ is everywhere mean convex.
Moreover, if a solution exists, it must be $C^\omega$ due to de Giorgi \cite{de} (also see \cite{St} and \cite{m1});
and its graph is absolutely area-minimizing (see \cite{fe}),
which means any competitor sharing the same boundary has larger volume.
The Dirichlet problem for minimal graphs of high codimensions will be discussed in \S \ref{S2}.
\subsection{Bernstein problem}
In his paper \cite{B}, Bernstein showed that
every solution to \eqref{DP1} for $n=1$, i.e., to \eqref{2dim}, on the entire $\mathbb R^2$ (with no boundary requirement at infinity)
has to be affine. Fleming \cite{fle} suggested a new approach to this problem, which also works for $n\geq 2$ via De Giorgi's improvement \cite{de2}.
The principle states that the existence of a non-affine entire solution over $\mathbb R^{n+1}$ implies the existence of a non-planar area-minimizing hypercone in $\mathbb R^{n+1}$.
Almgren \cite{A} followed this line and gained the same conclusion for $n=2$.
In \cite{S}, J. Simons greatly extended these results
by showing that there are no non-planar stable minimal hypercones in $\mathbb R^{n+1}$ for $n\leq 6$.
In $\mathbb R^{2(k+1)}$, he discovered the stable minimal hypercones
\begin{equation}
C_{k,k}
=
C\left(
S^{k}\left(\sqrt{\frac{1}{2}}\right)
\times
S^{k}\left(\sqrt{\frac{1}{2}}\right)
\right)
\subset \mathbb R^{2(k+1)}
\ \text{ when } k\geq 3.
\end{equation}
Here, for a set $E$ in the unit sphere, the cone over $E$ is defined to be $C(E):=\{tx:x\in E,\ t\in(0,\infty)\}$.
Then he naturally raised the question whether $C_{k,k}$ for $k\geq 3$ in $\mathbb R^{2(k+1)}$, nowadays called Simons cones, are area-minimizing.
Immediately, the celebrated article \cite{b-d-g} by Bombieri-De Giorgi-Giusti confirmed that all Simons cones are area-minimizing and
constructed a non-planar minimal graph over $\mathbb R^8$ in $\mathbb R^9$ which has $C_{3,3}\times \mathbb R$ as its tangent cone at infinity.
As a result, the yes-no answer to the Bernstein problem became complete:
there exist no non-planar minimal graphs over $\mathbb R^{n+1}$ in $\mathbb R^{n+2}$ when $n\leq 6$,
but there are such creatures when $n\geq 7$.
Still, lots of interesting subtle behaviors remain mysterious to us,
such as which types of entire minimal graphs can occur.
Right after \cite{b-d-g}, H. B. Lawson, Jr. considered equivariant Plateau problems in \cite{l} and obtained almost all homogeneous area-minimizing hypercones
(see \cite{zha}).
P. Simoes \cite{PS1, PS2} added that $C_{2,4}$ is also minimizing.
R. Hardt and L. Simon \cite{HS} discovered characterization foliations for area-minimizing hypercones.
D. Ferus and H. Karcher \cite{FK} showed, by constructing characterization foliations, that every cone over the minimal isoparametric hypersurface of an inhomogeneous isoparametric foliation on a sphere is area-minimizing.
G. Lawlor \cite{Law}
completed the classification of all homogeneous area-minimizing hypercones.
Hence one can get a classification of all isoparametric homogeneous area-minimizing hypercones accordingly.
Actually, for each cone $C$ among these minimizing hypercones, L. Simon \cite{LS} gave a beautiful construction of a minimal graph with tangent cone $C\times \mathbb R$ at infinity,
thus creating a huge variety of solutions to the Bernstein problem.
\Section{Dirichlet problem for minimal surfaces of high codimensions}{Dirichlet problem for minimal surfaces of high codimensions}\label{S2}
Given an open bounded, strictly convex $\Omega\subset\mathbb R^{n+1}$ and $\phi:\p \Omega\rightarrow {\mathbb R}^{m+1}$,
the \textbf{Dirichlet problem} (cf. \cite{j-s,b-d-m, de, m1,l-o})
searches for weak solutions $f\in C^0(\bar \Omega)\cap Lip(\Omega)$ such that
\begin{equation}\label{ms}
\left\{\begin{array}{cc}
\sum\limits_{i=1}^{n+1}\pf{x^i}(\sqrt{g}g^{ij})=0, & j=1,\cdots,n+1,\\
\sum\limits_{i,j=1}^{n+1}\pf{x^i}(\sqrt{g}g^{ij}\pd{f^\alpha}{x^j})=0, & \alpha=1,\cdots,m+1,
\end{array}
\right.
\end{equation}
where $g_{ij}=\delta_{ij}+\sum\limits_{\alpha=1}^{m+1}\pd{f^\alpha}{x^i}\pd{f^\alpha}{x^j}$, $(g^{ij})=(g_{ij})^{-1}$ and $g=\det(g_{ij})$, and further
$$
f|_{\partial \Omega}=\phi.
$$
Note that $F: x\mapsto (x, f(x))$ being harmonic (i.e., \eqref{ms}) is equivalent to its image (the graph) being minimal with respect to the metric induced from the Euclidean space.
%
When $m=0$, \eqref{ms} reduces to the classical \eqref{DP1}, to which many works were devoted, as mentioned in \S \ref{S1}.
In this section we shall discuss the case $m\geq 1$.
An astonishing pioneering work was done by Lawson and Osserman in \cite{l-o}, in which $\Omega$ is always assumed to be the unit disk $\mathbb D^{n+1}$.
In particular, they exhibited the following remarkable differences.
\begin{itemize}
\item [(1)] For $n=1$, $m\geq 1$, real analytic boundary data can be found
for which
there exist at least three different analytic solutions to the Dirichlet problem.
Moreover,
one of them has an unstable minimal graph.
\item [(2)] For $n\geq 3$ and $n-1\geq m\geq 2$, the problem is in general not solvable.
A non-existence theorem states that, for each $C^2$ map $\eta:S^{n}\rightarrow S^{m}$ that is not homotopic to zero under the dimension assumption,
there exists a positive constant $R_\eta$ depending only on $\eta$, such that
the problem is unsolvable for the boundary data $\phi=R\cdot \eta$, where
$R$ is a (vertical rescaling) constant no less than $R_\eta$.
\item [(3)] For certain boundary data, there exists a Lipschitz solution to the Dirichlet problem which is not $C^1$.
\end{itemize}
The ideas are briefly summarized as follows.
(1) is based on a classical result of Rad\'o for the case $n=1$,
which
says that every solution to the Plateau problem, for boundary data given by a graph over the boundary of a convex domain in some $2$-dimensional plane,
must itself be a graph over that domain.
In fact, Lawson and Osserman constructed boundary data of graph type in the ambient Euclidean space $\mathbb R^{3+m}$ (for $m\geq 1$), invariant under a $\mathbb Z_4$-action,
such that no geometric solution to the Plateau problem (in the minimizing setting) is fixed by a generator of this action.
Hence one obtains two distinct geometric solutions for that boundary,
and therefore, by Rad\'o's result, two essentially different solutions to the corresponding Dirichlet problem.
Then,
by \cite{m-t} and \cite{Shi},
there exists an unstable minimal solution of min-max type for the same boundary.
Such boundary data violate both the uniqueness of solutions and the minimizing property of solution graphs.
In particular, they constructed boundary data that support at least three analytic solutions to the Dirichlet problem.
It seems plausible that more than three solutions could be produced for suitable boundary data,
by exploiting symmetry under a discrete action of a group of higher order, or actions of the entire group in some more subtle way.
(2) rests on a nice special expression for the volume, together with the well-known density monotonicity for minimal varieties in Euclidean space.
The proof of this meaningful result proceeds by contradiction.
Roughly speaking, the former provides an upper bound for the volume of the graph of a solution (whenever one exists),
while the latter guarantees a lower bound.
Combined with the dimension assumption, these two bounds lead to a contradiction
once the rescaling factor becomes large.
However, it remains completely mysterious, and quite challenging, to determine the exact maximal value of the stretching factor for which solutions exist.
(3) is stimulated by (2).
After establishing the non-existence result (2), Lawson and Osserman realized that, for a map satisfying both the dimension and homotopy conditions,
rescaling the vertical stretching factor by a tiny number makes the Dirichlet problem solvable, by the Implicit Function Theorem (see, e.g., \cite{n});
rescaling by a sufficiently large number, however, leaves no Lipschitz solution for the rescaled boundary data.
So a natural {\bf philosophy} of Lawson-Osserman states that
there should exist some $R_0$ such that the boundary condition $R_0\cdot\eta$ supports a singular solution.
For the first concrete examples of this kind,
they considered the three classical Hopf maps between unit spheres.
The first, expressed in complex coordinates, is
$\eta(z_1,z_2)=(|z_1|^2-|z_2|^2,\,2z_1\bar z_2)$.
They looked for a minimal cone $C=C(\text{graph of }\phi)$ over the graph of $\phi=R_0\cdot\eta$.
If it exists, then $C$ is itself a graph, with a link of ``spherical graph'' type
\begin{equation}
\label{sg}
L:=C\bigcap S^6=\left\{\big(\alpha x,\sqrt{1-\alpha^2}\,\eta(x)\big):x\in S^3\right\}.
\end{equation}
Since a cone is minimal if and only if its link is a minimal variety in the unit sphere, it suffices to determine when $L$ is minimal.
In quaternionic notation, isometrically and up to a sign, $\eta(q)=qi\bar q$ maps unit quaternions $q\in\mathbb H$ into the purely imaginary part of $\mathbb H$,
and $L$ can be viewed as the orbit through $((1,0,0,0), i)$ of the $Sp(1)\cong S^3$ action $q\cdot (\alpha x,\sqrt{1-\alpha^2}\,\eta(x))=(\alpha qx,\sqrt{1-\alpha^2}\,q\eta(x)\bar q)$.
As a result, the orbit of maximal volume, corresponding to $\alpha={\frac{2}{3}}$, is minimal in $S^6$.
Hence the slope $R_0$ can take the value $\frac{\sqrt{1-\alpha^2}}{\alpha}=\frac{\sqrt 5}{2}$.
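Indeed, since the Hopf map has constant singular values $\{2,2,0\}$ (i.e. $\eta^*g_2$ has eigenvalues $\{4,4,0\}$ with respect to $g_3$), the map $F_\alpha(x):=(\alpha x,\sqrt{1-\alpha^2}\,\eta(x))$ pulls back the round metric to $\alpha^2 g_3+(1-\alpha^2)\,\eta^*g_2$, whence
$$
\mathrm{Vol}(L_\alpha)=\mathrm{Vol}(S^3)\cdot \alpha\big(\alpha^2+4(1-\alpha^2)\big)=\mathrm{Vol}(S^3)\cdot\big(4\alpha-3\alpha^3\big),
$$
which is maximal exactly when $4-9\alpha^2=0$, i.e. $\alpha=\frac{2}{3}$.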
Similar procedures can be carried out for the other two Hopf maps.
Inspired by the above, in recent joint work \cite{x-y-z0}, we attacked the question directly by generalizing \eqref{sg}.
We introduced
\begin{defi}\label{d1}
A $C^2$ map $\eta: S^{n}\rightarrow S^{m}$
is called a Lawson-Osserman map {\bf (LOM)}
if there exists $\theta\in (0,\frac{\pi}{2})$
such that
$F(x) :=(\cos\theta\cdot x,\sin\theta\cdot \eta(x))$
defines a minimal submanifold of $S^{m+n+1}$.
The cone $C(\mathrm{Image}(F))$ is called the associated Lawson-Osserman cone {\bf (LOC)}.
\end{defi}
\begin{rem}\label{r1}
For $\phi=\tan\theta\cdot \eta$,
$C(\mathrm{Graph}(\phi))=C(\mathrm{Image}(F))$ is a minimal graph.
So there is
a singular solution given by $f(x)=\begin{cases}
|x|\cdot \tan\theta\cdot\eta(\frac{x}{|x|}), & x\neq0; \\
0, & x=0.
\end{cases}$
\end{rem}
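Indeed, $f$ is positively homogeneous of degree one, so it is Lipschitz, while its gradient is homogeneous of degree zero:
$$
f(tx)=t\,f(x)\quad (t>0),\qquad \nabla f(tx)=\nabla f(x).
$$
Hence $\nabla f$ extends continuously to the origin only if it is constant, i.e. only if the graph is a plane; for non-linear $\eta$ the solution is therefore genuinely singular at the origin.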
By Remark \ref{r1} it is clear that each Lawson-Osserman map induces a boundary function $\phi$
which supports a cone-type singular solution.
Then, how many LOMs are there? In \cite{x-y-z0} we gave a characterization.
\begin{thm}\label{t1}
A $C^2$ map $\eta:\ S^{n}\rightarrow S^{m}$ is an LOM if and only if the following hold
for standard metrics $g_{m+n+1}, g_m, g_n$ of unit spheres
\begin{equation}\label{eqc1}
\begin{cases}
\eta:(S^n,F^*g_{m+n+1})
\rightarrow
(S^m,g_m)
\text{ is harmonic};\\
\sum_{i=1}^n\dfrac{1}{\cos^2\theta+\lambda_i^2\sin^2\theta}=n,
\text{ where }\lambda_i^2 \text{ are the eigenvalues of }\eta^*g_{m} \text{ relative to } g_n.
\end{cases}
\end{equation}
\end{thm}
In order to better understand the second condition in \eqref{eqc1},
we impose a strong restriction.
\begin{defi} \label{d2}
$\eta$ is called an {\bf LOMSE} if it is an LOM and, in addition, for each $x\in S^n$ all nonzero singular values of $(\eta_*)_x$ are equal, i.e.,
$$\{\lambda_1,\cdots,\lambda_n\}=\{0,\lambda\}.$$
\end{defi}
\begin{rem}\label{r2}
Let $p$ and $n-p$ be the multiplicities of $\lambda$ and $0$, respectively.
Then the second condition in \eqref{eqc1} becomes
$$\frac{n-p}{\cos^2\theta}+\frac{p}{\cos^2\theta+\lambda^2\sin^2\theta}=n.$$
From this equality one easily deduces that $p$ and $\lambda$ must be independent of the point $x$.
\end{rem}
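Solving this equality explicitly makes the deduction transparent:
$$
\frac{p}{\cos^2\theta+\lambda^2\sin^2\theta}=\frac{n\cos^2\theta-(n-p)}{\cos^2\theta}
\quad\Longrightarrow\quad
\lambda^2=\frac{n\cos^2\theta}{n\cos^2\theta-(n-p)},
$$
which forces $\cos^2\theta>\frac{n-p}{n}$ and shows that $\lambda$ is determined by $(n,p,\theta)$ alone. For instance, for the first Hopf map viewed as an LOMSE with $(n,p)=(3,2)$ and $\lambda=2$, this gives $\cos^2\theta=\frac{4}{9}$, i.e. $\tan\theta=\frac{\sqrt 5}{2}$, matching the slope $R_0$ computed above.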
So how many LOMSEs are there? There turns out to be a constellation of uncountably many, even under this severe restriction!
In \cite{x-y-z0} we derived a structure theorem.
\begin{thm}\label{t2}
$\eta$ is an LOMSE
if and only if
$\eta=i\circ\pi$
where $\pi$ is a Hopf fibration to $(\mathbb P^p,h)$
and $i:(\mathbb P^p,\lambda^2h)
{\looparrowright} (S^m,g_m)$ is an isometric minimal immersion.
\end{thm}
\begin{rem}\label{r31}
$\pi$ gives countably many levels, and at most levels the moduli space of isometric minimal immersions from projective spaces into standard spheres
forms (a sequence of) compact convex bodies in vector spaces of high dimensions.
The relevant theory (see \cite{c-w,wa,oh,u,to,to2}) is neatly embedded into the construction of LOMSEs in \cite{x-y-z0}.
\end{rem}
%
\begin{rem}\label{r32}
In particular, using coordinates of ambient Euclidean spaces,
$\eta$ can be expressed as $(\eta_1,\cdots,\eta_{m+1})$.
All $\eta_i$ are spherical harmonic polynomials sharing a common even degree $k$.
Moreover $\lambda=\sqrt{\frac{k(k+n-1)}{p}}$.
We call such an LOMSE one of ${\bf (n,p,k)}$ type.
\end{rem}
Besides singular solutions, we are also interested in smooth solutions.
By Morrey's famous regularity result \cite{mo},
a $C^1$ solution to \eqref{ms} is automatically $C^\omega$.
In particular,
a natural variation of the LOC associated to an LOM $\eta$ is given by
$$
M=M_{\rho,\eta}:=
\{(rx,\rho(r)\eta(x)):x\in S^n, r\in
(0,\infty)
\}
\subset \mathbb R^{m+n+2}.
$$
The minimality of $M$ is equivalent to two conditions (similar to those in \eqref{eqc1}; see \cite{x-y-z0} for details).
When $\eta$ is an LOMSE,
one of the conditions holds automatically, and the other gives the following.
\begin{thm}\label{t3}
For an LOMSE $\eta$, $M$ above is minimal if and only if
\begin{equation}\label{ODE1}
\frac{\rho_{rr}}{1+\rho_r^2}+\frac{(n-p)\rho_r}{r}+\frac{p(\frac{\rho_r}{r}-\frac{\lambda^2\rho}{r^2})}{1+\frac{\lambda^2\rho^2}{r^2}}=0.
\end{equation}
%
\end{thm}
By introducing $\varphi:=\frac{\rho}{r}$ and $t:=\log r$,
\eqref{ODE1} transforms to
\begin{equation}\label{ODE2}
\left\{
\begin{array}{ll}
\varphi_t=\psi,\\
\psi_t=-\psi-\Big[\big(n-p+\frac{p}{1+\lambda^2\varphi^2}\big)\psi+\big(n-p+\frac{(1-\lambda^2)p}{1+\lambda^2\varphi^2}\big)\varphi\Big]
\big[1+(\varphi+\psi)^2\big].
\end{array}
\right.
\end{equation}
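Indeed, writing $\rho=r\varphi$ with $t=\log r$, one has
$$
\frac{d\rho}{dr}=\varphi+\varphi_t=\varphi+\psi,
\qquad
\frac{d^2\rho}{dr^2}=\frac{1}{r}\,(\psi+\psi_t),
$$
and substituting these relations into \eqref{ODE1} yields \eqref{ODE2}.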
This system is symmetric about the origin and has exactly three fixed points: $(0,0)$, $P=(\varphi_0, 0)$ and $-P$,
where $\varphi_0=\tan\theta$.
Linearization shows that the origin is always a saddle point,
and that
$P$ falls into two types:
\begin{enumerate}
{
\item [(I)]
$P$ is a {stable center} when $(n,p,k)=(3,2,2), (5,4,2), (5,4,4)$ or $n\geq 7$;
}
{
\item [(II)] $P$ is a {stable spiral point} when $(n,p)=(3,2)$,
$k\geq 4$ or $(n,p)=(5,4)$, $k\geq 6$.
}
\end{enumerate}
By a very careful analysis, including the exclusion of limit cycles,
one shows that the system \eqref{ODE2} has a special orbit, defined for all $t\in (-\infty, +\infty)$,
emanating from the origin and approaching $P$.
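This qualitative picture can also be checked numerically. The following minimal sketch (our own illustration, not code from \cite{x-y-z0}) integrates \eqref{ODE2} for an $(n,p,k)$-type LOMSE, with $\lambda^2=\frac{k(k+n-1)}{p}$ as in Remark \ref{r32} and $\varphi_0$ solved from the fixed-point equation; it prints the eigenvalues of the linearizations at the origin and at $P$, and shoots an orbit out of the saddle at the origin:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def make_system(n, p, k):
    lam2 = k * (k + n - 1) / p   # lambda^2 for an (n,p,k)-type LOMSE
    def rhs(t, y):
        phi, psi = y
        a = n - p + p / (1 + lam2 * phi**2)
        b = n - p + (1 - lam2) * p / (1 + lam2 * phi**2)
        return [psi, -psi - (a * psi + b * phi) * (1 + (phi + psi)**2)]
    # fixed point P = (phi0, 0) solves b(phi0) = 0
    phi0 = np.sqrt(((lam2 - 1) * p / (n - p) - 1) / lam2)
    return rhs, phi0

def jac(rhs, y, eps=1e-6):       # finite-difference linearization
    J = np.zeros((2, 2))
    for j in range(2):
        d = np.zeros(2); d[j] = eps
        J[:, j] = (np.array(rhs(0, y + d)) - np.array(rhs(0, y - d))) / (2 * eps)
    return J

for npk in [(3, 2, 2), (3, 2, 4)]:   # a Type (I) and a Type (II) example
    rhs, phi0 = make_system(*npk)
    for name, pt in [("origin", np.zeros(2)), ("P", np.array([phi0, 0.0]))]:
        print(npk, name, np.round(np.linalg.eigvals(jac(rhs, pt)), 3))
    # start near the saddle; the flow leaves along its unstable manifold
    orbit = solve_ivp(rhs, [0, 80], [1e-4, 1e-4], rtol=1e-9)
    print(npk, "orbit tends to", np.round(orbit.y[:, -1], 4), "phi0 =", round(phi0, 4))
\end{verbatim}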
\begin{figure}[h]
\begin{minipage}[c]{0.4\textwidth}
\includegraphics[scale=0.46]{F1.eps}
\end{minipage}%
\begin{minipage}[c]{0.65\textwidth}
\includegraphics[scale=0.55]{F2N.eps}
\end{minipage}
\end{figure}
Translated back to the $r\rho$-plane, the corresponding illustrations are as follows.
$$\begin{minipage}[c]{0.5\textwidth}
\includegraphics[scale=0.45]{I3N2019.eps}
\end{minipage}%
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[scale=0.45]{I4N2019.eps}
\end{minipage}$$
Therefore, we obtain minimal graphs (other than cones) in $\mathbb R^{m+n+2}$ defined everywhere away from the origin of $\mathbb R^{n+1}$.
%
Since $\frac{d\rho}{dr}=\varphi+\psi$,
particular attention was paid to orbits emanating from the origin in the $\varphi\psi$-plane.
They produce minimal surfaces which are $C^1$ at $r=0$.
So the natural $C^0$ extension is in fact $C^\omega$ by Morrey's regularity result.
This is how we constructed entire $C^\omega$ minimal graphs with LOCs as tangent cones at infinity.
Type (II) contains further interesting information.
Note that the fixed point $P$ corresponds to the LOC, i.e. the ray in the $r\rho$-plane of constant slope $\varphi_0=\tan\theta$.
Since the vertical line $\varphi=\varphi_0$ intersects the orbit infinitely many times,
there are correspondingly infinitely many intersections of the solution curve with the LOC ray in the $r\rho$-plane.
Each intersection point gives us a minimal graph $G_i$ over a disk of radius $r_i$.
Rescale $G_i$ by $\frac{1}{r_i}$ and denote new graphs by $\tilde G_i$.
Then $\tilde G_i$ are mutually different minimal graphs over the unit disk $\mathbb D^{n+1}$
with the same boundary, namely the graph of $\tan\theta\cdot\eta$ over the unit sphere $S^n$.
Hence there exist boundary data which support infinitely many $C^\omega$ solutions, as well as at least one singular solution, to the Dirichlet problem!
This extends Lawson-Osserman's non-uniqueness result (1) from finitely many to infinitely many solutions.
More can be read off.
Clearly, by density monotonicity, the volumes of the $\tilde G_i$ strictly increase to that of the truncated LOC.
So none of the LOCs of Type (II) is area-minimizing.
In fact, in recent joint work \cite{NZ} we showed
that the LOCs of Type (II) are not even stable.
They provide unstable singular solutions to the Dirichlet problem (cf. (1) for the Lawson-Osserman construction).
Since the solution curve oscillates between rays of slopes $\varphi_1$ and $\varphi_2$,
it is not the case that, once a singular solution exists for the slope $\varphi_0$,
solutions suddenly cease to exist for all $\varphi>\varphi_0$.
It is natural to ask for which $\varphi$ outside $[0,\varphi_1]$ the problem can be solved.
The union of the set of such values with $[0,\varphi_1]$ is called the {\bf slope-existence range} of $\eta$ for the Dirichlet problem.
To extend $[0,\varphi_1]$,
perhaps the first difficulty is to determine whether the orbit between the origin and the point $(\varphi_1,0)$ gives a stable compact minimal graph.
In the opposite direction, concerning non-existence,
a recent preprint \cite{z0} confirmed that the slope-existence range
is contained in a compact subset of $\mathbb R_{\geq 0}$.
More precisely, we prove
\begin{thm}
For every LOMSE $\eta$ of either Type (I) or Type (II),
there exists a positive constant $R_\eta$ such that,
whenever the constant $R\geq R_\eta$,
the Dirichlet problem has no solutions for $\phi=R\cdot \eta$.
\end{thm}
\section{Minimal cones}\label{S3}
As briefly mentioned before, it is useful to know whether a local structure is stable, and
it is also quite important to understand the structure of minimizing currents.
Minimal cones are the infinitesimal structures of minimal varieties, while minimizing cones are the infinitesimal structures of minimizing currents.
Both determine, in some sense, the local diversity of the corresponding geometric objects.
In fact, we have already encountered many examples of minimal cones.
For example, Lawson and Osserman \cite{l-o} constructed three minimal cones giving singular solutions to the Dirichlet problem.
The first cone was shown to be coassociative in $\mathbb R^7$ and hence area-minimizing by the fundamental theorem of calibrated geometries
in the milestone paper \cite{h-l}.
However, it remained unknown for 40 years whether the other two are minimizing.
In our recent joint work \cite{x-y-z} we proved that all LOCs of $(n, p, 2)$ type (in which case the moduli space of $i$ for each Laplacian eigenvalue is a single point)
are area-minimizing.
Since the other two original Lawson-Osserman cones are of $(7,4,2)$ type and $(15,8,2)$ type respectively,
this settles the long-standing question.
Area-minimizing cones of $(n,p,2)$ type are all homeomorphic to Euclidean spaces.
For other kinds of area-minimizing cones, we considered those associated to isoparametric foliations of unit spheres.
There are two natural classes of minimal surfaces in this setting: minimal isoparametric hypersurfaces and focal submanifolds.
By virtue of a successful combination of Lawlor's curvature criterion and the beautiful structure of isoparametric foliations,
we were able in \cite{TZ} to show that, except in low dimensions, the cones over the ``minimal products'' (defined therein)
of members of these two classes are area-minimizing.
These provide a large number of new area-minimizing cones with various links of rich complexities.
Note that none of them splits as a product of (area-minimizing) cones of lower dimensions.
It is currently unknown to the author whether minimal products of links of general area-minimizing cones always span an area-minimizing cone.
In \cite{z} we considered a realization problem, first attacked by N. Smale \cite{NS, NS2} in the late 1990s.
\begin{quote}
{\it Can any area-minimizing cone be realized as a tangent cone at a point
of some homologically area-minimizing {\tt compact} singular submanifold?}
\end{quote}
N. Smale constructed the first such examples in \cite{NS}, applying many tools from geometric analysis and geometric measure theory,
while our approach seems somewhat simpler, relying on the theory of calibrations together with the necessary understanding of Lawlor's work \cite{Law}.
We showed
\begin{thm}\label{t4}
Every oriented area-minimizing cone in \cite{Law} can be realized as in the question above.
\end{thm}
\begin{rem}\label{r4}
Via a variation of our arguments in \cite{z}, prototypes can be taken to be all the newly-discovered oriented area-minimizing cones in \cites{TZ, x-y-z} and
all of Cheng's examples of homogeneous area-minimizing cones of codimension $2$ in \cite{Ch}
(e.g. minimal cones over
$\text{U}(7)/\text{U}(1)\times \text{SU}(2)^3$ in $\mathbb R^{42}$,
$\text{Sp}(n)\times \text{Sp}(3)/\text{Sp}(1)^3\times \text{Sp}(n-3)$ in $\mathbb R^{12n}$ for $n\geq 4$,
and $\text{Sp}(4)/\text{Sp}(1)^4$ in $\mathbb R^{27}$).
\end{rem}
All the above cones have smooth links. It would be highly useful to develop an effective way to study cones with non-smooth links.
As for stability and instability, in \cite{NZ} we borrowed ideas from
\cite{br, h-l, l} on orbit spaces.
We focused on a preferred subspace associated to a given LOMSE
and its quotient space.
With a canonical metric
$
\sigma_0^2\cdot
\left[
\left(
r^2+\lambda^2\rho^2
\right)^p
\cdot
r^{2(n-p)}
\right]
\cdot [dr^2+d\rho^2]
$
where $\sigma_0$ is the volume of the $n$-dimensional unit sphere,
the length of any curve in the quotient space
equals the volume of the corresponding submanifold in $\mathbb R^{m+n+2}$.
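Indeed, the fiber over $(r,\rho)$, i.e. the image of $S^n$ under $x\mapsto(rx,\rho\,\eta(x))$, carries the pullback metric $r^2g_n+\rho^2\eta^*g_m$, with eigenvalues $r^2$ (multiplicity $n-p$) and $r^2+\lambda^2\rho^2$ (multiplicity $p$), and is orthogonal to the direction of motion of a curve $\gamma(s)=(r(s),\rho(s))$; hence
$$
\mathrm{Vol}=\int \sigma_0\, r^{n-p}\big(r^2+\lambda^2\rho^2\big)^{p/2}\sqrt{\dot r^2+\dot\rho^2}\;ds=\mathrm{Length}(\gamma)
$$
in the metric above.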
Hence, the infinitely many $C^\omega$ solution curves in the $r\rho$-plane for Type (II) in \S \ref{S2} determine geodesics
connecting $Q:=(1,\tan\theta)$ and the origin in the quotient space.
\begin{figure}[h]
\includegraphics[scale=0.55]{M2019.eps}
\label{AS}
\end{figure}
\\
{\ }
\\
We showed
\begin{thm}\label{t5}
The line segment $\overline{0Q}$ is stable for Type \text{(I)} and unstable for Type {(II)}.
\end{thm}
\begin{rem}\label{r5}
In fact, $\overline{0Q}$ is minimizing for Type \text{(I)}.
However, the difficulty is whether one can lift the stability, or even the area-minimizing property, back to the LOCs in $\mathbb R^{m+n+2}$.
\end{rem}
\section{Open questions}\label{S4}
Besides several open questions in previous sections, we want to emphasize a few more in this section.
1. A systematic study of LOMs beyond LOMSEs.
It remains unclear how to construct LOMs with other types of distributions of singular values, for instance
$\{0,\, \lambda_1,\, \lambda_2\}$ or $\{\lambda_1,\, \lambda_2\}$ where $\lambda_1$ and $\lambda_2$ are distinct positive numbers.
These may involve more complicated dynamical systems and perhaps chaotic phenomena.
2. It seems unknown in general whether the cone over the image of $i$ itself in Remark \ref{r31} is area-minimizing.
This would require a systematic understanding of the second fundamental form of $i$.
It would also greatly help toward a complete classification of which Lawson-Osserman cones associated to LOMSEs
are area-minimizing.
3. What about the finitely many remaining cases in \cite{TZ}?
The most famous may be the cone over the image of the Veronese map in $S^4$, a focal submanifold of the isoparametric foliation with $g=3$ and $m=1$.
It is still open whether the cone over this image, a minimally embedded $\mathbb RP^2$ of constant curvature, is a minimizing current mod $2$ (see \cite{Zm}).
{\ }
\section*{Acknowledgement}
The author would like to thank MPIM at Bonn for warm hospitality.
This work was sponsored in part by {the}
NSFC (Grant No. {11601071})
and a Start-up Research Fund from Tongji University.
{\ }
\section{Introduction}
Recommender systems are an important means of improving a user's web experience.
Collaborative filtering is a widely-applied technique in recommender systems~\citep{ricci2015recommender}, in which patterns across similar users and items are leveraged to predict user preferences~\citep{su2009survey}. This naturally fits within the learning paradigm of latent variable models (LVMs)~\citep{bishop2006pattern}, where latent representations capture the shared patterns. Due to their simplicity and effectiveness, LVMs are still a dominant approach.
Traditional LVMs employ linear mappings of limited modeling capacity~\citep{paterek2007improving,mnih2008probabilistic}, and a growing body of literature involves applying deep neural networks (DNNs) to collaborative filtering to create more expressive models~\citep{he2017neural,wu2016collaborative,liang2018variational}.
Among them, variational autoencoders (VAEs)~\citep{kingma2013auto,rezende2014stochastic} have been proposed as non-linear extensions of LVMs~\citep{liang2018variational}. Empirically, VAEs significantly outperform many competing LVM-based methods. One essential contribution to the improved performance is the use of the multinomial likelihood, which is argued by ~\citet{liang2018variational} to be a close proxy to ranking loss.
This property is desirable, because in recommender systems we generally care more about the ranking of predictions than an individual item's score.
Hence, prediction results are often evaluated using top-$N$ ranking-based metrics, such as Normalized Discounted Cumulative Gain (NDCG)~\citep{jarvelin2002cumulated}. The VAE is trained to maximize the likelihood of observations; as shown below, this does not necessarily result in higher ranking-based scores.
A natural question concerns whether one may directly optimize against ranking-based metrics, which are by nature non-differentiable and piecewise-constant.
Previous work on learning-to-rank has explored this question in the information-retrieval community, where relaxations/approximations of the ranking loss are considered~\citep{weimer2008cofi,liu2009learning,li2014learning,weston2013learning}.
In this paper, we borrow the actor-critic idea from reinforcement learning (RL)~\citep{sutton1998reinforcement} to propose an efficient and scalable learning-to-rank algorithm. The critic is trained to approximate the ranking metric, while the actor is trained to optimize against this learned metric.
Specifically, with the goal of making the actor-critic approach practical for recommender systems,
we introduce a novel feature-based critic architecture. Instead of treating raw predictions as the critic input, and hoping the neural network will discover the metric's structure from massive data, we consider engineering sufficient statistics for efficient critic learning.
Experimental results on three large-scale datasets demonstrate the actor-critic's ability to significantly improve the performance of a variety of latent-variable models, and achieve better or comparable performance to strong baseline methods.
\section{Background: VAEs for Collaborative Filtering}
Vectors are denoted as bold lower-case letters $\boldsymbol{x}$, matrices as bold uppercase letters $\Xmat$, and scalars as lower-case non-bold letters $x$. We use $\circ$ for function composition, $\odot$ for the element-wise multiplication, and $| \cdot | $ for cardinality of a set. $\delta( \cdot )$ is the indicator function.
We use $n \in \{1, \dots , N \}$ to index users, and $m \in \{1, \dots, M \}$ to index items. The user-item interaction matrix $\Xmat \in \{0,1\}^{N \times M }$ collected from the users' implicit feedback is defined as:
\begin{align}
x_{nm} =
\left\{\begin{array}{ll}
1, & \text{if interaction of user}~n~\text{with item}~m~\text{is observed};\\
0, & \text{otherwise.}
\end{array}\right.
\end{align}
Note that $x_{nm}=0$ does not necessarily mean user $n$ dislikes item $m$;
they may simply be unaware of the item.
Further, $x_{nm}=1$ is not equivalent to saying user $n$ likes item $m$, but that there is at least interest.
\paragraph{VAE model}~\hspace{-4mm}
VAEs have been investigated for collaborative filtering \citep{liang2018variational}, where this principled Bayesian approach is shown to achieve strong performance on large-scale datasets.
Given the user's interaction history $\boldsymbol{x} = [x_1, . . . , x_M ]^{\top} \in \{0,1\}^M$, our goal is to predict the full interaction behavior with all remaining items.
To simulate this process during training, a random binary mask $\bv \in \{0,1\}^M$ is introduced, with the entry $1$ as {\it un-masked}, and $0$ as {\it masked}. Thus, $\boldsymbol{x}_h = \boldsymbol{x} \odot \bv$ is the user's partial interaction history. The goal becomes recovering the masked interactions: $\boldsymbol{x}_p = \boldsymbol{x} \odot (1 - \boldsymbol{x}_h)$, which is equivalent to recovering the full $\boldsymbol{x}$ as $\boldsymbol{x}_h$ is known.
In LVMs, each user's binary interaction behavior is assumed to be controlled by a $k$-dimensional user-dependent latent representation $\boldsymbol{z} \in \mathbb{R}^K$. When applying VAEs to collaborative filtering~\citep{liang2018variational}, the user's latent feature $\boldsymbol{z}$ is represented as a distribution $q(\boldsymbol{z} | \boldsymbol{x} )$, obtained from some partial history $\boldsymbol{x}_h$ of $\boldsymbol{x}$. With the assumption that $q(\boldsymbol{z} | \boldsymbol{x} )$ follows a Gaussian form, the {\em inference} of $\boldsymbol{z}$ for the corresponding $\boldsymbol{x}$ is performed as:
\begin{align} \label{eq_inference} \hspace{-10mm}
q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x} ) = \mathcal{N}(\muv, \mbox{diag}(\sigmav^2)),
~~\text{with}~~ \muv, \sigmav^2=f_{\boldsymbol{\phi}}(\boldsymbol{x}_{h}),~~
\boldsymbol{x}_h = \boldsymbol{x} \odot \bv,~~\hspace{1mm} \bv \sim \mbox{Ber}(\alpha),
\end{align}
where $\alpha$ is the hyper-parameter of a Bernoulli distribution, $f_{\boldsymbol{\phi}}$ is a $\boldsymbol{\phi}$-parameterized neural network, which outputs the mean $\muv$ and variance $\sigmav^2$ of the Gaussian distribution.
After obtaining a user's latent representation $\boldsymbol{z}$, we use the {\em generative} process to make predictions. In \citet{liang2018variational} a multinomial distribution is used to model the likelihood of items.
Specifically, to construct $p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z})$, $\boldsymbol{z}$ is transformed to produce a probability distribution $\boldsymbol{\pi}$ over $M$ items, from which the interaction vector $\boldsymbol{x}$ is assumed to
have been drawn:
\vspace{-0mm}
\begin{align} \label{eq_multi}
\boldsymbol{x} \sim \mbox{Mult} (\boldsymbol{\pi}), ~~\mbox{with}~~\boldsymbol{\pi} =\mbox{Softmax} (g_{\boldsymbol{\theta}} (\boldsymbol{z}))
\end{align}
where $g_{\boldsymbol{\theta}}$ is a $\boldsymbol{\theta}$-parameterized neural network.
The output $\boldsymbol{\pi}$ is normalized via a softmax function to produce a probability vector $\boldsymbol{\pi} \in \Delta^{M-1}$
(an ($M-1$)-simplex) over the entire item set.
\paragraph{Training Objective}~\hspace{-2mm}
Learning VAE parameters $\{\boldsymbol{\phi}, \boldsymbol{\theta}\}$ yields the following generalized objective:
\vspace{-1mm}
\begin{align} \label{eq_reg_elbo} \hspace{-2mm}
\mathcal{L}_{\beta}(\boldsymbol{x}; \boldsymbol{\theta}, \boldsymbol{\phi})
\!=\! \mathcal{L}_{E} \!+\!\beta \mathcal{L}_{R},
~\text{with}~
\mathcal{L}_{E}\!=\!-\mathbb{E}_{q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x})} \big[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}) \big]
~\text{and}~ \mathcal{L}_{R}\!=\!\mbox{KL} (q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x}) || p(\boldsymbol{z}) )
\end{align}
where $\mathcal{L}_{E}$ is the {\it negative log likelihood} (NLL) term, $\mathcal{L}_{R}$ is the KL regularization term with standard normal prior $p(\boldsymbol{z})$, and $\beta$ is a weighting hyper-parameter.
When $\beta=1$, we can lower-bound the log marginal likelihood of the data using \eqref{eq_reg_elbo} as
$
-\mathcal{L}_{\beta=1}(\boldsymbol{x}; \boldsymbol{\theta}, \boldsymbol{\phi}) \le \log p(\boldsymbol{x})
$.
This is commonly known as the {\it evidence lower bound} (ELBO) in variational inference~\citep{blei2017variational}. Thus \eqref{eq_reg_elbo} is the negative $\beta$-regularized ELBO. To improve the optimization efficiency, the {\it reparametrization trick}~\citep{kingma2013auto,rezende2014stochastic} is used to draw samples $\boldsymbol{z} \sim q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x})$ to obtain an unbiased estimate of the ELBO, which is further optimized via stochastic optimization.
We call this procedure {\it maximum likelihood estimate (MLE)}-based training, as it effectively maximizes the (regularized) ELBO. The testing stage of VAEs for collaborative filtering is detailed in Section~\ref{sec:testing_vae} of the Supplement.
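To make the training objective concrete, below is a minimal NumPy sketch of a single evaluation of $\mathcal{L}_{\beta}$ in \eqref{eq_reg_elbo} for one user; the random linear maps stand in for trained $f_{\boldsymbol{\phi}}$ and $g_{\boldsymbol{\theta}}$, and all shapes and hyper-parameter values are illustrative assumptions rather than the architecture of \citet{liang2018variational}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K, alpha, beta = 100, 16, 0.5, 0.2

x = (rng.random(M) < 0.1).astype(float)       # one user's interactions
b = (rng.random(M) < alpha).astype(float)     # Bernoulli(alpha) mask
x_h = x * b                                   # partial history

W_mu  = rng.normal(0, 0.05, (K, M))           # stand-in encoder weights
W_sig = rng.normal(0, 0.05, (K, M))
W_dec = rng.normal(0, 0.05, (M, K))           # stand-in decoder weights

mu, log_sig2 = W_mu @ x_h, W_sig @ x_h        # inference network output
z = mu + np.exp(0.5 * log_sig2) * rng.normal(size=K)   # reparametrization
logits = W_dec @ z
pi = np.exp(logits - logits.max()); pi /= pi.sum()     # softmax decoding

L_E = -np.sum(x * np.log(pi + 1e-12))                  # multinomial NLL
L_R = 0.5 * np.sum(mu**2 + np.exp(log_sig2) - log_sig2 - 1)  # KL(q || N(0,I))
print("L_beta = L_E + beta * L_R =", L_E + beta * L_R)
\end{verbatim}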
\paragraph{Advantages of VAEs}
The VAE framework successfully scales to relatively large datasets by making use of amortized inference~\citep{gershman2014amortized}: the prediction for all users share the same procedure, which effectively requires evaluating two functions -- the encoder $f_{\boldsymbol{\phi}}(\cdot)$ and the decoder $g_{\boldsymbol{\theta}}(\cdot)$. Crucially, as all users share the same encoder/decoder, the number of parameters required for an autoencoder is independent of the number of users. This is in contrast to some traditional latent factor collaborative filtering
models~\citep{paterek2007improving,hu2008collaborative,mnih2008probabilistic}, where a unique latent vector is learned for each user. The reuse of encoder/decoder for all users is well-aligned with collaborative filtering, where user preferences are analyzed by exploiting the similar patterns inferred from past experiences~\citep{liang2018variational}. VAEs has the two advantages {\em simultaneously}: expressive representation power as a non-linear model, and the number of parameters being independent of the number of users.
\begin{wrapfigure}{R}{0.52\textwidth}
\vspace{-4mm}
\begin{tabular}{c}
\includegraphics[width=7.00cm]{figs/example_nll_dcg.pdf} \\
\end{tabular}
\vspace{-0mm}
\caption{\small Difference between MLE-based training loss and ranking-based evaluation.
For A, $- 1 \! \times \!\log 0.8 - 1 \! \times \! \log 0.1 =- \log 0.08$; For B, $- 1 \! \times \! \log 0.3 - 1 \times \! \log 0.3 =- \log 0.09$.
NLL assigns a better value to the misranked example than to the properly-ranked one. NDCG always assigns maximum value to properly-ranked scorings.
}
\label{fig:example_div}
\end{wrapfigure}
\paragraph{Pitfalls of VAEs}
Among various likelihood forms, it was argued in~\citet{liang2018variational} that multinomial likelihoods are a closer proxy to the ranking loss than the traditional Gaussian or logistic likelihoods.
Though simple and effective, the MLE procedure
may still diverge with the ultimate goal in recommendation of correctly suggesting the top-ranked items.
To illustrate the divergence between MLE-based training and ranking-based evaluation, consider the example in Figure~\ref{fig:example_div}. For the target $\boldsymbol{x}=\{1,1,0,0\}$, two different predictions $A$ and $B$ are provided. In MLE, the training loss is the multinomial NLL: $-\boldsymbol{x} \log \boldsymbol{\pi} $, where $\boldsymbol{\pi}$ is the predicted probability. From the NLL point of view, $B$ is a better prediction than $A$, because $B$ shows a lower loss than $A$. However, $B$ ranks an incorrect item highest, and therefore would return a worse recommendation than $A$. Fortunately, NDCG is calculated directly from the ranking, and so captures this dependence. This inspired us to directly use ranking-based evaluation metrics to guide training. For details on calculating NDCG, refer to Section~\ref{sec:evaluation_protocol} of the Supplement.
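The figure's arithmetic can be checked directly. The two trailing probabilities of each prediction are not specified in the figure; the completions below are our own, chosen so that, as in the figure, $B$ ranks an incorrect item highest:
\begin{verbatim}
import numpy as np

x    = np.array([1.0, 1.0, 0.0, 0.0])        # ground-truth interactions
pi_A = np.array([0.8, 0.1, 0.05, 0.05])      # ranks both true items on top
pi_B = np.array([0.3, 0.3, 0.4, 0.0])        # ranks an incorrect item highest

def nll(x, pi):                               # multinomial NLL: -x . log(pi)
    return -np.sum(x * np.log(pi + 1e-12))

def ndcg(x, pi, k=4):
    order = np.argsort(-pi)[:k]               # items sorted by predicted score
    disc = 1.0 / np.log2(np.arange(2, k + 2))
    return (x[order] * disc).sum() / (np.sort(x)[::-1][:k] * disc).sum()

for name, pi in [("A", pi_A), ("B", pi_B)]:
    print(name, "NLL = %.3f" % nll(x, pi), "NDCG = %.3f" % ndcg(x, pi))
# NLL prefers B (-log 0.09 < -log 0.08), while NDCG prefers A (1.000 > 0.693).
\end{verbatim}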
\vspace{-3mm}
\section{Ranking-Critical Training}
\vspace{-3mm}
We introduce a novel algorithm for recommender system training, which we call Ranking-Critical Training (RaCT). RaCT learns a differentiable approximation to the ranking metric, which the prediction network then leverages as a target for optimization through gradient ascent. This is in contrast to existing methods in collaborative filtering, which define an objective relaxation ahead of time. This methodology of learning approximations to functions which cannot be optimized directly stems from the actor-critic paradigm of RL, which we adapt for collaborative filtering.
Any ranking-based evaluation metric can be considered as a ``black box'' function $\omega: \{ \boldsymbol{\pi}; \boldsymbol{x}, \bv \} \mapsto y \in [0, 1]$, which takes in the prediction $\boldsymbol{\pi}$ to compare with the ground-truth $\boldsymbol{x}$ (conditioned on the mask $\bv$), and outputs a scalar $y$ to rate the prediction quality.
As in \eqref{eq_inference}, $\bv$ partitions a user's interactions into those that are ``observed'' and ``unobserved'' during inference.
As we are only interested in recovering the unobserved items in recommendation, we compute the ranking score of predicted items $\boldsymbol{\pi}_p = \boldsymbol{\pi} \odot (1-\boldsymbol{x}_h)$ based on the ground-truth items $\boldsymbol{x}_p$.
One salient component of a ranking-based Oracle metric $\omega^*$ is to sort $\boldsymbol{\pi}_p$. The sorting operation is non-differentiable, rendering it impossible to directly use $\omega^*$ as the critic.
While REINFORCE~\citep{williams1992simple} may appear to be suited to tackle the non-differentiable problem, it suffers from large estimate variance~\citep{silver2014deterministic}, especially in the collaborative filtering problem, which has a very large prediction space.
This motivates consideration of a differentiable neural network to approximate the mapping executed by the Oracle.
In the actor-critic framework, the prediction network is called the \textit{actor}, and the network which approximates the oracle is called the \textit{critic}. The actor begins by making a prediction (action) given the user's interaction history as the state. The critic learns to estimate the value of each action, which we define as the task-specific reward, \ie the Oracle's output.
The value predicted by the critic is then used to train the actor.
Under the assumption that the critic produces the exact values, the actor is trained based on an unbiased estimate of the gradient of the prediction value in terms of relevant ranking quality metrics.
In Figure~\ref{fig:schemes}, we illustrate the actor-critic paradigm in (b), and the traditional auto-encoder shown in (a) can be used as the actor in our paradigm.
\begin{figure*}[t!]
\vspace{-0mm}\centering
\begin{tabular}{c c}
\hspace{-0mm}
\includegraphics[height=2.2cm]{figs/ae_scheme.pdf} &
\hspace{0mm}
\includegraphics[height=2.2cm]{figs/rct_scheme.pdf} \\
(a) Traditional auto-encoder paradigm \vspace{-0mm} &
(b) Proposed actor-critic paradigm \hspace{-0mm}
\end{tabular}
\vspace{-1mm}
\caption{Illustration of learning parameters $\{\boldsymbol{\phi},\boldsymbol{\theta}\}$ in the two different paradigms. (a) Learning with MLE, as in VAEs; (b) Learning with a learned ranking-critic. The {\it actor} can be viewed as the function composition of encoder $f_{\boldsymbol{\phi}}(\cdot)$ and $g_{\boldsymbol{\theta}}(\cdot)$ in VAEs. The {\it critic} mimics the ranking-based evaluation scores, so that it can provide ranking-sensitive feedback in the actor learning.}
\vspace{-5mm}
\label{fig:schemes}
\end{figure*}
\paragraph{Naive critic}
Conventionally one may concatenate vectors $[\boldsymbol{\pi}_p, \boldsymbol{x}_p ]$ as input to a neural network, and train a network to output the measured ranking scores $y$.
However, this naive critic is impractical, and failed in our experiments. Our hypothesis is that since this network architecture has a huge number of parameters to train (as the input data layer is of length $2M$, where $M>10k$), it would require rich data for training. Unfortunately, this is impractical: $\{\boldsymbol{\pi}, \boldsymbol{x}\} \in \mathbb{R}^M$ are very high-dimensional, and the implicit feedback used in collaborative filtering is naturally sparse.
\vspace{-0mm}
\paragraph{Feature-based critic}
The naive critic hopes a deep network can discover structure from massive data by itself, leaving much valuable domain knowledge unused.
We propose a more efficient critic that takes into account the structure underlying the assumed likelihood in MLE~\citep{miyato2018cgans}. We describe our intuition and method below, and provide a justification from the perspective of adversarial learning in Section~\ref{sec:gan} of the Supplement.
Consider the computation procedure of the evaluation metric as a function decomposition $ \omega = \omega_{0} \circ \omega_{\boldsymbol{\psi}}$, including two steps:
\vspace{-2mm}
\begin{itemize}
\item
$ \omega_{0}: \boldsymbol{\pi} \mapsto \hv $, feature engineering of prediction $ \boldsymbol{\pi} $ into the {\it sufficient statistics} $\hv$ ;
\item
$ \omega_{\boldsymbol{\psi}}: \hv \mapsto \hat{y} $, neural approximation of the mapping from the statistics $\hv$ to the estimated ranking score $\hat{y}$, using a $\boldsymbol{\psi}$-parameterized neural network.
%
\end{itemize}
The success of this two-step critic largely depends on the effectiveness of the feature $\hv$. We hope feature $\hv$ is $(\RN{1})$ {\it compact} so that fewer parameters in the critic $ \omega_{\boldsymbol{\psi}} $ can simplify training; $(\RN{2})$ {\it easy-to-compute} so that training and testing is efficient; and $(\RN{3})$ {\it informative} so that the necessary information is preserved.
We suggest using a 3-dimensional vector as the feature, and leave more complicated feature engineering for future work. In summary, our feature is
\begin{align} \label{eq_features}
\hv = [ \mathcal{L}_{E} , | \mathcal{H}_0 |, |\mathcal{H}_1 |],
\end{align}
where
$(\RN{1})$ $\mathcal{L}_{E}$ is the negative log-likelihood in~\eqref{eq_reg_elbo}, defined in the MLE training loss.
$(\RN{2})$ $| \mathcal{H}_0 |$ is the number of unobserved items that a user will interact with, where $\mathcal{H}_0 = \{m \,|\, x_m = 1 ~\text{and}~b_m = 0\}$.
$(\RN{3})$ $| \mathcal{H}_1 |$ is the number of observed items that a user has interacted with, where $\mathcal{H}_1 = \{m \,|\, x_m = 1 ~\text{and}~b_m = 1\}$.
The NLL characterizes the prediction quality of the actor's output $\pi$ against the ground-truth $\boldsymbol{x}$ in an item-to-item comparison manner, \eg the inner product between two vectors $-\boldsymbol{x} \log \boldsymbol{\pi} $ as in the multinomial NLL~\citep{liang2018variational}. Ranking is made easier when there are many acceptable items to rank highly (e.g. when $| \mathcal{H}_0 |$ is large), and made difficult when predicting from very few interactions (e.g. when $| \mathcal{H}_1 |$ is small), motivating these two features. Including these three features allows the critic to guide training by weighting the NLL's relation to ranking given this context about the user. Interestingly, this idea to consider the importance of user behavior statistics coincides with the scaling trick in SVD~\citep{nikolakopoulos2019eigenrec}.
Note that $| \mathcal{H}_0 |$ and $| \mathcal{H}_1 |$ are user-specific, indicating the user's frequency to interact with the system, which can be viewed as side-information about the user. They are only used as features in training the critic to better approximate the ranking scores, and not in training the actor. Hence, we do not use additional information in the testing stage.
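In code, the feature map $\omega_0:\boldsymbol{\pi}\mapsto\hv$ of \eqref{eq_features} amounts to a few lines; this sketch uses our own NumPy conventions (binary float vectors; the batched version is analogous):
\begin{verbatim}
import numpy as np

def critic_features(x, b, pi):
    """x: interactions, b: mask (1 = observed), pi: actor prediction."""
    L_E = -np.sum(x * np.log(pi + 1e-12))     # multinomial NLL feature
    H0  = np.sum((x == 1) & (b == 0))         # |H_0|: held-out interactions
    H1  = np.sum((x == 1) & (b == 1))         # |H_1|: observed interactions
    return np.array([L_E, H0, H1], dtype=float)
\end{verbatim}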
\paragraph{Actor Pre-training} In order to be a helpful feature for the critic, the NLL must hold some relationship to the ranking-based objective function. But for the high-dimensional datasets common to collaborative filtering, the ranking score is near-uniformly zero for a randomly-initialized actor. In this situation, a trained critic will not propagate derivatives to the actor, and therefore the actor will not improve. We mitigate this problem by using a pre-trained actor, such as VAEs that have been trained via MLE.
\paragraph{Critic Pre-training}
Training a generic critic to approximate the ranking scores for all possible predictions is difficult and cumbersome. Furthermore, it is unnecessary.
In practice, a critic only needs to estimate the ranking scores on the restricted domain of the current actor's outputs. Therefore, we train the critic offline on top of the pre-trained MLE-based actor.
To train the critic, we minimize the Mean Square Error (MSE) between the critic output and true ranking score $y$ from the Oracle:
\begin{align}~\label{eq_critic}
\vspace{-3mm}
\mathcal{L}_{C}(\hv, y; \boldsymbol{\psi}) = \| \omega_{\boldsymbol{\psi}} (\hv) - y \|^2 ,
\end{align}
where the target $y$ is generated from its non-differentiable definition, which plays the role of a ground-truth simulator during training.
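As an illustration of this pre-training stage, the self-contained toy below fits a small MLP critic by minimizing the MSE of \eqref{eq_critic} against oracle NDCG@$k$ scores; the synthetic interactions and the noisy stand-in actor are assumptions for illustration only.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N, M, k = 2000, 50, 10
X = (rng.random((N, M)) < 0.15).astype(float)     # toy implicit feedback
B = (rng.random((N, M)) < 0.5).astype(float)      # per-user masks
S = 3.0 * X * B + rng.normal(size=(N, M))         # noisy stand-in actor scores
Pi = np.exp(S) / np.exp(S).sum(1, keepdims=True)

disc = 1.0 / np.log2(np.arange(2, k + 2))
H, y = [], []
for x, b, pi in zip(X, B, Pi):
    xp, pip = x * (1 - b), pi * (1 - b)           # held-out part only
    H.append([-np.sum(x * np.log(pi + 1e-12)), xp.sum(), (x * b).sum()])
    top = np.argsort(-pip)[:k]                    # oracle NDCG@k as target
    ideal = disc[: min(k, int(xp.sum()))].sum()
    y.append((xp[top] * disc).sum() / ideal if ideal > 0 else 0.0)

critic = MLPRegressor((16, 16), max_iter=3000, random_state=0)
critic.fit(np.array(H), np.array(y))              # minimize the MSE objective
print("critic fit R^2:", round(critic.score(np.array(H), np.array(y)), 3))
\end{verbatim}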
\paragraph{Actor-critic Training}
Once the critic is well trained, we fix its parameters $\boldsymbol{\psi}$ and update the actor parameters $\{ \boldsymbol{\phi}, \boldsymbol{\theta} \}$ to maximize the estimated ranking score
\begin{align}~\label{eq_actor}
\mathcal{L}_{A}(\hv ; \boldsymbol{\phi}, \boldsymbol{\theta}) = \omega_{\boldsymbol{\psi}} (\hv),
\end{align}
where $\hv$ is defined in~\eqref{eq_features},
including NLL feature extracted from the prediction made in~\eqref{eq_reg_elbo}, together with count features.
During back-propagation, the gradient of $\mathcal{L}_{A}$ wrt the prediction $\boldsymbol{\pi}$ is
$
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} } =
\frac{\partial \mathcal{L}_{A}}{\partial \hv } \frac{\partial \hv}{\partial \boldsymbol{\pi} } .
$
It further updates the actor parameters, with the encoder gradient
$ \frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\phi} } =
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} }
\frac{\partial \boldsymbol{\pi}}{\partial \boldsymbol{\phi} } $
and the decoder gradient
$ \frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\theta} } =
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} }
\frac{\partial \boldsymbol{\pi}}{\partial \boldsymbol{\theta} } $.
Updating the actor changes its predictions, so we must update the critic to produce the correct ranking scores for its new input domain.
The full RaCT training procedure is summarized in Algorithm~1 in the Supplement.
Stochastic optimization is used, where a batch of users
$\mathcal{U} = \{\boldsymbol{x}_i | i \in \mathcal{B} \}$ is drawn at each iteration, with $\mathcal{B}$ as a random subset of user index in $\{1, \cdots, N\}$. The pre-training of the actor in Stage 1 and the critic in Stage 2 are important; they provide good initialization to the actor-critic training in Stage 3 for fast convergence. Further, we provide an alternative interpretation to view our actor-critic approach in \eqref{eq_critic} and \eqref{eq_actor} from the perspective of adversarial learning~\citep{goodfellow2014generative} in the Supplement. This can partially justify our choice of feature engineering.
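Putting the stages together, the following self-contained toy (written in PyTorch for brevity; a schematic sketch with sizes and names of our own choosing, not the authors' TensorFlow implementation, and with a plain MLP in place of the VAE actor) alternates the critic regression of \eqref{eq_critic} with actor ascent on \eqref{eq_actor}:
\begin{verbatim}
import torch, torch.nn as nn

torch.manual_seed(0)
N, M, K, k = 512, 40, 8, 10
X = (torch.rand(N, M) < 0.2).float()               # interactions x
B = (torch.rand(N, M) < 0.5).float()               # masks b
disc = 1.0 / torch.log2(torch.arange(2.0, k + 2.0))

actor = nn.Sequential(nn.Linear(M, K), nn.Tanh(), nn.Linear(K, M))
critic = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
opt_a = torch.optim.Adam(actor.parameters(), 1e-3)
opt_c = torch.optim.Adam(critic.parameters(), 1e-3)

def features(pi, x, b):                            # h = [L_E, |H0|, |H1|]
    nll = -(x * torch.log(pi + 1e-12)).sum(1, keepdim=True)
    return torch.cat([nll, (x * (1 - b)).sum(1, keepdim=True),
                      (x * b).sum(1, keepdim=True)], 1)

def oracle_ndcg(pi, x, b):                         # non-differentiable target y
    xp, pip = x * (1 - b), pi * (1 - b)
    idx = pip.argsort(1, descending=True)[:, :k]
    dcg = (xp.gather(1, idx) * disc).sum(1)
    ideal = torch.stack([disc[: min(k, int(m))].sum() for m in xp.sum(1)])
    return (dcg / ideal.clamp(min=1e-12)).unsqueeze(1)

for step in range(300):
    pi = torch.softmax(actor(X * B), 1)
    # critic step: regress toward the oracle ranking score
    loss_c = ((critic(features(pi, X, B).detach())
               - oracle_ndcg(pi, X, B)) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # actor step: ascend the critic's differentiable score estimate
    pi = torch.softmax(actor(X * B), 1)
    loss_a = -critic(features(pi, X, B)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
print(oracle_ndcg(torch.softmax(actor(X * B), 1), X, B).mean().item())
\end{verbatim}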
\section{Related Work}
{\bf Deep Learning for Collaborative Filtering}
There are many recent efforts focused on developing deep learning models for collaborative filtering~\citep{sedhain2015autorec,xue2017deep,he2018outer,he2018adversarial,zhang2017deep,chen2017attentive}.
Early work on DNNs focused on explicit feedback settings~\citep{georgiev2013non,salakhutdinov2007restricted,zheng2016neural}, such as rating predictions.
Recent research gradually recognized the importance of implicit feedback~\citep{wu2016collaborative,he2017neural,liang2018variational}, where the user's preference is not explicitly presented~\citep{hu2008collaborative}. This setting is more practical but challenging, and is the focus of our work.
The proposed actor-critic method belongs to the general two-level architectures for recommendation systems, where a coarse to fine prediction procedure is used. For a systematic method comparison for top-N recommendation tasks, we suggest referring to~\citet{dacrema2019we}.
Our method is closely related to three papers, on VAEs~\citep{liang2018variational}, collaborative denoising autoencoder (CDAE)~\citep{wu2016collaborative} and neural collaborative filtering (NCF)~\citep{he2017neural}.
CDAE and NCF may suffer from scalability issues: the model size grows linearly with both the number of users as well as items.
The VAE~\citep{liang2018variational} alleviates this problem via amortized inference.
Our work builds on top of the VAE, and improves it by optimizing to the ranking-based metric.
{\bf Learned Metrics in Vision \& Languages } Recent research in computer vision and natural language processing has generated excellent results, using learned instead of hand-crafted metrics. Among the rich literature of generating realistic images via generative adversarial networks (GANs)~\citep{goodfellow2014generative,radford2015unsupervised,karras2017progressive}, our work is most similar to ~\citet{larsen2016autoencoding}, where the VAE objective~\citep{kingma2013auto} is augmented with the learned representations in the GAN discriminator~\citep{goodfellow2014generative} to better measure image similarities.
For language generation, the discrepancy between word-level MLE training and sequence-level semantic evaluation has been alleviated with GANs or RL techniques~\citep{bahdanau2016actor,ren2017deep,lin2017adversarial}. The RL approach directly optimizes the metric used at test time,
and has shown improvement on various applications, including dialogue~\citep{li2016deep}, image captioning~\citep{rennie2017self} and translations~\citep{ranzato2015sequence}.
Despite the significant successes in other domains, there has been little if any research reported for directly learning the metrics with deep neural networks for collaborative filtering. Our work fills the gap, and we hope it inspires more research in this direction.
{\bf Learning to Rank (L2R)} The idea of L2R has existed for two decades in the information-retrieval community. The goal is to maximize a given ranking-based evaluation metric~\citep{liu2009learning,li2014learning}, generally through optimizing objective relaxations~\citep{weimer2008cofi}.
Many L2R methods used in recommendation, such as the popular pairwise L2R methods BPR ~\citep{rendle2009bpr} and WARP~\citep{weston2011wsabie}, are trained by optimizing a pairwise classification function that penalizes mis-ranked pairs of items. Through negative sampling~\citep{hu2008collaborative}, these methods can scale to extremely high-dimensional output spaces. However, it is computationally expensive to compute low-variance updates to a model when the number of items is large.
An alternative to the pairwise approach is \textit{listwise} loss functions, which minimize a loss calculated from a user's entire interaction history. By considering the entire interaction history these methods can more closely model ranking, and generally perform better than their pairwise counterparts~\citep{xia2008listmle}.
Furthermore, compared to methods which calculate relative ranking for each pair ~\citep{weston2011wsabie}, the per-user amortization of rank-calculation can be computed more efficiently.
NLL is an example of a listwise loss function, as it is calculated over a user's entire interaction history. Interestingly, NLL is also used as the loss function for ListNet~\citep{cao2007listnet}, a classic listwise L2R method designed to probabilistically maximize Top-1 Recall. The VAE framework under NLL can be seen as a principled extension of this method to Top-N collaborative filtering. Our ranking-critical training further extends this methodology by explicitly calculating the relationship between a differentiable listwise loss function and the desired ranking-based evaluation function.
\vspace{-2mm}
\section{Experiments}
\vspace{-2mm}
\paragraph{Experimental Settings}
We implemented our algorithm in TensorFlow. The source code to reproduce the experimental results and plots is included as Supplementary Material.
We conduct experiments on three publicly available
large-scale datasets, which represent different item recommendation scenarios, including user-movie ratings and user-song play counts. This is the same set of user-item consumption datasets used in~\citet{liang2018variational}, and we keep the same pre-processing steps for fair comparison.
The statistics of the datasets, evaluation protocols and hyper-parameters are summarized in the Supplement.
VAE~\citep{liang2018variational} is used as the baseline, which plays the role of our actor pre-training. The NDCG@100 ranking metric is used as the critic's target in training.
{\bf Baseline Methods}~~~
We use ranking-critical training to improve the three MLE-based methods described in Section 2.1: VAE, DAE, and MF. We also adapt traditional L2R methods as the actors in our framework, where the L2R loss is used to replace $\mathcal{L}_{E}$ in \eqref{eq_features} to construct the feature. We consider WARP and LambdaRank, two pairwise loss functions designed for optimizing NDCG, for these experiments. We also compare our approaches with four representative baseline methods in collaborative filtering. CDAE~\citep{wu2016collaborative} is a strongly-performing neural-network based method, weighted MF~\citep{hu2008collaborative} is a linear latent-factor model, and SLIM~\citep{ning2011slim} and EASE~\citep{steck2019ease} are item-to-item similarity models. We additionally compare with Bayesian Pairwise Ranking~\citep{rendle2009bpr}, but as this method did not yield competitive performance on these datasets, we omit the results.
\begin{figure*}[t!]
\vspace{-0mm}\centering
\begin{tabular}{c c c}
\hspace{-2mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_ml-20m.pdf} &
\hspace{-4mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_netflix.pdf} &
\hspace{-4mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_msd.pdf}
\vspace{-2mm}
\\
(a) ML-20M dataset \vspace{-0mm} &
(b) Netflix dataset \hspace{-0mm} &
(c) MSD dataset\hspace{-0mm} \\
\end{tabular}
\vspace{-2mm}
\caption{Performance improvement (NDCG@100) with RaCT over the VAE baseline.}
\vspace{-4mm}
\label{fig:improvement}
\end{figure*}
\subsection{Overall Performance of RaCT}
\vspace{-2mm}
\paragraph{Improvement over VAE}
In Figure~\ref{fig:improvement}, we show the learning curves of RaCT and VAE on the validation set. The VAE converges to a plateau by the time that the RaCT finishes its actor pre-training stage, \eg 150 epochs on ML-20 dataset, after which the VAE's performance is not improving. By contrast, when the RaCT is plugged in, the performance shows a significant immediate boost. For the amount of improvement gain, RaCT takes only half the number of epochs that VAE takes in the end of actor pre-training. For example, RaCT takes 50 epochs (from 150 to 200) to achieve an improvement of 0.44-0.43 = 0.01, while VAE takes 100 epochs (from 50 to 150) to achieve an improvement of 0.43-0.424 = 0.006.
\begin{wrapfigure}{R}{0.55\textwidth}
\vspace{-0mm}
\centering
\begin{tabular}{c c}
\hspace{-4mm}
\includegraphics[height=3.5cm]{figs/correlation_objective/correlation_scatter_training_softmax.png} &
\hspace{-7mm}
\includegraphics[height=3.5cm]{figs/correlation_objective/correlation_scatter_training_ract.png}
\\
(a) MLE\vspace{-0mm} &
(b) RaCT \hspace{-0mm} \\
\end{tabular}
\vspace{-2mm}
\caption{Correlation between the learning objectives (MLE or RaCT) and evaluation metrics on training.}
\vspace{-2mm}
\label{fig:correlation}
\end{wrapfigure}
\paragraph{Training/Evaluation Correlation} We visualize scatter plots between learning objectives and evaluation metric for all users on ML-20M dataset in Figure~\ref{fig:correlation}. More details and an enlarged visualization is shown in Figure~\ref{fig:correlation_supp} of the Supplement.
The Pearson's correlation $r$ is computed. NLL exhibits low correlation with the target NDCG ($r$ is close to zero), while the learned metric in RaCT shows much higher positive correlation. It strongly indicates RaCT optimizes a more direct objective than an MLE approach. Further, NLL should in theory have a negative correlation with the target NDCG, as we wish that minimizing NLL can maximize NDCG. However, in practice it yields positive correlation. We hypothesize that this is because the number of interactions for each user may dominate the NLL values. That partially motivates us to consider the number of user interactions as features.
\begin{table*}[t!]
\vspace{-2mm}
\caption{ Comparison on three large datasets. The best testing set performance is reported. The results below the line are from~\citet{liang2018variational}, and VAE$^{\ddag}$ shows the VAE results based on our runs. {\color{blue} Blue} indicates improvement over the VAE baseline, and {\bf bold} indicates overall best. }
\label{tab:compare_sota}
\begin{adjustbox}{scale=.80,tabular=c|ccc|ccc|ccc}
\toprule
Dataset &
\multicolumn{3}{ c|}{ML-20M} &
\multicolumn{3}{ c|}{Netflix} &
\multicolumn{3}{ c }{MSD} \\ \hline
Metric
& R@20 & R@50 & NDCG@100
& R@20 & R@50 & NDCG@100
& R@20 & R@50 & NDCG@100 \\
\midrule
RaCT
& \textbf{\color{blue} 0.403} & \textbf{\color{blue} 0.543} & \textbf{\color{blue} 0.434}
& {\color{blue}0.357} & \textbf{\color{blue} 0.450} & {\color{blue}0.392}
& {\color{blue} 0.268} & {\color{blue} 0.364} & {\color{blue} 0.319} \\
VAE$^{\ddag}$
& 0.396 & 0.536 & 0.426
& 0.350 & 0.443 & 0.385
& 0.260 & 0.356 & 0.310 \\ \hline
WARP
& 0.310 & 0.448 & 0.348
& 0.273 & 0.360 & 0.312
& 0.162 & 0.253 & 0.210 \\
LambdaRank
& 0.395 & 0.534 & 0.427
& 0.352 & 0.441 & 0.386
& 0.259 & 0.355 & 0.308 \\
\hline
EASE
& 0.391 & 0.521 & 0.420
& \textbf{0.362} & 0.445 & \textbf{0.393}
& \textbf{0.333} & \textbf{0.428} & \textbf{0.389} \\
VAE
& 0.395 & 0.537 & 0.426
& 0.351 & 0.444 & 0.386
& 0.266 & 0.364 & 0.316 \\
CDAE
& 0.391 & 0.523 & 0.418
& 0.343 & 0.428 & 0.376
& 0.188 & 0.283 & 0.237 \\
WMF
& 0.360 & 0.498 & 0.386
& 0.316 & 0.404 & 0.351
& 0.211 & 0.312 & 0.257 \\
SLIM
& 0.370 & 0.495 & 0.401
& 0.347 & 0.428 & 0.379
& -- & -- & -- \\
\bottomrule
\end{adjustbox}
\vspace{-3mm}
\end{table*}
{\bf Comparison with traditional L2R methods}
As examples of traditional L2R methods, we compare to our method using WARP~\citep{weston2011wsabie} and LambdaRank~\citep{burges2007learning} as the ranking-critical objectives. We use implementations of both methods designed specifically to maximize NDCG. We observe that WARP and LambdaRank are roughly 2 and 10 times more computationally expensive than RaCT per epoch, respectively. Table~\ref{tab:compare_sota} shows the results of RaCT, WARP and LambdaRank, using the same amount of wall-clock training time. We observe the trends that WARP degrades performance, and LambdaRank provides performance roughly equal to VAE. WARP's poor performance is perhaps due to poor approximation of the ranking when the number of items is large.
{\bf Comparison with existing methods}
In Table~\ref{tab:compare_sota}, we report our RaCT performance, and compare with competing methods in terms of three evaluation metrics: NDCG@100, Recall@20, and Recall@50.
We use the published code\footnote{\url{https://github.com/dawenl/vae_cf}} of~\citet{liang2018variational}, and reproduce the VAE as our actor pre-training. We further use their reported values for the classic collaborative filtering methods CDAE, WMF, and SLIM.
Our reproduced VAE results are very close to~\citet{liang2018variational} on the ML-20M and Netflix datasets, but slightly lower on the MSD dataset. The RaCT is built on top of our VAE runs, and consistently improves its baseline actor for all the evaluation metrics and datasets, as seen by comparing the rows RaCT and VAE$^{\ddag}$.
The proposed RaCT also significantly outperforms competing LVMs, including VAE, CDAE, and WMF.
When comparing to EASE~\citep{steck2019ease}, our method performs substantially better for ML-20M, comparably for Netflix, and is substantially outperformed for MSD. We observe a similar trend when comparing SLIM (an item-to-item similarity method) and CDAE (a latent variable method).
As SLIM and EASE rely on recreating the Gram matrix ${\bf G} = \Xmat^T\Xmat$, their performance should improve with the number of users~\citep{steck2019ease}. However, this performance may come at a computational cost, as inference requires multiplication with an unfactored $M \times M$ matrix. EASE requires computing a dense item-to-item similarity matrix, making its inference on MSD roughly 30 times more expensive than for VAE or RaCT. A practitioner's choice between these two methods should be informed by the specifics of the dataset as well as the demands of the system.
In the Supplement, we study the generalization of RaCT trained with different ranking-metrics in Section~\ref{sec_metrics_supp}, and break down the performance improvement with different cut-off values of NDCG in Section~\ref{sec_cut_off_supp}, and with different number of interactions of $\Xmat$ in Section~\ref{sec_interactions_supp}.
\begin{table}[t!]
\begin{minipage}{0.53\linewidth}
\centering
\begin{adjustbox}{scale=.93,tabular=l|ccc}
\toprule
Actor & Before & After & Gain \\
\midrule
VAE
& 0.4258 & 0.4339 & 8.09 \\
VAE (Gaussian)
& 0.4202 & 0.4224 & 2.21 \\
VAE ($\beta = 0$)
& 0.4203 & 0.4255 & 5.17 \\
VAE (Linear)
& 0.4156 & 0.4162 & 0.53 \\ \hline
DAE~\citep{liang2018variational}
& 0.4205 & 0.4214 & 0.87 \\
MF~\citep{liang2018variational}
& 0.4159 & 0.4172 & 1.37 \\ \hline
WARP
& 0.3123 & 0.3439 & 31.63 \\
\bottomrule
\end{adjustbox}
\vspace{1mm}
\caption{\small Performance gain ($\times 10^{-3}$) for various actors.}
\vspace{-0mm}
\label{tab:compare_actors}
\end{minipage}\hfill
\begin{minipage}{0.45\linewidth}
\vspace{-2mm}
\centering
\begin{tabular}{c}
\hspace{-5mm}
\includegraphics[height=3.50cm]{figs/plot_feature_ablation.pdf} \\
\end{tabular}
\vspace{-1mm}
\captionof{figure}{\small Ablation study on the critic features.}
\vspace{-0mm}
\label{fig:feature_ablation}
\end{minipage}
\vspace{-4mm}
\end{table}
\subsection{What Actor Can Be Improved by RaCT?}
In RL, the choice of policy plays a crucial role in the agent's performance. Similarly, we would like to study how different actor designs impact RaCT performance. Table~\ref{tab:compare_actors} shows the performance of various policies before and after applying RaCT. The results on NDCG@100 are reported. The VAE, DAE and MF models follow the setup in~\citet{liang2018variational}.
We modify one component of the VAE at a time, and check the change of performance improvement that RaCT can provide.
(1) VAE (Gaussian): we change likelihood form from multinomial to Gaussian, and observe a smaller performance improvement. This shows the importance of having a closer proxy of ranking-based loss.
(2) VAE ($\beta=0$): we remove the KL regularization by setting $\beta=0$, and replace the posterior sampling with a delta distribution. We see a marginally smaller performance improvement. This compares a stochastic and deterministic policy. The stochastic policy (\ie posterior sampling) provides higher exploration ability for the actor, allowing more diverse samples generated for the critic's training. This is essential for better critic learning.
(3) VAE (Linear): we limit the expressive ability of the actor by using a linear encoder and decoder. This significantly degrades performance, and RaCT cannot help much in this case. RaCT shows improvements for all MLE-based methods, including DAE and MF from~\citet{liang2018variational}. It also shows a significant improvement over WARP.
Please see detailed discussion in Section \ref{sec:actors_supp} of the Supplement.
\subsection{Ablation Study on Feature-based Critic}
In Figure~\ref{fig:feature_ablation}, we investigate the importance of the features we designed in~\eqref{eq_features}, using results from the ML-20M dataset.
The full feature vector consists of three elements:
$\hv = [ \mathcal{L}_{E} , | \mathcal{H}_0 |, |\mathcal{H}_1 |]$.
$\mathcal{L}_{E}$ is mandatory, because it links the actor to the critic; removing it would break the back-propagation used to train the actor.
We remove $| \mathcal{H}_0 |$ or $|\mathcal{H}_1 |$ from $\hv$ one at a time, and observe that each removal leads to performance degradation. In particular, removing $| \mathcal{H}_0 |$ results in a severe over-fitting issue.
When both counts are removed, we observe an immediate performance drop, as depicted by the orange curve. Overall, the results indicate that all three features are necessary to our performance improvement.
\section{Conclusion \& Discussion}
We have proposed an actor-critic framework for collaborative filtering on implicit data. The critic learns to approximate the ranking scores, which in turn improves the traditional MLE-based nonlinear LVMs with the learned ranking-critical objectives.
To make it practical and efficient, we introduce a few techniques: a feature-based critic to reduce the number of learnable parameters, posterior sampling as exploration for better critic estimates, and pre-training of actor and critic for fast convergence.
The experimental results on three large-scale datasets demonstrate the actor-critic's ability to significantly improve the results of a variety of latent-variable models, and achieve better or comparable performance to strong baseline methods.
Though RaCT improves VAEs, it does not start from the best-performing actor model. The very recent work by~\citet{dacrema2019we} conducts a systematic analysis of algorithmic proposals for top-N recommendation tasks. There are other simple and efficient methods that perform better than VAEs, such as pure SVD-based models~\citep{cremonesi2010performance,nikolakopoulos2019eigenrec}, RecWalk~\citep{nikolakopoulos2019recwalk} and Personalized Diffusions~\citep{nikolakopoulos2019personalized}.
One interesting future research direction is to explore learning-to-rank techniques for them.
\medskip
\small
\bibliographystyle{iclr2020_conference}
\section{Introduction}
\vspace{-3mm}
Recommender systems are an important means of improving a user's web experience.
Collaborative filtering is a widely-applied technique in recommender systems~\cite{ricci2015recommender}, in which patterns across similar users and items are leveraged to predict user preferences~\cite{su2009survey}. This naturally fits within the learning paradigm of latent variable models (LVMs)~\cite{bishop2006pattern}, where the latent representations capture the shared patterns. Due to their simplicity and effectiveness, LVMs are still a dominant approach. However, traditional LVMs employ linear mappings of limited modeling capacity~\cite{paterek2007improving,mnih2008probabilistic}, which may yield suboptimal performance, especially for large datasets~\cite{he2017neural}.
This problem has been mitigated recently in a growing body of literature that involves applying deep neural networks (DNNs) to collaborative filtering~\cite{he2017neural,wu2016collaborative,liang2018variational}.
Among them, variational autoencoders (VAEs)~\cite{kingma2013auto,rezende2014stochastic} have been proposed as non-linear extensions of LVMs~\cite{liang2018variational}. Empirically, they significantly outperform state-of-the-art methods. One essential contribution to the improved performance is the use of the multinomial likelihood, which is argued to be a close proxy to the ranking loss.
This is desirable, because we generally care most about the ranking of predictions in recommender systems.
Hence, prediction results are often evaluated using top-$N$ ranking-based metrics, such as Normalized Discounted Cumulative Gain~\cite{jarvelin2002cumulated}. The VAE is trained to maximize the likelihood of observations; as shown below, this does not necessarily result in higher ranking-based scores.
A natural question concerns whether one may optimize directly against ranking-based metrics (or a close proxy of them).
Learning-to-rank has been explored in the information-retrieval community, where relaxations/approximations of ranking metrics are considered~\cite{weimer2008cofi,liu2009learning,li2014learning,weston2013learning}. These methods are not straightforward to adapt to collaborative filtering, especially for large-scale datasets.
Since these methods generally focus on the pair-wise ranking loss between positive and negative items, it is computationally expensive to have a low-variance approximation to the full loss when the number of items is large.
In this paper, we borrow the actor-critic idea from reinforcement learning (RL)~\cite{sutton1998reinforcement} to propose an efficient and scalable learning-to-rank algorithm. The critic is trained to approximate the ranking metric, while the actor is trained to optimize against this learned metric.
Specifically, with the goal of making the actor-critic approach practical for recommender systems,
we introduce a novel feature-based critic architecture. Instead of treating raw predictions as the critic input, and hoping the neural network will discover the metric's structure from massive data, we consider engineering sufficient statistics for efficient critic learning.
Experimental results on three large-scale real-world datasets demonstrate that the proposed method significantly improves on state-of-the-art baselines, and outperforms other recently proposed neural-network approaches.
\vspace{-2mm}
\section{Preliminaries: VAEs for Collaborative Filtering}
\vspace{-2mm}
Vectors are denoted as bold lower-case letters $\boldsymbol{x}$, matrices as bold uppercase letters $\Xmat$, and scalars as lower-case non-bold letters $x$. We use $\circ$ for function composition, $\odot$ for the element-wise multiplication, and $| \cdot | $ for cardinality of a set. $\delta( \cdot )$ is the indicator function.
We use $n \in \{1, \dots , N \}$ to index users, and $m \in \{1, \dots, M \}$ to index items. The user-item interaction matrix $\Xmat \in \{0,1\}^{N \times M }$ collected from the users' implicit feedback is defined as:
\begin{align}
x_{nm} =
\left\{\begin{array}{ll}
1, & \text{if interaction of user}~n~\text{with item}~m~\text{is observed};\\
0, & \text{otherwise}
\end{array}\right.
\end{align}
Note that $x_{nm}=0$ does not necessarily mean that user $n$ dislikes item $m$; it can be that the user is not aware of the item. Further, $x_{nm}=1$ is not equivalent to saying user $n$ likes item $m$, but at least there is interest.
\paragraph{VAE model}~\hspace{-4mm}
VAEs have been investigated for collaborative filtering \cite{liang2018variational}, where this principled Bayesian approach is shown to be the state-of-the-art for large-scale datasets.
Given the user's interaction history $\boldsymbol{x} = [x_1, \dots, x_M ]^{\top} \in \{0,1\}^M$, our goal is to predict the full interaction behavior with all remaining items.
To simulate this process during training, a random binary mask $\bv \in \{0,1\}^M$ is introduced, with the entry $1$ as {\it un-masked}, and $0$ as {\it masked}. Thus, $\boldsymbol{x}_h = \boldsymbol{x} \odot \bv$ is the user's partial interaction history. The goal becomes recovering the masked interactions: $\boldsymbol{x}_p= \boldsymbol{x} \odot (1-\bv)$.
In LVMs, each user's binary interaction behavior is assumed to be controlled by a $K$-dimensional user-dependent latent representation $\boldsymbol{z} \in \mathbb{R}^K$. When applying VAEs to collaborative filtering~\cite{liang2018variational}, the user's latent feature $\boldsymbol{z}$ is represented as a distribution $q(\boldsymbol{z} | \boldsymbol{x} )$, obtained from some partial history $\boldsymbol{x}_h$ of $\boldsymbol{x}$. With the assumption that $q(\boldsymbol{z} | \boldsymbol{x} )$ follows a Gaussian form, the {\em inference} of $\boldsymbol{z}$ for the corresponding $\boldsymbol{x}$ is performed as:
\begin{align} \label{eq_inference} \hspace{-10mm}
q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x} ) = \mathcal{N}(\muv, \mbox{diag}(\sigmav^2)),
~~\text{with}~~ \muv, \sigmav^2=f_{\boldsymbol{\phi}}(\boldsymbol{x}_{h}),~~
\boldsymbol{x}_h = \boldsymbol{x} \odot \bv,~~\hspace{1mm} \bv \sim \mbox{Ber}(\alpha)
\end{align}
where $\alpha$ is the hyper-parameter of a Bernoulli distribution, $f_{\boldsymbol{\phi}}$ is a $\boldsymbol{\phi}$-parameterized neural network, which outputs the mean $\muv$ and variance $\sigmav^2$ of the Gaussian distribution.
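As a minimal sketch of \eqref{eq_inference}, assuming a generic encoder network that returns the Gaussian mean and log-variance (all names below are ours, not the released code):
\begin{verbatim}
import numpy as np

def encode(x, f_phi, alpha=0.5, rng=np.random.default_rng()):
    b = (rng.random(x.shape) < alpha).astype(x.dtype)  # b ~ Ber(alpha)
    x_h = x * b                                        # partial history
    mu, log_var = f_phi(x_h)                           # encoder outputs
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps               # reparametrization trick
    return z, mu, log_var, b
\end{verbatim}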
After obtaining a user's latent representation $\boldsymbol{z}$, we use the {\em generative} process to make predictions. In \cite{liang2018variational} a multinomial distribution is used to model the likelihood of items.
Specifically, to construct $p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z})$, $\boldsymbol{z}$ is transformed to produce a probability distribution $\boldsymbol{\pi}$ over $M$ items, from which the interaction vector $\boldsymbol{x}$ is assumed to
have been drawn:
\vspace{-0mm}
\begin{align} \label{eq_multi}
\boldsymbol{x} \sim \mbox{Mult} (\boldsymbol{\pi}), ~~\mbox{with}~~\boldsymbol{\pi} =\mbox{Softmax} ( g_{\boldsymbol{\theta}} (\boldsymbol{z}) )
\end{align}
where $g_{\boldsymbol{\theta}}$ is a $\boldsymbol{\theta}$-parameterized neural network.
The output $\boldsymbol{\pi}$ is normalized via a softmax function to produce a probability vector $\boldsymbol{\pi} \in \Delta^{M-1}$
(an ($M-1$)-simplex) over the entire item set.
\paragraph{Training Objective}~\hspace{-2mm}
Learning VAE parameters $\{\boldsymbol{\phi}, \boldsymbol{\theta}\}$ yields the following generalized objective:
\vspace{-1mm}
\begin{align} \label{eq_reg_elbo} \hspace{-2mm}
\mathcal{L}_{\beta}(\boldsymbol{x}; \boldsymbol{\theta}, \boldsymbol{\phi})
\!=\! \mathcal{L}_{E} \!+\!\beta \mathcal{L}_{R},
~\text{with}~
\mathcal{L}_{E}\!=\!-\mathbb{E}_{q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x})} \big[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}) \big]
~\text{and}~ \mathcal{L}_{R}\!=\!\mbox{KL} (q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x}) || p(\boldsymbol{z}) )
\end{align}
where $\mathcal{L}_{E}$ is the {\it negative log likelihood} (NLL) term, $\mathcal{L}_{R}$ is the KL regularization term, and $\beta$ is a weighting hyper-parameter.
When $\beta=1$, we can lower-bound the log marginal likelihood of the data using \eqref{eq_reg_elbo} as
$
-\mathcal{L}_{\beta=1}(\boldsymbol{x}; \boldsymbol{\theta}, \boldsymbol{\phi}) \le \log p(\boldsymbol{x})
$.
This is commonly known as the {\it evidence lower bound} (ELBO) in variational inference~\cite{blei2017variational}. Thus \eqref{eq_reg_elbo} is the negative $\beta$-regularized ELBO. To improve the optimization efficiency, the {\it reparametrization trick}~\cite{kingma2013auto,rezende2014stochastic} is used to draw samples $\boldsymbol{z} \sim q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x})$ to obtain an unbiased estimate of the ELBO, which is further optimized via stochastic optimization.
We call this procedure {\it maximum likelihood estimate (MLE)}-based training, as it effectively maximizes the (regularized) ELBO. The testing stage of VAEs for collaborative filtering is detailed in Section~\ref{sec:testing_vae} of Supplement.
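A minimal sketch of the per-user objective in \eqref{eq_reg_elbo}, combining the multinomial likelihood of \eqref{eq_multi} with a one-sample estimate of the expectation (the names and the $\beta$ value below are ours):
\begin{verbatim}
import numpy as np

def multinomial_nll(x, logits):
    log_pi = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -np.sum(x * log_pi)                        # L_E, one sample of z

def kl_to_std_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), with log_var = log sigma^2
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def beta_elbo_loss(x, logits, mu, log_var, beta=0.2):
    return multinomial_nll(x, logits) + beta * kl_to_std_normal(mu, log_var)
\end{verbatim}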
\paragraph{Advantages of VAEs}
The VAE framework has the favorable characteristic that it is scalable to large datasets, by making use of amortized inference~\cite{gershman2014amortized}: the predictions for all users share the same procedure, which effectively requires evaluating two functions -- the encoder $f_{\boldsymbol{\phi}}(\cdot)$ and the decoder $g_{\boldsymbol{\theta}}(\cdot)$. This is much more efficient than most traditional latent factor collaborative filtering
models~\cite{paterek2007improving,hu2008collaborative,mnih2008probabilistic}, where a time-consuming optimization procedure is typically performed to obtain the latent
factor for a user who is not present in the training data.
This makes the use of autoencoders particularly attractive in industrial applications, where fast prediction is important.
Interestingly, this amortized inference procedure reuses the same functions to answer related new problems. This is well aligned with collaborative filtering, where user preferences are analyzed by exploiting the similar patterns inferred from past experiences.
\begin{wrapfigure}{R}{0.52\textwidth}
\centering
\vspace{-4mm}
\begin{tabular}{c}
\includegraphics[width=7.00cm]{figs/example_nll_dcg.pdf} \\
\end{tabular}
\vspace{-0mm}
\caption{\small Difference between MLE-based training loss and ranking-based evaluation. For A, $- 1 \! \times \!\log 0.8 - 1 \! \times \! \log 0.1 =- \log 0.08$; For B, $- 1 \! \times \! \log 0.4 - 1 \times \! \log 0.25 =- \log 0.10$. The multinomial NLL values disagree with the ground-truth that $A$ is a better recommendation than $B$, while NDCG values are coherent with the ground-truth.}
\label{fig:example_div}
\vspace{-2mm}
\end{wrapfigure}
\paragraph{Pitfalls of VAEs}
Among various likelihood forms,
it was argued in~\cite{liang2018variational} that multinomial likelihoods are a closer proxy to the ranking loss than the traditional Gaussian or logistic likelihoods.
Though simple and effective, the MLE procedure
may diverge from the ultimate goal of recommendation: correctly suggesting the top-ranked items. This is why recommender systems are often evaluated using ranking-based measures, such as NDCG~\cite{jarvelin2002cumulated}.
To illustrate the divergence between MLE-based training and ranking-based evaluation, we provide an example in Figure~\ref{fig:example_div}. For the target $\boldsymbol{x}=\{1,1,0,0\}$, two different predictions $A$ and $B$ are provided. In MLE, the training loss is the multinomial NLL: $-\boldsymbol{x} \log \boldsymbol{\pi} $, where $\boldsymbol{\pi}$ is the predicted probability. From the NLL point of view, $B$ is a better prediction than $A$, because $B$ shows a lower loss than $A$. However, this apparently disagrees with our intuition that $A$ is better than $B$, because $A$ preserves the same ranking order with the target, while $B$ does not. Fortunately, the NDCG values correctly capture the true quality. This has inspired us to directly use ranking-based evaluation metrics to guide training.
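The example can be checked numerically; the probabilities assigned to the two unobserved items below are our own assumptions, chosen only so that $A$ preserves the target's ranking order while $B$ breaks it:
\begin{verbatim}
import numpy as np

x = np.array([1.0, 1.0, 0.0, 0.0])
preds = {"A": np.array([0.80, 0.10, 0.06, 0.04]),  # preserves the order
         "B": np.array([0.40, 0.25, 0.30, 0.05])}  # item 3 outranks item 2

disc = 1.0 / np.log2(np.arange(2, 6))              # DCG position discounts
idcg = disc[:2].sum()                              # two relevant items
for name, pi in preds.items():
    nll = -np.sum(x * np.log(pi))                  # multinomial NLL
    ndcg = np.sum(x[np.argsort(-pi)] * disc) / idcg
    print(name, round(nll, 3), round(ndcg, 3))
# A: NLL 2.526, NDCG 1.000;  B: NLL 2.303, NDCG 0.920
\end{verbatim}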
\paragraph{From MLE to Ranking-Based Training }
The ranking loss is difficult to optimize, and previous work on its minimization has led practitioners to relaxations and approximations~\cite{weimer2008cofi}. Learning-to-rank (L2R) methods have been studied in information retrieval~\cite{liu2009learning,li2014learning}, and some techniques can be extended to recommendation settings~\cite{rendle2009bpr,weston2013learning}.
Many L2R methods are essentially trained by optimizing a classification function, such as {\it Bayesian Personalized Ranking} (BPR)~\cite{rendle2009bpr} and the {\it Weighted Approximate-Rank Pairwise} (WARP) model~\cite{weston2011wsabie} (detailed in Section~\ref{sec:two_l2r} of the Supplement).
When applying the traditional L2R methods to collaborative filtering, there are two potential issues:
$(\RN{1})$
Each prediction is evaluated and optimized against the true ranking from scratch independently, and this process has to repeat for each new prediction, making L2R methods cumbersome for large-scale datasets.
$(\RN{2})$ The computation of the pairwise loss functions scales quadratically with the number of items, making many L2R methods inefficient to train on high-dimensional datasets (see the sketch below).
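To illustrate issue $(\RN{2})$, a minimal sketch of a BPR-style pairwise loss (our own simplified version); the number of (positive, negative) pairs, and hence the cost, grows quadratically with the item count:
\begin{verbatim}
import numpy as np

def pairwise_bpr_loss(scores, x):
    pos = np.where(x == 1)[0]                   # observed items
    neg = np.where(x == 0)[0]                   # all remaining items
    diff = scores[pos][:, None] - scores[neg][None, :]  # |pos| x |neg| pairs
    return np.sum(np.log1p(np.exp(-diff)))      # sum of -log sigmoid(diff)
\end{verbatim}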
\begin{figure*}[t!
\vspace{-0mm}\centering
\begin{tabular}{c c}
\hspace{-0mm}
\includegraphics[height=2.2cm]{figs/ae_scheme.pdf} &
\hspace{0mm}
\includegraphics[height=2.2cm]{figs/rct_scheme.pdf} \\
(a) Traditional auto-encoder paradigm \vspace{-0mm} &
(b) Proposed actor-critic paradigm \hspace{-0mm}
\end{tabular}
\vspace{-1mm}
\caption{\small Illustration of learning parameters $\{\boldsymbol{\phi},\boldsymbol{\theta}\}$ in the two different paradigms. (a) Learning with MLE, as in VAEs; (b) Learning with a learned ranking-critic. The {\it actor} can be viewed as the function composition of encoder $f_{\boldsymbol{\phi}}(\cdot)$ and $g_{\boldsymbol{\theta}}(\cdot)$ in VAEs. The {\it critic} mimics the ranking-based evaluation scores, so that it can provide ranking-sensitive feedback in the actor learning.}
\vspace{-5mm}
\label{fig:schemes}
\end{figure*}
\vspace{-3mm}
\section{Ranking-Critical Training}
\vspace{-3mm}
We introduce a novel actor-critic algorithm for ranking-based training, which we call {\it Ranking-Critical Training} (RaCT).
The actor inherits the advantages of VAE to amortize the computation of collaborative filtering.
More importantly, the proposed neural-network-parameterized critic amortizes the computation of learning the ranking metrics, making RaCT scalable on large-scale datasets.
Any ranking-based evaluation metric can be considered as a ``black box'' function $\omega: \{ \boldsymbol{\pi}; \boldsymbol{x}, \bv \} \mapsto y \in [0, 1]$, which takes in the prediction $\boldsymbol{\pi}$ to compare with the ground-truth $\boldsymbol{x}$ (conditioned on the mask $\bv$), and outputs a scalar $y$ to rate the prediction quality.
Specifically, $\bv$ determines the items of interest in testing, \ie the items that are ``unobserved'' during inference.
As we are only interested in recovering the unobserved items in recommendation, we compute the ranking score of predicted items $\boldsymbol{\pi}_p = \boldsymbol{\pi} \odot (1-\bv)$ based on the ground-truth items $\boldsymbol{x}_p = \boldsymbol{x} \odot (1-\bv)$.
One salient component of a ranking-based Oracle metric $\omega^*$ is to sort $\boldsymbol{\pi}_p$. This operator is non-differentiable, rendering it impossible to directly use $\omega^*$ as the critic.
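For concreteness, a minimal sketch of such an Oracle for NDCG with binary relevance (our own implementation, not the authors' code); the \texttt{argsort} is the non-differentiable step:
\begin{verbatim}
import numpy as np

def oracle_ndcg(pi, x, b, k=100):
    pi_p = pi * (1 - b)                        # predictions on masked items
    x_p = x * (1 - b)                          # ground truth on masked items
    top = np.argsort(-pi_p)[:k]                # non-differentiable sorting
    disc = 1.0 / np.log2(np.arange(2, k + 2))  # position discounts
    dcg = np.sum(x_p[top] * disc)
    n_rel = int(x_p.sum())
    idcg = disc[:min(k, n_rel)].sum()
    return dcg / idcg if idcg > 0 else 0.0
\end{verbatim}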
While REINFORCE~\cite{williams1992simple} may appear to be suited to tackle the non-differentiable problem, it suffers from the issue of large estimate variance, as the collaborative filtering problem has a very large prediction space.
This motivates consideration of a neural network to approximate the mapping executed by the Oracle.
This falls into the actor-critic paradigm in RL~\cite{sutton1998reinforcement}, and we borrow the idea for collaborative filtering. It consists of a policy network (actor) and value network (critic). The actor is trained to make a prediction (action) given the user's interaction history as the state. The critic predicts the value of each prediction, which we define as the task-specific reward, \ie the Oracle's output.
The value predicted by the critic is then used to train the actor. Under the assumption that the critic produces the exact values, the actor is trained based on an unbiased estimate of the gradient of the prediction value in terms of relevant ranking quality metrics. In Figure~\ref{fig:schemes}, we illustrate the actor-critic paradigm in (b), and the traditional auto-encoder shown in (a) can be used as the actor in our paradigm.
\paragraph{Naive critic}
Conventionally one may concatenate vectors $[\boldsymbol{\pi}_p, \boldsymbol{x}_p ]$ as input to a neural network, and train a network to output the measured ranking scores $y$.
However, this naive critic is impractical, and failed in our experiments. Our hypothesis is that since this network architecture has a huge number of parameters to train (as the input data layer is of length $2M$, where $M>10k$), it would require rich data for training. Unfortunately, this is impractical: $\{\boldsymbol{\pi}, \boldsymbol{x}\} \in \mathbb{R}^M$ are very high-dimensional, and hence it is too expensive to simulate enough data offline and then fit it to a scalar.
\vspace{-0mm}
\paragraph{Feature-based critic}
The naive critic hopes a deep network can discover structure from massive data by itself, leaving much valuable domain knowledge unused.
We propose a more efficient critic, by taking into account the structure underlying the assumed likelihood in MLE~\cite{miyato2018cgans}. We describe our intuition and method below, and provide the justification from the perspective of adversarial learning in Section~\ref{sec:gan} of the Supplement.
Consider the computation procedure of the evaluation metric as a function decomposition $ \omega = \omega_{0} \circ \omega_{\boldsymbol{\psi}}$, including two steps:
\vspace{-2mm}
\begin{itemize}
\item
$ \omega_{0}: \boldsymbol{\pi} \mapsto \hv $, feature engineering of prediction $ \boldsymbol{\pi} $ into the {\it sufficient statistics} $\hv$ ;
\item
$ \omega_{\boldsymbol{\psi}}: \hv \mapsto \hat{y} $, neural approximation of the mapping from the statistics $\hv$ to the estimated ranking score $\hat{y}$, using a $\boldsymbol{\psi}$-parameterized neural network;
%
\end{itemize}
The success of this two-step critic largely depends on the effectiveness of the feature $\hv$. We hope feature $\hv$ is $(\RN{1})$ {\it compact} so that fewer parameters in the critic $ \omega_{\boldsymbol{\psi}} $ can simplify training; $(\RN{2})$ {\it easy-to-compute} so that training and testing is efficient; and $(\RN{3})$ {\it informative} so that the necessary information is preserved.
We suggest using a 3-dimensional vector as the feature, and leave more complicated feature engineering as future work. In summary, our feature is
\begin{align} \label{eq_features}
\hv = [ \mathcal{L}_{E} , | \mathcal{H}_0 |, |\mathcal{H}_1 |],
\end{align}
where
$(\RN{1})$ $\mathcal{L}_{E}$ is the negative log-likelihood in~\eqref{eq_reg_elbo}, defined in the MLE training loss.
$(\RN{2})$ $| \mathcal{H}_0 |$ is the number of unobserved items that a user will interact with, where $\mathcal{H}_0 = \{m| x_m = 1 ~\text{and}~b_m = 0\}$.
$(\RN{3})$ $| \mathcal{H}_1 |$ is the number of observed items that a user has interacted with, where $\mathcal{H}_1 = \{m| x_m = 1 ~\text{and}~b_m = 1\}$.
The NLL characterizes the prediction quality of the actor's output $\boldsymbol{\pi}$ against the ground-truth $\boldsymbol{x}$ in an item-by-item comparison manner, \eg via the inner product between two vectors, $-\boldsymbol{x} \log \boldsymbol{\pi}$, as in the multinomial NLL~\cite{liang2018variational}.
Note that the ideal optimum of the NLL yields a perfect match $\boldsymbol{\pi}^{*} \propto \boldsymbol{x}$, which also gives the perfect ranking scores. However, the {\it amortization gap} in amortized inference~\cite{kim2018semi,shu2018amortized} offsets the solutions obtained by VAEs from the ideal optimum. Fortunately, in recommendation we are only interested in ensuring that the order of the top-ranking items is correct. This objective is easier to achieve, as it can be satisfied with some sub-optimal solutions of VAEs. Hence, we propose to adjust the NLL to guide the actor towards them.
In practice, the expectation in $\mathcal{L}_{E}$ is intractable to compute, and a one-sample estimate is used for fast training
$
\mathcal{L}_{E} \approx - \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}), ~~\text{with}~~
\boldsymbol{z} \sim q_{\boldsymbol{\phi}}(\boldsymbol{z} | \boldsymbol{x})
$.
Note that $| \mathcal{H}_0 |$ and $| \mathcal{H}_1 |$ are user-specific, indicating the user's frequency to interact with the system, which can be viewed as side-information about the user. They are only used as features in training the critic to better approximate the ranking scores, and not in training the actor. Hence, we do not use additional information in the testing stage.
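A minimal sketch of the feature construction in \eqref{eq_features} (the names are ours), reusing the one-sample NLL estimate:
\begin{verbatim}
import numpy as np

def critic_features(x, b, logits):
    # logits: decoder output g_theta(z), with z ~ q_phi(z|x)
    log_pi = logits - np.log(np.sum(np.exp(logits)))
    L_E = -np.sum(x * log_pi)            # one-sample NLL estimate
    H0 = np.sum((x == 1) & (b == 0))     # unobserved interactions to recover
    H1 = np.sum((x == 1) & (b == 1))     # observed interactions in the input
    return np.array([L_E, H0, H1])       # h = [L_E, |H_0|, |H_1|]
\end{verbatim}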
\paragraph{Critic Pre-training}
Training a generic critic to approximate the ranking scores for all possible predictions is difficult and cumbersome. Furthermore, it is unnecessary.
In practice, a critic only needs to estimate the ranking scores on the restricted domain of the current actor's outputs. Therefore, we train the critic offline on top of the pre-trained MLE-based actor.
To train the critic, we minimize the Mean Square Error (MSE) between the critic output and true ranking score $y$ from the Oracle:
\begin{align}~\label{eq_critic}
\vspace{-3mm}
\mathcal{L}_{C}(\hv, y; \boldsymbol{\psi}) = \| \omega_{\boldsymbol{\psi}} (\hv) - y \|^2 ,
\end{align}
where the target $y$ is generated using its non-differentiable definition, which plays the role of a ground-truth simulator in training.
\paragraph{Actor-critic Training}
Once the critic is well trained, we fix its parameters $\boldsymbol{\psi}$ and update the actor parameters $\{ \boldsymbol{\phi}, \boldsymbol{\theta} \}$ to maximize the estimated ranking score
\begin{align}~\label{eq_actor}
\mathcal{L}_{A}(\hv ; \boldsymbol{\phi}, \boldsymbol{\theta}) = \omega_{\boldsymbol{\psi}} (\hv),
\end{align}
where $\hv$ is defined in~\eqref{eq_features},
including NLL feature extracted from the prediction made in~\eqref{eq_reg_elbo}, together with count features.
During back-propagation, the gradient of $\mathcal{L}_{A}$ wrt the prediction $\boldsymbol{\pi}$ is
$
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} } =
\frac{\partial \mathcal{L}_{A}}{\partial \hv } \frac{\partial \hv}{\partial \boldsymbol{\pi} } .
$
It further updates the actor parameters, with the encoder gradient
$ \frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\phi} } =
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} }
\frac{\partial \boldsymbol{\pi}}{\partial \boldsymbol{\phi} } $
and the decoder gradient
$ \frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\theta} } =
\frac{\partial \mathcal{L}_{A}}{\partial \boldsymbol{\pi} }
\frac{\partial \boldsymbol{\pi}}{\partial \boldsymbol{\theta} } $.
Updating the actor changes its predictions, so we must update the critic to produce the correct ranking scores for its new input domain.
The full RaCT training procedure is summarized in Algorithm~1 in Supplement.
Stochastic optimization is used, where a batch of users
$\mathcal{U} = \{\boldsymbol{x}_i | i \in \mathcal{B} \}$ is drawn at each iteration, with $\mathcal{B}$ a random subset of the user indices $\{1, \cdots, N\}$. The pre-training of the actor in Stage 1 and of the critic in Stage 2 is important; it provides good initialization for the actor-critic training in Stage 3, leading to fast convergence. Further, we provide an alternative interpretation of our actor-critic approach in \eqref{eq_critic} and \eqref{eq_actor} from the perspective of adversarial learning~\cite{goodfellow2014generative} in the Supplement. This partially justifies our choice of feature engineering.
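A schematic sketch of the three stages (our paraphrase of Algorithm~1; \texttt{update}, \texttt{critic\_features\_of} and \texttt{oracle\_ndcg\_of} stand for the pieces sketched above):
\begin{verbatim}
for x in batches:                                 # Stage 1: actor (MLE)
    update(actor, loss=beta_elbo_loss_of(x))
for x in batches:                                 # Stage 2: critic
    h, y = critic_features_of(x), oracle_ndcg_of(x)
    update(critic, loss=(critic(h) - y) ** 2)     # Eq. (5)
for x in batches:                                 # Stage 3: alternate
    update(actor, loss=-critic(critic_features_of(x)))  # maximize Eq. (6)
    h, y = critic_features_of(x), oracle_ndcg_of(x)
    update(critic, loss=(critic(h) - y) ** 2)     # track new input domain
\end{verbatim}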
\vspace{-3mm}
\section{Related Work}
\vspace{-3mm}
{\bf Deep Learning for Collaborative Filtering.}
To take advantage of the expressiveness of DNNs, there are many recent efforts focused on developing deep learning models for collaborative filtering~\cite{sedhain2015autorec,xue2017deep,he2018outer,he2018adversarial,zhang2017deep,chen2017attentive}.
Early work on DNNs focused on explicit feedback settings~\cite{georgiev2013non,salakhutdinov2007restricted,zheng2016neural}, such as rating predictions.
Recent research gradually recognized the importance of implicit feedback~\cite{wu2016collaborative,he2017neural,liang2018variational}, where the user's preference is not explicitly presented~\cite{hu2008collaborative}. This setting is more practical but challenging, and is the focus of our work.
Our method is closely related to three papers, on VAEs~\cite{liang2018variational}, collaborative denoising autoencoder (CDAE)~\cite{wu2016collaborative} and neural collaborative filtering (NCF)~\cite{he2017neural}.
CDAE and NCF may suffer from scalability issues: the model size grows linearly with the number of users as well as the number of items.
The VAE~\cite{liang2018variational} alleviates this problem via amortized inference.
Our work builds on top of the VAE, and improves it by optimizing to the ranking-based metric.
{\bf Learned Metrics in Vision \& Languages. } Recent research in computer vision and natural language processing has generated excellent results, by using learned metrics instead of hand-crafted metrics. Among the rich literature of generating realistic images via generative adversarial networks (GANs)~\cite{goodfellow2014generative,radford2015unsupervised,karras2017progressive,brock2018large}, our work is most similar to~\cite{larsen2016autoencoding}, where the VAE objective~\cite{kingma2013auto,rezende2014stochastic} is augmented with the learned representations in the GAN discriminator~\cite{goodfellow2014generative} to better measure image similarities.
For language generation, the discrepancy between word-level MLE training and sequence-level semantic evaluation has been alleviated with GANs or RL techniques~\cite{li2017adversarial,bahdanau2016actor,ren2017deep,yu2017seqgan,lin2017adversarial}. The RL approach directly optimizes the metric used at test time,
and has shown improvement on various applications, including dialogue~\cite{li2016deep}, image captioning~\cite{rennie2017self} and translations~\cite{ranzato2015sequence}.
Despite the significant successes in vision and language analysis, there has been little if any research reported for directly learning the metrics with deep neural networks for collaborative filtering. Our work fills the gap, and we hope it inspires more research in this direction.
{\bf Learning to Rank (L2R).} The idea of L2R has existed for two decades in the information-retrieval community. The goal is to directly optimize against ranking-based evaluation metrics~\cite{liu2009learning,li2014learning}. Previous work on L2R employs objective relaxations~\cite{weimer2008cofi}. Some techniques can be extended to recommendation settings~\cite{rendle2009bpr,shi2010list,weston2013learning,shi2012tfmap,hidasi2018recurrent}. Many L2R methods in recommendation are essentially trained by optimizing a classification function, such as the popular pairwise L2R method BPR~\cite{rendle2009bpr} and WARP~\cite{weston2011wsabie}, described in Section 2.1. One limitation is that they are computationally expensive when the number of items is large. To accelerate these approaches, cheap approximations are made in each training step, which results in degraded performance. In contrast, the proposed RaCT is efficient and scalable. In fact, the traditional L2R methods can be integrated into our actor-critic framework, yielding improved performance as shown in our experiments.
\vspace{-3mm}
\section{Experiments}
\vspace{-3mm}
\paragraph{Experimental Settings}
We implemented our algorithm in TensorFlow.
The source code to reproduce the experimental results \& plots is included as Supplementary Material, and will be released on Github.
We conduct experiments on three publicly available
large-scale datasets. These three ten-million-size datasets represent different item recommendation scenarios, including user-movie ratings and user-song play counts. This is the same set of user-item consumption datasets used in~\cite{liang2018variational}, and we keep the same pre-processing steps for fair comparison.
The statistics of the datasets, evaluation protocols and hyper-parameters are summarized in Supplement.
VAE~\cite{liang2018variational} is used as the baseline, which plays the role of our actor pre-training. The NDCG@100 ranking metric is used as the critic's target in training.
{\bf Baselines}~~~
We use ranking-critical training to improve the three MLE-based methods described in Section 2.1: VAE,
DAE, and MF. We also adapt a traditional L2R method as the actor in our framework.
The L2R loss is used to replace $\mathcal{L}_{E}$ in \eqref{eq_features} to construct the feature.
Since WARP has been shown to perform generally better than BPR for collaborative filtering~\cite{kula2015metadata}, we only consider WARP in the experiments.
We also compare our approaches with four representative state-of-the-art methods in collaborative filtering.
Two neural-network-based methods are CDAE~\cite{wu2016collaborative} and NCF~\cite{he2017neural}, and two linear models are Weighted MF~\cite{hu2008collaborative} and SLIM~\cite{ning2011slim}.
\begin{figure*}[t!
\vspace{-0mm}\centering
\begin{tabular}{c c c}
\hspace{-2mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_ml-20m.pdf} &
\hspace{-4mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_netflix.pdf} &
\hspace{-4mm}
\includegraphics[height=3.4cm]{figs/improvement_ndcg/improve_ndcg_msd.pdf}
\vspace{-2mm}
\\
(a) ML-20M dataset \vspace{-0mm} &
(b) Netflix dataset \hspace{-0mm} &
(c) MSD dataset\hspace{-0mm} \\
\end{tabular}
\vspace{-2mm}
\caption{Performance improvement (NDCG@100) with RaCT over the VAE baseline.}
\vspace{-4mm}
\label{fig:improvement}
\end{figure*}
\subsection{Overall Performance of RaCT}
\vspace{-2mm}
\paragraph{Improvement over VAE}
In Figure~\ref{fig:improvement}, we show the learning curves of RaCT and VAE on the validation set. The VAE converges to a plateau by the time the RaCT finishes its actor pre-training stage, \eg 150 epochs on the ML-20M dataset, after which the VAE's performance stops improving. In contrast, when the RaCT is plugged in, the performance shows a significant immediate boost. Moreover, RaCT needs only half the number of epochs to achieve a larger gain than the VAE achieved by the end of actor pre-training: RaCT takes 50 epochs (from 150 to 200) to achieve an improvement of 0.44 - 0.43 = 0.01, while VAE takes 100 epochs (from 50 to 150) to achieve an improvement of 0.43 - 0.424 = 0.006.
\begin{table*}[t!]
\vspace{-0mm}
\begin{adjustbox}{scale=.80,tabular=c|ccc|ccc|ccc}
\toprule
Dataset &
\multicolumn{3}{ c|}{ML-20M} &
\multicolumn{3}{ c|}{Netflix} &
\multicolumn{3}{ c }{MSD} \\ \hline
Metric
& R@20 & R@50 & NDCG@100
& R@20 & R@50 & NDCG@100
& R@20 & R@50 & NDCG@100 \\
\midrule
RaCT
& \textbf{0.403} & \textbf{0.543} & {\bf 0.434}
& \textbf{0.357} & \textbf{0.450} & {\bf 0.392}
& \textbf{0.268} & \textbf{0.364} & \textbf{0.319} \\
VAE$^{\ddag}$
& 0.396 & 0.536 & 0.426
& 0.350 & 0.443 & 0.385
& 0.260 & 0.356 & 0.310 \\
WARP~\cite{weston2011wsabie}
& 0.314 & 0.466 & 0.341
& 0.270 & 0.365 & 0.306
& 0.206 & 0.302 & 0.249 \\
LambdaRank~\cite{burges2007learning}
& 0.395 & 0.534 & 0.427
& 0.352 & 0.441 & 0.386
& 0.259 & 0.355 & 0.308 \\
\hline
VAE~\cite{liang2018variational}
& 0.395 & 0.537 & 0.426
& 0.351 & 0.444 & 0.386
& 0.266 & \textbf{0.364} & 0.316 \\
CDAE~\cite{wu2016collaborative}
& 0.391 & 0.523 & 0.418
& 0.343 & 0.428 & 0.376
& 0.188 & 0.283 & 0.237 \\
WMF~\cite{hu2008collaborative}
& 0.360 & 0.498 & 0.386
& 0.316 & 0.404 & 0.351
& 0.211 & 0.312 & 0.257 \\
SLIM~\cite{ning2011slim}
& 0.370 & 0.495 & 0.401
& 0.347 & 0.428 & 0.379
& -- & -- & -- \\
\bottomrule
\end{adjustbox}
\vspace{-2mm}
\caption{\small Comparison on three large datasets. The best testing set performance is reported. All numbers except RaCT and VAE$^{\ddag}$ are from~\cite{liang2018variational}, where VAE$^{\ddag}$ shows the VAE results based on our runs.}
\label{tab:compare_sota}
\vspace{-4mm}
\end{table*}
{\bf Comparison with traditional L2R methods}
As examples of traditional L2R methods, we use WARP~\cite{weston2011wsabie} and LambdaRank~\cite{burges2007learning} as the ranking-critical objectives to optimize the VAE actor, replacing the last stage of RaCT. We observe that WARP and LambdaRank are roughly 2 and 10 times more computationally expensive than RaCT per epoch, respectively. This is because the traditional L2R methods aim to minimize the number of incorrectly ordered pairs in the ranking, which does not scale to the high-dimensional datasets considered here.
More importantly, RaCT uses a neural network as a shared critic to amortize the computational cost across different predictions, while the traditional L2R methods lack this amortized ranking-critical mechanism and optimize each prediction independently. Table~\ref{tab:compare_sota} shows the results of RaCT, WARP and LambdaRank, using the same amount of wall-clock training time. We observe that WARP degrades performance, while LambdaRank provides at best slight improvements. This is perhaps due to the poor approximation of the true ranking when the number of items is large.
{\bf Comparison with state-of-the-art}
In Table~\ref{tab:compare_sota}, we report our RaCT performance, and compare with state-of-the-art methods in terms of three evaluation metrics: NDCG@100, Recall@20, and Recall@50.
We use the published code\footnote{\url{https://github.com/dawenl/vae_cf}} of~\cite{liang2018variational}, and reproduce the VAE as our actor pre-training.
Our reproduced VAE results are very close to~\cite{liang2018variational} on the ML-20M and Netflix datasets, but slightly lower on the MSD dataset. The RaCT is built on top of our VAE runs, and consistently improves the baseline for all the evaluation metrics and datasets, as seen by comparing the rows RaCT and VAE$^{\ddag}$.
The proposed RaCT also significantly outperforms other state-of-the-art methods, including VAE, CDAE, WMF and SLIM.
Following~\cite{liang2018variational}, the comparison with NCF is performed on two small datasets due to its limited scalability, as shown in Table~\ref{tab:compare_ncf} in the Supplement. RaCT shows only slight improvements, perhaps because the critic's estimates are poor when trained on small datasets.
\begin{wrapfigure}{R}{0.52\textwidth}
\vspace{-7mm}\centering
\begin{tabular}{c c}
\hspace{-4mm}
\includegraphics[height=3.5cm]{figs/correlation_objective/correlation_scatter_training_softmax.png} &
\hspace{-7mm}
\includegraphics[height=3.5cm]{figs/correlation_objective/correlation_scatter_training_ract.png}
\\
(a) MLE\vspace{-0mm} &
(b) RaCT \hspace{-0mm} \\
\end{tabular}
\vspace{-2mm}
\caption{Correlation between the learning objectives (MLE or RaCT) and evaluation metrics on training.}
\vspace{-2mm}
\label{fig:correlation}
\end{wrapfigure}
\paragraph{Training/Evaluation Correlation} We visualize scatter plots between the learning objectives and the evaluation metric for all users on the ML-20M dataset in Figure~\ref{fig:correlation}. An enlarged visualization is shown in Figure~\ref{fig:correlation_supp} of the Supplement.
As the training objective, the VAE employs the NLL, while RaCT employs the learned NDCG metric. We ensure that the best model for each method is used: the model after actor pre-training (Stage 1) is used for the NLL plots, and the model after the alternating actor-critic training (Stage 3) is used for the RaCT plots.
The Pearson correlation $r$ is computed. The NLL exhibits low correlation with the target NDCG ($r$ is close to zero), while the learned metric in RaCT shows a much higher positive correlation. This strongly indicates that RaCT optimizes a more direct objective than an MLE approach. Further, the NLL should in theory correlate negatively with the target NDCG, as we wish that minimizing the NLL maximizes NDCG. In practice, however, it yields a positive correlation. We hypothesize that this is because the number of interactions for each user may dominate the NLL values: the NLL varies considerably across users, and users with more interactions typically show both higher NLL and higher NDCG. This partially motivated us to consider the numbers of user interactions as features.
In Supplement, we study the generalization of RaCT trained with different ranking-metrics in Section~\ref{sec_metrics_supp}, and break down the performance improvement with different cut-off values of NDCG in Section~\ref{sec_cut_off_supp}, and with different number of interactions of $\Xmat$ in Section~\ref{sec_interactions_supp}.
\begin{table}[t!]
\begin{minipage}{0.53\linewidth}
\centering
\begin{adjustbox}{scale=.93,tabular=l|ccc}
\toprule
Actor & Before & After & Gain \\
\midrule
VAE~\cite{liang2018variational}
& 0.4258 & 0.4339 & 8.09 \\
VAE (Gaussian)
& 0.4202 & 0.4224 & 2.21 \\
VAE ($\beta = 0$)
& 0.4203 & 0.4255 & 5.17 \\
VAE (Linear)
& 0.4156 & 0.4162 & 0.53 \\ \hline
DAE~\cite{liang2018variational}
& 0.4205 & 0.4214 & 0.87 \\
MF~\cite{liang2018variational}
& 0.4159 & 0.4172 & 1.37 \\ \hline
WARP~\cite{weston2011wsabie}
& 0.3123 & 0.3439 & 31.63 \\
\bottomrule
\end{adjustbox}
\vspace{1mm}
\caption{\small Performance gain ($\times 10^{-3}$) for various actors.}
\vspace{-0mm}
\label{tab:compare_actors}
\end{minipage}\hfill
\begin{minipage}{0.45\linewidth}
\vspace{-2mm}
\centering
\begin{tabular}{c}
\hspace{-5mm}
\includegraphics[height=3.50cm]{figs/plot_feature_ablation.pdf} \\
\end{tabular}
\vspace{-1mm}
\captionof{figure}{\small Ablation study on features
}
\vspace{-0mm}
\label{fig:feature_ablation}
\end{minipage}
\vspace{-4mm}
\end{table}
\subsection{What Actor Can Be Improved by RaCT?}
We investigate how RaCT performs with different actors.
In RL, the choice of policy plays a crucial role in the agent's performance. Similarly, we would like to study how different actor designs impact RaCT performance. Table~\ref{tab:compare_actors} shows the performance before and after applying RaCT; the results on NDCG@100 are reported. The VAE, DAE and MF models follow the setup in~\cite{liang2018variational}.
We first modify one component of the VAE~\cite{liang2018variational} at a time, and check the change of performance improvement that RaCT can provide.
(1) VAE (Gaussian): we change likelihood form from multinomial to Gaussian, and observe a smaller performance improvement. This shows the importance of having a closer proxy of ranking-based loss.
(2) VAE ($\beta=0$): we remove the KL regularization by setting $\beta=0$, and replace the posterior sampling with a delta distribution. We see a marginally smaller performance improvement. This compares a stochastic and deterministic policy. The stochastic policy (\ie posterior sampling) provides higher exploration ability for the actor, allowing more diverse samples generated for the critic's training. This is essential for better critic learning.
(3) VAE (Linear): we limit the expressive ability of the actor by using a linear encoder and decoder. This significantly degrades performance, and the RaCT cannot help much in this case. RaCT shows improvements for all MLE-based methods, including DAE and MF. It also shows significant improvement over WARP.
Please see detailed discussion in Section \ref{sec:actors_supp} of Supplement.
\subsection{Ablation Study on Feature-based Critic}
In Figure~\ref{fig:feature_ablation}, we investigate the importance of the features we designed in~\eqref{eq_features}.
The full feature vector consists of three elements:
$\hv = [ \mathcal{L}_{E} , | \mathcal{H}_0 |, |\mathcal{H}_1 |]$.
$\mathcal{L}_{E}$ is mandatory, because it links the actor to the critic; removing it would break the back-propagation used to train the actor.
Results are gathered on the ML-20M dataset using the pre-trained VAE baseline. This ensures that the feature $\mathcal{L}_{E}$ for the critic pre-training is always the same.
We remove
$| \mathcal{H}_0 |$ or $|\mathcal{H}_1 |$ from $\hv$ one at a time, and observe that each removal leads to performance degradation. In particular, removing $| \mathcal{H}_0 |$ results in a severe over-fitting issue.
When both counts are removed, we observe an immediate performance drop, as depicted by the orange curve. Overall, the results indicate that all three features are necessary for our performance improvement.
\vspace{-2mm}
\section{Conclusion}
\vspace{0mm}
We have proposed an actor-critic framework for collaborative filtering on implicit data. The critic learns to approximate the ranking scores, which in turn improves the traditional MLE-based nonlinear LVMs with the learned ranking-critical objectives.
To make it practical and efficient, we introduce a few techniques: a feature-based critic to reduce the number of learnable parameters, posterior sampling as exploration for better critic estimates, and pre-training of actor and critic for fast convergence.
The experimental results on three large-scale datasets demonstrate the superiority of the actor-critic approach, compared with state-of-the-art methods.
\medskip
\small
\bibliographystyle{unsrt}
\section{Introduction}
The most accepted hypothesis of star formation is nebular collapse, in which stars are formed from gravitationally unstable molecular clouds. Therefore, it is expected that stars born from the same cloud of interstellar material should have the same abundance pattern during the Main Sequence, with the exception of the light elements Li and Be, which can be destroyed in regions deeper than the convective zone of solar-type stars. Thus, differences in the chemical content of stars born from the same natal cloud may suggest that extra processes, not necessarily connected to stellar evolution, have influenced their photospheric chemical composition. In particular, planet formation or planet engulfment may imprint important chemical signatures on the host star.
Such phenomenon is expected to leave subtle signs on the stellar abundance pattern, of the order of 0.01 dex \citep{cha10}, that can only be detected with a high-precision analysis.
This can only be achieved with the differential method \citep{nis18}, which requires a comparison of the target star with a similar star of known parameters that serves as the standard for the abundance calculations. The sample should therefore be restricted to objects that are very similar among themselves, as in the case of solar twins\footnote{More recently, solar twins have been defined as stars with effective temperature within 100 K, and log g and [Fe/H] within 0.1 dex of the Sun \citep{ram14}.}.
Following this premise, \cite{mel09} analyzed the abundances of 11 solar twins, achieving a high-precision abundance determination, with uncertainties of $\sim$ 0.01 dex, and found not only a depletion of refractory elements, when compared to the average of the sample, but also a trend with condensation temperature (T$_{c}$). The authors suggested that the correlation of the refractory elements abundances with condensation temperature is probably due to rocky planet formation. This hypothesis has been corroborated by \cite{cha10}, who showed that the depleted material in the Sun's convective zone is comparable to the mass of terrestrial planets of our Solar System (see also \cite{gal16}).
However, other hypotheses have been proposed to explain the abundance trend, such as the stellar environment in which the star was formed \citep{one14}, although according to the recent theoretical estimates by \cite{gus18} this mechanism is hardly significant; dust segregation in the protostellar disc \citep{gai15}; the influence of stellar age \citep{adi14}; and the planet engulfment scenario \citep{spi15}.
In this context, twin stars in binary systems are extremely important, because effects connected to the stellar environment of their formation and to the chemical evolution of the Galaxy cancel out in a comparative analysis of the two components. Thus, investigating wide binaries can shed more light on the question of planet interactions (or other astrophysical events) influencing the photospheric abundances of their host stars.
Some authors have already reported a T$_{c}$ trend in binary stars. \cite{tes16} found an abundance trend in the WASP94 system, where both stars are planet hosts. The planet-hosting binaries XO-2N/XO-2S \citep{ram15,bia15,tes15}, HD 133131A/B \citep{tes16b} and HAT-P-4 \citep{saf17} also show chemical anomalies most likely due to planets, and the binary $\zeta^{1,2}$ Ret, where one of the stars hosts a debris disk, also shows a trend with condensation temperature \citep{saf16, adi16b}. Although no differences have been found in HAT-P-1 \citep{liu14}, HD80606/HD80607 \citep{saf15,mac16}, HD20782/HD20781 \citep{mac14} and HD 106515 \citep{saf19}, the evidence is inconclusive in the latter three cases due to the high abundance errors. Indeed, a more precise abundance analysis of the pair HD80606/HD80607 by \cite{liu18} shows small but detectable abundance differences between the binary components. A binary system of twin stars with large abundance differences is HD 240429/HD 240430 \citep{oh18}, for which no planets are known yet.
Furthermore, it was found that Kepler-10, a star hosting a rocky planet, is deficient in refractory elements when compared to stars with similar stellar parameters and from the same stellar population \citep{liu16}.
For 16 Cygni, a binary pair of solar twins where the B component hosts a giant planet \citep{coc97}, \cite{law01} clearly detected that 16 Cyg A is more metal rich than its companion by $\Delta$[Fe/H] = +0.025$\pm$0.009 dex.
Later, \cite{ram11} expanded the analysis to 23 chemical elements, showing abundance differences of about +0.04 dex in all of them and finding a T$_{c}$ trend similar to that of \cite{mel09} when the binary stars are compared to the Sun. This was confirmed in our previous work \citep{tuc14}, where we showed that 16 Cyg A is $0.047 \pm 0.005$ dex more metal rich than B, and also found a T$_{c}$ slope of $+1.99 \pm 0.79 \times 10^{-5}$ dex K$^{-1}$ for the refractory elements, as reported by \cite{ram11}. This result was then associated with the rocky core formation of the gas giant 16 Cyg Bb. Recently, \cite{nis17} also found $\Delta$[Fe/H](A-B) $= +0.031 \pm 0.010$ dex and a T$_{c}$ slope of $+0.98 \pm 0.35 \times 10^{-5}$ dex K$^{-1}$.
In contrast, there are studies that challenge the metallicity difference and the T$_c$ trend between the two components of this system.
\cite{sch11} find a T$_{c}$ trend for both stars relative to the Sun, but do not find any significant abundance differences between the pair, in agreement with \cite{del00} and \cite{tak11}.
Also, \cite{adi16b}, analyzing the case of $\zeta^{1,2}$ Ret, argues that the T$_c$ slope trend could be due to nonphysical factors related to the quality of the spectra employed, which is expected, as high-precision abundances can only be obtained from spectra of adequate quality.
In this context, the initial motivation for this work is to assess whether, by revisiting this binary system with better data of higher resolving power, higher S/N and broader spectral coverage, our previous results (obtained with lower resolving power) remain consistent; in addition to improving the precision of the abundance determination, we include the analysis of elements that were not available before. We also challenge our results by employing automated tools to derive stellar parameters and T$_c$ while using the same methodology in all cases.
In the following sections, we present the differential abundances of 34 elements, as well as the abundances of Li and Be obtained through spectral synthesis, which may provide possible evidence of planetary engulfment on 16 Cyg A.
\section{Data and analysis}
\subsection{Observations and data reduction}
The observations of 16 Cyg A and B were carried out with the High Dispersion Spectrograph \citep[HDS;][]{nog02} on the 8.2m Subaru Telescope of the National Astronomical Observatory of Japan (NAOJ), located at the Mauna Kea summit, in June 2015. Besides the 16 Cyg binary system, we also observed the asteroid Vesta, which was used as an initial reference for our differential analysis.
For the optical, we obtained an S/N ratio of $\sim$ 750 at 600nm and $\sim$ 1000 at 670nm (Li region), on the highest resolution possible (R$\sim$160 000) using the 0.2" slit.
The UV observations with HDS were made using the 0.4" slit, which provides a R = 90 000 that results in an S/N$\sim$350 per pixel at 340 nm, corresponding to the NH region, and S/N$\sim$200 at 310 nm (Be region). This gave us the opportunity to analyze volatile elements like nitrogen and neutron-capture elements in the UV with S/N $>$ 300.
The stars from the binary system and the Sun (Vesta) were both observed using the same instrumental setup to minimize errors in a differential analysis, which requires comparisons between the spectra of all sample stars for the continuum placement and comparison of the line profiles, to achieve consistent equivalent width (EW) measurements.
The extraction of the orders and wavelength calibration were performed immediately after the observations by Subaru staff, with routines available at the observatory. The continuum normalization and Doppler correction were performed using standard routines with IRAF.
\subsection{Stellar parameters}
Our method to determine stellar parameters and elemental abundances follows the approach described in previous papers \citep[e.g.;][]{ram11,ram14,mel09,mel12,tuc16,spi16},
by imposing differential excitation and ionization equilibrium for Fe I and Fe II lines (Figure \ref{iso_sun}). Since the 16 Cygni system is a pair of solar twins, they have similar physical characteristics to the Sun, and thus we initially used the Sun as a reference for our analysis.
The abundance determination was performed by using the line-by-line differential method, employing the EW manually measured by fitting Gaussian profiles with the IRAF {\it splot} task and deblending when necessary.
Very special care was taken for the continuum placement during the measurements, always comparing and overplotting the spectral lines region for the sample, focusing on a consistent determination.
With the measured EW, we first determined the Fe I and Fe II abundances to differentially obtain the stellar parameters. For this we employed the 2014 version of the LTE code MOOG \citep{sne73} with the MARCS grid of 1D-LTE model atmospheres \citep{gus08}. It is important to highlight that the choice of a particular atmospheric model has a minor impact on the determination of stellar parameters and chemical abundances in a strictly differential analysis, as long as the stars being studied are similar to the reference star \citep[e.g.][]{tuc16}.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{16cyga-Vesta.png}
\caption{Excitation and ionization equilibrium of Fe abundances (manually measured) using the Sun as the standard star for 16 Cyg A. Crosses represent Fe I and filled circles Fe II.}
\label{iso_sun}
\end{figure}
To make the analysis more efficient, we employed the Python q${\rm^2}$ code\footnote{https://github.com/astroChasqui/q2} \citep{ram14}, which operates in a semi-automatic mode, by calling MOOG routines to determine the elemental abundances and perform a line-by-line differential analysis using these results. This code also performs corrections of hyperfine structure (HFS) and the determination of uncertainties.
In this work, we take into account the HFS for V, Mn, Co, Cu, Y, Ag, La and Pr using the line list from \cite{mel12}.
The errors are computed considering both observational uncertainties (due to uncertainties in the measurements, represented by the standard error) and systematic uncertainties (from the stellar parameters and their inter-dependences), as described in \cite{ram15}. Observational and systematic errors are added in quadrature.
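As an illustration, a minimal sketch of the line-by-line differential abundance and its observational error (our own simplified version, not the q$^2$ code itself):
\begin{verbatim}
import numpy as np

def differential_abundance(a_star, a_ref):
    # a_star, a_ref: abundances A(X) from the same set of lines, measured
    # in the target star and in the reference star (here, the Sun)
    delta = np.asarray(a_star) - np.asarray(a_ref)  # line-by-line differences
    mean = delta.mean()                             # differential [X/H]
    sem = delta.std(ddof=1) / np.sqrt(len(delta))   # standard error
    return mean, sem

# Systematic errors from the stellar parameters are then added in quadrature.
\end{verbatim}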
Table \ref{param} shows the stellar parameters obtained for 16 Cyg A and B using the Sun (T$_{eff}$= 5777 K, log g = 4.44 dex, [Fe/H] = 0.0 dex) as reference.
Note that these results are practically the same within the errors as the ones found in \cite{tuc14}, which are T$_{eff}$ = 5830 $\pm$ 7 K, log g = 4.30 $\pm$ 0.02 and [Fe/H] = 0.101 $\pm$ 0.008 dex for
16 Cyg A, and T$_{eff}$ = 5751 $\pm$ 7 K, log g = 4.35 $\pm$ 0.02 and [Fe/H] = 0.054 $\pm$ 0.008 dex for 16 Cyg B.
The final difference in metallicity between the components of this binary system is remarkably similar to the one reported in \cite{tuc14}, that is $\Delta$ [Fe/H] = 0.047 $\pm$ 0.005 dex, while we find in this work $\Delta$[Fe/H] = 0.040 $\pm$ 0.006 dex. This confirms with a significance of $\sim 7\sigma$ that 16 Cyg A is indeed more metal rich when compared to 16 Cyg B, in agreement with \cite{ram11} and the earlier work by \cite{law01}, as well as the recent work by \cite{nis17}.
\subsection{Trigonometric surface gravity}
New parallaxes for the components of 16 Cygni were measured in the Gaia mission Data Release 2 \citep{gai18}. The new values are $47.2771 \pm 0.0327$ mas and $47.2754 \pm 0.0245$ mas for 16 Cyg A and B, respectively. Adopting the magnitudes from the General Catalogue of Photometric Data \citep[GCPD;][]{mer97},
with V(A) = $5.959 \pm 0.009$ and V(B) = $6.228 \pm 0.019$, we determined the absolute magnitudes $M_{A} = 4.332 \pm 0.012$ and $M_{B} = 4.599 \pm 0.026$.
Using this information together with the values of T$_{eff}$, metallicity and mass, we estimate the trigonometric surface gravity for the pair of stars. For 16 Cyg A we found log g($A_{T}$) = 4.293 $\pm$ 0.005 dex and for 16 Cyg B log g($B_{T}$) = 4.364 $\pm$ 0.006 dex. Notice that, while the surface gravity for 16 Cyg B is in good agreement with the one found through the ionization equilibrium of Fe lines (Table \ref{param}), for 16 Cyg A the trigonometric value is $\sim$ 0.02 dex lower, although the two agree within 1.5 $\sigma$. Comparing with the results of \cite{ram11}, both the trigonometric and Fe-lines-based surface gravities are in agreement for the A component, while our surface gravities for B are somewhat higher in both cases.
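For reference, the trigonometric gravities quoted above follow from the standard relation
\begin{equation}
\log g_{T} = \log g_{\odot} + \log\frac{M}{M_{\odot}} + 4\log\frac{T_{\rm eff}}{T_{{\rm eff},\odot}} + 0.4\left(M_{\rm bol} - M_{{\rm bol},\odot}\right),
\end{equation}
where $M_{\rm bol}$ is the absolute bolometric magnitude, obtained from the absolute visual magnitude and a bolometric correction, and $M_{{\rm bol},\odot} = 4.74$.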
\subsection{Age, mass and radius}
The age and mass of the binary stars were determined using customized Yonsei-Yale isochrones \citep{yi01}, as described in \cite{ram13,ram14}.
This method provides good relative ages, thanks to the high precision achieved for the atmospheric
parameters. We estimate the ages and masses through probability distribution functions, comparing the position of the star in atmospheric-parameter space
with the values predicted by the isochrones.
Initially, the calculations were based on the [Fe/H], T$_{\rm eff}$ and log $g$, and later we replaced the gravity by the parallax values and magnitudes, to obtain the isochronal ages using the absolute magnitudes.
The results are shown in Table \ref{param}.
Our masses and radii show very good agreement with the asteroseismology determinations of M$_{A} = 1.08 \pm 0.02 M_{\odot}$, M$_{B}= 1.04 \pm 0.02 M_{\odot}$,
R$_{A}= 1.229 \pm 0.008 R_{\odot}$ and R$_{B}= 1.116 \pm 0.006 R_{\odot}$ reported by \cite{met15}.
\begin{table}
\centering
\caption{Stellar parameters for the 16 Cygni binary system using EW measured manually}
\label{param}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lcc}
\hline\hline
{} & 16 Cyg A & 16 Cyg B\\
\hline
T$_{eff}$ (K) & 5832$\pm$5 & 5763$\pm$5\\
log g (dex) & 4.310$\pm$0.014& 4.360$\pm$0.014\\
log g (dex)$_{trigonometric}$ & 4.293$\pm$0.005& 4.364$\pm$0.006\\
$[$Fe/H$]$ (dex) & 0.103$\pm$0.004 & 0.063$\pm$0.004\\
$v_t$ (km s$^{-1}$) &1.11$\pm$ 0.01 & 1.03$\pm$ 0.01\\
Luminosity (L$_{\odot}$)$_{\log g}$ &1.46 $\pm 0.05$ & 1.19 $\pm 0.04$ \\
Luminosity (L$_{\odot}$)$_{parallax}$ & 1.55 $\pm 0.02$ & 1.23 $\pm 0.02$ \\
Mass (M$_{\odot}$)$_{\log g}$ & $1.06 \pm 0.02$& $1.01 \pm 0.01$\\
Mass (M$_{\odot}$)$_{parallax}$ & $1.06 \pm 0.01$& $1.01 \pm 0.01$\\
Radius (R$_{\odot}$)$_{\log g}$ & $1.19 \pm 0.02$ & $ 1.09 \pm 0.02$\\
Radius (R$_{\odot}$)$_{parallax}$ & $1.18 \pm 0.02$ & $ 1.12\pm 0.01$\\
Age (Gyr)$_{\log g}$ &$6.0 \pm 0.3$ & $6.8 \pm 0.4$\\
Age (Gyr)$_{parallax}$ &$6.4 \pm 0.2$ & $7.1 \pm 0.2$\\
Age (Gyr)$_{[Y/Mg]}$ &$6.2 \pm 1.0$ & $6.3 \pm 1.0$\\
Age (Gyr)$_{[Y/Al]}$ &$6.6 \pm 1.0$ & $6.8 \pm 1.0$\\
\hline
\\
\end{tabular}
}
\end{table}
The inferred isochronal ages of A and B based on log $g$ are 6.0 $\pm$ 0.3 Gyr and 6.8 $\pm$ 0.4 Gyr, respectively (Table \ref{param}). This shows that both components of the system have roughly the same age within the error bars.
We have also estimated the ages of 16 Cyg A and B using the correlations of [Y/Mg] and [Y/Al] with stellar age. The abundance clock [Y/Mg] for solar-type stars was first suggested by \cite{sil12}, and the correlation between [Y/Mg] or [Y/Al] and stellar age was quantified for solar twins
by \cite{nis15}, \cite{tuc16} and \cite{spi16}\footnote{Notice that the correlation between [Y/Mg] and age is only valid for solar-metallicity stars \citep{fel17}.}.
The derived [Y/Mg] ages are A = 6.2 Gyr and B = 6.3 Gyr using the relation of \cite{tuc16}.
Similar ages are found using the \cite{spi16} relations, A = 6.0 $\pm$ 1.0 Gyr and B = 6.1 $\pm$ 1.0 Gyr.
These results are consistent with the values calculated using the isochronal method, while the [Y/Al] ages \citep{spi16} give A = 6.6 $\pm$ 1.0 Gyr and B = 6.8 $\pm$ 1.0 Gyr.
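For illustration, applying such a linear abundance clock is straightforward; the sketch below uses placeholder coefficients (of the same order as the published calibrations, but not taken from them), so the actual relations of \cite{tuc16} and \cite{spi16} should be substituted.
\begin{verbatim}
# Illustrative abundance clock [Y/Mg] = a + b * age(Gyr);
# a and b below are placeholders, not the published coefficients.
a, b = 0.17, -0.037

def clock_age(ymg, ymg_err):
    age = (ymg - a) / b
    return age, abs(ymg_err / b)

print(clock_age(-0.06, 0.04))  # hypothetical [Y/Mg] measurement
\end{verbatim}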
The values of age, mass and radius are in agreement with the
asteroseismic determinations, which give ages around 7 Gyr \citep{san16, bel17}, with 16 Cyg A being slightly more massive and having a bigger radius than its companion.
\subsection{Activity}
The chromospheric activity is an important constraint on stellar ages \citep{lor18}. In order to measure the activity differences between 16 Cyg A and B, we defined an instrumental activity index based on the H$\alpha$ line, which is a well-known chromospheric indicator for late-type stars \citep{pasquini91,lyra05,montes01}:
\begin{equation}\label{eq:haindex}
\mathcal{H} = \frac{F_{\rm H\alpha}}{(F_{\rm B}+F_{\rm V})},
\end{equation}
where $F_{\rm H\alpha}$ is the flux integrated around the H$\alpha$ line ($\Delta\lambda$ = 6562.78 $\pm$ 0.3 \AA). We chose this narrow spectral interval to minimize the effective temperature effects that might be present along the H$\alpha$ wings\footnote{Small residual photospheric effects are still expected to affect our index measurements; however, this residual feature should have a negligible impact on our results, since we are not interested in an absolute activity scale for a wide range of effective temperatures.}. $F_{\rm B}$ and $F_{\rm V}$ are the fluxes integrated over 0.3 \AA\ continuum windows, centered at 6500.375 and 6625.550 \AA, respectively. In Table \ref{table:haindex}, we show the estimated $\mathcal{H}$ for 16 Cyg AB and the Sun. The uncertainties were estimated by quadratic error propagation of equation \ref{eq:haindex}, assuming Poisson error distributions.
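A minimal sketch of how the index of equation \ref{eq:haindex} can be evaluated on a normalized one-dimensional spectrum is shown below (the input arrays are hypothetical):
\begin{verbatim}
import numpy as np

def activity_index(wl, flux):
    """H-alpha index defined in the text: F_Halpha / (F_B + F_V).
    wl in Angstroms, flux from a normalized 1D spectrum."""
    def band(center, half_width):
        m = (wl >= center - half_width) & (wl <= center + half_width)
        return np.trapz(flux[m], wl[m])
    f_ha = band(6562.78, 0.30)    # +/- 0.3 A around H-alpha
    f_b  = band(6500.375, 0.15)   # 0.3 A continuum windows
    f_v  = band(6625.550, 0.15)
    return f_ha / (f_b + f_v)
\end{verbatim}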
\begin{table}
\caption{Activity indexes for 16 Cyg A, B, and the Sun. The last row is the mean activity level of 16 Cyg AB.}
\begin{center}
\begin{tabular}{l | c}
\hline
Star & $\mathcal{H}$ \\ \hline \hline
Sun & 0.1909 $\pm$ 0.0019 \\
16 Cyg A & 0.1871 $\pm$ 0.0021 \\
16 Cyg B & 0.1889 $\pm$ 0.0021 \\
\hline
16 Cyg AB & 0.1880 $\pm$ 0.0010 \\
\hline
\end{tabular}
\end{center}
\label{table:haindex}
\end{table}
According to $\mathcal{H}$, neither of the 16 Cyg components shows an unexpected level of chromospheric activity for a typical 6-7 Gyr-old star \citep{mamajek08}. Furthermore, 16 Cyg A and B seem to be chromospherically quiet stars ($\mathcal{H}$ = 0.188 $\pm$ 0.001), slightly more inactive than the Sun ($\mathcal{H}$ = 0.1909 $\pm$ 0.0019), indicating a chromospheric age older than 4-5 Gyr. This result is in line with the Ca II H \& K multi-epoch observations of \citet{isaacson10}, who found $\log(R^\prime_{\rm HK})$ $\approx$ -5.05 dex for this system, in good agreement with the mean activity level of $\log(R^\prime_{\rm HK})$ = -5.03 $\pm$ 0.1 dex derived for 49 solar-type stars from the 6-7 Gyr-old open cluster NGC 188 \citep{lorenzo16b}.
We inspected the chromospheric signature of other classical indicators within the spectral coverage of our observations, such as Ca II H \& K \citep{mamajek08,lorenzo16b}, H$\beta$ \citep{montes01} and the Ca II infrared triplet \citep{lorenzo16}. All of them show the same behavior found for H$\alpha$: 16 Cyg A and B are chromospherically older than the Sun (age $>$ 4-5 Gyr) and the activity differences between the components are negligible. In summary, the different activity indicators reinforce the
age results from isochrones and seismology.
\section{Abundance analysis}
We present high-precision abundances for the light elements C, N, O, Na, Mg, Al, Si, S, K, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu and Zn; and the heavy elements
Sr, Y, Zr, Ba, Ru, Rh, Pd, Ag, La, Ce, Nd, Sm, Eu, Gd, and Dy. The abundances of these elements were differentially determined using
initially the Sun as our standard star and then using 16 Cyg B as the reference to obtain the $\Delta [$X/H$]_{(A-B)}$.
The calculations were performed with the same method as described for the iron lines (see also \cite{tuc16}).
Taking into account only the elements with Z $\leq 30$, there is a clear chemical trend as a function of the condensation temperature (T$_{c}$) in the pattern of both stars relative to the Sun (Figure \ref{cyg_sol}), in agreement with \cite{ram11}, \cite{sch11} and \cite{tuc14}. In addition to the results based on atomic lines, abundances for the volatile elements C, N, and O were also determined using the molecules CH, NH and OH (red triangles in Figures \ref{cyg_sol} and \ref{cygab}). There is very good agreement between the C and O abundances based on high-excitation atomic lines and low-excitation molecular lines, while for N we only present the abundance based on NH.
The excellent agreement between atomic and molecular-based differential abundances reinforces the reliability of our adopted atmospheric parameters.
\subsection{Abundance vs. condensation temperature trend}
A possible indication of rocky planet formation (or planet engulfment) can be found in the distribution of the differential elemental abundances as a function of condensation temperature. Refractory elements have high condensation temperatures (T$_{C} \gtrsim $ 900 K), easily form dust, and are thus an important component of rocky bodies. Terrestrial planets (or the cores of giant planets) may influence the surface abundances of their host
star in two ways: ${\it i)}$ the accretion of rocky material (planetary engulfment) depleted of hydrogen, which enriches the stellar atmosphere in refractories
\citep[e.g.,][]{spi15, mel17, pet17}; ${\it ii)}$ the imprisonment of refractory-rich material in rocky objects (i.e., planetesimals, rocky planets, cores of giant planets), which depletes the material accreted by the star during its formation \citep[e.g.,][]{mel09, ram11, tuc14}. In the case of planet engulfment, thermohaline mixing should dilute the overabundance in a few million years \citep{the12}; however, the thermohaline mixing may not be fully effective and could still leave some enhancement in the outer layers of the star, detectable only with a precision of $\sim 0.01$ dex.
An important point to highlight is that the signature of planet formation or planet engulfment is directly connected to the size of the convective zone during the event. If a solar-mass protostar goes through a fully convective phase that lasts longer than the lifetime of the protoplanetary disk \citep[as is conventionally accepted;][]{hay61}, any planet-formation event occurring during such a phase would be masked by a significant dilution with the stellar material enclosed in the convective zone, which would homogenize the chemical content throughout the star \citep[see Figure 2 of][]{spi15}.
In contrast to classic steady accretion, there is the scenario of episodic accretion of material onto the star (with observational evidence reported by \cite{liu16b}). Models that include episodic accretion can reach the stabilization of the convective zone earlier than 10 Myr, starting with an initial mass of $10 M_{Jup}$ and accretion-rate bursts of $5 \times 10^{-4} M_{\odot}$yr$^{-1}$, reaching a final mass of 1 $M_{\odot}$ \citep{bar10}. Although this is an extreme case of their models, it is important to highlight that, due to the effects of episodic accretion, the higher the mass of the accretion bursts for a given initial mass (or the lower the initial mass for a given accretion rate), the greater the impact on the internal structure, with the central temperature necessary for the development of the radiative core ($\sim 2 - 3 \times 10^{6}$ K) being reached earlier than predicted by the model of a non-accreting star \citep[see Fig. 2 and Fig. 4 of][]{bar10}. This effect makes it plausible that the formation of rocky bodies can chemically alter the surface abundance pattern of the parent star.
Following this premise, \cite{mel09} suggested that the depletion of refractory elements in the Sun, when compared to a sample of 11 solar twins (without information regarding planets), is due to the formation of terrestrial planets in the solar system (see also further work by \cite{ram09, ram10}). However, in the literature, there are different suggestions for the Sun's abundance trend with condensation temperature.
\cite{adi14} proposed that the trend with condensation temperature is an effect of the chemical evolution of the Galaxy or depends on the star's birthplace.
Investigating the influence of age on solar twins, \cite{nis15} found a strong correlation of $\alpha$ and s-process elements abundances with stellar age, findings that were confirmed by \cite{tuc16} and \cite{spi16}.
According to \cite{one14}, if the star is formed in a dense stellar environment, the gas of the proto-stellar disk could have its dust cleansed before the star's birth by the radiation of hot stars in the cluster, but recent theoretical estimates by \cite{gus18} suggest that this mechanism is not significant.
\cite{gai15} associate this effect with the gas-dust segregation in the protoplanetary disk.
\cite{mal16} find differences in the T$_{c}$ slopes of refractory elements between stars with and without known planets, but this effect depends on the evolutionary stage, since it has been detected in main-sequence and subgiant stars, while no trend is found in their sample of giants. The authors also suggest that both mass and age correlate with the T$_{c}$ slope.
In this context, the investigation of abundance peculiarities in binary stars with and without planets is essential, because in a binary system effects due to the chemical evolution of the Galaxy and other external factors would affect both stars equally and thus be minimized in a differential analysis.
In this sense, the 16 Cygni system is a very interesting case, where both components are solar twins with the same age from asteroseismology \citep{san16}. On top of that, 16 Cyg B has a detected giant planet with a minimum mass of 1.5 Jupiter masses \citep{coc97},
while 16 Cyg A has no planet detected up to now, making this a key target to study the effect of planets on the chemical composition of stars.
However, the abundance pattern of 16 Cyg A relative to B is still controversial. A few authors suggest that there is no difference in the metallicity of the pair \citep{del00, sch11, tak11}, while most found abundance differences of about 0.05 dex \citep{nis17, adi16b, mis16, tuc14, ram11, law01, gon98}.
\subsection{16 Cygni}
A linear fit was performed with orthogonal distance regression (ODR), using the individual abundance errors for each element and excluding K due to its uncertain non-LTE effects.
It was necessary to assume a minimum threshold for the abundance uncertainties because some species were returning very small error bars (0.001 dex), heavily impacting the abundance vs. condensation temperature slope, since some species do not have many lines. In order to address this issue, we adopt a minimum abundance error of 0.009 dex, which is the average error of all species analyzed.
We obtain slopes of $3.99 \pm 0.58\times10^{-5}$ dex K$^{-1}$ and $2.78 \pm 0.57\times10^{-5}$ dex K$^{-1}$ for the 16 Cyg A $-$ Sun and 16 Cyg B $-$ Sun abundances vs. condensation temperature, respectively.
In contrast to our past work \citep{tuc14}, we do not break the linear fit into two distinct curves for the volatile and refractory elements, as a simple linear fit represents the trend with T$_{c}$ well. We include nitrogen from NH, and for the abundances of C and O we adopted the average of the molecular and atomic abundances.
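The fit itself can be reproduced with standard tools; a minimal sketch with scipy, using illustrative numbers and the 0.009 dex error floor, is:
\begin{verbatim}
import numpy as np
from scipy import odr

tc  = np.array([40., 958., 1336., 1517., 1653., 1659.])  # K
dab = np.array([0.012, 0.033, 0.040, 0.040, 0.041, 0.049])
err = np.maximum(np.array([0.009, 0.001, 0.005, 0.003,
                           0.004, 0.004]), 0.009)  # error floor

model = odr.Model(lambda B, x: B[0]*x + B[1])
fit = odr.ODR(odr.RealData(tc, dab, sy=err), model,
              beta0=[1e-5, 0.0]).run()
print(fit.beta[0], fit.sd_beta[0])  # slope and its 1-sigma error
\end{verbatim}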
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{cyg_sol.png}
\caption{Elemental abundances of 16 Cyg A (upper panel) and B (lower panel), based on our manually measured solar abundances, as a function of condensation temperature for the light elements (Z$\leq 30$). Solid lines
represent the linear fits, with slopes of $3.99 \pm 0.58\times10^{-5}$ dex K$^{-1}$ for the A component and $2.78 \pm 0.57\times10^{-5}$ dex K$^{-1}$ for 16 Cyg B.
Red triangles correspond to the molecule-based abundances of C, N, and O.}
\label{cyg_sol}
\end{figure}
In Figure \ref{heavy_sol} we plot the abundances of the heavy elements (Z $>$ 30). In this case, the abundances do not clearly follow the same trend as in the previous case, with slopes of $-0.16 \pm 3.99 \times10^{-5}$ dex K$^{-1}$ and $-0.05 \pm 2.97 \times10^{-5}$ dex K$^{-1}$ for 16 Cyg A $-$ Sun and 16 Cyg B $-$ Sun, respectively, with a minimum uncertainty threshold of 0.02 dex in both cases.
However, due to the large errors in [X/H] and the small range in T$_{c}$, it is not possible to claim whether there is indeed a trend when considering only the heavy elements, but we stress that, within the uncertainties, the slope is not actually different from that of the Z $\leq$ 30 elements.
Although no T$_{c}$ trend is detected, there is a difference of $\Delta$(A-B) = 0.043 $\pm$ 0.075 dex in the abundances of these heavy elements, which somewhat follows the difference in Fe between the stars of the pair; however, due to the high uncertainty we cannot conclude whether this discrepancy is real.
\begin{figure}
\centering
\includegraphics[scale=1.2,width=1.0\columnwidth]{heavy_sol.png}
\caption{As Figure \ref{cyg_sol} but for the heavy elements (Z $>$ 30). There is no clear trend with condensation temperature.}
\label{heavy_sol}
\end{figure}
The abundances of 16 Cyg A relative to 16 Cyg B were also determined and are presented in Figure \ref{cygab}.
There is an evident trend between the (A-B) abundances and T$_{c}$.
The slope of the linear fit (without including the n-capture elements) is $1.56 \pm 0.24 \times 10^{-5}$ dex K$^{-1}$ (with a threshold of 0.005 dex). This result agrees with Tucci Maia et al. (2014; slope = $1.88 \pm 0.79 \times 10^{-5}$ dex K$^{-1}$) within error bars, showing once again the consistency and robustness of our analysis. If we include the heavy elements in the fit, we find a slope of $1.38 \pm 0.41 \times 10^{-5}$ dex K$^{-1}$ (with a threshold of 0.010 dex).
Although the abundance of potassium presented in Table \ref{abun} has been corrected for non-LTE effects using the grid by \cite{tak02}, we did not use it for the linear
fit, as the non-LTE grid is too sparse for a precise correction. Our slope is also in good agreement with the recent result by \cite{nis17}, $+ 0.98 \pm 0.35 \times 10^{-5}$ dex K$^{-1}$, based on high-resolution, high-S/N HARPS-N spectra.
\begin{figure*}
\centering
\includegraphics[width=1.9\columnwidth]{cygab_h.png}
\caption{Differential abundances (manually measured) of (A - B) as function of T$_{c}$ for elements with Z $\leq$ 30 (left panel) and adding also the neutron-capture elements (right panel). The red triangles correspond to the molecule-based abundances of C, N and O. The slope found is $1.56 \pm 0.24 \times 10^{-5}$ dex K$^{-1}$, based on the fit to the elements with Z $\leq$ 30.}
\label{cygab}
\end{figure*}
\begin{table*}
\centering
\caption{Elemental abundances of 16 Cygni system relative to the Sun and to 16 Cyg B.}
\label{abun}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lcccccccc}
\hline\hline
Z & Species & T$_{c}$& [X/H]$_{\text{16 Cyg A}}$ & Error & [X/H]$_{\text{16 Cyg B}}$ & Error & [X/H]$_{\text{A-B}}$ & Error\\
\hline
6 & C & 40 & 0.049 & 0.013 & 0.033 & 0.004 & 0.012 & 0.009\\
6 & C* & 40 & 0.058 & 0.009 & 0.041 & 0.008 & 0.017 & 0.006\\
7 & N* & 123 & 0.077 & 0.017 & 0.049 & 0.015 & 0.028 & 0.010\\
8 & O & 180 & 0.063 & 0.014 & 0.050 & 0.009 & 0.012 & 0.008\\
8 & O* & 180 & 0.070 & 0.007 & 0.048 & 0.010 & 0.022 & 0.007\\
11 & Na & 958 & 0.099 & 0.008 & 0.070 & 0.010 & 0.033 & 0.001\\
12 & Mg & 1336 & 0.127 & 0.022 & 0.087 & 0.018 & 0.040 & 0.005\\
13 & Al & 1653 & 0.168 & 0.011 & 0.129 & 0.009 & 0.041 & 0.004\\
14 & Si & 1310 & 0.114 & 0.008 & 0.070 & 0.007 & 0.043 & 0.003\\
16 & S & 664 & 0.070 & 0.009 & 0.058 & 0.009 & 0.031 & 0.008\\
19 & K & 1006 & 0.106 & 0.006 & 0.028 & 0.005 & 0.049 & 0.001\\
20 & Ca & 1517 & 0.112 & 0.002 & 0.075 & 0.002 & 0.040 & 0.003\\
21 & Sc & 1659 & 0.140 & 0.012 & 0.098 & 0.012 & 0.049 & 0.004\\
22 & Ti & 1583 & 0.136 & 0.003 & 0.092 & 0.002 & 0.044 & 0.006\\
23 & V & 1429 & 0.118 & 0.006 & 0.081 & 0.003 & 0.035 & 0.004\\
24 & Cr & 1296 & 0.105 & 0.004 & 0.070 & 0.004 & 0.036 & 0.002\\
25 & Mn & 1158 & 0.095 & 0.005 & 0.069 & 0.006 & 0.028 & 0.003\\
26 & Fe & 1334 & 0.101 & 0.004 & 0.060 & 0.004 & 0.041 & 0.004\\
27 & Co & 1352 & 0.118 & 0.007 & 0.079 & 0.009 & 0.037 & 0.003\\
28 & Ni & 1353 & 0.102 & 0.013 & 0.060 & 0.013 & 0.041 & 0.004\\
29 & Cu & 1037 & 0.126 & 0.015 & 0.090 & 0.014 & 0.036 & 0.006\\
30 & Zn & 726 & 0.142 & 0.006 & 0.109 & 0.005 & 0.029 & 0.004\\
38 & Sr & 1464 & 0.052 & 0.003 & 0.022 & 0.002 & 0.030 & 0.004\\
39 & Y & 1659 & 0.058 & 0.008 & 0.011 & 0.010 & 0.041 & 0.005\\
40 & Zr & 1741 & 0.074 & 0.017 & 0.039 & 0.018 & 0.036 & 0.003\\
44 & Ru & 1551 & 0.120 & 0.082 & 0.092 & 0.092 & 0.029 & 0.010\\
45 & Rh & 1392 & 0.170 & 0.031 & 0.148 & 0.044 & 0.022 & 0.054\\
46 & Pd & 1324 & 0.040 & 0.007 & 0.027 & 0.011 & 0.013 & 0.006\\
47 & Ag & 996 & 0.074 & 0.032 & 0.042 & 0.003 & 0.032 & 0.029\\
56 & Ba & 1455 & 0.083 & 0.005 & 0.027 & 0.003 & 0.055 & 0.001\\
57 & La & 1578 & 0.062 & 0.040 & 0.060 & 0.010 & 0.031 & 0.010\\
58 & Ce & 1478 & 0.110 & 0.020 & 0.090 & 0.011 & 0.034 & 0.007\\
60 & Nd & 1602 & 0.116 & 0.021 & 0.078 & 0.027 & 0.038 & 0.008\\
62 & Sm & 1590 & 0.111 & 0.009 & 0.031 & 0.011 & 0.079 & 0.007\\
63 & Eu & 1358 & 0.159 & 0.024 & 0.103 & 0.021 & 0.055 & 0.033\\
64 & Gd & 1659 & 0.135 & 0.016 & 0.108 & 0.007 & 0.027 & 0.013\\
66 & Dy & 1659 & 0.058 & 0.032 & 0.032 & 0.026 & 0.038 & 0.049\\
\hline
\hline
\end{tabular}
}
\\
$^{\rm *}$ molecular abundance
\end{table*}
\subsection{Automated codes}
We conducted tests utilizing iSpec version 2016 \citep{bla14} and ARES v2 \citep{sou15} to automatically measure the EWs of 16 Cyg A, B and the Sun, in order to differentially determine the stellar parameters. The aim of these tests is to evaluate whether these codes, when applied to high-resolution data and following our methodology (the same as in Section 2.2), return results similar to what we find by ``hand''. Our motivation is to assess whether our procedure could be automated and applied to a bigger sample of stars while still retrieving stellar parameters with the same precision as ours, and also to find out whether the chemical composition differences that we found between the 16 Cygni components are consistent under different methods of EW measurement.
As discussed earlier, the differential method minimizes most of the error sources, so that, for a solar twin sample, the uncertainty is almost entirely related to the EWs. One big concern in a differential analysis is the continuum normalization, needed to achieve a consistent continuum placement for all stars being analyzed.
In this test, the spectra are the same as in our manual analysis, previously normalized, and the EWs were measured using the same line list as ours.
In this way, any discrepancy in the values is due to how each code interprets and places the continuum, and how the fit is performed.
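Once the automated EWs are in hand, the comparison with the manual values reduces to simple statistics; a sketch with hypothetical numbers:
\begin{verbatim}
import numpy as np

# Hypothetical EWs (mA) for the same lines, measured by hand
# and by an automated code (ARES or iSpec output, as arrays):
ew_hand = np.array([34.1, 56.8, 12.3, 88.0, 45.5])
ew_auto = np.array([33.7, 57.5, 12.9, 87.1, 45.9])

diff = ew_auto - ew_hand
print("offset = %+.2f mA, rms = %.2f mA"
      % (diff.mean(), diff.std(ddof=1)))
# A systematic offset here would point to a different continuum
# placement, the main source of discrepancy discussed above.
\end{verbatim}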
\begin{table*}
\centering
\caption{Atmospheric parameters for 16 Cygni determined with automated EW measurements for the R = 160 000 and R = 81 000 spectra.}
\label{param_au}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lllllll}
\hline\hline
{} & {R= 160 000}& {} & {R= 81 000} & {}\\
\hline
{} & A & B & A & B\\
\hline
{iSpec} & {} & {} & {} & {}\\
T$_{eff}$ (K) & 5834 $\pm$ 5 & 5749 $\pm$ 4 & 5826$\pm$15 & 5753$\pm$14\\
log $g$ (dex) & 4.330 $\pm$ 0.013& 4.360 $\pm$ 0.011 & 4.320$\pm$0.042& 4.350$\pm$0.044\\
$[$Fe/H$]$ (dex) & 0.103 $\pm$ 0.005 & 0.052 $\pm$ 0.003 & 0.098$\pm$0.013 & 0.057$\pm$0.013\\
$v_t$ (km s$^{-1}$) & 1.09$\pm$ 0.01 & 1.02$\pm$ 0.01 & 1.11$\pm$ 0.03 & 1.03$\pm$ 0.03\\
\hline
{ARES} & {} & {} & {} & {}\\
T$_{eff}$ & 5840$\pm$16 & 5753$\pm$15 & 5813$\pm$26 & 5760$\pm$19\\
log $g$ (dex) & 4.330$\pm$0.040& 4.320$\pm$0.038 & 4.290$\pm$0.063& 4.390$\pm$0.055\\
$[$Fe/H$]$ (dex) & 0.114$\pm$0.013 & 0.048$\pm$0.012 & 0.091$\pm$0.022 & 0.061$\pm$0.017\\
$v_t$ (km s$^{-1}$) & 1.07$\pm$ 0.03 & 1.02$\pm$ 0.03 & 1.06$\pm$ 0.05 & 0.97$\pm$ 0.04\\
\hline
\hline
\end{tabular}
}
\end{table*}
In addition, we use another set of spectra with lower resolving power (R $\sim$ 81 000, from \cite{tuc14}), in order to evaluate whether using the same tools on spectra of different resolution leads to discrepant results.
In Table \ref{param_au} we present the stellar parameters obtained using the EW measurements from the automated codes. Comparing the results, we find that both codes return stellar parameters in good agreement with ours, which are based on the ``manual'' measurements.
Overall, as we go to lower resolution the uncertainties get higher, as expected, because lower resolution leads to more blending around the measured lines and thus a more contaminated value of the EW, a phenomenon that also happens with manual measurements, as we can see from the stellar parameters of our previous work \citep{tuc14}.
\begin{table*}
\centering
\caption{Slopes of abundances versus condensation temperature, for the elements with Z $\leq$ 30 for the EWs measurements from iSpec and ARES for the R $\sim$ 160 000 and 81 000 spectra.}
\label{slopab_sig}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{clcclcc}
\hline\hline
R & iSpec (dex K$^{-1}$) & Min. error (dex) & Significance & ARES (dex K$^{-1}$) & Min. error (dex) & Significance\\
160 000 & {$1.11 \pm 0.31\times10^{-5}$} & 0.005 & 3.58 &{$0.70 \pm 0.64\times10^{-5}$} & 0.010 & 1.09\\
81 000 & {$1.56 \pm 0.44\times10^{-5}$} & 0.008 & 3.55& {$1.24 \pm 1.04\times10^{-5}$} & 0.019 & 1.19\\
\hline
\end{tabular}
}
\end{table*}
We also used iSpec and ARES to determine the elemental abundances of our sample for the elements with Z $\leq$ 30, with the same method and spectra described in the previous section. The trends with condensation temperature are presented in Table \ref{slopab_sig}, with their respective abundance thresholds. For both resolution sets, iSpec and ARES return a T$_{c}$ slope that agrees with our value within error bars. The codes confirm not only that 16 Cyg A is more metal rich than B, but also the existence of the T$_c$ trend, even though one set of spectra has almost half the resolving power of the other (while still being high-resolution). However, iSpec shows a higher significance in its values.
\subsection{Li and Be}
Lithium and beryllium abundances were determined by performing spectral synthesis calculations, using a method similar to that outlined in \cite{tuc15}. For lithium, we used the $^{7}$Li doublet at 670.7 nm and, for beryllium, the Be II resonance doublet lines at 313.0420 nm and 313.1065 nm. The line list for the Li synthesis is from \cite{mel12}, while for Be we used a modified version of the list of \cite{ash05}, as described in \cite{tuc15}.
For the spectral synthesis, we used the synth driver of the 2014 version of the 1D LTE code MOOG \citep{sne73}.
We adopted A(Be) = 1.38 dex as the standard solar Be abundance from \cite{asp09}.
The model atmospheres were interpolated from the MARCS grid \citep{gus08} using the stellar parameters previously obtained. The abundances of Li were corrected for non-LTE effects using the online grids of the INSPECT project\footnote{http://inspect-stars.com/}.
Beryllium lines are insensitive to non-LTE effects in solar-type stars, according to \cite{asp05}.
To determine the macroturbulence line broadening, we first analyzed the line profiles of the Fe I 602.7050 nm, 609.3644 nm, 615.1618 nm, 616.5360 nm, 670.5102 nm and Ni I 676.7772 nm lines in the Sun; the synthesis also included a rotational broadening of v $\sin i$ = 1.9 km s$^{-1}$ \citep{bru84} and the instrumental broadening. The macroturbulent velocity found for the Sun is V$_{macro}$ = 3.6 km s$^{-1}$. For 16 Cygni, we estimate the macroturbulence following the relation of \cite{leo16}, which takes into account the dependence on effective temperature and log g.
With the macroturbulence fixed, v $\sin i$ was estimated for 16 Cyg A and B by fitting the profiles of the six lines mentioned above, also including the instrumental broadening.
Table \ref{libe} shows the abundances of Li and Be with their estimated macroturbulence and v $\sin i$.
\begin{table}
\centering
\caption{Abundances of Li and Be for the binary 16 Cyg using spectral synthesis}
\label{libe}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lcc}
\hline\hline
{} & 16 Cyg A & 16 Cyg B\\
\hline
Li (dex) & 1.31$\pm$0.03 & 0.61$\pm$0.03\\
Be (dex)& 1.50$\pm$0.03& 1.43$\pm$0.03\\
V$_{macro}$ (km s$^{-1}$) & 3.97 $\pm$ 0.25 & 3.66 $\pm$ 0.25\\
v $\sin i$ (km s$^{-1}$) & 1.37 $\pm$ 0.04 & 1.22 $\pm$ 0.06\\
\hline
\end{tabular}
}
\end{table}
In Figure \ref{beab} we show the synthetic spectra of 16 Cyg A and B plotted against the observed spectra.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{be_AB.png}
\caption{ Comparison between the observed (blue dots) and synthetic (red solid line) spectra of 16 Cyg A (top) and 16 Cyg B (bottom).}
\label{beab}
\end{figure}
Comparing our results with previous works in the literature, we find that there is hardly a consensus on the Li and Be abundances of the 16 Cygni system (Table \ref{libec}), although for
lithium there is qualitative agreement that the A component is more abundant in lithium than the B component.
What is important to highlight here is that we found a higher Be abundance in the A component when compared to B, in contrast to the results of \cite{del00} and \cite{gar98}, while \cite{tak11} do not find any significant Be variation between the components, maybe because of different parameters (including broadening parameters) and different spectra (resolving power, S/N, normalization).
\begin{table*}
\centering
\caption{Comparison of Li and Be abundances.}
\label{libec}
{\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{ccccc}
\hline\hline
Star & HD & Li(dex) & Be(dex) & references\\
\hline
16 Cyg A & 186408 & 1.31 & 1.50 & ours\\
{-} & {-} & 1.37 & 1.34 & \cite{tak11}\\
{-} & {-} & 1.27$^{a}$ & 0.99$^{b}$ &$^{a}$\cite{kin97}; $^{b}$\cite{del00}\\
{-} & {-} & 1.24$^{c}$ & 1.10$^{d}$ & $^{c}$\cite{gon98}; $^{d}$\cite{gar98}\\
{-} & {-} & 1.34$^{d}$ & - &$^{d}$\cite{ram11}\\
16 Cyg B & 186427 & 0.61 & 1.43 & ours\\
{-} & {-} & $< 0.60$ & 1.37 & \cite{tak11}\\
{-} & {-} & $\leq 0.60^{a}$ & 1.06$^{b}$ & $^{a}$\cite{kin97}; $^{b}$\cite{del00}\\
{-} & {-} & $< 0.50^{c}$ & 1.30$^{d}$ & $^{c}$\cite{gon98}; $^{d}$\cite{gar98}\\
{-} & {-} & 0.73$^{d}$ & - &$^{d}$\cite{ram11}\\
\hline
\end{tabular}
}
\end{table*}
\section{Discussion}
Lithium and beryllium are elements that are destroyed at different temperatures ($2.5\times10^{6}$ K and $3.5\times10^{6}$ K, respectively) and therefore at different depths in the stellar interiors. According to standard stellar evolution models, these temperatures are only achieved below the base of the convective zone. However, the solar photospheric Li abundance is approximately 150 times lower than the meteoritic value, indicating that extra mixing processes are acting in solar-type stars and need to be taken into account.
In solar-type stars, it is known that Li has a strong correlation with age and surface rotation \citep{bec17, mar16, bau10, nas09}, which suggests internal depletion of lithium.
However, it is a more challenging task to do the same analysis for Be, due to difficulties related to its detection with ground-based instruments:
the only two accessible Be II lines lie near the UV atmospheric cutoff, at 313 nm, in a heavily populated region of the spectrum.
In \cite{tuc15}, we determined the abundance of Be in a sample of 8 solar twins through a ``differential'' spectral synthesis, where the line list was calibrated to match the observed solar spectrum, which was observed with the same setup as the other stars. We found that the Be content of solar twins is barely depleted, if at all, during the Main Sequence ($\sim 0.04$ dex in a time span of 8 Gyr).
Thus, in the probable scenario of a planet being engulfed by its host star, if this event happens after the stabilization of the convective zone, one could expect an enhancement of Li and Be, similarly to the refractory elements.
If we analyze our results with this hypothesis in mind, the overabundance of lithium and beryllium in 16 Cyg A relative to B (in addition to the enhancement of refractory elements) could indicate an accretion of mass.
In fact, \cite{san02} suggests that the Li abundance can be used as a signal of pollution enrichment of the outer layers of solar-type stars, if stellar ages are well known.
The majority of previous studies agree that both components of the binary system have the same stellar age. However, as seen in Table \ref{libec}, we found that 16 Cyg A is 0.70 dex richer in Li than 16 Cyg B, in accord with the results of \cite{tak11, kin97, gon98}, and \cite{ram11}. Furthermore, on the lithium-age trends of \cite{mar16} and \cite{mon13}, 16 Cyg B shows a normal Li abundance for a solar twin of its age, while 16 Cyg A has a Li abundance above the curve; thus, when compared to a sample of solar twins, the A component shows an anomalous abundance of lithium. On top of that, 16 Cyg A also seems to have a higher $v \sin i$ (Table \ref{libe}), which may indicate angular momentum transferred by mass accretion.
\cite{gon98} also suggests that the odd lithium abundance of 16 Cyg A may be due to the accretion of a 1-2 Jupiter-mass planet, which would increase the abundance not only of Li but of Fe as well. This is reinforced by \cite{maz97}, who propose that the separation between the two stars \citep[semi-major axis of 755 AU;][]{pla13} is sufficiently small to permit planet-planet interactions to cause orbital instabilities in the binary pair. Furthermore, the high eccentricity \citep[0.689;][]{wit07} of 16 Cyg Bb could also be evidence of the interaction between the stars.
Similar results were found by \cite{law01}, who find a difference of 0.025 $\pm$ 0.009 dex in [Fe/H] between the 16 Cygni pair (the A component being more metal rich), suggesting a self-pollution scenario.
\cite{gra01} also investigated abundance differences in six main-sequence binaries with separations of the order of hundreds of AU (enough to permit orbital instabilities of possible exoplanets) and with components of almost the same mass, using the differential abundance technique (errors of the order of 0.01 dex). Four of these systems did not show any chemical differences between the components, while the two remaining binary systems (HD 219542 and HD 200466) display a clear metallicity difference, with the primary stars richer in iron (and in most of the analyzed elements) than the secondaries. The authors also support the idea that the difference in chemical composition of those binary stars is due to the infall of rocky material.
Under the hypothesis of pollution by planet accretion, one could expect the Be abundance to also be enriched in the outer layers of the star, similarly to Li. As discussed earlier, according to \cite{tuc15} beryllium is not depleted in a very effective way (if at all) in solar twin stars during the Main Sequence, making it also a good proxy for planet accretion after the stabilization of the convective zone.
Comparing the pair of stars, we found that 16 Cyg A has 0.07 $\pm$ 0.03 dex more beryllium than 16 Cyg B, in line with the planet engulfment hypothesis.
Following the procedure outlined in \cite{gal16}, we estimate that adding 2.5 - 3.0 Earth masses of Earth-like material to the convective zone of 16 Cyg B would alter its Be content by about 0.07 dex, thus canceling the abundance difference between the stars. This estimate is close to the one derived by \cite{tuc14}, who found that the addition of 1.5 Earth masses of material with a mixture of the compositions of the Earth and CM chondrites is necessary to reproduce the pattern of refractory-element abundances as a function of condensation temperature in 16 Cyg B. However, \cite{tuc14} assumed that this abundance pattern is a spectral signature of the formation of the rocky core of 16 Cyg Bb; considering now also the abundances of Li and Be, it may instead be a signature of planet accretion rather than planet formation.
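The order of magnitude of this exercise is easy to check: raising an abundance by $d$ dex through pollution alone requires a fractional increase of $10^{d}-1$ in the number of atoms of that element in the convective zone; translating this into Earth masses then requires the convective-zone mass and an assumed rocky composition, as in \cite{gal16}. A minimal sketch of the dilution arithmetic:
\begin{verbatim}
# Atoms that must be added to the convective zone (CZ)
# to raise an abundance by d dex, by pure dilution:
d = 0.07                    # Be difference (A - B) in dex
ratio = 10**d - 1.0         # N_added / N_CZ
print("N_added/N_CZ = %.3f" % ratio)   # ~0.175
\end{verbatim}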
In contrast, \cite{the12} discuss that the engulfment of rocky planets can induce instabilities in the stellar surface layers: the dilution of metal-rich material in young Main Sequence stars creates an unstable $\mu$-gradient at the bottom of the convective zone, activating fingering (thermohaline) convection.
This would be responsible for depleting the abundances that enriched the convective zone, thus quenching any signature of accretion. However, the authors also discuss that the mixing process would not completely erase the enhanced abundances, meaning that the engulfment event would still be detectable in the high-precision abundance domain.
In this scenario, \cite{dea15} argue that, during the early periods on the Main Sequence, 16 Cyg B was able to accrete rocky material from its planetary disk,
whereas no accretion may have developed around 16 Cyg A due to the presence of a red dwarf (16 Cyg C) orbiting at 73 AU from the A component \citep{tur01,pat02}.
The models of \cite{dea15}, which take into account the mixing by fingering convection, could reproduce the observed lithium difference in the binary pair by adding 0.66 Earth masses to 16 Cyg B. However, in those same models, Be does not show any depletion with the addition of 0.66 Earth masses, with the destruction only becoming effective for the accretion of higher masses. Notice that among the two dozen chemical elements showing abundance differences between 16 Cyg A and B, the model of \cite{dea15} can only explain the difference in lithium.
We find this scenario very unlikely because the lithium content of 16 Cyg B seems to be normal for its age when compared to other solar twins, while 16 Cyg A, on the other hand, is the one that displays an enhanced Li content for a solar twin of $\sim$ 7 Gyr \citep{mar16,mon13}.
Another explanation for the discrepancy in Li abundances could be different initial rotation rates \citep{coc97}.
However, notice that although young solar-type stars of a given mass may have different rotation rates \citep{lor19}, they all seem to converge to the same rotation period at an age of about 0.2 Gyr \citep{bar03}, which is much earlier than the age of 16 Cyg.
Furthermore, the companion 16 Cyg C, at 73 AU, may be too far away to have any significant impact on 16 Cyg A.
\section{Conclusions}
We present a detailed study of elemental abundances in the 16 Cygni solar twin binary system, using higher quality data (R = 160 000, S/N = 1000 at 670 nm). We confirm the difference of 0.04 dex in [Fe/H] between 16 Cyg A and B.
We also confirm the positive trend of the differential abundances (A - B) as a function of condensation temperature, in very good agreement with our previous work \citep{tuc14}, which was based on spectra of lower resolving power and signal-to-noise ratio taken with a different instrument. There is also good agreement with the slope obtained independently by \cite{nis17}, again using a different spectrograph (HARPS-N).
We also find the same result by employing the ARES and iSpec codes to measure the EWs.
This shows that our differential analysis method is consistent and a powerful tool to unveil physical characteristics that can only be seen with high-precision abundance determinations, and that the T$_c$ trend is a physical phenomenon, unlikely to be related to some instrumental effect, as we show that high-quality spectra obtained with different spectrographs (ESPaDOnS at CFHT, HDS at Subaru, HARPS-N at the Telescopio Nazionale Galileo) give essentially the same results (within error bars).
We also determined the abundances of Li and Be through a ``differential'' spectral synthesis analysis, using the solar spectrum (obtained with the same instrumental configuration) to calibrate the line list used in the calculations. We found that 16 Cyg A exhibits an overabundance not only of Li (as reported by previous studies) but of Be as well, relative to 16 Cyg B.
This discrepancy is compatible with the accretion of 2.5 - 3.0 Earth masses of Earth-like material, if we assume a convective zone similar to the Sun's for both stars.
Interestingly, the amount of rocky material needed to explain the Li and Be overabundances is also compatible
with the trend of the (A-B) abundances vs. condensation temperature, thus reinforcing the hypothesis of planet engulfment,
although a similar, opposite trend in (B-A) could be attributed to the effect of the rocky core of 16 Cyg Bb \citep{tuc14}.
However, the overabundance of Li in 16 Cyg A, above what is expected for its age, suggests that the signature we observe is due to a planet engulfment event.
\begin{acknowledgements}
MTM acknowledges financial support from the joint committee ESO-Chile and CNPq (312956/2016-9).
JM, LS and DLO thank FAPESP (2014/15706-9, 2016/20667-8, and 2018/04055-8) and CNPq (Bolsa de produtividade).
PJ acknowledges FONDECYT Iniciación 11170174 grant for financial support.
\end{acknowledgements}
\section{Supplemental Material: Appendices}
\subsection{Appendix I: Exact ground state of Hamiltonian $H^{\Lambda}_m$}
Let us consider a single bosonic quantum field $\phi(x)$ in one spatial dimension, together with its conjugate momentum $\pi(x)$. They obey the commutation relation $\left[\phi(x),\pi(y) \right] = i\delta(x-y)$. Let us also introduce the annihilation operators $\psi(x)$ and $\psi(k)$ in real and momentum space,
\begin{eqnarray}
\psi(x) &\equiv& \sqrt{\frac{\Lambda}{2}}\phi(x)+\frac{i}{\sqrt{2\Lambda}} \pi(x),~~~~\\
\psi(k) &\equiv& \frac{1}{\sqrt{2\pi}}\int\! dx~ e^{-ikx} \psi(x)
\end{eqnarray}
with $\left[\psi(x),\psi(y)^{\dagger} \right]= \delta(x-y)$ and $\left[\psi(k),\psi(q)^{\dagger} \right]= \delta(k-q)$.
In this work we studied the Hamiltonian $H_m^{\Lambda}$,
\begin{widetext}
\begin{eqnarray}
H^{\Lambda}_m &\equiv& \frac{1}{2}\int\!\! dx\left(\frac{1}{\Lambda^2}(\partial_x \pi(x))^2 +\pi(x)^2 + (\partial_x \phi(x))^2 + m^2 \phi(x)^2\right) \label{eq:app:H1}\\
&=&\frac{1}{\Lambda}\int\! dx \left( \partial_x \psi^{\dagger}(x)\partial_x \psi(x) + \frac{m^2 + \Lambda^2}{2} \psi^{\dagger}(x)\psi(x) + \frac{m^2 - \Lambda^2}{4} \left(\psi(x)^2 + \psi(x)^{\dagger 2}\right) \right) \label{eq:app:H2} \\
&=&\frac{1}{\Lambda} \int\!\! dk\left(k^2 + \frac{m^2 + \Lambda^2}{2} \right)\psi^{\dagger}(k)\psi(k) + \frac{m^2-\Lambda^2}{4}\left(\psi(k)\psi(-k) + \psi(k)^{\dagger}\psi(-k)^{\dagger}\right) \label{eq:app:H3}
\end{eqnarray}
\end{widetext}
This Hamiltonian can be diagonalized by introducing the annihilation operators
\begin{equation}\label{eq:app:aLams}
a^{\Lambda}_s(k) \equiv \sqrt{\frac{\alpha_s(k)}{2}} \phi(k) + \frac{i}{\sqrt{2\alpha_s(k)}}\pi(k),
\end{equation}
where the function $\alpha_s(k)$ is given by
\begin{equation}
\alpha_s(k) \equiv \sqrt{\frac{\Lambda^2k^2 + \Lambda^4 e^{-2s} }{k^2 + \Lambda^2}} = \sqrt{\frac{\Lambda^2k^2 + \Lambda^2 m^2 }{k^2 + \Lambda^2}} \label{eq:app:alphas}
\end{equation}
where $s = \log(\Lambda/m)$. Indeed, the Hamiltonian can be rewritten as
\begin{eqnarray}
H^{\Lambda}_m &=& \int\!\! dk~ E^{\Lambda}_{m}(k) ~ a^{\Lambda}_{s}(k)^{\dagger} a^{\Lambda}_s(k),~~\\\label{eq:app:H4}
E^{\Lambda}_m(k) &\equiv& \sqrt{|k|^2+m^2} \sqrt{1+\left(\frac{k}{\Lambda}\right)^2}\label{eq:app:ELam}
\end{eqnarray}
as can be checked by direct replacement.
Notice that these expressions are valid for any value of the mass parameter $m \in \mathbb{R}$, which we can take to be positive since only $m^2$ appears in the Hamiltonian. Let us first consider two limit cases $m=\Lambda$ and $m=0$ and then the full scale evolution.
\subsubsection{1. Case $m=\Lambda$: unentangled ground states }
For $m=\Lambda$ (or $s=0$) the Hamiltonian simplifies to
\begin{eqnarray}
H_{m=\Lambda}^{\Lambda} &=& \frac{1}{\Lambda} \int\!\! dk (k^2 + \Lambda^2 )\psi^{\dagger}(k)\psi(k)
\end{eqnarray}
and
\begin{equation}\label{eq:appaLams}
a^{\Lambda}_{s=0}(k) \equiv \sqrt{\frac{\Lambda}{2}} \phi(k) + \frac{i}{\sqrt{2\Lambda}}\pi(k),
\end{equation}
where the function $\alpha_{s=0}(k)$ is given by
\begin{equation}
\alpha_{s=0}(k) \equiv \sqrt{\frac{\Lambda^2k^2 + \Lambda^4 }{k^2 + \Lambda^2}} = \Lambda. \label{eq:app:alphas2}
\end{equation}
The ground state is the product state $|\Lambda\rangle$ that is annihilated by any $\psi(k)$, that is $\psi(k)\ket{\Lambda}=0~\forall k$, because the Hamiltonian is positive definite. Through a Fourier transform, this condition is equivalent to $\psi(x)\ket{\Lambda}=0 ~~\forall x$, which we used in the main text to define the unentangled vacuum state $\ket{\Lambda}$.
\subsubsection{2. Case $m=0$: critical ground state }
For $m=0$ (or $s=\infty$) the Hamiltonian is gapless. This can be seen from the fact that
\begin{equation}
\label{eq:APPalphak}
\alpha(k)\equiv \alpha_{s=\infty}(k)=\sqrt{\frac{k^2\Lambda^2}{k^2+\Lambda^2}}
\end{equation}
at small $k\ll \Lambda$ reduces to the CFT profile
\begin{equation}
\alpha(k)=\alpha^{\mbox{\tiny CFT}}(k)-\frac{|k|^3}{2\Lambda^2}+O(k^5),
\end{equation}
where
\begin{equation}
\alpha^{\mbox{\tiny CFT}}(k)=|k|.
\end{equation}
At large $k\gg \Lambda$, the state approaches the unentangled state $\ket{\Lambda}$, in the sense that
\begin{equation}
\alpha(k)=\Lambda-\frac{\Lambda^3}{k^2}+O(k^{-4}).
\end{equation}
The dispersion relation also reduces to the CFT dispersion $E^{\mbox{\tiny CFT}}(k)=|k|$ at small $k\ll \Lambda$. At large $k\gg \Lambda$, the dispersion relation is dominated by the nonrelativistic kinetic energy,
\begin{equation}
E^{\Lambda}(k)=\frac{k^2}{\Lambda}+O(|k|).
\end{equation}
\subsubsection{3. Case $0 < m< \Lambda$: scale evolution}
We will show that the ground state of Eq.~\eqref{eq:app:H1} with $m=\Lambda e^{-s}$ is the cMERA state
\begin{equation}
\label{eq:APPevol}
\ket{\Psi^{\Lambda}(s)} = e^{-is(L+K)} \ket{\Lambda},
\end{equation}
where $K$ is the magic entangler
\begin{equation}
\label{eq:APPK}
K=\frac{-i}{2}\int dk \, g(k)(\psi(k)\psi(-k)-\psi^{\dagger}(k)\psi^{\dagger}(-k))
\end{equation}
with
\begin{equation}
g(k)=\frac{\Lambda^2}{2(k^2+\Lambda^2)}.
\end{equation}
Clearly, at $s=0$ the cMERA state is $|\Lambda\rangle$, the ground state of Eq.~\eqref{eq:app:H1} with $m=\Lambda$.
As shown in \cite{Qi}, the cMERA state Eq.~\eqref{eq:APPevol} is a Gaussian state annihilated by the $a^{\Lambda}_s(k)$ of the form Eq.~\eqref{eq:app:aLams}, where
\begin{equation}
\label{eq:dalphak}
(\partial_s-k\partial_k)\alpha_s(k)=-2\alpha_s(k)g(k).
\end{equation}
Now we can substitute the $\alpha_s(k)$ of Eq.~\eqref{eq:app:alphas} into Eq.~\eqref{eq:dalphak} and check that the latter holds for arbitrary $s\in [0,\infty)$. Since $\alpha_s(k)$ uniquely determines a Gaussian state, we have shown that $\ket{\Psi^{\Lambda}(s)}$ is the ground state of the massive free boson Hamiltonian with a UV cutoff $\Lambda$ and mass $m=\Lambda e^{-s}$.
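This check can also be automated; a minimal sympy sketch verifying that the $\alpha_s(k)$ of Eq.~\eqref{eq:app:alphas} solves Eq.~\eqref{eq:dalphak} with the $g(k)$ of the magic entangler:
\begin{verbatim}
import sympy as sp

s, k, L = sp.symbols('s k Lambda', positive=True)
alpha = sp.sqrt((L**2*k**2 + L**4*sp.exp(-2*s))/(k**2 + L**2))
g = L**2/(2*(k**2 + L**2))

flow = sp.diff(alpha, s) - k*sp.diff(alpha, k) + 2*alpha*g
# simplifies to 0, so the flow equation holds for all s:
print(sp.simplify(flow))
\end{verbatim}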
\subsection{Appendix II: computation of correlations functions}
Next we will compute correlation functions, involving the bosonic fields $\psi(x),\psi^{\dagger}(x)$, of a Gaussian state annihilated by
\begin{equation}
a^{\Lambda}(k) \equiv \sqrt{\frac{\alpha(k)}{2}} \phi(k) + \frac{i}{\sqrt{2\alpha(k)}}\pi(k)
\end{equation}
with some function $\alpha(k)$.
First, we express the $\psi(k)$ in terms of these annihilation operators:
\begin{equation}
\psi(k)=\sqrt{\frac{\Lambda}{2}}\phi(k)+\frac{i}{\sqrt{2\Lambda}}\pi(k),
\end{equation}
where
\begin{eqnarray}
\phi(k)=\frac{1}{\sqrt{2\alpha(k)}}(a^{\Lambda}(k)+a^{\Lambda\dagger}(-k)) \\
\pi(k)=-i\frac{\sqrt{2\alpha(k)}}{2}(a^{\Lambda}(k)-a^{\Lambda\dagger}(-k)).
\end{eqnarray}
The correlation functions can then be computed from the canonical commutation relations $[a^{\Lambda}(k),a^{\Lambda\dagger}(k')]=\delta (k-k')$,
\begin{eqnarray}
\langle \psi(k)\psi(k')\rangle &=& \frac{1}{4} \left(\frac{\Lambda}{\alpha(k)}-\frac{\alpha(k)}{\Lambda}\right) \delta(k+k') \\
&\equiv& F(k) \delta(k+k'), \nonumber \\
\label{eq:APPcorr2}
\langle \psi^{\dagger}(k)\psi(k')\rangle &=& \frac{1}{4} \left(\frac{\Lambda}{\alpha(k)}+\frac{\alpha(k)}{\Lambda}-2\right) \delta(k-k') ~~~ \\
&\equiv& n(k) \delta(k-k'). \nonumber
\end{eqnarray}
Transforming them into real space, we obtain,
\begin{equation}
\langle \psi(x)\psi(y) \rangle = \langle \psi^{\dagger}(x)\psi^{\dagger}(y) \rangle = \int \frac{dk}{2\pi} F(k) e^{ik(x-y)},
\end{equation}
\begin{equation}
\label{nn}
\langle \psi^{\dagger}(x)\psi(y)\rangle =\int \frac{dk}{2\pi} n(k) e^{ik(x-y)}.
\end{equation}
In particular, the particle density $\rho_0\equiv\langle \psi^{\dagger}(x)\psi(x)\rangle$ can be computed analytically with the fixed point $\alpha(k)$ in Eq.~\eqref{eq:APPalphak},
\begin{equation}
\frac{\rho_0}{\Lambda}=\frac{1}{8\pi}\int dk\, \left(\sqrt{\frac{1+k^2}{k^2}}+\sqrt{\frac{k^2}{1+k^2}}-2\right).
\end{equation}
The above expression has an IR divergence at $k=0$. Introducing a small mass $m\ll\Lambda$, i.e.,
\begin{equation}
\alpha(k) = \Lambda \sqrt{\frac{k^2+m^2}{k^2+\Lambda^2}}
\end{equation}
we find that
\begin{equation}
\frac{\rho_0}{\Lambda} = \frac{1}{8\pi} \log \frac{\Lambda^2}{m^2}-\frac{1.22741}{8\pi}+O\left(\frac{m^2}{\Lambda^2}\right).
\end{equation}
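Both the logarithmic divergence and the constant term can be checked numerically; a sketch in units of $\Lambda=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m = 1e-3                                   # m << Lambda = 1
alpha = lambda k: np.sqrt((k**2 + m**2)/(k**2 + 1.0))
n = lambda k: 0.25*(1/alpha(k) + alpha(k) - 2.0)

f = lambda k: n(k)/(2*np.pi)
rho0 = 2*(quad(f, 0, 1.0, points=[m])[0] + quad(f, 1.0, np.inf)[0])
approx = (np.log(1/m**2) - 1.22741)/(8*np.pi)
print(rho0, approx)        # agree up to O(m^2) corrections
\end{verbatim}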
During the evolution Eq.~\eqref{eq:APPevol}, the cMERA state is the ground state of the massive Hamiltonian with mass $m=\Lambda e^{-s}$. We therefore expect that the particle density $\rho_0$ increases linearly with $s$ for $s\gg 1$. This is a direct consequence of the IR divergence, which is a feature of the free boson CFT in 1+1 dimensions. Note that, however, the ground state energy density of the massless Hamiltonian
\begin{eqnarray}
e_0 &\equiv& \langle \partial_x\psi^{\dagger}\partial_x\psi\rangle+\frac{\Lambda^2}{2}\rho_0-\frac{\Lambda^2}{4}\langle\psi^2+\psi^{\dagger 2}\rangle \\
&=& \int \frac{dk}{2\pi} \, \left(k^2 n(k)+ \frac{\Lambda^2}{2} (n(k)-F(k))\right).
\end{eqnarray}
is \textit{finite} because the IR divergences in $n(k)$ and $F(k)$ cancel each other. Its value only depends on the UV cutoff,
\begin{equation}
e_0=-\frac{\Lambda^2}{6\pi}.
\end{equation}
The fact that $e_0$ is finite is important in the context of numerical optimizations of cMERA through energy minimization \cite{Martin}.
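Again in units of $\Lambda=1$, the cancellation of the IR divergences and the value $e_0=-1/(6\pi)$ can be verified numerically:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha = lambda k: np.sqrt(k**2/(k**2 + 1.0))   # fixed point, m = 0
n = lambda k: 0.25*(1/alpha(k) + alpha(k) - 2.0)
F = lambda k: 0.25*(1/alpha(k) - alpha(k))

e0 = 2*quad(lambda k: (k**2*n(k) + 0.5*(n(k) - F(k)))/(2*np.pi),
            0, np.inf)[0]
print(e0, -1/(6*np.pi))    # both ~ -0.0530516
\end{verbatim}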
\subsection{Appendix III: cMERA as the ground state of a local Hamiltonian: generic case}
Consider a Gaussian cMERA state annihilated by
\begin{equation}
a^{\Lambda}(k) \equiv \sqrt{\frac{\alpha(k)}{2}} \phi(k) + \frac{i}{\sqrt{2\alpha(k)}}\pi(k)
\end{equation}
with some function $\alpha(k)$. Then it is the ground state of all Hamiltonians with the form
\begin{equation}
H = \int dk\, E(k) a^{\Lambda}(k)^{\dagger} a^{\Lambda}(k),
\end{equation}
where $E(k)\geq 0$ can be any dispersion relation. In terms of original fields, the Hamiltonian is
\begin{equation}
H=\frac{1}{2}\int dk \, \left(\frac{E(k)}{\alpha(k)} \pi(k)\pi(-k)+E(k)\alpha(k)\phi(k)\phi(-k)\right). ~~
\end{equation}
This Hamiltonian is local (that is, it involves only a finite number of derivatives of the field operators) and invariant under spatial parity only if
\begin{eqnarray}
\frac{E(k)}{\alpha(k)}&=&P_1(k^2) \\
E(k)\alpha(k)&=&P_2(k^2),
\end{eqnarray}
where $P_1$ and $P_2$ are (finite-degree) polynomials. Then we have
\begin{eqnarray}
E(k)&=&\sqrt{P_1(k^2)P_2(k^2)} \\
\label{eq:APPalphak0}
\alpha(k)&=&\sqrt{\frac{P_2(k^2)}{P_1(k^2)}}.
\end{eqnarray}
Following Ref. \cite{Qi}, we will also require that both the CFT dispersion relation and the CFT ground state be recovered at small $k$, that is
\begin{equation}
\label{eq:APPEkgeneral}
E(k)=|k|+o(k),
\end{equation}
and
\begin{equation}
\label{eq:APPalphakgeneral}
\alpha(k)=|k|+o(k),
\end{equation}
and that the cMERA state approaches the product state in the UV, that is
\begin{equation}
\label{eq:APPUVcutoff}
\lim_{k\rightarrow\infty} \alpha(k)=\Lambda.
\end{equation}
Expanding the polynomials $P_1$ and $P_2$ as
\begin{eqnarray}
P_1(k^2) =\sum_{l=0}^{l_m} a_l (k^2)^l,~~~~ P_2(k^2) =\sum_{l=0}^{l'_m} b_l (k^2)^l,~~~
\end{eqnarray}
Eqs.~\eqref{eq:APPEkgeneral},~\eqref{eq:APPalphakgeneral} imply $a_0=1,b_0=0,b_1=1$. Eq.~\eqref{eq:APPUVcutoff} forces $P_1$ and $P_2$ to have the same degree $2l'_m=2l_m$ and also that $b_{l_m}=\Lambda^2 a_{l_m}$. The most generic \textit{local} quadratic Hamiltonian that has a cMERA ground state is therefore
\begin{equation}
H^{\Lambda}=\frac{1}{2}\int dx \, \left(\sum_{l=0}^{l_m} a_l (\partial^l_x \pi(x))^2+ \sum_{l=1}^{l_m} b_l (\partial^l_x \phi(x))^2\right)
\end{equation}
subject to the above constraints. The magic cMERA in the main text corresponds to the simplest solution (polynomials of smallest degree), namely $l_m=1$ with $P_1=1+\frac{k^2}{\Lambda^2}$ and $P_2=k^2$. Note that the degrees of $P_1$ and $P_2$ determine the order of derivatives appearing in the Hamiltonian. In the main text, we have second order derivatives in $H$, which are $(\partial_x \phi(x))^2$ and $(\partial_x \pi(x))^2$, in accordance with the degree of $P_1$ and $P_2$. Choosing larger $l_m$ corresponds to regulating the CFT Hamiltonian with higher derivative terms.
The asymptotic behavior of $\alpha(k)$ at large $k$ determines UV properties of the cMERA state. For the set of cMERA states with $\alpha(k)$ in Eq.~\eqref{eq:APPalphak0}, that is, the set of cMERA states that can be the ground state of a local Hamiltonian, it is always true that
\begin{equation}
\label{eq:APPalphaconv}
\frac{\alpha(k)}{\Lambda}=1+O\left(\left(\frac{\Lambda^2}{k^2}\right)^n\right)
\end{equation}
with some positive integer $n\geq 1$. To determine $n$, we first find the smallest $l_1$ such that $b_l=\Lambda^2 a_l$ for all $l_1\leq l\leq l_m$, then $n=l_m-l_1+1$ is the number of such coefficients. Since $b_{l_m}=\Lambda^2 a_{l_m}$, it is clear that $1\leq n\leq l_m$. Now we can show that the cMERA in previous works \cite{cMERA1,Qi} cannot be the ground state of a local Hamiltonian. Indeed, the previous cMERA proposals involve a function $\alpha(k)$ that converges faster than any polynomial at large $k$, contradicting Eq.~\eqref{eq:APPalphaconv}.
Eq.~\eqref{eq:APPalphaconv} has various implications on the correlation functions. First, Eq.~\eqref{eq:APPcorr2} implies that
\begin{equation}
n(k)=O\left(\left(\frac{\Lambda^2}{k^2}\right)^{2n}\right).
\end{equation}
For $n=1$, $n(k)\sim 1/k^4$, which is compatible with a generic bosonic cMPS. The minimal choice $n=l_m=1$ gives the magic cMERA state in the main text. More generally, if $b_{l_m}=\Lambda^2 a_{l_m}$ but $b_{l_m-1}\neq \Lambda^2 a_{l_m-1}$, then $n=1$ and the ground state is compatible with the cMPS in the UV. If $n>1$, the state is compatible with a subclass of cMPS that satisfies certain regularity conditions, which imposes constraints on the cMPS variational parameters.
Now consider the implications for the real-space correlation function
\begin{equation}
n(x)\equiv \langle \psi^{\dagger}(\vec{x})\psi(0)\rangle.
\end{equation}
It has continuous derivatives at $x=0$ up to order $4n-2$. For example, the expectation value of the non-relativistic kinetic term $\langle\partial_x\psi^{\dagger}\partial_x\psi\rangle=-\partial^2_x n(0)$ is always finite. However, higher-order derivatives diverge in the $n=1$ case. We have therefore seen that, by requiring the cMERA state to be the ground state of a local Hamiltonian, we automatically obtain correlation functions with a finite order of smoothness. This is to be contrasted with previous cMERA proposals \cite{cMERA1,Qi}, where correlation functions are infinitely differentiable.
The entangler $K$ that generates this class of cMERA as the fixed point wavefunctionals also differs from previous works. The fixed point $\alpha(k)$ is related to $g(k)$ in Eq.~\eqref{eq:APPK} by
\begin{equation}
g(k)=\frac{k\partial_k \alpha(k)}{2\alpha(k)}.
\end{equation}
Substituting Eq.~\eqref{eq:APPalphak0} into the equation above, we obtain
\begin{equation}
g(k)=\frac{k^2}{2} \frac{P_1(k^2)P'_2(k^2)-P'_1(k^2)P_2(k^2)}{P_1(k^2)P_2(k^2)}.
\end{equation}
Note that $g(k)$ decays no slower than $1/k^2$ at large $k$, because the leading terms of $P_1(k^2)P'_2(k^2)-P'_1(k^2)P_2(k^2)$ cancel, leaving a polynomial of degree at most $4l_m-4$ in $k$. We see that $g(k)$ decays polynomially. This implies that its Fourier transform $g(x)$ is not smooth at $x=0$. For example, the magic cMERA corresponds to $g(x)\propto e^{-\Lambda |x|}$, which does not have a first-order derivative at $x=0$. This is in contrast with the Gaussian entangler $g(x)\propto e^{-\sigma(\Lambda x)^2/4}$, which is smooth at $x=0$.
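These manipulations are easily checked symbolically; for the magic cMERA ($P_1 = 1 + k^2/\Lambda^2$, $P_2 = k^2$), a short sympy sketch recovers $g(k)=\Lambda^2/[2(k^2+\Lambda^2)]$:
\begin{verbatim}
import sympy as sp

u, L = sp.symbols('u Lambda', positive=True)   # u = k^2
P1 = 1 + u/L**2
P2 = u

# primes denote derivatives with respect to u = k^2:
g = u/2*(P1*sp.diff(P2, u) - sp.diff(P1, u)*P2)/(P1*P2)
print(sp.simplify(g))      # Lambda**2/(2*(u + Lambda**2))
\end{verbatim}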
At $k=0$, $P_1(0)=a_0=1$, $P'_2(0)=b_1=1$ and $\lim_{k\rightarrow 0} P_2(k^2)/k^2=b_1=1$ together imply that $g(k)$ is smooth at $k=0$. To see this, let us rewrite
\begin{equation}
g(k)=\frac{1}{2} \left(\frac{P'_2(k^2)}{P_2(k^2)/k^2}-\frac{k^2P'_1(k^2)}{P_1(k^2)}\right).
\end{equation}
Both $P_2(k^2)/k^2$ and $P_1(k^2)$ are polynomials which are nonvanishing at $k=0$. This ensures that $g(k)$ is infinite-order differentiable at $k=0$. The fact that $g(k)$ is smooth at $k=0$ implies that $g(x)$ decays at least exponentially at large $x$, which keeps the entangler $K$ quasi-local. We can also work out
\begin{equation}
g(k=0)=\frac{1}{2},
\end{equation}
which ensures that the scaling dimensions (eigenvalues of $L+K$) come out correctly \cite{Qi}.
In conclusion, we have exhaustively determined the class of Gaussian bosonic cMERA states that can be the ground state of a local quadratic Hamiltonian. They (i) are characterized by two polynomials, (ii) have correlation functions compatible with a cMPS or a subclass of cMPS in the UV, and (iii) are generated by a quasi-local entangler with $g(x)$ decaying at least exponentially at large $x$ but not smooth at $x=0$.
\subsection{Appendix IV: Conformal group and scaling operators}
\subsubsection{1. Relation to conformal group}
The scale invariant magic cMERA $\ket{\Psi^{\Lambda}}$ in the main text is the exact ground state of any Hamiltonian of the form
\begin{equation}
H[E(k)] = \int dk~~E(k) a^{\Lambda}(k)^{\dagger} a^{\Lambda}(k)
\end{equation}
where the magic cMERA annihilation operators $a^{\Lambda}(k)$ are fixed, namely
\begin{eqnarray}
a^{\Lambda}(k) &\equiv& \sqrt{\frac{\alpha(k)}{2}}\phi(k) + \frac{i}{\sqrt{2\alpha(k)}}\pi(k), \\
\alpha(k) &\equiv& \sqrt{\frac{k^2\Lambda^2}{k^2+\Lambda^2}},
\end{eqnarray}
but where for the quasi-particle energies $E(k)$ we can choose any positive function.
Two specific choices of $E(k)$ stand out. One makes $H$ strictly local; the other makes $H$ part of a quasi-local representation of the conformal algebra.
In this work we wanted the Hamiltonian $H^{\Lambda}$ to be local. This requires the choice
\begin{equation}
E^{\Lambda}(k) \equiv \sqrt{\frac{k^2}{\Lambda^2}(k^2 + \Lambda^2)}.
\end{equation}
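A quick check of locality (our addition, using the definitions above together with $\phi(k)^{\dagger}=\phi(-k)$ and $\pi(k)^{\dagger}=\pi(-k)$): since $E^{\Lambda}(k)\alpha(k)=k^2$ and $E^{\Lambda}(k)/\alpha(k)=(k^2+\Lambda^2)/\Lambda^2$, substituting $a^{\Lambda}(k)$ into $H[E^{\Lambda}]$ gives, up to an additive normal-ordering constant,
\begin{equation}
H^{\Lambda}=\frac{1}{2}\int dk \left[k^2\, \phi(-k)\phi(k)+\frac{k^2+\Lambda^2}{\Lambda^2}\, \pi(-k)\pi(k)\right],
\end{equation}
which in position space contains only $(\partial_x\phi)^2$, $\pi^2$ and $\Lambda^{-2}(\partial_x\pi)^2$, hence a strictly local Hamiltonian.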
In Ref. \cite{Qi} we studied instead the dispersion relation $E^{\mbox{\tiny CFT}}(k)=|k|$, in which case the Hamiltonian $H_{q.l.}^{\Lambda} \equiv H[E^{\mbox{\tiny CFT}}(k)]$ is quasi-local, but by construction has the same spectrum as the local, relativistic CFT Hamiltonian $H^{\mbox{\tiny CFT}}$ in \eqref{eq:HCFT}. (Notice that in Ref. \cite{Qi}, the quasi-local Hamiltonian $H_{q.l.}^{\Lambda}$ was denoted $H^{\Lambda}$). What makes $H_{q.l.}^{\Lambda}$ interesting is that it is part of a quasi-local realization of the conformal algebra, as described in Ref. \cite{Qi}. In particular, $D^\Lambda \equiv L+K$ is a quasi-local realization of the dilation operator, and we have that $D^{\Lambda}$ and $H_{q.l.}^{\Lambda}$ obey the commutation relation
\begin{equation}
-i\left[D^{\Lambda},H_{q.l.}^{\Lambda} \right] = H_{q.l.}^{\Lambda},
\end{equation}
which is the same as the commutation relation between the CFT dilation operator $D^{\mbox{\tiny CFT}}$ and the CFT Hamiltonian $H^{\mbox{\tiny CFT}}$, namely $-i\left[D^{\mbox{\tiny CFT}}, H^{\mbox{\tiny CFT}}\right]=H^{\mbox{\tiny CFT}}$. That is, $H^{\Lambda}_{q.l.}$ is scale invariant (under the scale transformation generated by $D^{\Lambda} = L + K$).
Instead, by requiring locality, which is of importance from a computational perspective \cite{Martin}, in this work we used a Hamiltonian $H^{\Lambda}$ that is not scale invariant, that is $[D^{\Lambda}, H^{\Lambda}] \not = 0$. We note, however, that since $H^{\Lambda}$ and $H^{\Lambda}_{q.l.}$ have the same eigenvectors (indeed, by construction $\left[ H^{\Lambda}, H^{\Lambda}_{q.l.}\right] =0$) and their dispersion relations $E^{\Lambda}(k)$ and $E^{\mbox{\tiny CFT}}(k)$ are very similar at low energies $k \ll \Lambda$, the violation of scale invariance is small at low energies.
\subsubsection{2. Derivation of scaling operators}
Following Ref. \cite{Qi}, the quasi-local scaling operators $\phi^{\Lambda}(x)$ and $\pi^{\Lambda}(x)$ are related to the sharp fields $\phi(x)$ and $\pi(x)$ by
\begin{eqnarray}
\phi^{\Lambda}(x) &=& \int dy\, \mu_{\phi}(x-y)\phi(y)\\
\label{eq:APPmupi}
\pi^{\Lambda}(x) &=& \int dy\, \mu_{\pi}(x-y)\pi(y),
\end{eqnarray}
where the Fourier transforms of the smearing functions are
\begin{eqnarray}
\mu_{\phi}(k)&\equiv& \sqrt{\frac{\alpha(k)}{|k|}}=\left(1+\frac{k^2}{\Lambda^2}\right)^{-1/4} \\
\mu_{\pi}(k)&\equiv& \sqrt{\frac{|k|}{\alpha(k)}}=\left(1+\frac{k^2}{\Lambda^2}\right)^{1/4}.
\end{eqnarray}
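Note that $\mu_{\phi}(k)\mu_{\pi}(k)=1$. Assuming the canonical equal-time commutator $[\phi(u),\pi(v)]=i\delta(u-v)$ and the Fourier convention used above, this guarantees that the smeared fields remain canonically conjugate,
\begin{equation}
[\phi^{\Lambda}(x),\pi^{\Lambda}(y)]=i\int du\, \mu_{\phi}(x-u)\mu_{\pi}(y-u)=i\int \frac{dk}{2\pi}\, e^{ik(x-y)}\mu_{\phi}(k)\mu_{\pi}(k)=i\delta(x-y).
\end{equation}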
They have distributional Fourier transforms \cite{Qi}
\begin{eqnarray}
\mu_{\phi}(x)&=&\frac{2^{3/4}\Lambda K_{1/4}(|\Lambda x|)}{\Gamma(1/4)|\Lambda x|^{1/4}}\\
\mu_{\pi}(x)&=&\frac{2^{5/4}\Lambda K_{3/4}(|\Lambda x|)}{\Gamma(-1/4)|\Lambda x|^{3/4}}.
\end{eqnarray}
Note that Eq.~\eqref{eq:APPmupi} should be understood as a Hadamard finite-part integral; for instance,
\begin{equation}
\pi^{\Lambda}(0)=\lim_{\epsilon\rightarrow 0} \left(2 \epsilon^{-1/2}\pi(0)+\int_{\mathbb{R}\setminus(-\epsilon,\epsilon)} dx\, \mu_{\pi}(x)\pi(x) \right).
\end{equation}
Other scaling operators include the spatial derivatives $\partial^m_x \phi^{\Lambda}(x)$, with scaling dimension $m$, and $\partial^m_x \pi^{\Lambda}(x)$, with scaling dimension $m+1$. They can also be expressed as distributions acting on the sharp fields $\phi(x),\pi(x)$, with profiles
\begin{eqnarray}
\mu_{\partial^m_x\phi}(x) &=& \partial^m_x \mu_{\phi}(x) \\
\mu_{\partial^m_x\pi}(x) &=& \partial^m_x \mu_{\pi}(x).
\end{eqnarray}
Some of the profile functions are plotted in the main text.
\subsection{Appendix V: Continuous matrix product operator}
\subsubsection{1. Matrix product operator (MPO)}
Consider a MPO made of matrices $A_m$ given by
\begin{equation}
A_m \equiv \left( \begin{array}{ccc}
\mathbb{1} & E_m & 0 \\
0 & \lambda \mathbb{1} & F_m \\
0 & 0 & \mathbb{1}
\end{array} \right),
\end{equation}
where $E_m$ and $F_m$ are two operators and $\mathbb{1}$ is the identity operator, all acting on the vector space of the lattice site $m$.
\begin{widetext}
The product of two contiguous MPO matrices $A_m$ and $A_{m+1}$ is
\begin{eqnarray}
A_mA_{m+1} = \left( \begin{array}{ccc}
\mathbb{1} & E_m & 0 \\
0 & \lambda \mathbb{1} & F_m \\
0 & 0 & \mathbb{1}
\end{array} \right) \left( \begin{array}{ccc}
\mathbb{1} & E_{m+1} & 0 \\
0 & \lambda \mathbb{1} & F_{m+1} \\
0 & 0 & \mathbb{1}
\end{array} \right)
= \left( \begin{array}{ccc}
\mathbb{1} & \lambda E_m + E_{m+1} & E_mF_{m+1} \\
0 & \lambda^2 \mathbb{1} & F_m + \lambda F_{m+1} \\
0 & 0 & \mathbb{1}
\end{array} \right).
\end{eqnarray}
Similarly, the product $A_{m}A_{m+1}A_{m+2}$ reads
\begin{eqnarray}
A_{m}A_{m+1}A_{m+2} &=& \left( \begin{array}{ccc}
\mathbb{1} & \lambda E_m + E_{m+1} & E_mF_{m+1} \\
0 & \lambda^2 \mathbb{1} & F_m + \lambda F_{m+1} \\
0 & 0 & \mathbb{1}
\end{array} \right)
\left( \begin{array}{ccc}
\mathbb{1} & E_{m+2} & 0 \\
0 & \lambda \mathbb{1} & F_{m+2} \\
0 & 0 & \mathbb{1}
\end{array} \right)\\
&=&\left( \begin{array}{ccc}
\mathbb{1} & ~~\lambda^2 E_m + \lambda E_{m+1} + \lambda E_{m+2}~~ &
~~E_{m} F_{m+1} + E_{m+1}F_{m+2} + \lambda E_m F_{m+2}~~\\
0 & \lambda^3 \mathbb{1} & F_m + \lambda F_{m+1} +\lambda^2 F_{m+2} \\
0 & 0 & \mathbb{1}
\end{array} \right),
\end{eqnarray}
and by iteration we find that the product $A_1 A_2\cdots A_N$ of $N$ such matrices reads
\begin{equation}
A_1 A_2\cdots A_N =\left( \begin{array}{ccc}
\mathbb{1} & ~~~\sum_{m=1}^N \lambda^{N-m} E_{m} ~~~& \sum_{m=1}^N \sum_{n=m+1}^{N} \lambda^{n-m-1} E_m F_n\\
0 & \lambda^{N} \mathbb{1} & \sum_{m=1}^N \lambda^{m-1} F_m \\
0 & 0 & \mathbb{1}
\end{array} \right).
\end{equation}
With the choice $E_m = F_m = \sqrt{\beta \lambda \epsilon}~ b_{m}$ and $\lambda = e^{-\epsilon\Lambda}$, the product becomes
\begin{equation}
A_1 A_2\cdots A_N = \left( \begin{array}{ccc}
\mathbb{1} & ~~~ \sqrt{\beta\epsilon} ~\sum_{m=1}^N e^{-\Lambda \epsilon(N-m+\frac{1}{2})} ~ b_{m} ~~~& \beta \epsilon~ \sum_{m=1}^N \sum_{n=m+1}^{N} e^{-\Lambda \epsilon(n-m)} ~b_{m} b_{n}~~\\
0 & e^{-\Lambda \epsilon N} \mathbb{1} & \sqrt{\beta\epsilon} ~\sum_{m=1}^N e^{-\Lambda \epsilon(m-\frac{1}{2})} ~ b_{m} \\
0 & 0 & \mathbb{1}
\end{array} \right).
\end{equation}
We are interested in the matrix element $(1,3)$ of this product. Setting $\beta=-i\Lambda/4$, it reads
\begin{equation}
\bra{1} A_1 A_2 \cdots A_N \ket{3} = \frac{-i\Lambda \epsilon}{4} \sum_{m<n} e^{-\Lambda \epsilon (n-m)} b_mb_n,
\end{equation}
which accounts for one half of the discrete version $K^{\mbox{\scriptsize lattice}}$ in the main text (the other half, quadratic in creation operators $b_m^{\dagger} b_n^{\dagger}$, is obtained similarly).
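Incidentally, the closed form of the product $A_1A_2\cdots A_N$ given above is easy to verify symbolically for small $N$. The following sketch (ours, with an arbitrary choice of $N$) uses commuting placeholders for the operators $E_m$ and $F_m$, which is legitimate here since all the $b_m$ commute with one another:
\begin{verbatim}
import sympy as sp

N = 4
lam = sp.symbols('lambda')
E = sp.symbols(f'E1:{N+1}')   # placeholders for E_1, ..., E_N
F = sp.symbols(f'F1:{N+1}')   # placeholders for F_1, ..., F_N

def A(m):
    # the MPO matrix A_m, with scalars standing in for the site operators
    return sp.Matrix([[1, E[m], 0],
                      [0, lam, F[m]],
                      [0, 0, 1]])

prod = sp.eye(3)
for m in range(N):
    prod = prod * A(m)

# closed form of the (1,3) entry claimed above
closed = sum(lam**(n - m - 1) * E[m] * F[n]
             for m in range(N) for n in range(m + 1, N))

assert sp.expand(prod[0, 2] - closed) == 0
\end{verbatim}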
\subsubsection{2. Continuous matrix product operator (cMPO)}
Next we introduce operators $\psi(x_m) \equiv b_m / \sqrt{\epsilon}$, where $x_m \equiv \epsilon m$, and expand the above matrix $A_m$ in powers of $\epsilon$,
\begin{equation}
A_m = \left( \begin{array}{ccc}
\mathbb{1} & E_m & 0 \\
0 & \lambda \mathbb{1} & F_m \\
0 & 0 & \mathbb{1}
\end{array} \right) = \left( \begin{array}{ccc}
\mathbb{1} &~~ \epsilon~\sqrt{\beta}e^{-\Lambda\epsilon/2} \psi(x_m)& 0 \\
0 & e^{-\Lambda \epsilon} \mathbb{1} & ~~\epsilon~\sqrt{\beta}e^{-\Lambda\epsilon/2} \psi(x_m) \\
0 & 0 & \mathbb{1}
\end{array} \right) = \mathbb{1} + \epsilon \mathcal{A}_m + O(\epsilon^2 ),
\end{equation}
where the cMPO matrix $\mathcal{A}(x_m) = \mathcal{A}_{m}$ reads
\begin{equation}
\mathcal{A}(x_m) = \left( \begin{array}{ccc}
0 &~~ \sqrt{\beta} \psi(x_m) & 0 \\
0 & -\Lambda \mathbb{1} & ~~\sqrt{\beta}\psi(x_m) \\
0 & 0 & 0
\end{array} \right).
\end{equation}
We can now express the matrix product $A_1 A_2 \cdots A_N$ in the double limit $\epsilon \rightarrow 0$ and $N\rightarrow \infty$, with finite $L = N\epsilon$, as a path ordered exponential,
\begin{eqnarray}
\mathcal{P}\exp \left(\int_0^L dx~\mathcal{A}(x) \right) \equiv \lim_{\small{\begin{array}{c} \epsilon\rightarrow 0\\N \rightarrow \infty \end{array}}} \left(1+\epsilon \mathcal{A}(x_1)\right) \left(1+\epsilon \mathcal{A}(x_2)\right)\cdots \left(1+\epsilon \mathcal{A}(x_N)\right),
\end{eqnarray}
whose matrix element $(1,3)$ reads
\begin{eqnarray}
\bra{1}\mathcal{P}\exp \left(\int_0^L dx~\mathcal{A}(x) \right)\ket{3} &=& \frac{-i\Lambda}{4} \lim_{\small{\begin{array}{c} \epsilon\rightarrow 0\\N \rightarrow \infty \end{array}}}\sum_{m=1}^N \epsilon \sum_{n=m+1}^N \epsilon~ e^{-\Lambda \epsilon (n-m)} \psi(x_m) \psi(x_n) \\
&=& \frac{-i\Lambda}{4} \int_0^L \!\!dx \int_x^L \!\!dy ~ e^{- \Lambda|x-y|} \psi(x) \psi(y),
\end{eqnarray}
and thus accounts for half of the entangler $K$ in the main text.
\end{widetext}
We conclude that the entangler $K$ of the proposed magic cMERA can indeed be expressed in an extremely compact way using a cMPO. In Ref. \cite{Martin} this observation, which also implies a compact cMPO representation for $e^{isK}$ for small $s$, will be exploited as part of an efficient computational framework for cMERA, namely in order to numerically implement a scale evolution generated by $L+K$.
\end{document}
\section{The $\w$-compactification of a $T_1$-space}
In this paper we establish some properties of subspaces of countably compact Hausdorff spaces and hence find some necessary conditions for the embeddability of topological spaces into Hausdorff countably compact spaces. Also, we construct an example of a regular separable first-countable scattered topological space which cannot be embedded into a Urysohn countably compact topological space but embeds into a totally countably compact Hausdorff space.
First we recall some results \cite{BBR} on embeddings into $\w$-bounded spaces.
We recall \cite[\S3.6]{Eng} that the Wallman compactification $W(X)$ of a topological space $X$ is the space of closed ultrafilters, i.e., families $\U$ of closed subsets of $X$ satisfying the following conditions:
\begin{itemize}
\item $\emptyset\notin\U$;
\item $A\cap B\in\U$ for any $A,B\in\U$;
\item a closed set $F\subset X$ belongs to $\mathcal U$ if $F\cap U\ne\emptyset$ for every $U\in\U$.
\end{itemize}
The Wallman compactification $W(X)$ of $X$ is endowed with the topology generated by the base consisting of the sets
$$\langle U\rangle=\{\F\in W(X):\exists F\in\F,\;F\subset U\}$$ where $U$ runs over open subsets of $X$.
By (the proof of) Theorem~\cite[3.6.21]{Eng}, for any topological space $X$ its Wallman compactification $W(X)$ is compact.
Let $j_X:X\to W(X)$ be the map assigning to each point $x\in X$ the principal ultrafilter consisting of all closed sets $F\subset X$ containing the point $x$. It is easy to see that the image $j_X(X)$ is dense in $W(X)$. By \cite[3.6.21]{Eng}, for a $T_1$-space $X$ the map $j_X:X\to W(X)$ is a topological embedding.
In the Wallman compactification $W(X)$, consider the subspace $$W_\w X={\textstyle\bigcup}\{\overline{j_X(C)}:C\subset X,\;|C|\le\w\},$$ which is the union of closures of countable subsets of $j_X(X)$ in $W(X)$. The space $W_\w X$ will be called the {\em Wallman $\w$-compactification} of $X$.
Following \cite{BBR}, we define a topological space $X$ to be {\em $\overline\w$-normal} if for any closed separable subspace $C\subset X$ and any disjoint closed sets $A,B\subset C$ there are disjoint open sets $U,V\subset X$ such that $A\subset U$ and $B\subset V$.
The properties of the Wallman $\w$-compactification are described in the following theorem whose proof can be found in \cite{BBR}.
\begin{theorem}\label{t:w} For any \textup{(}$\overline\w$-normal\/\textup{)} topological space $X$, its Wallman $\w$-compactification $W_\omega X$ is $\w$-bounded \textup{(}and Hausdorff\textup{)}.
\end{theorem}
A topological space $X$ is called
\begin{itemize}
\item {\em first-countable} at a point $x\in X$ if it has a countable neighborhood base at $x$;
\item {\em Fr\'{e}chet-Urysohn at a point $x\in X$} if for each subset $A$ of $X$ with $x\in\bar A$ there exists a sequence $\{a_n\}_{n\in\omega}\subset A$ that converges to $x$;
\item {\em regular at a point $x\in X$} if any neighborhood of $x$ contains a closed neighborhood of $x$;
\item {\em completely regular at a point $x\in X$} if for any neighborhood $U\subset X$ of $x$ there exists a continuous function $f:X\to[0,1]$ such that $f(x)=1$ and $f(X\setminus U)\subset\{0\}$.
\end{itemize}
If for each point $x$ of a topological space $X$ there exists a countable family $\mathcal{O}$ of open neighborhoods of $x$ such that $\bigcap \mathcal{O}=\{x\}$, then we shall say that the space $X$ has {\em countable pseudocharacter}.
\begin{theorem}\label{t:N1} Let $X$ be a subspace of a countably compact Hausdorff space $Y$. If $X$ is first-countable at a point $x\in X$, then $X$ is regular at the point $x$.
\end{theorem}
\begin{proof} Fix a countable neighborhood base $\{U_n\}_{n\in\IN}$ at $x$ and assume that $X$ is not regular at $x$. Then there exists an open neighborhood $U_0$ of $x$ such that $\overline{V}\not\subset U_0$ for any neighborhood $V$ of $x$. Replacing each basic neighborhood $U_n$ by $\bigcap_{k\le n}U_k$, we can assume that $U_n\subset U_{n-1}$ for every $n\in\IN$. The choice of the neighborhood $U_0$ ensures that for every $n\in\IN$ the set $\overline{U}_n\setminus U_0$ contains some point $x_n$. Since the space $Y$ is countably compact and Hausdorff, the sequence $(x_n)_{n\in\w}$ has an accumulation point $y\in Y\setminus U_0$. By the Hausdorff property of $Y$, there exists a neighborhood $V\subset Y$ of $x$ such that $y\notin \overline{V}$. Find $n\in\w$ such that $U_n\subset V$ and observe that $O_y:=Y\setminus\overline{V}$ is a neighborhood of $y$ such that $O_y\cap\{x_i:i\in\w\}\subset \{x_i\}_{i<n}$, which contradicts the fact that $y$ is an accumulation point of the sequence $(x_i)_{i\in\w}$.
\end{proof}
\begin{remark} Example~6.1 from~\cite{BBR} shows that in Theorem~\ref{t:N1} the regularity of $X$ at the point $x$ cannot be improved to the complete regularity at $x$.
\end{remark}
\begin{corollary}\label{c1}
Let $X$ be a subspace of a countably compact Hausdorff space $Y$. If $X$ is first-countable, then $X$ is regular.
\end{corollary}
The following example shows that Theorem~\ref{t:N1} cannot be generalized to Fr\'{e}chet-Urysohn spaces with countable pseudocharacter.
\begin{example}\label{e3} There exists a Hausdorff space $X$ such that
\begin{enumerate}
\item $X$ is locally countable and hence has countable pseudocharacter;
\item $X$ is separable and Fr\'echet-Urysohn;
\item $X$ is not regular;
\item $X$ is a subspace of a totally countably compact Hausdorff space.
\end{enumerate}
\end{example}
\begin{proof} Choose any point $\infty\notin \w\times\w$ and consider the space $Y=\{\infty\}\cup(\w\times\w)$ endowed with the topology consisting of the sets $U\subset Y$ such that if $\infty \in U$, then for every $n\in\w$ the complement $(\{n\}\times \w)\setminus U$ is finite. The definition of this topology ensures that $Y$ is Fr\'echet-Urysohn at the unique non-isolated point $\infty$ of $Y$.
Let $\F$ be the family of closed infinite subsets of $Y$ that do not contain the point $\infty$. The definition of the topology on $Y$ implies that for every $F\in\F$ and $n\in\w$ the intersection $(\{n\}\times\w)\cap F$ is finite. By the Kuratowski-Zorn Lemma, the family $\F$ contains a maximal almost disjoint subfamily $\A\subset\F$. The maximality of $\A$ guarantees that each set $F\in\F$ has infinite intersection with some set $A\in\A$.
Consider the space $X=Y\cup\A$ endowed with the topology consisting of the sets $U\subset X$ such that $U\cap Y$ is open in $Y$ and for any $A\in\A\cap U$ the set $A\setminus U\subset\w\times\w$ is finite.
We claim that the space $X$ has the properties (1)--(4). The definition of the topology of $X$ implies that $X$ is separable, Hausdorff and locally countable, which implies that $X$ has countable pseudocharacter. Moreover, $X$ is first-countable at all points except for $\infty$. At the point $\infty$ the space $X$ is Fr\'echet-Urysohn (because its open subspace $Y$ is Fr\'echet-Urysohn at $\infty$).
The maximality of the maximal almost disjoint family $\A$ guarantees that each neighborhood $U\subset Y\subset X$ of $\infty$ has an infinite intersection with some set $A\in\A$, which implies that $A\in\overline{U}$ and hence $\overline{U}\not\subset Y$. This means that $X$ is not regular (at $\infty$).
In the Wallman compactification $W(X)$ of the space $X$ consider the subspace $Z:=X\cup W_\omega\A=Y\cup W_\omega\A$.
We claim that the space $Z$ is Hausdorff and totally countably compact. To prove that $Z$ is Hausdorff, take two distinct ultrafilters $a,b\in Z$. If the ultrafilters $a,b$ are principal, then by the Hausdorff property of $X$, they have disjoint neighborhoods in $W(X)$ and hence in $Z$. Now assume that one of the ultrafilters $a$ or $b$ is principal and the other is not. We lose no generality assuming that $a$ is principal and $b$ is not. If $a\ne\infty$, then we can use the regularity of the space $X$ at $a$ and prove that $a$ and $b$ have disjoint neighborhoods in $W(X)\supset Z$. So, assume that $a=\infty$. It follows from $b\in Z=X\cup W_{\w} \A$ that the ultrafilter $b$ contains some countable set $\{A_n\}_{n\in\w}\subset\A$. Consider the set $$V=\bigcup_{n\in\w}\big(\{A_n\}\cup A_n\setminus\bigcup_{k\le n}\{k\}\times\w\big)$$and observe that $V$ has finite intersection with every set $\{k\}\times\w$, which implies that $Y\setminus V$ is a neighborhood of $\infty$. Then $\langle Y\setminus V\rangle$ and $\langle V\rangle$ are disjoint open neighborhoods of $a=\infty$ and $b$ in $W(X)$.
Finally, assume that both ultrafilters $a,b$ are not principal. Since $a,b\in W_{\w} \A$ are distinct, there are disjoint countable sets $\{A_n\}_{n\in\w},\{B_n\}_{n\in\w}\subset\A$ such that $\{A_n\}_{n\in\w}\in a$ and $\{B_n\}_{n\in\w}\in b$.
Observe that the sets $$V=\bigcup_{n\in\w}(\{A_n\}\cup A_n\setminus\bigcup_{k\le n}B_k)\mbox{ \ and \ }W=\bigcup_{n\in\w}(\{B_n\}\cup B_n\setminus\bigcup_{k\le n}A_k)$$ are disjoint and open in $X$. Then $\langle V\rangle$ and $\langle W\rangle$ are disjoint open neighborhoods of the ultrafilters $a,b$ in $W(X)$, respectively.
To see that $Z$ is totally countably compact, take any infinite set $I\subset Z$. We should find an infinite set $J\subset I$ with compact closure $\bar J$ in $Z$. We lose no generality assuming that $I$ is countable and $\infty\notin I$. If $J=I\cap W_\omega\A$ is infinite, then $\bar J$ is compact by the $\w$-boundedness of $W_\omega\A$, see Theorem~\ref{t:w}. If $I\cap W_\omega\A$ is finite, then $I\cap Z\setminus W_\omega\A=I\cap Y=I\cap(\w\times\w)$ is infinite. If for some $n\in\w$ the set $J_n=I\cap(\{n\}\times\w)$ is infinite, then $\bar J_n=J_n\cup\{\infty\}$ is compact by the definition of the topology of the space $Y$. If for every $n\in\w$ the set $I\cap(\{n\}\times\w)$ is finite, then $I\cap(\w\times\w)\in\F$ and by the maximality of the family $\A$, for some set $A\in\A$ the intersection $J=A\cap I$ is infinite, and then $\bar J=J\cup\{A\}$ is compact.
\end{proof}
A topological space $X$ is called {\em weakly $\infty$-regular} if for any infinite closed subset $F\subset X$ and point $x\in X\setminus F$ there exist disjoint open sets $V,U\subset X$ such that $x\in V$ and $U\cap F$ is infinite.
\begin{proposition} Each subspace $X$ of a countably compact Hausdorff space $Y$ is weakly $\infty$-regular.
\end{proposition}
\begin{proof} Given an infinite closed subset $F\subset X$ and a point $x\in X\setminus F$, consider the closure $\bar F$ of $F$ in $Y$ and observe that $x\notin\bar F$. By the countable compactness of $Y$, the infinite set $F$ has an accumulation point $y\in \bar F$. By the Hausdorff property of $Y$, there are two disjoint open sets $V,U\subset Y$ such that $x\in V$ and $y\in U$. Since $y$ is an accumulation point of the set $F$, the intersection $F\cap U$ is infinite. Then $V\cap X$ and $U\cap X$ are two disjoint open sets in $X$ such that $x\in V\cap X$ and $F\cap U\cap X$ is infinite, witnessing that the space $X$ is weakly $\infty$-regular.
\end{proof}
A subset $D$ of a topological space $X$ is called
\begin{itemize}
\item {\em discrete} if each point $x\in D$ has a neighborhood $O_x\subset X$ such that $D\cap O_x=\{x\}$;
\item {\em strictly discrete} if each point $x\in D$ has a neighborhood $O_x\subset X$ such that the family $(O_x)_{x\in D}$ is disjoint in the sense that $O_x\cap O_y=\emptyset$ for any distinct points $x,y\in D$;
\item {\em strongly discrete} if each point $x\in D$ has a neighborhood $O_x\subset X$ such that the family $(O_x)_{x\in D}$ is disjoint and locally finite in $X$.
\end{itemize}
It is clear that for every subset $D\subset X$ we have the implications
$$\mbox{strongly discrete $\Ra$ strictly discrete $\Ra$ discrete}.$$
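For example, in the real line the set $D=\{1/n:n\in\IN\}$ is strictly discrete, as witnessed by a family of pairwise disjoint open intervals centered at its points, but it is not strongly discrete: every neighborhood of $0$ meets infinitely many members of any such disjoint family $(O_x)_{x\in D}$, so no such family can be locally finite.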
\begin{theorem}\label{t:N2} Let $X$ be a subspace of a countably compact Hausdorff space $Y$. Then each infinite subset $I\subset X$ contains an infinite subset $D\subset I$ which is strictly discrete in $X$.
\end{theorem}
\begin{proof} By the countable compactness of $Y$, the set $I$ has an accumulation point $y\in Y$. Choose any point $x_0\in I\setminus\{y\}$ and, using the Hausdorff property of $Y$, find disjoint open neighborhoods $V_0$ and $U_0$ of the points $x_0$ and $y$, respectively. Choose any point $x_1\in U_0\cap I\setminus\{y\}$ and, using the Hausdorff property of $Y$, choose open disjoint neighborhoods $V_1\subset U_0$ and $U_1\subset U_0$ of the points $x_1$ and $y$, respectively. Proceeding by induction, we can construct a sequence $(x_n)_{n\in\w}$ of points of $X$ and sequences $(V_n)_{n\in\w}$ and $(U_n)_{n\in\w}$ of open sets in $Y$ such that for every $n\in\IN$ the following conditions are satisfied:
\begin{itemize}
\item[1)] $x_n\in V_n\subset U_{n-1}$;
\item[2)] $y\in U_n\subset U_{n-1}$;
\item[3)] $V_n\cap U_n=\emptyset$.
\end{itemize}
The inductive conditions imply that the sets $V_n$, $n\in\w$, are pairwise disjoint, witnessing that the set $D=\{x_n\}_{n\in\w}\subset I$ is strictly discrete in $X$.
\end{proof}
For closed discrete subsets of Lindel\"of subspaces, the strict discreteness of the set $D$ in Theorem~\ref{t:N2} can be improved to strong discreteness. Let us recall that a topological space $X$ is {\em Lindel\"of} if each open cover of $X$ contains a countable subcover.
\begin{theorem}\label{t:N3} Let $X$ be a Lindel\"of subspace of a countably compact Hausdorff space $Y$. Then each infinite closed discrete subset $I\subset X$ contains an infinite subset $D\subset I$ which is strongly discrete in $X$.
\end{theorem}
\begin{proof} By the countable compactness of $Y$, the set $I$ has an accumulation point $y\in Y$. Since $I$ is closed and discrete in $X$, the point $y$ does not belong to the space $X$. By the Hausdorff property of $Y$, for every $x\in X$ there are disjoint open sets $V_x,W_x\subset Y$ such that $x\in V_x$ and $y\in W_x$. Since the space $X$ is Lindel\"of, the open cover $\{V_x:x\in X\}$ has a countable subcover $\{V_{x_n}\}_{n\in\w}$. For every $n\in\w$ consider the open neighborhood $W_n=\bigcap_{k\le n}W_{x_k}$ of $y$.
Choose any point $y_0\in I\setminus\{y\}$ and, using the Hausdorff property of $Y$, find disjoint open neighborhoods $V_0$ and $U_0\subset W_0$ of the points $y_0$ and $y$, respectively. Choose any point $y_1\in U_0\cap W_1\cap I\setminus\{y\}$ and, using the Hausdorff property of $Y$, choose open disjoint neighborhoods $V_1\subset U_0$ and $U_1\subset U_0\cap W_1$ of the points $y_1$ and $y$, respectively. Proceeding by induction, we can construct a sequence $(y_n)_{n\in\w}$ of points of $X$ and sequences $(V_n)_{n\in\w}$ and $(U_n)_{n\in\w}$ of open sets in $Y$ such that for every $n\in\IN$ the following conditions are satisfied:
\begin{itemize}
\item[1)] $y_n\in V_n\subset U_{n-1}\cap W_n$;
\item[2)] $y\in U_n\subset U_{n-1}\cap W_n$;
\item[3)] $V_n\cap U_n=\emptyset$.
\end{itemize}
The inductive conditions imply that the sets $V_n$, $n\in\w$, are pairwise disjoint, witnessing that the set $D=\{y_n\}_{n\in\w}\subset I$ is strictly discrete in $X$.
To show that $D$ is strongly discrete, it remains to show that the family $(V_n)_{n\in\w}$ is locally finite in $X$. Given any point $x\in X$, find $n\in\w$ such that $x\in V_{x_n}$ and observe that for every $i>n$ we have $V_i\cap V_{x_n}\subset W_i\cap V_{x_n}\subset W_{n}\cap V_{x_n}=\emptyset$.
\end{proof}
A topological space $X$ is called {\em $\ddot\w$-regular} if for any closed discrete subset $F\subset X$ and point $x\in X\setminus F$ there exist disjoint open sets $U_F$ and $U_x$ in $X$ such that $F\subset U_F$ and $x\in U_x$.
\begin{proposition}\label{p:sd} Each countable closed discrete subset $D$ of a (Lindel\"of) $\ddot\w$-regular $T_1$-space $X$ is strictly discrete (and strongly discrete) in $X$.
\end{proposition}
\begin{proof} The space $X$ is Hausdorff, being an $\ddot\w$-regular $T_1$-space. If the subset $D\subset X$ is finite, then $D$ is strongly discrete by the Hausdorff property of $X$. So, assume that $D$ is infinite and hence $D=\{z_n\}_{n\in\w}$ for some pairwise distinct points $z_n$. By the $\ddot\w$-regularity there are two disjoint open sets $V_0,W_0\subset X$ such that $z_0\in V_0$ and $\{z_n\}_{n\ge 1}\subset W_0$.
Proceeding by induction, we can construct sequences of open sets $(V_n)_{n\in\w}$ and $(W_n)_{n\in\w}$ in $X$ such that for every $n\in\w$ the following conditions are satisfied:
\begin{itemize}
\item $z_n\in V_n\subset W_{n-1}$;
\item $\{z_k\}_{k>n}\subset W_n\subset W_{n-1}$;
\item $V_n\cap W_n=\emptyset$.
\end{itemize}
These conditions imply that the family $(V_n)_{n\in\w}$ is disjoint, witnessing that the set $D$ is strictly discrete in $X$.
Now assume that the space $X$ is Lindel\"of and let $V=\bigcup_{n\in\w}V_n$. By the $\ddot\w$-regularity of $X$, each point $x\in X\setminus V$ has a neighborhood $O_x\subset X$ whose closure $\bar O_x$ does not intersect the closed discrete subset $D$ of $X$. Since $X$ is Lindel\"of, there exists a countable set $\{x_n\}_{n\in\w}\subset X\setminus V$ such that $X=V\cup \bigcup_{n\in\w}O_{x_n}$. For every $n\in \w$ consider the open neighborhood $U_n:=V_n\setminus\bigcup_{k\le n}\bar O_{x_k}$ of $z_n$ and observe that the family $(U_n)_{n\in\w}$ is disjoint and locally finite in $X$, witnessing that the set $D$ is strongly discrete in $X$.
\end{proof}
The following proposition shows that the property described in Theorem~\ref{t:N2} holds for $\ddot\w$-regular spaces.
\begin{proposition} Every infinite subset $I$ of an $\ddot\w$-regular $T_1$-space $X$ contains an infinite subset $D\subset I$, which is strictly discrete in $X$.
\end{proposition}
\begin{proof} If $I$ has an accumulation point in $X$, then a strictly discrete infinite subset can be constructed repeating the argument of the proof of Theorem~\ref{t:N2}. So, we assume that $I$ has no accumulation point in $X$ and hence $I$ is closed and discrete in $X$. Replacing $I$ by a countable infinite subset of $I$, we can assume that $I$ is countable. By Proposition~\ref{p:sd}, the set $I$ is strictly discrete in $X$.
\end{proof}
A topological space $X$ is called {\em superconnected} \cite{BMT} if for any non-empty open sets $U_1,\dots, U_n$ the intersection $\overline{U}_1\cap\dots\cap\overline{U}_n$ is not empty. It is clear that a superconnected space containing more than one point is not regular. An example of a superconnected second-countable Hausdorff space can be found in \cite{BMT}.
\begin{proposition} Any first-countable superconnected Hausdorff space $X$ with $|X|>1$ contains an infinite set $I\subset X$ such that each infinite subset $D\subset I$ is not strictly discrete in $X$.
\end{proposition}
\begin{proof} For every point $x\in X$ fix a countable neighborhood base $\{U_{x,n}\}_{n\in\w}$ at $x$ such that $U_{x,n+1}\subset U_{x,n}$ for every $n\in\w$.
Choose any two distinct points $x_0,x_1\in X$ and for every $n\ge 2$ choose a point $x_n\in\bigcap_{k<n}\overline{U}_{x_k,n}$. We claim that the set $I=\{x_n\}_{n\in\w}$ is infinite. In the opposite case, we use the Hausdorff property and find a neighborhood $V$ of $x_0$ such that $\overline{V}\cap I=\{x_0\}$. Find $m\in\w$ such that $U_{x_0,m}\subset V$ and $x_0\notin \overline{U}_{x_1,m}$. Observe that $$x_m\in I\cap \overline{U}_{x_0,m}\cap\overline{U}_{x_1,m}=\{x_0\}\cap \overline{U}_{x_1,m}=\emptyset,$$ which is
a desired contradiction showing that the set $I$ is infinite.
Next, we show that any infinite subset $D\subset I$ is not strictly discrete in $X$. To derive a contradiction, assume that $D$ is strictly discrete. Then each point $x\in D$ has a neighborhood $O_x\subset X$ such that the family $(O_x)_{x\in D}$ is disjoint. Choose any point $x_k\in D$ and find $m\in\w$ such that $U_{x_k,m}\subset O_{x_k}$. Replacing $m$ by a larger number, we can assume that $m>k$ and $x_m\in D$. Since $x_m\in\overline{U}_{x_k,m}\subset \overline O_{x_k}$, the intersection $O_{x_m}\cap O_{x_k}$ is not empty, which contradicts the choice of the neighborhoods $O_x$, $x\in D$.
\end{proof}
Next, we establish one property of subspaces of functionally Hausdorff countably compact spaces. We recall that a topological space $X$ is {\em functionally Hausdorff} if for any distinct points $x,y\in X$ there exists a continuous function $f:X\to [0,1]$ such that $f(x)=0$ and $f(y)=1$.
A subset $U$ of a topological space $X$ is called {\em functionally open} if $U=f^{-1}(V)$ for some continuous function $f:X\to\IR$ and some open set $V\subset\IR$.
A subset $K\subset X$ of a topological space is called {\em functionally compact} if each open cover of $K$ by functionally open subsets of $X$ has a finite subcover.
\begin{proposition} If $X$ is a subspace of a functionally Hausdorff countably compact space $Y$, then no infinite closed discrete subspace $D\subset X$ is contained in a functionally compact subset of $X$.
\end{proposition}
\begin{proof} To derive a contradiction, assume that $D$ is contained in a functionally compact subset $K$ of $X$. By the countable compactness of $Y$, the set $D$ has an accumulation point $y\in Y$. Since $D$ is closed and discrete in $X$, the point $y$ does not belong to $X$ and hence $y\notin K$. Since $Y$ is functionally Hausdorff, for every $x\in K$ there exists a continuous function $f_x:Y\to[0,1]$ such that $f_x(x)=0$ and $f_x(y)=1$. By the functional compactness of $K$, the cover $\{f_x^{-1}([0,\frac12)):x\in K\}$ contains a finite subcover $\{f_x^{-1}([0,\frac12)):x\in E\}$ where $E$ is a finite subset of $K$. Then $D\subset K\subset f^{-1}([0,\frac12))$ for the continuous function $f=\min_{x\in E}f_x:Y\to [0,1]$, and $f^{-1}((\frac12,1])$ is a neighborhood of $y$, which is disjoint with the set $D$. But this is not possible as $y$ is an accumulation point of $D$.
\end{proof}
Finally, we construct an example of a regular separable first-countable scattered space that embeds into a Hausdorff countably compact space but does not embed into Urysohn countably compact spaces. We recall that a topological space $X$ is {\em Urysohn} if any distinct points of $X$ have disjoint closed neighborhoods in $X$.
\begin{example}\label{e:d} There exists a topological space $X$ such that
\begin{enumerate}
\item $X$ is regular, separable, and first-countable;
\item $X$ can be embedded into a Hausdorff totally countably compact space;
\item $X$ cannot be embedded into an Urysohn countably compact space.
\end{enumerate}
\end{example}
\begin{proof} In the construction of the space $X$ we shall use almost disjoint dominating subsets of $\w^\w$. Let us recall \cite{vD} that a subset $D\subset\w^\w$ is called {\em dominating} if for any $x\in\w^\w$ there exists $y\in D$ such that $x\le^* y$, which means that $x(n)\le y(n)$ for all but finitely many numbers $n\in\w$.
By $\mathfrak d$ we denote the smallest cardinality of a dominating subset $D\subset\w^\w$. It is clear that $\w_1\le\mathfrak d\le\mathfrak c$.
We say that a family of functions $D\subset\w^\w$ is {\em almost disjoint} if for any distinct $x,y\in D$ the intersection $x\cap y$ is finite. Here we identify a function $x\in \w^\w$ with its graph $\{(n,x(n)):n\in\w\}$ and hence identify the set of functions $\w^\w$ with a subset of the family $[\w\times\w]^\w$ of all infinite subsets of $\w\times\w$.
\begin{claim}\label{cl1} There exists an almost disjoint dominating subset $D\subset\w^\w$ of cardinality $|D|=\mathfrak d$.
\end{claim}
\begin{proof} By the definition of $\mathfrak d$, there exists a dominating family $\{x_\alpha\}_{\alpha\in\mathfrak d}\subset \w^\w$. It is well-known that $[\w]^\w$ contains an almost disjoint family $\{A_\alpha\}_{\alpha\in\mathfrak c}$ of cardinality continuum. For every $\alpha<\mathfrak d$ choose a strictly increasing function $y_\alpha:\w\to A_\alpha$ such that $x_\alpha\le y_\alpha$. Then the set $D=\{y_\alpha\}_{\alpha\in \mathfrak d}$ is dominating and almost disjoint.
\end{proof}
By Claim~\ref{cl1}, there exists an almost disjoint dominating subset $D\subset\w^\w\subset[\w\times\w]^\w$. For every $n\in\w$ consider the vertical line $\lambda_n=\{n\}\times\w$ and observe that the family $L=\{\lambda_n\}_{n\in\w}$ is disjoint and the family $D\cup L\subset[\w\times\w]^\w$ is almost disjoint.
Consider the space $Y=(D\cup L)\cup(\w\times\w)$ endowed with the topology consisting of the sets $U\subset Y$ such that for every $y\in (D\cup L)\cap U$ the set $y\setminus U\subset\w\times\w$ is finite. Observe that all points in the set $\w\times\w$ are isolated in $Y$. Using the almost disjointness of the family $D\cup L$, it can be shown that the space $Y$ is regular, separable, locally countable, scattered and locally compact.
Choose any point $\infty\notin \w\times Y$ and consider the space $Z=\{\infty\}\cup(\w\times Y)$ endowed with the topology consisting of the sets $W\subset Z$ such that
\begin{itemize}
\item for every $n\in\w$ the set $\{y\in Y:(n,y)\in W\}$ is open in $Y$, and
\item if $\infty\in W$, then there exists $n\in\w$ such that $\bigcup_{m\ge n}\{m\}\times Y\subset W$.
\end{itemize}
It is easy to see that $Z=\{\infty\}\cup(\w\times Y)$ is first-countable, separable, scattered and regular.
Let $\sim$ be the smallest equivalence relation on $Z$ such that $$\mbox{$(2n,\lambda)\sim(2n+1,\lambda)$ and $(2n+1,d)\sim (2n+2,d)$}
$$for any $n\in\w$, $\lambda\in L$ and $d\in D$.
Let $X$ be the quotient space $Z/_\sim$ of $Z$ by the equivalence relation $\sim$. It is easy to see that the equivalence relation $\sim$ has at most two-element equivalence classes and the quotient map $q:Z\to X$ is closed and hence perfect. Applying \cite[3.7.20]{Eng}, we conclude that the space $X$ is regular. It is easy to see that $X$ is separable, scattered and first-countable.
It remains to show that $X$ has the properties (2), (3) of Example~\ref{e:d}.
This is proved in the following two claims.
\begin{claim} The space $X$ does not admit an embedding into an Urysohn countably compact space.
\end{claim}
\begin{proof} To derive a contradiction, assume that $X=q(Z)$ is a subspace of an Urysohn countably compact space $C$. By the countable compactness of $C$, the set $q(\{0\}\times L)\subset X\subset C$ has an accumulation point $c_0\in C$. The point $c_0$ is distinct from $q(\infty)$, as $q(\infty)$ is not an accumulation point of the set $q(\{0\}\times L)$ in $X$. Let $l\in\w$ be the largest number such that $c_0$ is an accumulation point of the set $q(\{l\}\times L)$ in $C$.
Let us show that the number $l$ is well-defined. Indeed, by the Hausdorffness of the space $C$, there exists a neighborhood $W\subset C$ of $q(\infty)$ such that $c_0\notin\overline{W}$. By the definition of the topology of the space $Z$, there exists $m\in\w$ such that $\bigcup_{k\ge m}\{k\}\times Y\subset q^{-1}(W)$. Then $c_0$ is not an accumulation point of the set $\bigcup_{k\ge m}q(\{k\}\times L)$ and hence the number $l$ is well-defined and $l<m$.
The definition of the equivalence relation $\sim$ implies that the number $l$ is odd. By the countable compactness of $C$, the infinite set $q(\{l+1\}\times L)$ has an accumulation point $c_1\in C$. The maximality of $l$ ensures that $c_1\ne c_0$. By the Urysohn property of $C$, the points $c_0,c_1$ have open neighborhoods $U_0,U_1\subset C$ with disjoint closures in $C$.
For every $i\in\{0,1\}$ consider the set $J_i=\{n\in\w:q(l+i,\lambda_n)\in U_i\}$, which is infinite, because $c_i$ is an accumulation point of the set $q(\{l+i\}\times L)=\{q(l+i,\lambda_n):n\in\w\}$. For every $n\in J_i$ the open set $q^{-1}(U_i)\subset Z$ contains the pair $(l+i,\lambda_n)$. By the definition of the topology at $(l+i,\lambda_n)$, the set $(\{l+i\}\times \lambda_n)\setminus q^{-1}(U_i)\subset \{l+i\}\times\{n\}\times\w$ is finite and hence is contained in the set $\{l+i\}\times\{n\}\times[0,f_i(n)]$ for some number $f_i(n)\in\w$. Using the dominating property of the family $D$, choose a function $f\in D$ such that $f(n)\ge f_i(n)$ for any $i\in\{0,1\}$ and $n\in J_i$. It follows that for every $i\in\{0,1\}$ the set $\{l+i\}\times f\subset\{l+i\}\times(\w\times\w)$ has infinite intersection with the preimage $q^{-1}(U_i)$ and hence $(l+i,f)\in\overline{q^{-1}(U_i)}\subset q^{-1}(\overline{U}_i)$. Taking into account that the number $l$ is odd,
we conclude that $$q(l,f)=q(l+1,f)\in\overline{U}_0\cap\overline{U}_1=\emptyset,$$ which is a desired contradiction completing the proof of the claim.
\end{proof}
\begin{claim} The space $X$ admits an embedding into a Hausdorff totally countably compact space.
\end{claim}
\begin{proof} Using the Kuratowski-Zorn Lemma, enlarge the almost disjoint family $D\cup L$ to a maximal almost disjoint family $M\subset[\w\times\w]^\w$.
Consider the space $Y_M=M\cup(\w\times\w)$ endowed with the topology consisting of the sets $U\subset Y_M$ such that for every $y\in M\cap U$ the set $y\setminus U\subset\w\times\w$ is finite. It follows that $Y_M$ is a regular locally compact first-countable space, containing $Y$ as an open dense subspace.
The maximality of $M$ implies that each sequence in $\w\times\w$ contains a subsequence that converges to some point of the space $Y_M$. This property implies that the subspace $\tilde Y:=(W_\omega M)\cup(\w\times\w)$ of the Wallman compactification $W(Y_M)$ is totally countably compact. Repeating the argument from Example~\ref{e3}, one can show that the space $\tilde Y$ is Hausdorff.
Let $\tilde Z=\{\infty\}\cup(\w\times\tilde Y)$ where $\infty\notin\w\times\tilde Y$. The space $\tilde Z$ is endowed with the topology consisting of the sets
$W\subset \tilde Z$ such that
\begin{itemize}
\item for every $n\in\w$ the set $\{y\in \tilde Y:(n,y)\in W\}$ is open in $\tilde Y$, and
\item if $\infty\in W$, then there exists $n\in\w$ such that $\bigcup_{m\ge n}\{m\}\times \tilde Y\subset W$.
\end{itemize}
Taking into account that the space $\tilde Y$ is Hausdorff and totally countably compact, we can prove that so is the space $\tilde Z$.
Let $\sim$ be the smallest equivalence relation on $\tilde Z$ such that $$\mbox{$(2n,\lambda)\sim(2n+1,\lambda)$ and $(2n+1,d)\sim (2n+2,d)$}
$$for any $n\in\w$, $\lambda\in W_\omega L$ and $d\in W_\omega D$.
Let $\tilde X$ be the quotient space $\tilde Z/_\sim$ of $\tilde Z$ by the equivalence relation $\sim$. It is easy to see that the space $\tilde X$ is Hausdorff, totally countably compact and contains the space $X$ as a dense subspace.
\end{proof}
\end{proof}
However, we do not know the answer to the following intriguing problem:
\begin{problem}
Is it true that each (scattered) regular topological space can be embedded into a Hausdorff countably compact topological space?
\end{problem}
\section{Introduction}
The study of random matrices in mathematics can be traced back to the work of Hurwitz on the invariant measure for the matrix groups $U(N)$ and $SO(N)$ \cite{Hu97,DF17}. In multivariate statistics another stream of random matrix theory was initiated
with the work of Wishart \cite{wishart1928generalised} on estimating the covariance matrices of multivariate statistics when the number of variables is large. In theoretical physics, Wigner \cite{wigner1955characteristic} used random matrices to model the energy spectrum of Hamiltonians of highly excited states of heavy nuclei. The works of physicists \cite{t1974planar} on the large $N$ limit of $U(N)$ gauge theory provided yet another application of random matrices (and of their generalized version, often referred to as \emph{matrix models}). Since then, random matrix theory and matrix models have been found useful in an overwhelming number of contemporary fields, for example communication engineering \cite{TV04}, the analysis of algorithms \cite{Tr15}, and deep learning \cite{PW17}.
Many tools have been developed to understand the properties of different models and ensembles. One of these tools is called loop equations, and has led to the now well-known Chekhov-Eynard-Orantin topological recursion formula \cite{eynard2004all, chekhov2006free, chekhov2006hermitian}. In the realm of random matrix theory this formula allows for the systematic computation of correlation functions of random matrices, as series in $1/N$. \\
However, some random matrix ensembles are still beyond the scope of these loop equations in the existing literature. These are product ensembles, that is, random matrices constructed out of a product of several random matrices. In this paper we describe the loop equations for such a product ensemble, specifically considering the case of a random matrix constructed out of the product of two complex Ginibre matrices. Such an ensemble was for instance considered in \cite{Bouchaud2007}, with applications to the study of financial data, while a closely related product ensemble, with applications to low energy QCD, was studied in \cite{Osborne-QCD-2-matrices} (see also the text book treatment
\cite[\S 15.11]{Fo10}), allowing for insight into the poorly understood regime of non-zero baryon chemical potential. \\
More generally, product ensembles are found to have many applications, a number of which are described in the thesis \cite{Jesper-thesis}. Among those, one finds applications to telecommunication problems, where product ensembles provide a model of communication channels in which the signal has to pass through different media \cite{muller-telecom-channel-2002}. One also finds applications to the study of spin chains with disorder \cite{prod-rand-mat-book}, quantum transport \cite{Beenakker-review-Qtransport}, quantum information and random graph states \cite{collins2010randoma, collins2013area}. Product ensembles also relate to the study of neural networks. Indeed, information about the asymptotic behavior of such ensembles allows one to draw results about the stability of gradients in deep neural networks with randomly initialized layers \cite{hanin2018products}. These product ensembles are also of interest for the study of the stability of large dynamical systems \cite{BEN-dyn-syst-RMT,IF18}. As a consequence, finding mathematical and technical tools for investigating the properties of these ensembles can enable progress in these fields of study. \\
Yet another problem of importance is that of Muttalib-Borodin ensembles. These ensembles were first defined as invariant ensembles, via their eigenvalue probability density function (PDF) \cite{muttalib1995random}, and later realized in terms of ensembles of random matrices with independent entries \cite{Ch14,FW15}. Their joint PDF is proportional to
\begin{equation}\label{A}
\prod_{l=1}^N e^{- V(\lambda_l)} \prod_{1 \le i < j \le N} (\lambda_i - \lambda_j)(\lambda_i^\theta - \lambda_j^\theta),
\end{equation}
where $\theta > 0$ is a parameter and $V(\lambda_l)$ can be interpreted as a confining potential. For general potential $V$ and $\theta=2$, this model relates to the $\mathcal{O}(\mathfrak{n})$ matrix model with $\mathfrak{n}=-2$, see \cite{Borot-Eynard-O(n)}, and it also relates to a particular model of disordered bosons \cite{LSZ06}. A key structural interest in the Muttalib-Borodin ensembles is that they are biorthogonal ensembles. That is, they admit a family of biorthogonal polynomials, and their correlation functions can be expressed in determinantal form, with a kernel that can be expressed in terms of the biorthogonal polynomials; see \cite{borodin1998biorthogonal}. Although it is not immediately obvious, the singular values of the product of $M$ complex Ginibre matrices also give rise to biorthogonal ensembles \cite{AIK13,KZ14}. Moreover, in the asymptotic regime of large separation, the PDF for the squared singular values reduces to (\ref{A}) with $\theta = 1/M$, and $V$ having the leading form $V(x) = - M x^{1/M}$ \cite{FLZ15}.\\
One attractive feature of both the Muttalib-Borodin ensemble and the squared singular values of products of complex Ginibre matrices is that in the global density limit the moments of the spectral density are given by the Fuss-Catalan family of combinatorial numbers; see
\cite{penson2011product,FW15}. Another is the special role played by particular special functions of the Meijer-G and Wright Bessel function class. Underlying these special functions is a linear differential equation of degree $M +1$. Less well understood is the nonlinear differential system implied by the correlation kernel based on these special functions. These are relevant to the study of gap probabilities; see \cite{WF17,MF18}. \\
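For reference, the Fuss-Catalan numbers in question admit the standard closed form
\begin{equation}
FC_M(k)=\frac{1}{Mk+1}\binom{(M+1)k}{k},
\end{equation}
which for $M=1$ reduces to the Catalan numbers $C_k=\frac{1}{k+1}\binom{2k}{k}$, the moments of the Mar\v{c}enko-Pastur law with unit ratio parameter. \\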
Other questions about products of random matrices have been investigated, for instance, in \cite{Dubach-Peled}. In this work, the authors are concerned with the behavior of traces of general words of Ginibre matrices. In particular, they show that the limiting squared singular value distribution is a Fuss-Catalan distribution for any word. In the work \cite{DLN}, the authors study the traces of general words in an alphabet of random matrices constructed out of the marginals of a random tensor. Using combinatorial techniques, it is possible to show freeness of some marginals, or to describe the free cumulants entirely when the different marginals are not free in the limit. One interesting aspect is that using these products of marginals it is possible to find distributions interpolating between the square of a Mar\v{c}enko-Pastur law and the free multiplicative square of a Mar\v{c}enko-Pastur law.\\
However, there are in general few technical tools to describe the lower-order in $N$ observables of product ensembles. Indeed, free probability provides us with some useful techniques (free additive and multiplicative convolution), but those are restricted to the large $N$ limit, and come in handy only for the study of the large $N$ density or the behavior of the large $N$ limit of the moments (with some extension to the fluctuations of the linear statistics \textit{via} \cite{collins2007second}).\\
In this paper we focus on describing the loop equations for the random matrix $S_2=X_1X_1^{\dagger}X_2^{\dagger}X_2$, where $X_1, X_2$ are square complex Ginibre matrices. In order to obtain these loop equations we start with Schwinger-Dyson identities and use them to obtain relations between moments, later translated into equations on the resolvents of $S_2$. These equations on the resolvents are the loop equations. One of the new features of the method presented here is that the starting-point Schwinger-Dyson identities involve higher-order derivatives. This allows us to obtain relations between moments of the matrix $S_2$ alone, without having to deal with mixed quantities.
Thanks to the combinatorial interpretation of the moments of the matrix $S_2$ (that we also shortly describe), we show that the (connected) resolvents possess a $1/N$ expansion, which is the unique additional ingredient we need to be able to solve the loop equations recursively. \\
Using this data we illustrate the use of the obtained loop equations by computing the large $N$ limit of the resolvent $W_{0,1}(x)$, thus recovering known results relating to the generating function of the moments. We also compute $W_{0,2}(x_1,x_2)$ (that is the Stieltjes transform of the $2$-point correlation function) and show that it takes the expected universal form once expressed in the correct variables, thus relating to the Bergmann kernel on the sphere. We give explicit results for $W_{1,1}(x), W_{2,1}(x)$ (first and second correction to the large $N$ limit of the resolvent), $W_{1,2}(x_1,x_2)$ (first correction to $W_{0,2}(x_1,x_2)$), as well as $W_{0,3}(x_1,x_2,x_3)$. One interesting aspect of the obtained loop equations is their structural properties, which seem to generalize in a very natural way the usual bilinear loop equations for random matrices or matrix models. In particular, the family of loop equations we obtain for this product of matrices is trilinear in the resolvents $W_{g,n}$. This is at the root of the appearance of the double ramification point of $W_{0,1}(x)$, and we expect that a topological recursion formula similar to the one obtained in \cite{Bouchard-Eynard} applies. Moreover, they contain generalizations of the derivative difference term usually appearing in the bilinear setting, as well as derivatives of first and second order. Motivated by these interesting structural properties, we use the explicit computations to explore the analytical properties of the $W_{g,n}$ (or rather their analytic continuation on the associated spectral curve). These explorations give further hints that there is a topological recursion formula to compute them systematically. We expect that a similar technique allows one to describe the loop equations for the product of $p\ge 2$ rectangular Ginibre matrices $S_p=X_1X_2\ldots X_p(X_1X_2\ldots X_p)^{\dagger}$; we leave this study, as well as that of a topological recursion formula, to future work. Note that, as a byproduct, we also expect that this technique applies to the interesting matrix models introduced in \cite{A-C2014,A-C2018} to generate hypergeometric Hurwitz numbers.
\paragraph{Organisation of the paper.}
The paper is organized as follows. In section \ref{sec:Wishart}, we use the Wishart case (that is the case of one Ginibre matrix) as a pedagogical example. It is used to sketch the combinatorial arguments allowing to show the existence of the $1/N$ expansion and to illustrate the Schwinger-Dyson equation technique in a simpler context. The reader already accustomed to Schwinger-Dyson equations obtained using the matrix elements variables and knowledgeable on the associated combinatorics may consider skipping this section.
In section \ref{sec:loop-eq-prod}, we describe the heart of this paper, that is the derivation of the Schwinger-Dyson equations and loop equations for a product matrix of the form $S_2=X_1X_1^{\dagger}X_2^{\dagger}X_2$. The loop equations take the form of a family of equations on the resolvents, that is the Stieltjes transforms (denoted $W_n(x_1,\ldots,x_n)$) of the $n$-point correlation functions. We present the results step by step to make the method transparent to the reader, and the first few special cases, namely the loop equations for $W_1(x)$, $W_2(x_1,x_2)$ and $W_3(x_1,x_2,x_3)$, are presented in detail. This section ends with the main result, that is the loop equations satisfied by any $W_{g,n}(x_1,\ldots,x_n)$ as shown in equation \eqref{eq:loop-eq-general-expanded}, where $W_{g,n}(x_1,\ldots,x_n)$ is the coefficient of order $g$ of the $1/N$ expansion of $W_n(x_1,\ldots,x_n)$.
In section \ref{sec:spectral-curve-geometry}, we take on a geometrical point of view in order to compute the $W_{g,n}$ more effectively from the loop equations. We describe in detail the \emph{spectral curve} geometry associated to the problem. After a change of variables, we compute $W_{0,2}(x_1,x_2)$, $W_{1,1}(x)$, $W_{2,1}(x)$, $W_{1,2}(x_1,x_2)$ and $W_{0,3}(x_1,x_2,x_3)$ (see equations \eqref{eq:W02}, \eqref{eq:W11}, \eqref{eq:W21}, \eqref{eq:W12}, \eqref{eq:W03}). We use these explicit computations to explore the analytic properties of the loop equations. These properties are expected to be of importance for establishing a topological recursion formula allowing one to systematically compute every $W_{g,n}$.
\section*{Acknowledgments}
Stephane Dartois would like to thank Valentin Bonzom, Alexandr Garbali, Jesper Ipsen and Paul Zinn-Justin for useful discussions and technical help related to this work as well as for references. This work was supported by the Australian Research Council grant DP170102028.
\section{One matrix case, Wishart ensemble}\label{sec:Wishart}
In this section, we illustrate the problem of interest in this paper in a simpler case, that of the (trivial) product of one matrix. This is the case of a Wishart matrix. We first recall the combinatorial representation of the moments of a Wishart matrix. We then show how one can compute the average resolvent of a Wishart matrix using the Schwinger-Dyson equation method. It is only in the next section that we consider the case of the product of two Ginibre matrices. Thus the technically knowledgeable reader can skip this section and start reading section \ref{sec:loop-eq-prod}.
\subsection{Random Wishart matrices}
In this paper we always consider square matrices. In the Wishart case, this corresponds to setting the asymptotic size ratio parameter $c$ to $1$.
Let $X\in \mathcal{M}_{N\times N}(\mathbb{C})$ be a Ginibre random matrix. More concretely, $X$ is a random matrix whose entries are i.i.d. complex Gaussian with zero mean, or more formally, the entries $X_{i,j}$ are distributed according to the density
\begin{equation}
\frac{N}{2i\pi}e^{-N\lvert X_{i,j}\rvert^2}\mathrm{d}\bar{X}_{i,j}\mathrm{d}X_{i,j}.
\end{equation}
In particular we denote,
\begin{equation}
\mathrm{d}X^{\dagger}\mathrm{d}X=\prod_{i,j}\mathrm{d}\bar{X}_{i,j}\mathrm{d}X_{i,j},
\end{equation}
so that $X$ has the distribution
\begin{equation}
\mathrm{d}\mu(X)=\frac{N^{N^2}}{(2i\pi)^{N^2}}e^{-N\mathrm{Tr}(XX^{\dagger})}\mathrm{d}X^{\dagger}\mathrm{d}X.
\end{equation}
A (complex) Wishart random matrix is the random variable defined as the product $S_1=XX^{\dagger}$.\\
\
\noindent{\bf Combinatorics of moments.} The moments $m_k$ of order $k$ of a Wishart random matrix are defined as
\begin{equation}
m_k=\mathbb{E}\left(\mathrm{Tr}(S_1^k)\right).
\end{equation}
Further, for any sequence of positive integers $k_1,\ldots,k_n$ we can define moments $m_{k_1,\ldots,k_n}$ of order $k_1,\ldots,k_n$. Similarly to the moments of order $k$ they are defined as the expectation of products of traces of powers of $S_1$
\begin{equation}
m_{k_1,\ldots,k_n}=\mathbb{E}\left(\prod_{i=1}^n\mathrm{Tr}(S_1^{k_i})\right).
\end{equation}
As is for instance explained in \cite{DLN}, the moments of order $k$ can be computed as a sum over labeled bicolored combinatorial maps $\mathcal{M}$ with one black vertex. This combinatorial representation of moments implies that the moments have a $1/N$ expansion. That is
\begin{equation}\label{eq:genus-exp-moments}
m_k=\sum_{g\ge 0}N^{1-2g}m_k^{[g]},
\end{equation}
where the $m_k^{[g]}$ are the coefficients of this expansion. This is a crucial point, as it allows one to solve the loop equations recursively. Note also that this expansion is finite; here $g<k/2$. Let us be a bit more explicit on this point.\\
We recall the definition of labeled bicolored combinatorial maps with possibly more than one black vertex.
\begin{definition}
A labeled bicolored combinatorial map is a triplet $\mathcal{M}=(E,\sigma_{\bullet},\sigma_{\circ})$ where,
\begin{itemize}
\item $E$ is the set of edges of $\mathcal{M}$,
\item $\sigma_{\bullet},\sigma_{\circ}$ are permutations on $E$,
\item $\mathcal{M}$ is said to be connected if and only if the group $\langle \sigma_{\bullet}, \sigma_{\circ}\rangle$ acts transitively on $E$.
\end{itemize}
\end{definition}
The cycles of $\sigma_{\circ}$ are called white vertices, the cycles of $\sigma_{\bullet}$ are called black vertices, and the cycles of $\sigma_{\bullet}\sigma_{\circ}$ are called faces. Combinatorial maps can be represented graphically \cite{DLN, countingsurfaces} as they encode embeddings of graphs on surfaces. We give a few examples in Fig. \ref{fig:map_examples_Wishart}.
\begin{figure}
\centering
\includegraphics[scale=0.8]{map_example_Wishart_bis.pdf}
\caption{Left: a map of genus $1$ contributing to the computation of $m_7$. Center: a connected map of genus $0$ contributing to the computation of $c_{4,5}$ and also to $m_{4,5}$. Right: a disconnected map with two genus $0$ components, contributing to the computation of $m_{2,2}$. }
\label{fig:map_examples_Wishart}
\end{figure}\\
We define the set of combinatorial maps $\mathbb M_p=\{\mathcal{M}=(E,\sigma_{\bullet},\sigma_{\circ})\mid E=\{1,\ldots,p\}, \sigma_{\bullet}=\gamma=(12\ldots p)\}$. One shows, using the Wick-Isserlis theorem \cite{WickThm,isserlis1918formula}, that the moments of order $k$ can be written as a sum over combinatorial maps $\mathcal{M}\in \mathbb M_k$ (see \cite{DLN} for details)
\begin{equation}
m_k=\sum_{\mathcal{M}\in \mathbb M_k}N^{V_{\circ}(\mathcal{M})-k+F(\mathcal{M})},
\end{equation}
where $V_{\circ}(\mathcal{M})$ is the number of white vertices of $\mathcal{M}$ and $F(\mathcal{M})$ is the number of faces of $\mathcal{M}$. Using the fact that $V_{\bullet}+V_{\circ}(\mathcal{M})-k+F(\mathcal{M})=2-2g(\mathcal{M})$, where $g(\mathcal{M})$ is the genus of the combinatorial map (that is the genus of the surface in which the corresponding graph embeds), one can show equation \eqref{eq:genus-exp-moments}.
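For the reader who wishes to experiment, the following minimal Python sketch (an illustration of ours, not part of the derivation; the function names are our own) evaluates this sum by brute force over $\sigma_{\circ}\in S_k$. For $k=3$ it returns $m_3=5N+1/N$, that is five planar maps and one map of genus $1$.
\begin{verbatim}
import itertools

def n_cycles(perm):
    # number of cycles of a permutation of {0,...,k-1},
    # encoded as a tuple with perm[i] = image of i
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

def wishart_moment(k, N):
    # m_k = sum over sigma_white of N^(V_white - k + F), with
    # sigma_black the k-cycle (0 1 ... k-1); the faces are the
    # cycles of sigma_black composed with sigma_white
    gamma = tuple((i + 1) % k for i in range(k))
    total = 0.0
    for sigma in itertools.permutations(range(k)):
        faces = n_cycles(tuple(gamma[sigma[i]] for i in range(k)))
        total += N ** (n_cycles(sigma) - k + faces)
    return total

print(wishart_moment(3, 10.0))  # 50.1 = 5*N + 1/N at N = 10
\end{verbatim}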
\vspace{2mm}
\begin{remark}
Note that elements of $\mathbb M_p$ are necessarily connected, as $\gamma$ acts transitively on $\{1,\ldots,p\}$.
\end{remark}
\vspace{2mm}
We now define the relevant set of maps for studying the moments of order $k_1,\ldots, k_n$. In this case we denote $p=\sum_{i=1}^n k_i$, $E=\{1,\ldots,p\}$ and $\gamma_{k_1,\ldots, k_n}=(12\ldots k_1)(k_1+1\ldots k_1+k_2)\ldots(p-k_n+1\ldots p)$
\begin{equation}
\mathbb M_{k_1,\ldots,k_n}=\{\mathcal{M}=(E,\sigma_{\bullet},\sigma_{\circ})\mid \sigma_{\bullet}=\gamma_{k_1,\ldots, k_n}\}.
\end{equation}
The maps in $\mathbb M_{k_1,\ldots,k_n}$ are possibly disconnected, as $\gamma_{k_1,\ldots, k_n}$ does not act transitively on the set of edges. Consequently we define the corresponding set of connected maps
\begin{equation}
\mathbb M_{k_1,\ldots,k_n}^c=\{\mathcal{M}=(E,\sigma_{\bullet},\sigma_{\circ})\mid \sigma_{\bullet}=\gamma_{k_1,\ldots, k_n}, \langle \sigma_{\bullet},\sigma_{\circ}\rangle \textrm{ acts transitively on } E\}.
\end{equation}
We state without proof\footnote{The proof is very similar to the one black vertex case, already appearing in \cite{DLN}.} that
\begin{equation}
m_{k_1,\ldots,k_n}=\sum_{\mathcal{M}\in \mathbb M_{k_1,\ldots,k_n}} N^{V_{\circ}(\mathcal{M})-p+F(\mathcal{M})},
\end{equation}
where $p=\sum_i k_i$. We can define the associated cumulants $c_{k_1,\ldots,k_n}$ of the moments, through their relation to moments
\begin{align}\label{eq:mom-cum-for-traces}
m_{k_1,\ldots,k_n}=\sum_{K\vdash \{k_1,\ldots,k_n\}}\prod_{\kappa_i\in K}c_{\kappa_i}.
\end{align}
This relation is just the moment-cumulant relation for the family of random variables $\bigl\{R_{k_i}:=\mathrm{Tr}(S_1^{k_i})\bigr\}$.
These cumulants can be expressed as sums over connected combinatorial maps
\begin{equation}\label{eq:connected-moments}
c_{k_1,\ldots,k_n}=\sum_{\mathcal{M}\in \mathbb M_{k_1,\ldots,k_n}^c} N^{V_{\circ}(\mathcal{M})-p+F(\mathcal{M})}.
\end{equation}
Thanks to the connectedness condition, this sum is a polynomial in $1/N$ as long as $n>1$. That is to say, we have
\begin{equation}
c_{k_1,\ldots,k_n}=\sum_{g\ge 0}N^{2-n-2g}c_{k_1,\ldots,k_n}^{[g]}.
\end{equation}
This last equation is shown starting from \eqref{eq:connected-moments}, again using $V_{\bullet}+V_{\circ}(\mathcal{M})-p+F(\mathcal{M})=2-2g(\mathcal{M})$, now with $V_{\bullet}=n$.\\
\noindent{\bf Large $N$ limit of moments of a Wishart matrix.} Using \eqref{eq:genus-exp-moments}, one can study the large $N$ limit of the moments of order $k$ of a Wishart matrix, that is one can compute the limit
\begin{equation}
\lim_{N\rightarrow \infty} \frac1N m_k=m^{[0]}_k.
\end{equation}
This limit is given by the number of planar, labeled, bicolored combinatorial maps with one black vertex and $k$ edges. The number of such maps is given by the Catalan number\footnote{Note that one obtains Catalan numbers when the ratio parameter is set to $c=1$; for general values of $c$ one obtains the Narayana statistics on trees, that is polynomials in $c$ whose coefficients are Narayana numbers \cite{DR-NarayanaWishart}.} $C_k$, so that $m^{[0]}_k=C_k=\frac1{k+1}\binom{2k}{k}$. This allows one to compute the large $N$ limit $W_{0,1}(x)$ of the moment generating function of the Wishart matrix
\begin{equation}\label{eq:W01-Wishart-expression}
W_{0,1}(x):=\lim_{N\rightarrow \infty}\frac1N \mathbb{E}\left( \mathrm{Tr}\left((x-S_1)^{-1} \right)\right)=\sum_{p\ge 0}\frac{m^{[0]}_p}{x^{p+1}} =\frac{x-\sqrt{x^2-4x}}{2x}.
\end{equation}
This last quantity is the Stieltjes transform of the limiting eigenvalue density of the Wishart matrix.
The knowledge of $W_{0,1}(x)$ allows in principle\footnote{In this specific case one can recover explicitly the limiting eigenvalue density via the inverse transformation. However in general it can be more tedious to compute the inverse transform. In the cases where the equation determining $W_{0,1}$ is an algebraic equation, one can deduce a system of polynomial equations on two quantities $u(x), v(x)$, one of them being (proportional to) the large $N$ limit of the eigenvalue density $\rho_{0,1}(x)$. We illustrate this fact in Remarks \ref{rem:poly-density} and \ref{rem:poly-density2} below.} to recover the limiting eigenvalue density via the inverse transformation. \\
\noindent{\bf Schwinger-Dyson equation method.} In this part we use an alternative method to compute $W_{0,1}(x)$. We use the Wishart case as a pedagogical example. The Schwinger-Dyson equation method relies on the use of the simple identity
\begin{equation}
\sum_{a,b=1}^N\int \frac{N^{N^2}}{(2i\pi)^{N^2}}\mathrm{d}X^{\dagger}\mathrm{d}X \partial_{X^{\dagger}_{ab}}\left((X^{\dagger}S_1^{k})_{ab}e^{-N\mathrm{Tr}(XX^{\dagger})}\right)=0,
\end{equation}
after computing the derivatives explicitly we obtain the following set of relations between moments
\begin{equation}\label{eq:SD-}
\sum_{\substack{p_1,p_2\ge 0 \\ p_1+p_2=k}}m_{p_1,p_2}-Nm_{k+1}=0.
\end{equation}
In order to continue this computation we define the $n$-point resolvents $\overline{W}_n(x_1,\ldots,x_n)$ and their connected counterparts $W_n(x_1,\ldots,x_n)$
\begin{align}
\overline{W}_n(x_1,\ldots,x_n)&:=\mathbb{E}\left(\prod_{i=1}^n\mathrm{Tr}\left((x_i-S_1)^{-1}\right)\right)=\sum_{p_1,\ldots,p_n\ge0}\frac{m_{p_1,\ldots,p_n}}{x_1^{p_1+1}\ldots x_n^{p_n+1}} \\
W_n(x_1,\ldots,x_n)&=\sum_{p_1,\ldots,p_n\ge0}\frac{c_{p_1,\ldots,p_n}}{x_1^{p_1+1}\ldots x_n^{p_n+1}}.
\end{align}
Note that we will often call both the $n$-point resolvents and their connected counterparts simply resolvents, when the context makes it clear which object we are discussing. $W_{0,1}(x)$ is (up to normalization) the large $N$ limit of $W_1(x)$. We have the relation
\begin{equation}\label{eq:diconnect-to-connect}
\overline{W}_n(x_1,\ldots,x_n)=\sum_{K\vdash\{1,\ldots,n\}}\prod_{K_i\in K}W_{\mid K_i\mid}(x_{K_i}),
\end{equation}
where we used the notation $x_{K_i}=\{x_j\}_{j\in K_i}$. The above relation is inherited from the moment-cumulant relation of equation \eqref{eq:mom-cum-for-traces}.
\vspace{2mm}
\begin{remark}
Note that $\overline{W}_1(x)=W_1(x)$.
\end{remark}
\vspace{2mm}
With these definitions in mind, one considers the equality
\begin{equation}
\sum_{k\ge 0}\frac1{x^{k+1}}\left(\sum_{\substack{p_1,p_2\ge 0 \\ p_1+p_2=k}}m_{p_1,p_2}-Nm_{k+1}\right)=0,
\end{equation}
leading after some rewriting to
\begin{equation}\label{eq:Wishart-1pt-res-disconnect}
\overline{W}_2(x,x)-NW_1(x)+N^2/x=0,
\end{equation}
or only in terms of the connected resolvents
\begin{equation}\label{eq:Wishart-1pt-res}
W_1(x)^2+W_2(x,x)-NW_1(x)+N^2/x=0.
\end{equation}
The (connected) resolvents inherit a $1/N$ expansion from the expansion of the cumulants,
\begin{equation}\label{eq:exp-conn-resolvents}
W_n(x_1,x_2,\ldots,x_n)=\sum_{g\ge0}N^{2-2g-n}W_{g,n}(x_1,x_2,\ldots,x_n)
\end{equation}
and thus we have
\begin{equation}
W_1(x)=\sum_{g\ge 0} N^{1-2g}W_{g,1}(x),\quad
W_2(x,x)=\sum_{g\ge 0}N^{-2g}W_{g,2}(x,x).
\end{equation}
In the large $N$ limit equation \eqref{eq:Wishart-1pt-res} reduces to an equation on $W_{0,1}(x)$,
\begin{equation}\label{eq:large-N-Wishart-1pt-res}
xW_{0,1}(x)^2-xW_{0,1}(x)+1=0.
\end{equation}
We select the solution which is analytic at infinity, thus recovering expression \eqref{eq:W01-Wishart-expression}.\\
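In this form one checks directly that the Laurent coefficients at infinity are the Catalan numbers: writing $t=1/x$, expression \eqref{eq:W01-Wishart-expression} becomes $W_{0,1}(1/t)=\bigl(1-\sqrt{1-4t}\bigr)/2$. A one-line check (ours, assuming the sympy library):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
W = (1 - sp.sqrt(1 - 4 * t)) / 2   # W_{0,1}(1/t)
print(sp.series(W, t, 0, 6))
# t + t**2 + 2*t**3 + 5*t**4 + 14*t**5 + O(t**6): Catalan numbers
\end{verbatim}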
\begin{remark}\label{rem:poly-density}
From this last equation we can obtain a polynomial equation on $\rho_{0,1}(x)$, that is the corresponding limiting eigenvalue density. To this aim, one introduces the two following operators acting on functions,
\begin{align}
&\delta f(x)=\lim_{\epsilon \rightarrow 0^+}f(x+i\epsilon)-f(x-i\epsilon)\\
&s f(x)=\lim_{\epsilon \rightarrow 0^+}f(x+i\epsilon)+f(x-i\epsilon).
\end{align}
We have the following \emph{polarization} property, that is for two functions $f_1, f_2$, we have
\begin{align}
&\delta(f_1f_2)(x)=\frac12(\delta f_1(x)s f_2(x)+s f_1(x)\delta f_2(x))\\
&s (f_1f_2)(x)=\frac12(\delta f_1(x)\delta f_2(x)+s f_1(x)s f_2(x)).
\end{align}
Starting from equation \eqref{eq:large-N-Wishart-1pt-res} one deduces the two equalities
\begin{align}
&\delta(xW_{0,1}(x)^2-xW_{0,1}(x)+1)=0\\
&s (xW_{0,1}(x)^2-xW_{0,1}(x)+1)=0.
\end{align}
After using the polarization formula, these equations boil down to the system on $u(x):=s W_{0,1}(x)$ and $v(x):=\delta W_{0,1}(x)$
\begin{align}
&xu(x)-x=0\\
&\frac{x}{2}(u(x)^2+v(x)^2)-xu(x)+2=0.
\end{align}
This in turn leads to $\rho_{0,1}(x)=\frac1{2i\pi}v(x)=\frac1{2\pi}\sqrt{\frac{4-x}{x}}$, supported on $(0,4]$, where we choose the solution $v(x)$ that leads to a positive and normalized density; this is the Mar\u{c}enko-Pastur density with ratio parameter $c=1$.
\end{remark}
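As a small numerical cross-check (ours, assuming the standard numpy and scipy libraries), this density is normalized and has first moment $m^{[0]}_1=C_1=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

rho = lambda x: np.sqrt((4.0 - x) / x) / (2.0 * np.pi)
print(quad(rho, 0.0, 4.0)[0])                   # ~ 1.0: normalization
print(quad(lambda x: x * rho(x), 0.0, 4.0)[0])  # ~ 1.0 = C_1
\end{verbatim}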
\section{Loop equations for the product of two Ginibre matrices}\label{sec:loop-eq-prod}
In this section we consider the problem of computing $W_{0,1}(x)$, $W_{0,2}(x_1,x_2)$ and $W_{1,1}(x)$ for a matrix $S_2=X_1X_1^{\dagger}X_2^{\dagger}X_2$, with $X_1, X_2$ two independent random $N\times N$ complex Ginibre matrices, that is with i.i.d. centered complex Gaussian entries. We compute these quantities by exclusive use of Schwinger-Dyson equation techniques. More generally, we obtain the general equations satisfied by any $W_{g,n}$ for $(g,n)\ge (0,1)$.
In the first subsection, we briefly explain the combinatorics underlying the computation of the moments of the matrix $S_2$ that justifies the existence of a $1/N$ expansion for the $W_{g,n}$.
In the second subsection we study in detail the corresponding Schwinger-Dyson equations and obtain the loop equations satisfied by $W_{0,1}(x)$, $W_{0,2}(x_1,x_2)$ and $W_{1,1}(x)$ in this context. We show in particular that the loop equation satisfied by $W_{0,1}(x)$ is an algebraic equation of degree $3$ in $W_{0,1}$. Finally we describe the loop equations satisfied by any $W_{g,n}$.
\vspace{2mm}
\subsection{Combinatorics of the moments of $S_2$ and existence of $1/N$ expansion}
We describe here the combinatorics of the moments of the matrix $S_2$. This is a crucial point as this underlying combinatorics allows us to show that the cumulants of the random variables $\left\{\mathrm{Tr}(S_2^{i})\right\}_{i=0}^{\infty}$ have a $1/N$ expansion.
In the subsequent developments we keep the same notation for the moments $m_k$, $m_{k_1,\ldots, k_n}$, but it should be clear that in this section and the following ones the moments we consider are those of the matrix $S_2$, in both the one-trace and the multiple-trace case. We have
\begin{equation}
m_k=\mathbb{E}\left(\mathrm{Tr}(S_2^k)\right), \quad m_{k_1,\ldots,k_n}=\mathbb{E}\left(\prod_{i=1}^n\mathrm{Tr}(S_2^{k_i})\right),
\end{equation}
where the expectation is taken with respect to the density
\begin{equation}\label{eq:two-mat-density}
\mathrm{d}\mu(X_1,X_2)=\left(\frac{N^{N^2}}{(2i\pi)^{N^2}}\right)^2e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\mathrm{d}X_1^{\dagger}\mathrm{d}X_1\mathrm{d}X_2^{\dagger}\mathrm{d}X_2.
\end{equation}
By using the Wick-Isserlis theorem, it is possible to give a combinatorial interpretation to the moments of $S_2$ (see for instance \cite{DLN}). The moments $m_k$ of $S_2$ can be written as a sum over combinatorial maps with one black vertex and $2k$ edges of two different types, type \RNum{1} and type \RNum{2}, such that there are $k$ edges of type \RNum{1} and $k$ edges of type \RNum{2}. Moreover the type of the edge alternates when going around the black vertex. Finally the white vertices can only be incident to edges of one given type. See Fig. \ref{fig:colored-maps-examples} for examples.
\begin{figure}
\centering
\includegraphics[scale=0.8]{colored_maps_bis.pdf}
\caption{Left: Example of a map with two types of edge contributing to the computation of $m_4$. Right: Example of a map with two types of edge contributing to the computation of $m_{2,1}$ and $c_{2,1}$.}
\label{fig:colored-maps-examples}
\end{figure}
We denote the set made of these maps by $\mathbb M_{2k}(2)$. In terms of permutations, these maps are such that $\sigma_{\bullet}=(12\ldots2k)$ and the action of $\sigma_{\circ}$ on the set of edges $E=\{1,2,3,4,\ldots,2k\}$ factorizes over the odd and even subsets $E_o=\{1,3,5,\ldots, 2k-1\}, E_e=\{2,4,6,\ldots,2k\}$. More formally we have the decomposition
\begin{equation}
m_k=\sum_{\mathcal{M}\in\mathbb M_{2k}(2)}N^{V_{\circ}(\mathcal{M})-2k+ F(\mathcal{M})}.
\end{equation}
Similarly, for moments of order $k_1,\ldots,k_n$, we have the set of maps $\mathbb M_{2k_1, 2k_2, \ldots, 2k_n}(2)$, such that there are $n$ black vertices with degree distribution $2k_1, 2k_2, \ldots, 2k_n$ and a total of $p=2\sum_i k_i$ edges. Types of edges alternate around each black vertex, and white vertices can only be incident to edges of the same type; see Fig. \ref{fig:colored-maps-examples} for examples. We then have the decomposition
\begin{equation}
m_{k_1,\ldots, k_n}=\sum_{\mathcal{M} \in\mathbb M_{2k_1, 2k_2, \ldots, 2k_n}(2)} N^{V_{\circ}(\mathcal{M})-p+F(\mathcal{M})}.
\end{equation}
Similarly we can express the cumulants $c_{k_1,\ldots, k_n}$ for the family of random variables $\left\{\mathrm{Tr}(S_2^{i})\right\}_{i=0}^{\infty}$ as a sum over the set of connected maps $\mathbb M^c_{2k_1, 2k_2, \ldots, 2k_n}(2)$
\begin{equation}
c_{k_1,\ldots, k_n}=\sum_{\mathcal{M} \in\mathbb M^c_{2k_1, 2k_2, \ldots, 2k_n}(2)} N^{V_{\circ}(\mathcal{M})-p+F(\mathcal{M})}.
\end{equation}
The connectedness condition ensures that the $c_{k_1,\ldots, k_n}$ have a $1/N$ expansion for $n\ge1$. This $1/N$ expansion, as well as the definition of $c_{k_1,\ldots, k_n}$ as the cumulants of the family $\left\{\mathrm{Tr}(S_2^{i})\right\}_{i=0}^{\infty}$, ensure that the resolvents for the matrix $S_2$ have the same structural properties as the resolvents of the Wishart matrix in equations \eqref{eq:diconnect-to-connect}, \eqref{eq:exp-conn-resolvents}; that is, we also have for the matrix $S_2$
\begin{align}
\label{eq:res-con-to-disc-S_2}&\overline{W}_n(x_1,\ldots,x_n)=\sum_{K\vdash\{1,\ldots,n\}}\prod_{K_i\in K}W_{\mid K_i\mid}(x_{K_i}),\\
\label{eq:res-exp-S_2}&W_n(x_1,x_2,\ldots,x_n)=\sum_{g\ge0}N^{2-2g-n}W_{g,n}(x_1,x_2,\ldots,x_n).
\end{align}
\subsection{Equation on $W_{1}$ and $W_{0,1}$}
We now want to write Schwinger-Dyson equations for the moments of the matrix $S_2$ in order to obtain the loop equations for the resolvents. We start with the set of identities
\begin{align}\label{eq:FirstorderSDE}
&\sum_{a,b=1}^N\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{1,ab}^{\dagger}}\left(\bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}\bigr]_{ab} e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0 \displaybreak[1] \\
&\sum_{a,b=1}^N\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{2,ab}^{\dagger}}\left(\bigl[S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ab} e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0.
\end{align}
After evaluating explicitly the action of the derivatives, we obtain the relations
\begin{align}
&\sum_{\substack{p_1+p_2=k \\ p_1,p_2\ge 0}} \mathbb{E}\left( \mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}X_2^{\dagger}X_2)\right)-N\mathbb{E}\left( \mathrm{Tr}(S_2^{k+1})\right)=0\\
&\sum_{\substack{p_1+p_2=k \\ p_1,p_2\ge 0}} \mathbb{E}\left( \mathrm{Tr}(S_2^{p_1}X_1X_1^{\dagger})\mathrm{Tr}(S_2^{p_2})\right)-N\mathbb{E}\left( \mathrm{Tr}(S_2^{k+1})\right)=0,
\end{align}
where for both equations, the first term comes from the evaluation of the derivative on the monomial, while the second term comes from the evaluation of the derivative on the exponential factor. Note however that these equations contain mixed terms of the form $\mathbb{E}\left( \mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}X_2^{\dagger}X_2)\right)$ and $\mathbb{E}\left( \mathrm{Tr}(S_2^{p_1}X_1X_1^{\dagger})\mathrm{Tr}(S_2^{p_2})\right)$ that cannot be expressed in terms of the moments of $S_2$. Thus these two equations do not close on the set of moments of $S_2$. In order to obtain a set of relations that closes over the set of moments of $S_2$, we consider another identity involving higher derivatives, namely
\begin{align}
\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{1,ab}^{\dagger}}\frac{\partial}{\partial X_{2,bc}^{\dagger}}\left( \bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right)=0,
\end{align}
where we sum over repeated indices. After some additional algebra to evaluate the action of both derivative operators, one gets relations between moments and additional mixed quantities
\begin{multline}\label{eq:SecondorderSDE}
\sum_{\substack{p_1+p_2+p_3 =k+1 \\ p_1,p_2,p_3 \ge 0}} \mathbb{E}\bigl(\mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}) \mathrm{Tr}(S_2^{p_3})\bigr)+\frac{(k+1)(k+2)}{2}\mathbb{E}\bigl(\mathrm{Tr}(S_2^{k+1})\bigr) \\
-N\sum_{\substack{p_1+p_2=k+1 \\ p_1,p_2\ge 0}} \Bigl[ \mathbb{E}\bigl( \mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}X_2^{\dagger}X_2)\bigr) +\mathbb{E}\bigl( \mathrm{Tr}(S_2^{p_1}X_1X_1^{\dagger})\mathrm{Tr}(S_2^{p_2})\bigr) \Bigr] \\
+N^2 \mathbb{E}\left( \mathrm{Tr}(S_2^{k+2}) \right)=0,
\end{multline}
where the first and second terms are obtained from the action of both derivative operators on the monomial $\bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}$. The third term, which involves mixed quantities, is obtained by acting with one derivative operator on the monomial, while acting with the other derivative operator on the exponential factor. The last term is obtained from the action of both derivative operators on the exponential factor. These equations contain the mixed quantities already present in \eqref{eq:FirstorderSDE}. Thus we can use \eqref{eq:FirstorderSDE} to get rid of these terms in \eqref{eq:SecondorderSDE}. This leads to the equations on moments
\begin{align}
\sum_{\substack{p_1+p_2+p_3 =k+1\\ p_1,p_2,p_3 \ge 0}} \mathbb{E}\left( \mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}) \mathrm{Tr}(S_2^{p_3})\right) +\frac{(k+1)(k+2)}{2}\mathbb{E}\left(\mathrm{Tr}(S_2^{k+1})\right)-N^2 \mathbb{E}\left(\mathrm{Tr}(S_2^{k+2}) \right)=0,
\end{align}
which is trilinear in the traces of $S_2$. Notice that this family of equations extends to the value ``$k=-1$" by replacing the monomial $\bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}$ by $\bigl[X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}$. Therefore we allow ourselves to shift $k\rightarrow k-1$ and to use our moments notation to get
\begin{align}
\sum_{\substack{p_1+p_2+p_3 =k\\ p_1,p_2,p_3 \ge 0}} m_{p_1,p_2,p_3}+\frac{k(k+1)}{2}m_k -N^2 m_{k+1}=0.
\end{align}
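Before resuming the derivation, note that the first non-trivial instance of this relation can be checked numerically. For $k=1$ it reads $3m_{1,0,0}+m_1-N^2m_2=0$; since $m_0=N$ and $m_1=N$ exactly (so that $m_{1,0,0}=N^2m_1$), this predicts $m_2=3N+1/N$. The following Monte Carlo sketch (an illustration of ours, assuming the numpy library, with the normalization $\mathbb{E}\lvert (X_i)_{ab}\rvert^2=1/N$ used throughout the paper; the sample sizes are arbitrary) is consistent with this value.
\begin{verbatim}
import numpy as np

N, trials = 100, 400
rng = np.random.default_rng(0)
acc = 0.0
for _ in range(trials):
    X1 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    X2 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    S2 = X1 @ X1.conj().T @ X2.conj().T @ X2
    acc += np.trace(S2 @ S2).real  # Tr(S2^2) is real up to noise
print(acc / trials, 3 * N + 1 / N)  # both ~ 300.01
\end{verbatim}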
We then multiply this moment relation by $\frac1{x^{k+1}}$ and sum over $k\ge 0$ in order to get an equation on the resolvents
\begin{equation}
\sum_{k\ge 0}\sum_{\substack{p_1+p_2+p_3 =k\\ p_1,p_2,p_3 \ge 0}}\frac{m_{p_1,p_2,p_3}}{x^{k+1}}+\sum_{k\ge 0}\frac{k(k+1)}{2}\frac{m_k}{x^{k+1}} -N^2\sum_{k\ge 0} \frac{m_{k+1}}{x^{k+1}}=0,
\end{equation}
which after a few manipulations rewrites
\begin{equation}\label{eq:1pt-equation}
x^2\overline W_3(x,x,x)+x\partial_xW_1(x)+\frac12x^2\partial_x^2W_1(x)-N^2xW_1(x)+N^3 =0.
\end{equation}
Note the interesting structural replacement of $\overline W_2(x,x)$ appearing in \eqref{eq:Wishart-1pt-res-disconnect} by $\overline W_3(x,x,x)$ and the appearance of a derivative term.
Then we know from \eqref{eq:res-con-to-disc-S_2}, \eqref{eq:res-exp-S_2} that $\overline W_3(x,x,x)= N^3 W_{0,1}(x)^3+O(N)$ and $W_1(x)= N W_{0,1}(x)+O(1/N)$. Therefore we obtain the equation on $W_{0,1}(x)$
\begin{equation}\label{eq:W01-equation}
x^2W_{0,1}(x)^3-xW_{0,1}(x)+1=0.
\end{equation}
This last equation relates to the equation satisfied by the generating function $G(u)$ of particular Fuss-Catalan numbers \cite{Fuss-original, Mlotkowski2010, Rivass-FussCatalan}, $uG(u)^3-G(u)+1=0$ through the change of variables $W_{0,1}(x)=\frac1{x}G(1/x)$. Consequently we have
\begin{equation}
W_{0,1}(x)=\sum_{p\ge0} \frac{C_p[3]}{x^{p+1}},
\end{equation}
where $C_p[D]$ are the Fuss-Catalan numbers of order $D$, the usual Catalan numbers $C_p$ being the Fuss-Catalan numbers of order $2$, that is $C_p=C_p[2]$, and have the binomial coefficient form
\begin{equation}
C_p[D]=\frac1{(D-1)p+1}\binom{Dp}{p}.
\end{equation}
An explicit form of $W_{0,1}(x)$ can be written as follows. First define
\begin{equation}
K_{\pm}(u)=(\sqrt{1+u}\pm\sqrt{u})^{1/3},
\end{equation}
then $G(u)$ reads
\begin{equation}
G(u)=\frac{K_{+}\left(-\frac{27u}{4}\right)-K_{-}\left(-\frac{27u}{4}\right)}{\sqrt{-3u}}.
\end{equation}
Finally one has
\begin{equation}
W_{0,1}(x)=\frac1{x}G\left(\frac1{x}\right).
\end{equation}
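The Fuss-Catalan interpretation is easy to check in practice. The following sketch (ours; the function name is our own, no external library needed beyond the standard one) iterates $G(u)=1+uG(u)^3$ on truncated power series and compares the resulting coefficients with the closed binomial form above.
\begin{verbatim}
from math import comb

def fuss_catalan_series(order):
    # fixed-point iteration of G = 1 + u*G^3 on truncated series;
    # the coefficient of u^p is exact after p iterations
    g = [1] + [0] * order
    for _ in range(order):
        cube = [0] * (order + 1)
        for a in range(order + 1):
            for b in range(order + 1 - a):
                for c in range(order + 1 - a - b):
                    cube[a + b + c] += g[a] * g[b] * g[c]
        g = [1] + cube[:order]  # coefficients of 1 + u*G^3
    return g

closed_form = [comb(3 * p, p) // (2 * p + 1) for p in range(9)]
print(fuss_catalan_series(8) == closed_form)  # True
\end{verbatim}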
We study the solutions and the structure of \eqref{eq:W01-equation} from a geometric perspective in the next sections.
\begin{remark}\label{rem:poly-density2}
Though in principle we need to first focus on the cut structure of $W_{0,1}$ to use the arguments that follow, we will in this remark content ourselves with a formal computation. Starting from equation \eqref{eq:W01-equation} we can also obtain a polynomial equation satisfied by the corresponding density by using the $\delta, s$ operators along the cut. Indeed with a similar method to that in Remark \ref{rem:poly-density} we have the equalities
\begin{align}
&\delta(x^2W_{0,1}(x)^3-xW_{0,1}(x)+1)=0\\
&s(x^2W_{0,1}(x)^3-xW_{0,1}(x)+1)=0.
\end{align}
This leads, using the same notations as previously, to the system
\begin{align}
&\frac{x^2}{4}(3u(x)^2+v(x)^2)-x=0\\
&\frac{x^2}{4}(u(x)^3+3v(x)^2u(x))-xu(x)+2=0
\end{align}
which can be solved and leads to
\begin{equation}\label{63}
\rho_{0,1}(x)=\frac1{2i\pi}v(x)=\frac1{2 \pi }\sqrt{\frac{\left(\sqrt{81-12 x}+9\right)^{2/3}}{2^{2/3} \sqrt[3]{3} x^{4/3}}+\frac{2^{2/3}
\sqrt[3]{3}}{\left(\left(\sqrt{81-12 x}+9\right) x\right)^{2/3}}-\frac{2}{x}},
\end{equation}
which is supported on $(0,27/4]$; see the plot of the distribution on Fig. \ref{fig:eigen-density-product}. Notice that this result can also be obtained by computing the free multiplicative product of two Mar\u cenko-Pastur distributions with parameters $c_{1,2}=1$. A functional form equivalent to \eqref{63} is given in \cite{penson2011product}.
\end{remark}
\begin{figure}
\centering
\includegraphics[scale=0.52]{plot-product-ofmatrices-analytic.pdf}
\caption{Plot of the eigenvalue density of the matrix $S_2$ in the large $N$ regime.}
\label{fig:eigen-density-product}
\end{figure}
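The plot of Fig. \ref{fig:eigen-density-product} can be reproduced by direct sampling. The sketch below (ours, assuming the numpy library and the same normalization of the Ginibre entries as above; the sizes are arbitrary) collects the spectra of a few samples. Since $S_2$ has the same spectrum as the positive semi-definite matrix $(X_2X_1)(X_2X_1)^{\dagger}$, we diagonalize the latter.
\begin{verbatim}
import numpy as np

N, samples = 300, 30
rng = np.random.default_rng(1)
eigs = []
for _ in range(samples):
    X1 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    X2 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    M = X2 @ X1  # S_2 has the same spectrum as M M^dagger
    eigs.extend(np.linalg.eigvalsh(M @ M.conj().T))
eigs = np.array(eigs)
print(eigs.mean(), eigs.max())  # mean ~ 1 = m_1^{[0]}, max close to 27/4
# a histogram of `eigs` approximates the density of equation (63)
\end{verbatim}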
Equation \eqref{eq:1pt-equation} possesses a $\frac1{N}$ expansion. This expansion results in a set of relations between $W_{g,1}(x)$, $W_{g',2}(x,x)$ and $W_{g'',3}(x,x,x)$. Indeed we have
\begin{multline}
0=x^2\Bigl[\frac1N\sum_{g\ge 0}N^{-2g}W_{g,3}(x,x,x)+3N\sum_{g_1,g_2\ge 0}N^{-2(g_1+g_2)}W_{g_1,1}(x)W_{g_2,2}(x,x) \\
+N^3\sum_{g_1,g_2,g_3\ge 0}N^{-2(g_1+g_2+g_3)}W_{g_1,1}(x)W_{g_2,1}(x)W_{g_3,1}(x)\Bigr]\\
+xN\sum_{g\ge 0} N^{-2g}\partial_x W_{g,1}(x)+\frac{N}{2}x^2\sum_{g\ge 0}N^{-2g}\partial_x^2 W_{g,1}(x)-N^3x\sum_{g\ge 0}N^{-2g}W_{g,1}(x)+N^3.
\end{multline}
By collecting the coefficient of $N^{3-2g}$, we obtain the following tower of equations
\begin{multline}\label{eq:order-g-1pt}
0=x^2\left(W_{g-2,3}(x,x,x)+3\sum_{g_1+g_2=g-1}W_{g_1,1}(x)W_{g_2,2}(x,x)+\sum_{g_1+g_2+g_3=g}W_{g_1,1}(x)W_{g_2,1}(x)W_{g_3,1}(x)\right)\\
+x\partial_xW_{g-1,1}(x)+\frac{x^2}{2} \partial_x^2W_{g-1,1}(x)-xW_{g,1}(x)+P_{g,1}(x),
\end{multline}
where we have $P_{g,1}(x)=\delta_{g,0}$. In particular, the $g=0$ case of equation \eqref{eq:order-g-1pt} (that is, the coefficient of $N^3$) reproduces equation \eqref{eq:W01-equation}.
The coefficient of $N$ produces an equation on the next-to-leading order $W_{1,1}(x)$ also involving $W_{0,1}(x)$ and $W_{0,2}(x,x)$
\begin{equation}\label{eq:1pt-NLO}
3x^2W_{0,1}(x)W_{0,2}(x,x)+3x^2W_{0,1}(x)^2W_{1,1}(x)+x\partial_xW_{0,1}(x)+\frac{x^2}{2} \partial_x^2W_{0,1}(x)-xW_{1,1}(x)=0.
\end{equation}
More generally, the coefficient of $N^{3-2g}$ for a fixed value of $g$ produces the equation for $W_{g,1}(x)$ in terms of the functions $W_{g',n'}$ such that $2-2g-1<2-2g'-n'$ and $n'\le 3$. \\
\subsection{Equation for $W_2(x_1,x_2)$}
In this section we use Schwinger-Dyson equation techniques to obtain a loop equation for $W_2(x_1,x_2)$. We start with slightly different identities that involve an additional trace insertion $\mathrm{Tr}(S_2^q)$. This allows us to access relations between more general moments.\\
\noindent{\bf Schwinger-Dyson equations and loop equation for $W_2(x_1,x_2)$ and $W_{0,2}(x_1,x_2)$.} Consider the vanishing integrals of total derivatives
\begin{align}
\label{eq:2pttotalder1}&\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{1,ab}^{\dagger}}\left(\bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k+1}\bigr]_{ab} \mathrm{Tr}(S_2^q)e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0 \\
\label{eq:2pttotalder2}&\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{2,ab}^{\dagger}}\left(\bigl[S_2^{k+1}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ab} \mathrm{Tr}(S_2^q)e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0,
\end{align}
and the higher derivative one
\begin{align}
\label{eq:2pttotalder-higher}\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\frac{\partial}{\partial X_{1,ab}^{\dagger}}\frac{\partial}{\partial X_{2,bc}^{\dagger}}\left( \bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}\mathrm{Tr}(S_2^q)e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right)=0,
\end{align}
where repeated indices are summed. After evaluating explicitly the derivatives, the two first equations \eqref{eq:2pttotalder1} and \eqref{eq:2pttotalder2} lead to
\begin{align}
\label{eq:2ptSDE1}&\sum_{\substack{p_1+p_2=k+1 \\ \{p_i\ge 0\}}}\mathbb{E}\left(\mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}X_2^{\dagger}X_2)\mathrm{Tr}(S_2^q) \right)+q\mathbb{E}\left( \mathrm{Tr}(S_2^{k+q+1}X_2^{\dagger}X_2)\right)-N\mathbb{E}\left( \mathrm{Tr}(S_2^{k+2})\mathrm{Tr}(S_2^q)\right)=0 \\
\label{eq:2ptSDE2}&\sum_{\substack{p_1+p_2=k+1 \\ \{p_i\ge 0\}}}\mathbb{E}\left(\mathrm{Tr}(S_2^{p_1}X_1X_1^{\dagger}) \mathrm{Tr}(S_2^{p_2})\mathrm{Tr}(S_2^q)\right)+q\mathbb{E}\left(\mathrm{Tr}(S_2^{k+q+1}X_1X_1^{\dagger}) \right)-N\mathbb{E}\left(\mathrm{Tr}(S_2^{k+2})\mathrm{Tr}(S_2^q) \right)=0,
\end{align}
where the first term of both equations \eqref{eq:2pttotalder1} and \eqref{eq:2pttotalder2} is obtained from the action of the derivative operator on the non-traced monomial. The second term is obtained \textit{via} the action of the derivative operator on the traced monomial term $\mathrm{Tr}(S_2^q)$. The third term comes from the action of the derivative operator on the exponential factor. These two equations involve mixed terms and cannot be written solely in terms of the moments of $S_2$.
Meanwhile, the higher derivative equation \eqref{eq:2pttotalder-higher} leads to
\begin{multline}\label{eq:2ptSDE-higher}
\sum_{\substack{p_1+p_2+p_3=k+1 \\ \{p_i\ge 0\}}}\mathbb{E}\left(\mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2})\mathrm{Tr}(S_2^{p_3})\mathrm{Tr}(S_2^q) \right)+\frac{(k+1)(k+2)}{2}\mathbb{E}\left( \mathrm{Tr}(S_2^{k+1})\mathrm{Tr}(S_2^q) \right) \\
- N\sum_{\substack{p_1+p_2=k+1 \\ \{p_i\ge 0\}}}\left[ \mathbb{E}\left(\mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2}X_2^{\dagger}X_2)\mathrm{Tr}(S_2^q)\right) + \mathbb{E}\left( \mathrm{Tr}(S_2^{p_1}X_1X_1^{\dagger})\mathrm{Tr}(S_2^{p_2})\mathrm{Tr}(S_2^q) \right) \right] \\
+N^2\mathbb{E}\left(\mathrm{Tr}(S_2^{k+2})\mathrm{Tr}(S_2^q) \right)+2\sum_{\substack{p_1,p_2\ge 0\\ p_1+p_2=k+1}}q\mathbb{E}\left(\mathrm{Tr}(S_2^{p_1})\mathrm{Tr}(S_2^{p_2+q}) \right)
+\sum_{n=1}^q q \mathbb{E}\left( \mathrm{Tr}(S_2^{k+1+n})\mathrm{Tr}(S_2^{q-n})\right) \\
- Nq\left[ \mathbb{E}\left( \mathrm{Tr}(S_2^{q+k+1}X_2^{\dagger}X_2) \right) + \mathbb{E}\left( \mathrm{Tr}(S_2^{q+k+1}X_1X_1^{\dagger})\right) \right] =0,
\end{multline}
where the first two terms come from the action of both derivative operators on the non-traced monomial. Each term of the second line comes from the action of one of the derivatives on the exponential factor and of the other on the non-traced monomial. The first term of the third line of \eqref{eq:2ptSDE-higher} comes from the action of both derivatives on the exponential factor. The second term of the third line is obtained as a sum of the action of the $X_1^{\dagger}$ (resp. $X_2^{\dagger}$) derivative on the non-traced monomial and the action of the $X_2^{\dagger}$ (resp. $X_1^{\dagger}$) derivative on the traced monomial $\mathrm{Tr}(S_2^q)$. The last term of the third line is obtained from the action of both derivative operators on the traced monomial. Finally the two terms of the fourth line of \eqref{eq:2ptSDE-higher} are obtained by the action of $\partial_{X_{1,ab}^{\dagger}}$ (resp. $\partial_{X_{2,bc}^{\dagger}}$) on the traced monomial and $\partial_{X_{2,bc}^{\dagger}}$ (resp. $\partial_{X_{1,ab}^{\dagger}}$) on the exponential factor.
Combining equations \eqref{eq:2ptSDE1}, \eqref{eq:2ptSDE2} and \eqref{eq:2ptSDE-higher}, rewriting some of the sums in a nicer way and using our moments notation we obtain
\begin{multline}\label{eq:2pt-moment-relation}
\sum_{\substack{p_1+p_2+p_3=k+1 \\ \{p_i\ge 0\}}}m_{p_1,p_2,p_3,q} +\frac{(k+1)(k+2)}{2}m_{k+1,q} - N^2 m_{k+2,q}
+\sum_{\substack{p_1,p_2\ge 0\\ p_1+p_2=k+1}}qm_{p_1,p_2+q}\\
+\sum_{\substack{p_1,p_2 \ge 0 \\ p_1+p_2= k+q+1}}q m_{p_1,p_2} =0.
\end{multline}
After performing the shift $k\rightarrow k-1$ in \eqref{eq:2pt-moment-relation}, we multiply \eqref{eq:2pt-moment-relation} by $\frac1{x_1^{k+1}x_2^{q+1}}$, and sum over $k, q\ge 0$. Doing so we obtain the equation
\begin{align}
0=&\overline W_4(x_1,x_1,x_1,x_2)+\frac1{x_1}\partial_{x_1}\overline W_2(x_1,x_2)+\frac1{2}\partial_{x_1}^2\overline W_2(x_1,x_2) - \frac{N^2}{x_1}\overline W_2(x_1,x_2)-N^2A_2(x_1,x_2) \\
&+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\frac{\overline W_2(x_1,x_1)-\overline W_2(x_1,x_2)}{x_1-x_2} \right)+\frac1{x_1^2}\partial_{x_2}\left( \frac{x_1x_2\overline W_2(x_1,x_1)-x_2^2\overline W_2(x_2,x_2)}{x_1-x_2}\right),
\end{align}
with $A_2(x_1,x_2)=-\frac{N}{x_1^2}W_1(x_2)$. We re-express this equation in terms of the connected resolvents to obtain
\begin{multline}\label{eq:connected-2pt-resolvent-equation}
W_4(x_1,x_1,x_1,x_2)+3W_1(x_1)W_3(x_1,x_1,x_2)+3W_2(x_1,x_2)W_2(x_1,x_1)+3W_1(x_1)W_1(x_1)W_2(x_1,x_2) \\
+\frac1{x_1}\partial_{x_1} W_2(x_1,x_2)+\frac1{2}\partial_{x_1}^2 W_2(x_1,x_2) - \frac{N^2}{x_1} W_2(x_1,x_2)+\frac1{x_1^2}\partial_{x_2}\left(x_1x_2\frac{W_2(x_1,x_1)-W_2(x_1,x_2)}{x_1-x_2}\right)\\
+\frac1{x_1^2}\partial_{x_2}\left(\frac{x_1x_2W_2(x_1,x_1)-x_2^2W_2(x_2,x_2)}{x_1-x_2}\right)+\frac1{x_1^2}\partial_{x_2}\left(x_1x_2\frac{W_1(x_1)W_1(x_1)-W_1(x_1)W_1(x_2)}{x_1-x_2}\right) \\
+\frac1{x_1^2}\partial_{x_2}\left(\frac{x_1x_2W_1(x_1)W_1(x_1)-x_2^2W_1(x_2)W_1(x_2)}{x_1-x_2}\right)=0,
\end{multline}
where we used the fact that the terms factoring in front of $W_1(x_2)$ form the first loop equation \eqref{eq:1pt-equation}.
From this equation we can get an equation on $W_{0,2}$ by inserting the $1/N$ expansion of the resolvents appearing in \eqref{eq:connected-2pt-resolvent-equation} and collecting the coefficients of $N^2$. This equation involves only already computed quantities and can be re-expressed as
\begin{align}\label{eq:W02-equation}
\frac1{x_1}\left(3x_1W_{0,1}(x_1)^2-1 \right) W_{0,2}(x_1,x_2)+\frac1{x_1^2}\partial_{x_2}\left(x_1 x_2\frac{W_{0,1}(x_1)W_{0,1}(x_1)-W_{0,1}(x_1)W_{0,1}(x_2)}{x_1-x_2} \right) \nonumber \\
+\frac1{x_1^2}\partial_{x_2}\left( \frac{x_1x_2W_{0,1}(x_1)W_{0,1}(x_1)-x_2^2W_{0,1}(x_2)W_{0,1}(x_2)}{x_1-x_2}\right)=0.
\end{align}
\noindent{\bf First few relations between $c^{[0]}_{k}, c^{[0]}_{k_1,k_2}$.} One can extract relations between the $c^{[0]}_{k}, c^{[0]}_{k_1,k_2}$ from equation \eqref{eq:W02-equation}. These relations are obtained by expanding the equation at $x_1, x_2 = \infty$. The first few examples are
\begin{align}
&3 c^{[0]}_0 c^{[0]}_1-c^{[0]}_{1,1}=0,\\
&2 (c^{[0]}_1)^2+6 c^{[0]}_0 c^{[0]}_2-c^{[0]}_{1,2}=0,\\
&6 c^{[0]}_1 c^{[0]}_2+9 c^{[0]}_0 c^{[0]}_3-c^{[0]}_{1,3}=0.
\end{align}
These relations allow one to obtain the $c^{[0]}_{k_1,k_2}$ recursively, knowing that $c^{[0]}_0=c^{[0]}_1=1$. We can check these first few relations combinatorially. For illustrative purposes we display the combinatorial-map interpretation of $3 c^{[0]}_0 c^{[0]}_1-c^{[0]}_{1,1}=0$
\begin{equation}
3\, \left(\raisebox{-3mm}{\includegraphics[scale=0.65]{1st-term-comb-interpret.pdf}}\right)\quad - \quad \left(\raisebox{-4mm}{\includegraphics[scale=0.65]{2nd-term-comb-int-1.pdf}}\quad + \quad \raisebox{-3mm}{\includegraphics[scale=0.65]{3rd-term-comb-int-1.pdf}} \quad + \quad \raisebox{-4mm}{\includegraphics[scale=0.65]{4th-term-comb-int-1.pdf}} \quad \right)\, =0.
\end{equation}
More generally, one has
\begin{equation}\label{eq:planar-bicumulant-equation}
0=3\sum_{p_1+p_2+p_3=k-3}c^{[0]}_{p_1} c^{[0]}_{p_2}c^{[0]}_{p_3+1,q} - c^{[0]}_{k-1,q}+\sum_{m=0}^{k+q-2}q\ c^{[0]}_{k+q-m-2}c^{[0]}_{m}+\sum_{m=0}^{k-2}q\ c^{[0]}_{k-m-2}c^{[0]}_{m+q}.
\end{equation}\\
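The first of these relations, $c^{[0]}_{1,1}=3\,c^{[0]}_0c^{[0]}_1=3$, can also be tested numerically: since $c_{1,1}=\mathrm{Var}\bigl(\mathrm{Tr}(S_2)\bigr)=c^{[0]}_{1,1}+O(1/N^2)$, the sample variance of $\mathrm{Tr}(S_2)$ should approach $3$ for large $N$. A minimal Monte Carlo sketch (ours, assuming the numpy library; the sample sizes are arbitrary):
\begin{verbatim}
import numpy as np

N, trials = 150, 2000
rng = np.random.default_rng(2)
t = np.empty(trials)
for i in range(trials):
    X1 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    X2 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
         / np.sqrt(2 * N)
    # Tr(S_2) is real (trace of a product of two Hermitian matrices)
    t[i] = np.trace(X1 @ X1.conj().T @ X2.conj().T @ X2).real
print(np.var(t))  # ~ 3 = c^{[0]}_{1,1} at large N
\end{verbatim}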
\subsection{General loop equations}
In this section we describe the general loop equations for $W_n(x_1,\ldots, x_n)$. Because of the use of higher derivatives for the Schwinger-Dyson equations, the case of $W_3(x_1,x_2,x_3)$ is still special compared to the cases $W_{n<3}$. We thus give the corresponding Schwinger-Dyson equations in detail before stating the corresponding loop equations. For the $W_{n>3}$ cases, the situation is very similar to the $W_{3}$ case. Therefore we refrain from presenting the detailed derivation, and only state the corresponding loop equations.\\
\noindent{\bf Loop and Schwinger-Dyson equations for $W_3(x_1,x_2,x_3)$.} We have to consider the equalities,
\begin{align}
\hspace{-4mm}\label{eq:3pttotalder1}&\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\partial_{X_{1,ab}^{\dagger}}\left(\bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k+1}\bigr]_{ab} \mathrm{Tr}(S_2^{q_1})\mathrm{Tr}(S_2^{q_2})e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0 \\
\hspace{-4mm}\label{eq:3pttotalder2}&\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\partial_{X_{2,ab}^{\dagger}}\left(\bigl[S_2^{k+1}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ab} \mathrm{Tr}(S_2^{q_1})\mathrm{Tr}(S_2^{q_2})e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right) =0\\
\hspace{-4mm}\label{eq:3pttotalder-higher}&\int \mathrm{d}X_1\mathrm{d}X_1^{\dagger} \mathrm{d}X_2\mathrm{d}X_2^{\dagger}\partial_{X_{1,ab}^{\dagger}}\partial_{X_{2,bc}^{\dagger}}\left( \bigl[X_1^{\dagger}X_2^{\dagger}X_2 S_2^{k}X_1X_1^{\dagger}X_2^{\dagger}\bigr]_{ac}\mathrm{Tr}(S_2^{q_1})\mathrm{Tr}(S_2^{q_2})e^{-N\mathrm{Tr}(X_1X_1^{\dagger})}e^{-N\mathrm{Tr}(X_2X_2^{\dagger})}\right)=0.
\end{align}
The inspection of these Schwinger-Dyson equations reveals that the only type of term that we have not already encountered is obtained when both derivatives $\partial_{X_{1,ab}^{\dagger}}$, $\partial_{X_{2,bc}^{\dagger}}$ distribute over the two traced monomials $\mathrm{Tr}(S_2^{q_1})$, $\mathrm{Tr}(S_2^{q_2})$. This distributed action of the derivatives on the traced monomials leads to the term
\begin{equation}
2q_1q_2\mathbb{E}\left(\mathrm{Tr}\left(S_2^{q_1+q_2+k+1}\right)\right)=2q_1q_2m_{k+q_1+q_2+1}.
\end{equation}
The generating function of this term appearing in the corresponding loop equation (after the same shift $k\rightarrow k-1$ as in the previous subsection) will be
\begin{multline}
\sum_{k,q_1,q_2\ge 0}\frac{2q_1q_2m_{k+q_1+q_2}}{x_1^{k+1}x_2^{q_1+1}x_3^{q_2+1}}= \\
\frac{2}{x_1}\frac{\partial^2}{\partial x_2\partial x_3}\left(\frac{(x_2-x_3)x_1x_2x_3W_1(x_1)-(x_1-x_3)x_1x_2x_3W_1(x_2)+(x_1-x_2)x_1x_2x_3W_1(x_3)}{\Delta(\{x_1,x_2,x_3\})} \right)
\end{multline}
where $\Delta(\{x_1,x_2,x_3\})=(x_3-x_2)(x_3-x_1)(x_2-x_1)$ is the Vandermonde determinant of the family of variables $\{x_1,x_2,x_3\}$. The remaining terms of the loop equations can be inferred by realizing that for all terms involved in either \eqref{eq:3pttotalder1}, \eqref{eq:3pttotalder2}, \eqref{eq:3pttotalder-higher}, one of the two traced monomials plays a spectator role for the action of the derivatives. Consequently, one obtains the loop equation,
\begin{multline}
0=\overline{W}_5(x_1,x_1,x_1,x_2,x_3)+\frac1{x_1}\partial_{x_1}\overline{W}_3(x_1,x_2,x_3)+\frac12\partial^2_{x_1}\overline{W}_3(x_1,x_2,x_3)-\frac{N^2}{x_1}\overline{W}_3(x_1,x_2,x_3)-N^2A_3(x_1,x_2,x_3)\\
+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\frac{\overline W_3(x_1,x_1,x_3)-\overline W_3(x_1,x_2,x_3)}{x_1-x_2} \right)+\frac1{x_1^2}\partial_{x_2}\left( \frac{x_1x_2\overline W_3(x_1,x_1,x_3)-x_2^2\overline W_3(x_2,x_2,x_3)}{x_1-x_2}\right)\\
+\frac{1}{x_1^2}\partial_{x_3}\left( x_1x_3\frac{\overline W_3(x_1,x_1,x_2)-\overline W_3(x_1,x_2,x_3)}{x_1-x_3} \right)+\frac1{x_1^2}\partial_{x_3}\left( \frac{x_1x_3\overline W_3(x_1,x_1,x_2)-x_3^2\overline W_3(x_3,x_3,x_2)}{x_1-x_3}\right)\\
+\frac{2}{x_1^3}\frac{\partial^2}{\partial x_2\partial x_3}\left(\frac{(x_2-x_3)x_1x_2x_3W_1(x_1)-(x_1-x_3)x_1x_2x_3W_1(x_2)+(x_1-x_2)x_1x_2x_3W_1(x_3)}{\Delta(\{x_1,x_2,x_3\})} \right),
\end{multline}\\
where we have set $A_3(x_1,x_2,x_3)=-\frac{N}{x_1^2}\overline{W}_2(x_2,x_3)$.
We now introduce some notations in order to shorten expressions. We denote
\begin{align}
&\tilde{\mathcal{W}}_{n+2}(x_1,x_1,x_1;x_2,\ldots,x_n)=\sum_{\mu\vdash [x_1,x_1,x_1]}\sum_{\bigsqcup_{i=1}^{|\mu|}J_i=\{x_2,\ldots,x_n\}}\prod_{\mu_i\in\mu}W_{|\mu_i|+|J_i|}(\mu_i,J_i)\\
&\tilde{\mathcal{W}}_{g,n+2}(x_1,x_1,x_1;x_2,\ldots,x_n)=\sum_{\mu\vdash [x_1,x_1,x_1]}\sum_{\substack{\bigsqcup_{i=1}^{|\mu|}J_i=\{x_2,\ldots,x_n\}\\\sum_{i=1}^{|\mu|}g_i=g+|\mu|-3}}\prod_{\mu_i\in\mu}W_{g_i,|\mu_i|+|J_i|}(\mu_i,J_i).
\end{align}
The notation $\mu \vdash [x_1,x_1,x_1]$ needs to be explained. The summation runs over the partitions $\mu$ of the list $[x_1,x_1,x_1]$ in the following sense. Firstly, in our notation the object $[x_a,x_b,x_c,\ldots]$ is a list of elements, that is an ordered multi-set. More concretely, the order of appearance of the elements in the list matters, so that for example the lists $[x_1,x_2,x_1,x_1,x_4]$ and $[x_1,x_1,x_1,x_2,x_4]$ are different (though they are the same multi-set). We now explain what we mean by partitions of lists. A (denumerable\footnote{We will of course consider only the denumerable case since our lists are finite.}) list of elements can be represented as a set in the following way: we send a list to the set of pairs $\{(\textrm{element}, \textrm{position in the list})\}$. For instance, the list $[x_1,x_2,x_1,x_1,x_4]\mapsto\{(x_1,1),(x_2,2),(x_1,3),(x_1,4),(x_4,5)\}$ while the second list $[x_1,x_1,x_1,x_2,x_4]\mapsto \{(x_1,1),(x_1,2),(x_1,3),(x_2,4),(x_4,5)\}$, which are indeed two different sets. The partitions of the list are then the partitions of the corresponding set of pairs $(\textrm{element}, \textrm{position in the list})$. However, note that the blocks of the partitions forget about the positions in the list and, thanks to the symmetry of the functions $W_n$, should be seen as subsets of the corresponding multi-set. For instance, due to the fact that $\mu$ is really a partition of a list, the partition $\mu=\{\{x_1,x_1\},\{x_1\}\}$ with $\mu_1=\{x_1,x_1\}, \ \mu_2=\{x_1\}$ of the list $[x_1,x_1,x_1]$ appears three times in the sum.
Some further notations are also required. The sum over $\bigsqcup_{i=1}^{|\mu|}J_i=\{x_2,\ldots,x_n\}$ means that we sum over the decompositions of the set $\{x_2,\ldots,x_n\}$ into $|\mu|$ (possibly empty) subsets $J_i$. For instance, in the case $n=3$, one can consider the term indexed by the partition $\mu=\{\{x_1,x_1\},\{x_1\}\}$ and the decomposition $J_1=\emptyset, \ J_2=\{x_2,x_3\}$, which corresponds to a term of the form $W_2(x_1,x_1)W_3(x_1,x_2,x_3)$ in the sum. Note that these definitions are very similar to the ones appearing in \cite[Definition 4]{Bouchard-Eynard}. We also introduce the notation
\begin{equation}
O_x=\frac1{x_1}\partial_{x_1}+\frac12\partial_{x_1}^2.
\end{equation}
Using these notations, the corresponding equation for connected resolvents reads
\begin{align}\label{eq:3pt-loop-equation}
0=&\tilde{\mathcal{W}}_{5}(x_1,x_1,x_1;x_2,x_3)+O_xW_3(x_1,x_2,x_3)-\frac{N^2}{x_1}W_3(x_1,x_2,x_3)\nonumber\\
&+\frac{2}{x_1^3}\frac{\partial^2}{\partial x_2\partial x_3}\left(\frac{(x_2-x_3)x_1x_2x_3W_1(x_1)-(x_1-x_3)x_1x_2x_3W_1(x_2)+(x_1-x_2)x_1x_2x_3W_1(x_3)}{\Delta(\{x_1,x_2,x_3\})} \right)\nonumber\displaybreak[1]\\
&+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\left(\sum_{\substack{J\vdash[x_1,x_1,x_3]\\ J_i\neq\{x_3\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_2}- \sum_{\substack{J\vdash[x_1,x_2,x_3]\\ J_i\neq\{x_3\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_2}\right) \right)\nonumber\displaybreak[2]\\
&+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\sum_{\substack{J\vdash[x_1,x_1,x_3]\\ J_i\neq\{x_3\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_2}- x_2^2\sum_{\substack{J\vdash[x_2,x_2,x_3]\\ J_i\neq\{x_3\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_2}\right)\nonumber\displaybreak[2]\\
&+\frac{1}{x_1^2}\partial_{x_3}\left( x_1x_3\left(\sum_{\substack{J\vdash[x_1,x_1,x_2]\\ J_i\neq\{x_2\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_3}- \sum_{\substack{J\vdash[x_1,x_2,x_3]\\ J_i\neq\{x_2\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_3}\right) \right)\nonumber\displaybreak[2]\\
&+\frac{1}{x_1^2}\partial_{x_3}\left( x_1x_3\sum_{\substack{J\vdash[x_1,x_1,x_2]\\ J_i\neq\{x_2\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_3}- x_3^2\sum_{\substack{J\vdash[x_3,x_3,x_2]\\ J_i\neq\{x_2\}, \forall J_i}}\frac{\prod_{J_i\in J}W_{|J_i|}(J_i)}{x_1-x_3}\right).\\
\end{align}
We can now extract the corresponding equation of order $g$ (that is, the coefficient of $N^{1-2g}$ in the expansion of \eqref{eq:3pt-loop-equation}). The corresponding family of equations on $W_{g,3}$ can then be solved recursively, provided that we know the $W_{g',n'}$ of lower orders,
\begin{multline}\label{eq:3pt-loop-equation-expansion}
0=\tilde{\mathcal{W}}_{g,5}(x_1,x_1,x_1;x_2,x_3)+O_xW_{g-1,3}(x_1,x_2,x_3)
-\frac{1}{x_1}W_{g,3}(x_1,x_2,x_3)\\
+\frac{2}{x_1^3}\frac{\partial^2}{\partial x_2\partial x_3}\left(\frac{(x_2-x_3)x_1x_2x_3W_{g,1}(x_1)-(x_1-x_3)x_1x_2x_3W_{g,1}(x_2)+(x_1-x_2)x_1x_2x_3W_{g,1}(x_3)}{\Delta(\{x_1,x_2,x_3\})} \right)\displaybreak[1]\\
+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\left(\sum_{\substack{J\vdash[x_1,x_1,x_3]\\ J_i\neq\{x_3\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_2}- \sum_{\substack{J\vdash[x_1,x_2,x_3]\\ J_i\neq\{x_3\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_2}\right) \right)\displaybreak[2]\\
+\frac{1}{x_1^2}\partial_{x_2}\left( x_1x_2\sum_{\substack{J\vdash[x_1,x_1,x_3]\\ J_i\neq\{x_3\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_2}- x_2^2\sum_{\substack{J\vdash[x_2,x_2,x_3]\\ J_i\neq\{x_3\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_2}\right)\displaybreak[2]\\
+\frac{1}{x_1^2}\partial_{x_3}\left( x_1x_3\left(\sum_{\substack{J\vdash[x_1,x_1,x_2]\\ J_i\neq\{x_2\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_3}- \sum_{\substack{J\vdash[x_1,x_2,x_3]\\ J_i\neq\{x_2\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_3}\right) \right)\displaybreak[2]\\
+\frac{1}{x_1^2}\partial_{x_3}\left( x_1x_3\sum_{\substack{J\vdash[x_1,x_1,x_2]\\ J_i\neq\{x_2\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_3}- x_3^2\sum_{\substack{J\vdash[x_3,x_3,x_2]\\ J_i\neq\{x_2\}, \forall J_i\\g=\sum_i g_i +4 -|J|}}\frac{\prod_{J_i\in J}W_{g_i,|J_i|}(J_i)}{x_1-x_3}\right).
\end{multline}
We now state in full generality the loop equations.\\
\noindent{\bf General loop equations.} We obtain the higher order loop equations in full generality by starting with Schwinger-Dyson equalities of the same type as \eqref{eq:3pttotalder1}, \eqref{eq:3pttotalder2}, \eqref{eq:3pttotalder-higher}, but we now insert more traces of monomials of the matrix $S_2$. Doing so we obtain more relations between moments, and those relations can be translated into relations involving $W_n$ with higher values of $n$. As before, this first set of relations cannot be used to compute the $W_n$ as it does not close. To solve this problem we perform the $1/N$ expansion, which leads to a closed set of equations on the $W_{g,n}$. We display both the equations on $W_n$ and the equations on $W_{g,n}$ for $(g,n)$ such that $2g-2+n>0$. With $I_{ij}=\{x_1,\ldots,x_n\}\backslash\{x_i,x_j\}$,
\begin{multline}
0=\tilde{\mathcal{W}}_{n+2}(x_1,x_1,x_1;x_2,\ldots, x_n)+O_xW_{n}(x_1,\ldots,x_n)
-\frac{N^2}{x_1}W_{n}(x_1,\ldots,x_n)\displaybreak[1]\\
+\frac2{x_1^3}\sum_{\substack{2\le i<j\le n}}\frac{\partial^2}{\partial x_i\partial x_j}\left( \frac{(x_i-x_j)x_1x_ix_jW_{n-2}(I_{ij})-(x_1-x_j)x_1x_ix_jW_{n-2}(I_{1j})+(x_1-x_i)x_1x_ix_jW_{n-2}(I_{1i})}{\Delta(\{x_1,x_i,x_j\})}\right)\displaybreak[2]\\
+\frac{1}{x_1^2}\sum_{i\in [\![ 2,n]\!]}\partial_{x_i}\left( x_1x_i\left(\sum_{\substack{J\vdash\{x_1,x_1\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\, - \hspace{-6mm}\sum_{\substack{J\vdash\{x_1,x_i\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\right) \right)\displaybreak[2]\\
+\frac{1}{x_1^2}\sum_{i\in [\![ 2,n]\!]}\partial_{x_i}\left( x_1x_i\hspace{-6mm}\sum_{\substack{J\vdash\{x_1,x_1\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\, - \hspace{2mm}x_i^2\hspace{-12mm}\sum_{\substack{J\vdash\{x_i,x_i\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\right).
\end{multline}
For the equations on $W_{g,n}$, one has
\begin{multline}\label{eq:loop-eq-general-expanded}
0=\tilde{\mathcal{W}}_{g,n+2}(x_1,x_1,x_1;x_2,\ldots, x_n)+O_xW_{g-1,n}(x_1,\ldots,x_n)
-\frac{1}{x_1}W_{g,n}(x_1,\ldots,x_n)\displaybreak[1]\\
+\frac2{x_1^3}\sum_{\substack{2\le i<j\le n}}\frac{\partial^2}{\partial x_i\partial x_j}\left( \frac{(x_i-x_j)x_1x_ix_jW_{g,n-2}(I_{ij})-(x_1-x_j)x_1x_ix_jW_{g,n-2}(I_{1j})+(x_1-x_i)x_1x_ix_jW_{g,n-2}(I_{1i})}{\Delta(\{x_1,x_i,x_j\})}\right)\displaybreak[2]\\
+\frac{1}{x_1^2}\sum_{i\in [\![ 2,n]\!]}\partial_{x_i}\left( x_1x_i\left(\sum_{\substack{J\vdash\{x_1,x_1\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}\\g=\sum_lg_l -|J|+2}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{g_l,|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\, - \hspace{-6mm}\sum_{\substack{J\vdash\{x_1,x_i\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}\\g=\sum_lg_l -|J|+2}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{g_l,|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\right) \right)\displaybreak[2]\\
+\frac{1}{x_1^2}\sum_{i\in [\![ 2,n]\!]}\partial_{x_i}\left( x_1x_i\hspace{-6mm}\sum_{\substack{J\vdash\{x_1,x_1\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}\\g=\sum_lg_l -|J|+2}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{g_l,|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\, - \hspace{2mm}x_i^2\hspace{-12mm}\sum_{\substack{J\vdash\{x_i,x_i\}\\ \bigsqcup_{k=1}^{|J|}K_k=\{x_2,\ldots,x_n\}\backslash\{x_i\}\\g=\sum_lg_l -|J|+2}}\hspace{-12mm}\frac{\prod_{J_l\in J}W_{g_l,|J_l|+|K_l|}(J_l,K_l)}{x_1-x_i}\right).
\end{multline}
Using the family of equations \eqref{eq:loop-eq-general-expanded} one can recursively compute any $W_{g,n}$ knowing the initial conditions $W_{0,1}(x)$ and $W_{0,2}(x_1,x_2)$. Moreover, starting from these equations it should be possible to obtain a topological-recursion-like formula. Such a recursion formula would presumably resemble the Bouchard-Eynard topological recursion formula introduced in \cite{Bouchard-and-al, Bouchard-Eynard}. Establishing such a formula strongly depends on the analytic properties of the $W_{g,n}$ as well as on the geometric information contained in $W_{0,1}$ and $W_{0,2}$. Thus in the next section we try to make some of these properties explicit. We first focus on the geometry underlying the equation satisfied by $W_{0,1}$, and then describe the analytic properties of the higher order terms, by: 1. performing explicit computations and 2. studying the structure of the loop equations. A more detailed and systematic study of the analytic properties of the loop equations is postponed to further work on the product of $p$ rectangular Ginibre matrices.\\
\section{Spectral curve geometry}\label{sec:spectral-curve-geometry}
Before computing the first few solutions of the loop equations, we focus on studying the equation \eqref{eq:W01-equation} on $W_{0,1}$. Indeed, this equation defines an affine algebraic curve $\mathcal{C}$, called the spectral curve, where by affine algebraic curve we mean the zero locus in $(x,y)\in \hat{\mathbb{C}}^2 = \left(\mathbb{C}\cup \{\infty\}\right)^2$ of the polynomial
\begin{equation}
P(x,y)=x^2y^3-xy+1.
\end{equation}
This zero locus of $P$ in $\mathbb{C}^2$ is generically a (complex) codimension $1$ subset of $\mathbb{C}^2$. In particular it can be given the structure of a Riemann surface. Computing the solutions $W_{0,1}(x)$ of \eqref{eq:W01-equation} gives a parametrization of the curve away from the ramification points. One of the goals of this section is to introduce a nicer, global parametrization of the curve, called a rational parametrization. Using this parametrization allows us to simplify the resulting expressions of the solutions. Indeed, in the original $x$ variable the solutions of \eqref{eq:W01-equation} are multi-valued. However one can fix that by promoting these solutions to meromorphic functions on the full affine curve defined by equation \eqref{eq:W01-equation}, the curve being the Riemann surface of $W_{0,1}(x)$.
\subsection{Basic properties of the curve}
There are two finite ramification points in the $x$-plane: one at $(x_{r_1},y_{r_1})=(27/4,2/9)$, which is a simple ramification point, and one at $(x_{r_2},y_{r_2})=(0,\infty)$, which is a double ramification point. There is also one ramification point at infinity, $x_{r_\infty}=\infty$, which is a simple ramification point. These ramifications are found from the conditions $P(x,y)=0$ and $\partial_yP(x,y)=0$. We display the ramification profile in Fig. \ref{fig:ramif-profile}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.88]{ramification-profile-modified.pdf}
\caption{Ramification profile of the curve $\mathcal{C}$. We use colors to indicate permutations of sheets around ramification points.}
\label{fig:ramif-profile}
\end{figure}
The cut structure is readily described in \cite[Section 2.1 \& 2.2]{FLZ15}. It is pictured in Fig. \ref{fig:cut-structure}, where the lowest sheet of the figure corresponds to the \emph{physical} sheet, that is the one corresponding to the solution analytic at infinity, whose Laurent coefficients at infinity are the moments of $S_2$. The other two sheets correspond to the two other solutions of \eqref{eq:W01-equation}, which are not analytic at infinity. Indeed they have a simple ramification point at infinity.
From Fig. \ref{fig:cut-structure} we can infer that the monodromy group is generated by the transposition $\tau_1 =(12)$ (obtained by going around $x_{r_1}$ in the physical sheet) and the $3$-cycle $\tau_2=(132)$ (obtained by going around $x_{r_2}$). These permutations are represented using colors on Fig. \ref{fig:ramif-profile}.
The genus of the curve $\mathcal{C}$ can be obtained by considering the Newton polygon of the curve. The number of interior lattice points of the polygon drawn on Fig. \ref{fig:newtonpolygon} corresponds to the generic genus of the curve, that is the genus of the curve for generic enough coefficients of the polynomial $P$. However, by fine tuning the coefficients of the polynomial one could in principle obtain a curve with smaller genus; the generic genus is the maximal genus the curve can have. In our case, $P(x,y)=x^2y^3-xy+1$, the number of interior lattice points of the Newton polygon is zero, thus the genus of the curve is zero. Since the genus of the curve is zero, there exists a rational parametrization. That is, there exist two rational functions
\begin{align}
&x:\hat{\mathbb{C}} \rightarrow \hat{\mathbb{C}}\\
&y:\hat{\mathbb{C}} \rightarrow \hat{\mathbb{C}},
\end{align}
such that
\begin{equation}
x(z)^2y(z)^3-x(z)y(z)+1=0, \quad \forall z \in \hat{\mathbb{C}}.
\end{equation}
These two functions can be found by solving the following system on the coefficients of $Q_x(z),Q_y(z)$ and $P_x(z),P_y(z)$,
\begin{align}
&Q_x(z) x(z)=P_x(z)\\
&Q_y(z) y(z)=P_y(z)\\
&x(z)^2y(z)^3-x(z)y(z)+1=0,
\end{align}
where $Q_x(z),Q_y(z)$ and $P_x(z),P_y(z)$ are set to be polynomials of degree high enough for a solution to exist. Then one obtains explicitly one possible parametrization
\begin{equation}
x(z)=\frac{P_x(z)}{Q_x(z)}=\frac{z^3}{1+z}, \quad y(z)=\frac{P_y(z)}{Q_y(z)}=-\frac{1+z}{z^2}.
\end{equation}
Note that from this point of view, $y(z)$ is the analytic continuation of $W_{0,1}(x(z))$. The function $x$ can be seen as a cover $x:\mathcal{C}\rightarrow \hat{\mathbb{C}}$ of generic degree $3$ (that is, there are generically three values of $z$ corresponding to the same value of $x$). As such, the zeroes of $\mathrm{d}x$ correspond to the ramification points of the cover. One can then check that $\mathrm{d}x=0$ at $z_{r_2}=0$ and $z_{r_1}=-3/2$, corresponding to the values $x(0)=0$ and $x(-3/2)=27/4$. One also notices that the zero of $\mathrm{d}x$ at $z=0$ is a double zero, thus confirming the fact that $x_{r_2}$ is a double ramification point. Finally, since $x=27/4$ is a simple ramification point, there is another pre-image of $27/4$ in the $z$ variable, namely $x(3)=27/4$. This leads to the ramification profile shown on Fig. \ref{fig:ramif-profile}.
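These facts are immediate to verify with a computer algebra system; the following sketch (ours, assuming the sympy library) checks that the parametrization lies on the curve and exhibits the double zero of $\mathrm{d}x$ at $z=0$ and the simple one at $z=-3/2$.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
x = z**3 / (1 + z)
y = -(1 + z) / z**2
print(sp.simplify(x**2 * y**3 - x * y + 1))  # 0: the curve is satisfied
print(sp.factor(sp.diff(x, z)))              # z**2*(2*z + 3)/(z + 1)**2
print(x.subs(z, sp.Rational(-3, 2)), x.subs(z, 3))  # 27/4 and 27/4
\end{verbatim}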
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{cut-structure.pdf}
\caption{Cut structure of $W_{0,1}$.}
\label{fig:cut-structure}
\end{figure}
\subsection{Computation of $w_{0,1}$ and $w_{0,2}$}
Using this parametrization we compute the functions
\begin{equation}\label{eq:wgn-def}
w_{g,n}(z_1,\ldots,z_n)=W_{g,n}(x(z_1),\ldots, x(z_n))\prod_{i=1}^n x'(z_i)+\frac{\delta_{g,0}\delta_{n,2}x'(z_1)x'(z_2)}{(x(z_1)-x(z_2))^2}.
\end{equation}
We also denote $\tilde{w}_{0,2}(z_1,z_2)=W_{0,2}(x(z_1),x(z_2))x'(z_1)x'(z_2)$. The $w_{g,n}$ are meromorphic functions on $\mathcal{C}$ and, as such, rational functions of their variables $z_i$. Consequently, they are much easier to manipulate than the $W_{g,n}$, and their analytic properties are more transparent. For $w_{0,1}(z)$ we already know that $y(z) = W_{0,1}(x(z))$, thus
\begin{equation}
w_{0,1}(z)=y(z)x'(z)=-\frac{2z+3}{1+z}.
\end{equation}
\noindent The original functions $W_{g,n}$ can be recovered using the inverse function
\begin{equation}
z(x)=-xW_{0,1}(x)=_{\infty}-1-\frac{1}{x}-\frac{3}{x^2}-\frac{12}{x^3}-\frac{55}{x^4}+O\left(\frac{1}{x^5}\right).
\end{equation}
Indeed one has,
\begin{align}
&W_{g,n}(x_1,x_2,\ldots,x_n)=\frac{w_{g,n}(z_1,z_2,\ldots, z_n)}{x'(z_1)x'(z_2)\ldots x'(z_n)}\Bigr\rvert_{z_i=z(x_i)} \textrm{ for } (g,n)\neq (0,2), \\
&W_{0,2}(x_1,x_2)=\frac{\tilde w_{0,2}(z_1,z_2)}{x'(z_1)x'(z_2)}\Bigr\rvert_{z_1=z(x_1), z_2=z(x_2)}.
\end{align}
Note also that the corresponding coefficients of the expansion of $W_{g,n}$ at infinity, that is the $c^{[g]}_{k_1,\ldots, k_n}$, can be obtained by computing residues
\begin{equation}
c^{[g]}_{k_1,\ldots, k_n}=\underset{\{x_i\rightarrow \infty\}}{\textrm{Res}}\,x_1^{k_1}\ldots x_n^{k_n}W_{g,n}(x_1,x_2,\ldots,x_n)= \underset{\{z_i\rightarrow -1\}}{\textrm{Res}}\,x(z_1)^{k_1}\ldots x(z_n)^{k_n}w_{g,n}(z_1,z_2,\ldots,z_n).
\end{equation}
It is also true that the residue in $z$ variables can equivalently be computed at infinity. The passage from the $W_{g,n}$ to the $w_{g,n}$ functions takes into account the Jacobian of the change of variables.\\
For future convenience, we define
\begin{equation}
\sigma(z)=\frac1{x(z)}(3x(z)y(z)^2-1),
\end{equation}
where $\sigma$ relates to $\partial_yP$ since $\sigma(z)=\frac1{x(z)^2}\partial_yP(x(z),y(z))$. So in particular $\sigma$ vanishes at the ramification point $(x_{r_1},y_{r_1})=(27/4,2/9)$ and $x(z)^2\sigma(z)$ has a zero of order $2$ at $(x_{r_2},y_{r_2})=(0,\infty)$. \\
\begin{figure}
\centering
\includegraphics[scale=0.9]{NewtonPolygon-curve.pdf}
\caption{Newton polygon for the affine curve $x^2y^3-xy+1=0$. The number of $\mathbb{N}^2$ lattice points inside the polygon gives the generic genus of the curve. Here there are no points inside the polygon, so the generic genus is zero, which implies that the genus is zero.}
\label{fig:newtonpolygon}
\end{figure}\\
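The lattice-point count underlying Fig.~\ref{fig:newtonpolygon} can also be verified by brute force; a short sketch (our illustration):
\begin{verbatim}
from itertools import product

# Newton polygon of P = x^2 y^3 - x y + 1: convex hull of the exponent
# vectors (2, 3), (1, 1), (0, 0); interior lattice points give the genus
def strictly_inside(p, tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    d = (by - cy)*(ax - cx) + (cx - bx)*(ay - cy)
    a = ((by - cy)*(p[0] - cx) + (cx - bx)*(p[1] - cy))/d
    b = ((cy - ay)*(p[0] - cx) + (ax - cx)*(p[1] - cy))/d
    return a > 0 and b > 0 and 1 - a - b > 0

tri = [(0, 0), (1, 1), (2, 3)]
interior = [p for p in product(range(3), range(4)) if strictly_inside(p, tri)]
print(len(interior))  # 0, hence the generic genus is zero
\end{verbatim}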
\noindent{\bf Expression of $\tilde{w}_{0,2}$.} After multiplying \eqref{eq:W02-equation} by $x'(z_1)x'(z_2)$ and performing a few additional manipulations, we obtain
\begin{multline}\label{eq:w02-equation}
\sigma(z_1) \tilde{w}_{0,2}(z_1,z_2)=\frac{x'(z_1)}{x(z_1)^2}\partial_{z_2}\left(x(z_1) x(z_2)\frac{y(z_1)^2-y(z_1)y(z_2)}{x(z_1)-x(z_2)} \right) \\
+\frac{x'(z_1)}{x(z_1)^2}\partial_{z_2}\left( \frac{x(z_1)x(z_2)y(z_1)^2-x(z_2)^2y(z_2)^2}{x(z_1)-x(z_2)}\right).
\end{multline}
From this equation $\tilde{w}_{0,2}(z_1,z_2)$ can be computed in the variables $z_1,z_2$, so that one obtains
\begin{equation}\label{eq:explicit-tilde-w02}
\tilde{w}_{0,2}(z_1,z_2)=\frac{z_2^2 z_1^2+2 (z_2 z_1^2+ z_2^2 z_1) + z_1^2 +z_2^2 +4 z_2 z_1}{(z_2 z_1^2+z_2^2
z_1+z_1^2+z_2^2+z_2 z_1)^2}.
\end{equation}
From this expression we can recover the limiting cumulants of the product of traces,
\begin{equation}
c^{[0]}_{i,j}=\underset{z_1, z_2\rightarrow \infty}{\textrm{Res}}\, x(z_1)^ix(z_2)^j\tilde{w}_{0,2}(z_1,z_2).
\end{equation}\\
We provide the reader with the first few orders in Table \ref{tab:2pt-moments}. These numbers can be obtained easily \textit{via} symbolic computation software.
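For instance, a minimal sympy sketch computing these residues in the $z$ variable (our illustration, not the code used to produce the table):
\begin{verbatim}
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
x = lambda t: t**3/(1 + t)
num = (z1*z2)**2 + 2*(z2*z1**2 + z2**2*z1) + z1**2 + z2**2 + 4*z1*z2
den = (z2*z1**2 + z2**2*z1 + z1**2 + z2**2 + z1*z2)**2
w02 = num/den  # explicit tilde w_{0,2}

def c0(i, j):
    # iterated residues at z = -1, the pole of x(z)
    inner = sp.residue(x(z1)**i*w02, z1, -1)
    return sp.residue(x(z2)**j*inner, z2, -1)

print([c0(1, j) for j in (1, 2, 3)])  # [3, 20, 126], first row of the table
\end{verbatim}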
\begin{remark}\label{rem:guess_2pt-coeff}
Using a table of coefficients $c_{i,j}^{[0]}$ for $i,j$ running from $1$ to $20$, it is possible to make an experimental guess for the explicit form of these coefficients, namely
\begin{equation}
c_{i,j}^{[0]}=\frac{2 i j }{3(i+j)}\binom{3i}{i}\binom{3j}{j}.
\end{equation}
In particular we have checked that these numbers satisfy the recurrence equation \eqref{eq:planar-bicumulant-equation} for the first few orders. It would be interesting to prove or disprove this guess \textit{via}, for instance, combinatorial means.
\end{remark}
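Continuing the sympy sketch above, the guess can be tested directly against the residue computation:
\begin{verbatim}
from sympy import binomial, Rational

guess = lambda i, j: Rational(2*i*j, 3*(i + j))*binomial(3*i, i)*binomial(3*j, j)
assert all(c0(i, j) == guess(i, j) for i in range(1, 4) for j in range(i, 4))
\end{verbatim}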
\begin{table}[h]
\begin{center}
\begin{tabu}{l|c|c|c|c|c|c|c|c}
\diag{.1em}{.8cm}{$j$}{$i$}& 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline
1 & 3 & 20 & 126 & 792 & 5005 & 31824 & 203490 \\ \hline
2 & ** & 150 & 1008 & 6600 & 42900 & 278460 & 1808800 \\ \hline
3 & ** & ** & 7056 & 47520 & 315315 & 2079168 & 13674528 \\ \hline
4 & ** & ** & ** & 326700 & 2202200 & 14702688 & 97675200 \\ \hline
5 & ** & ** & ** & ** & 15030015 & 101359440 & 678978300 \\ \hline
6 & ** & ** & ** & ** & ** & 689244192 & 4649339520 \\ \hline
7 & ** & ** & ** & ** & ** & ** & 31549089600 \\ \hline
\end{tabu}
\caption{Table of the first few cumulants $c^{[0]}_{i,j}=\lim_{N\rightarrow \infty}\Bigl[\mathbb{E}\left(\mathrm{Tr}(S_2^i)\mathrm{Tr}(S_2^j)\right)-\mathbb{E}\bigl(\mathrm{Tr}(S_2^i)\bigr)\mathbb{E}\bigl(\mathrm{Tr}(S_2^j)\bigr)\Bigr]$.\label{tab:2pt-moments}}
\end{center}
\end{table}
\noindent{\bf Universality for $w_{0,2}$.} In this paragraph we explain in detail and \textit{a posteriori}\footnote{Since they can already easily be inferred from the explicit result of equation \eqref{eq:explicit-tilde-w02}.} the analytic properties of $\tilde{w}_{0,2}$ and $w_{0,2}$. We first argue that $\tilde{w}_{0,2}$ does not have poles at the ramification points, that is at $z=-3/2,0$. We then consider the situation when $x(z_1)\rightarrow x(z_2)$. First, starting from the above remark that $x(z)^2\sigma(z)=\partial_yP(x(z),y(z))$, we know that $x(z)^2\sigma(z)$ has a double zero at $z=0$ and a simple zero at $z=-3/2$, which makes it a potential source of poles, as this factor appears in the denominator in front of the two terms of \eqref{eq:w02-equation-with-properfactors}, see below
\begin{multline}\label{eq:w02-equation-with-properfactors}
\tilde{w}_{0,2}(z_1,z_2)=\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}\partial_{z_2}\left(x(z_1) x(z_2)\frac{y(z_1)^2-y(z_1)y(z_2)}{x(z_1)-x(z_2)} \right) \\
+\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}\partial_{z_2}\left( \frac{x(z_1)x(z_2)y(z_1)^2-x(z_2)^2y(z_2)^2}{x(z_1)-x(z_2)}\right).
\end{multline}
We start by focusing on poles at the simple ramification point $z=-3/2$. We remind ourselves that $\mathrm{d}x$ vanishes at the ramification points, and so $x'(z)$ has a simple zero at $z=-3/2$. Therefore $\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}$ is holomorphic at $z_1=-3/2$. Moreover, $x(z_1)$ and $y(z_1)$ are holomorphic at $z_1=-3/2$. As a consequence $\tilde{w}_{0,2}$ is holomorphic at $z=-3/2$ in both $z_1$ and $z_2$ (thanks to the symmetry $z_1 \leftrightarrow z_2$).
We now come back to the ratio $\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}$ for $z_1=0$. A similar argument is valid at $z_1=0$. Indeed $x'(z_1)$ has a double zero at $z_1=0$ and this cancels the double zero of $x(z_1)^2\sigma(z_1)$ at $z_1=0$. In fact one can explicitly compute the ratio and find
\begin{equation}
\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}=\frac1{1+z_1}
\end{equation}
which confirms our argument. $x(z)$ is holomorphic at $z=0$, but $y(z)$ is not: it has a double pole at $z=0$. So the term $x(z_1)x(z_2)y(z_1)^2$ could bring a simple pole at $z_1=0$. However, since $\tilde{w}_{0,2}(z_1,z_2)$ is symmetric in its arguments, if such a simple pole existed at $z_1=0$ then there would also be a simple pole at $z_2=0$. Using the fact that $x(z_2)$ has a third order zero at $z_2=0$, and $y(z_2)$ has a double pole at $z_2=0$, one can show that $\tilde{w}_{0,2}(z_1,z_2)$ is holomorphic at $z_2=0$; therefore the apparent singularity at $z_1=0$ is removable. Consequently, $\tilde{w}_{0,2}(z_1,z_2)$ is holomorphic at the ramification points $z=-3/2, 0$ in both its variables.
Other possible singularities may occur at the singularities of $x(z)$, which possesses a simple pole at $z=-1$, and when $x(z_1)\rightarrow x(z_2)$. First note that
\begin{equation}
\frac{y(z_1)^2-y(z_1)y(z_2)}{x(z_1)-x(z_2)},
\end{equation}
has a double zero when $z_1\rightarrow -1$, thus
\begin{equation}
\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}\partial_{z_2}\left(x(z_1) x(z_2)\frac{y(z_1)^2-y(z_1)y(z_2)}{x(z_1)-x(z_2)} \right)
\end{equation}
is holomorphic as $z_1\rightarrow -1$: the double pole of $\frac{x(z_1)x'(z_1)}{x(z_1)^2\sigma(z_1)}$ at $z_1=-1$ is compensated by this double zero. A similar argument applies to the term
\begin{equation}
\frac{x'(z_1)}{x(z_1)^2\sigma(z_1)}\partial_{z_2}\left( \frac{x(z_1)x(z_2)y(z_1)^2-x(z_2)^2y(z_2)^2}{x(z_1)-x(z_2)}\right),
\end{equation}
thus showing that $\tilde{w}_{0,2}(z_1,z_2)$ is holomorphic at $z_1=-1$, and by symmetry at $z_2=-1$.\\
We are now left with the situation $x(z_1)\rightarrow x(z_2)$. A first possibility is $z_1\rightarrow z_2$. In this case both ratios
\begin{equation}\label{eq:2pt-ratio}
\frac{y(z_1)^2-y(z_1)y(z_2)}{x(z_1)-x(z_2)}, \quad \frac{x(z_1)x(z_2)y(z_1)^2-x(z_2)^2y(z_2)^2}{x(z_1)-x(z_2)},
\end{equation}
are holomorphic since the denominators and numerators have simultaneous simple zeroes. So $\tilde{w}_{0,2}(z_1, z_2)$ is holomorphic when $z_1 \rightarrow z_2$. However, since $x(z)$ is a covering of degree three, there exist two (not globally defined) functions $d_1(z), d_2(z)$ that leave $x$ invariant, that is $x\circ d_i=x, \, i\in \{1,2\}$. These functions are the (non-trivial) solutions of the equation
\begin{equation}\label{eq:Deck-transf}
\frac{d(z)^3}{1+d(z)}=\frac{z^3}{1+z}.
\end{equation}
This leads to the expressions
\begin{align}
&d_1(z)=-\frac12\frac{z^2+z+z\sqrt{(z-3) (1+z)}}{1+z}, \\
&d_2(z)=-\frac12\frac{z^2+z-z\sqrt{(z-3) (1+z)}}{1+z}.
\end{align}
One can check that $x(d_1(z))=x(d_2(z))=x(z)$. In order to understand the pole structure of $\tilde{w}_{0,2}(z_1, z_2)$, one also needs to know how $y(z)$ changes when composed with one of the $d_i$. One has the simple identities, for $i\in \{1,2\}$,
\begin{equation}
y(d_i(z))=\frac{d_i(z)}{z}y(z).
\end{equation}
Using these identities, one expects poles when $z_1 \rightarrow d_{1,2}(z_2)$. Indeed, in this limit the numerators of \eqref{eq:2pt-ratio} no longer vanish, while the denominators have simple zeroes. Thus $\tilde{w}_{0,2}(z_1, z_2)$ should have double poles when $z_1 \rightarrow d_{1,2}(z_2)$. This is indeed what we find by requiring that the denominator of \eqref{eq:explicit-tilde-w02} vanishes.
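Both the invariance of $x$ under the $d_i$ and the identity for $y\circ d_i$ can be checked symbolically; a minimal sympy sketch, evaluated at a sample point to keep the radicals manageable (our illustration):
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
x = lambda t: t**3/(1 + t)
y = lambda t: -(1 + t)/t**2
s = sp.sqrt((z - 3)*(1 + z))
d1 = -(z**2 + z + z*s)/(2*(1 + z))
d2 = -(z**2 + z - z*s)/(2*(1 + z))

zv = sp.Integer(4)  # sample point away from the branch points z = -1, 3
for d in (d1, d2):
    dv = sp.simplify(d.subs(z, zv))
    assert sp.simplify(x(dv) - x(zv)) == 0        # d leaves x invariant
    assert sp.simplify(y(dv) - dv/zv*y(zv)) == 0  # y(d(z)) = (d(z)/z) y(z)
\end{verbatim}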
\begin{remark}
The functions $d_i$ have interesting properties. Indeed they permute the sheets of the covering $x:\mathcal{C}\rightarrow \hat{\mathbb{C}}$. Their behavior in a small neighborhood around a ramification point relates to the local deck transformation group of the cover. \\
Let us first focus on the double ramification point $z=0$. It is a fixed point of both $d_1$ and $d_2$, and around $z=0$ we have $d_1(z)\sim_0 e^{-\frac{2i\pi}{3}} z$ and $d_2(z)\sim_0 e^{\frac{2i\pi}{3}} z$; thus they are locally inverses of each other and generate the cyclic group $\mathbb{Z}_3$. This cyclic group is the group generated by the permutation of the sheets $\tau_2=(132)$, and it is the local deck transformation group around the ramification point at $z=0$. \\
We now consider the behavior of $d_1, d_2$ at $z=-3/2$. In this case, only $d_1$ fixes $z=-3/2$, while $d_2(-3/2)=3,\, d_2(3)=-3/2$, that is, $d_2$ exchanges the ramification point with the point above it (see Fig. \ref{fig:ramif-profile}). Note however that one has $d_1(3)=d_2(3)=-3/2$, as the two solutions $d_1,d_2$ of equation \eqref{eq:Deck-transf} merge at $z=3$ (as they do at every zero of $z\sqrt{(z-3)(1+z)}$). This merging has the following interpretation. At $z=-3/2$ two of the three sheets of the covering coincide. Therefore, there remain effectively only two sheets to be permuted, which is why $d_1$ fixes $z=-3/2$ while $d_2$ exchanges $z=-3/2$ with $z=3$. The action of the local deck transformation group at $z=-3/2$ relates to the action of $d_1$ in a small neighborhood of $z=-3/2$. Since $d_1(-3/2+\epsilon)-d_1(-3/2)\sim_{0}-\epsilon$, $d_1$ locally generates the cyclic group $\mathbb{Z}_2$ corresponding to the group generated by the permutation $\tau_1=(12)$. Similar arguments can be used to describe the local deck transformation group at the ramification point $z=\infty$.
\end{remark}
We now come to the universality statement. We expect that a slightly different object than $\tilde w_{0,2}(z_1,z_2)$ takes a universal form; this is the reason for the shift introduced in \eqref{eq:wgn-def}. The statement is that $w_{0,2}(z_1,z_2)$ should have a universal form, that is, it should be the unique meromorphic function on the sphere with a pole of order $2$ on the diagonal with coefficient $1$, and regular otherwise. Indeed if we compute $w_{0,2}(z_1,z_2)$ we obtain
\begin{equation}
w_{0,2}(z_1,z_2)=\tilde w_{0,2}(z_1,z_2)+\frac{x'(z_1)x'(z_2)}{(x(z_1)-x(z_2))^2} = \frac{1}{(z_1-z_2)^2}.
\end{equation}
We find exactly the expected universal form for a genus zero spectral curve.\\
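Reusing $x$ and \texttt{w02} from the sympy sketch above, this identity is a one-line check:
\begin{verbatim}
shift = sp.diff(x(z1), z1)*sp.diff(x(z2), z2)/(x(z1) - x(z2))**2
assert sp.simplify(w02 + shift - 1/(z1 - z2)**2) == 0
\end{verbatim}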
\noindent{\bf Comment on probabilistic interpretation of $W_{0,1}(x)$, $W_{1,1}(x)$ and $W_{0,2}(x_1,x_2)$.} As stated earlier, $W_1(x)$ is the Stieltjes transform of the eigenvalue density of the matrix $S_2$, that is
\begin{equation}
W_1(x)=\int_{-\infty}^{\infty}\mathrm{d}u\, \frac{\rho_{1}(u)}{x-u}.
\end{equation}
In particular in the large $N$ limit we have that
\begin{equation}
W_{0,1}(x)=\int_{-\infty}^{\infty}\mathrm{d}u\, \frac{\rho_{0,1}(u)}{x-u},
\end{equation}
and the computation of $W_{0,1}(x)$ uniquely determines $\rho_{0,1}(x)$. The same property is also true for the exact density, \textit{i.e.} $W_1(x)$ uniquely determines $\rho_1(x)$. This can be traced back to the Carleman condition \cite{akhiezer-moment-problem}. Indeed the Stieltjes transform $W_1(x)$ (resp. $W_{0,1}(x)$) contains the information on the whole moment sequence of $\rho_1(x)$ (resp. $\rho_{0,1}(x)$). The sequence of moments of both distributions can be shown to satisfy the Carleman condition, and thus the knowledge of the Stieltjes transform is sufficient to reconstruct the densities $\rho_1(x)$, $\rho_{0,1}(x)$. However it is known \cite{forrester2006asymptotic} that in general the truncation of the $1/N$ expansion of the resolvent does not determine a unique truncated density. Indeed, there exist, \textit{a priori}, multiple densities truncated at order $p$, $\rho^{(p)}_1(x)=\sum_{g=0}^pN^{-2g}\rho_{g,1}(x)$, with the same truncated resolvent
\begin{equation}
\sum_{g=0}^{p}N^{-2g}W_{g,1}(x) = \int_{-\infty}^{\infty}\mathrm{d}u\, \frac{\rho^{(p)}_1(u)}{x-u}.
\end{equation}
That is, the computation of the corrections to $W_{0,1}(x)$ only determines a Stieltjes class of densities\footnote{Though this is not a rigorous justification, one can look at the truncated Carleman criterion, for instance in the GUE case, and see that the Carleman criterion is indeed not satisfied order-by-order in $1/N$. Only the large $N$ and the exact criteria are satisfied.}, often referred to as a \emph{smoothed} density. This is however sufficient to compute the corrections to the average $\mathbb{E}(\phi(x))$, where $\phi(x)$ is any function analytic on the support of $\rho_{0,1}(x)$. In particular, our later computation of the first few corrections to the large $N$ resolvent does not uniquely determine the corrections $\rho_{1,1}(x), \rho_{2,1}(x),\ldots$\\
The probabilistic interpretation of $W_{0,2}$ goes as follows. $W_{2}$ is the Stieltjes transform of the connected part of the eigenvalue correlation function
\begin{equation}
W_{2}(x_1,x_2)=\int_{-\infty}^{\infty}\mathrm{d}u \mathrm{d}v \frac{\rho_2(u,v)}{(x_1-u)(x_2-v)},
\end{equation}
and
\begin{equation}
\rho_2(x_1,x_2)=\mathbb{E}\left(\sum_{i=1}^N\delta(x_1-\lambda_i)\sum_{j=1}^N\delta(x_2-\lambda_j)\right)- \rho_1(x_1)\rho_1(x_2),
\end{equation}
where the $\lambda_i$ are the eigenvalues of the matrix $S_2$. In the large $N$ limit, the centered random vector whose components are the traces of successive powers of the matrix $S_2$, $\left( \mathrm{Tr}(S_2^i)-\mathbb{E}(\mathrm{Tr}(S_2^i))\right)_{i=1}^k$, converges to a normal random vector with zero mean and covariance matrix $\textrm{Var}_{m,n}$
\begin{equation}
\textrm{Var}_{m,n}=c^{[0]}_{m,n}= \underset{z_1\rightarrow -1}{\textrm{Res}}\underset{z_2\rightarrow -1}{\textrm{Res}}x(z_1)^m x(z_2)^n w_{0,2}(z_1,z_2),
\end{equation}
where the normality of this \emph{centered} random vector at large $N$ follows from the fact that $W_n(x_1,\ldots,x_n)=O(1/N^{n-2})$, that is the higher cumulants of the limiting distribution of the family $\{\mathrm{Tr}(S_2^i)\}$ vanish at large $N$. This statement extends to the large $N$ limit of any linear statistics $A$ of the eigenvalues of the form
\begin{equation}
A=\sum_{i=1}^N a(\lambda_i),
\end{equation}
where $a$ is a sufficiently smooth function (analytic for instance), as we have
\begin{equation}
\textrm{Var}(A)=\oint_{\Gamma}\oint_{\Gamma}\frac{dx_1 dx_2}{(2i\pi)^2} a(x_1)a(x_2) W_{0,2}(x_1,x_2),
\end{equation}
with $\Gamma$ a contour encircling the cut $(0,27/4]$ of $W_{0,1}(x)$.\\
\subsection{Computation of $w_{1,1}$ and higher correlation functions}
From these data one can access the first correction to the resolvent, which in turn gives access to a first correction to the large $N$ density. The equation for $w_{1,1}(z)$ is easily obtained from the equation \eqref{eq:1pt-NLO} on $W_{1,1}(x)$. It reads
\begin{equation}\label{eq:w11-z-variable-equation}
w_{1,1}(z)=\frac{3x(z)^2}{x'(z)\partial_yP(x(z),y(z))}y(z)\tilde w_{0,2}(z,z) + \frac{x(z)^2}{\partial_yP(x(z),y(z))}\left(\partial_z y(z)-\frac{x''(z)}{2x'(z)^2}\partial_zy(z)+\frac1{2x'(z)}\partial^2_z y(z) \right).
\end{equation}
This leads to the result of the next paragraph.\\
\noindent{\bf Expression of $w_{1,1}(z)$ and analytic properties of \eqref{eq:w11-z-variable-equation}.} We obtain,
\begin{equation}
w_{1,1}(z)=\frac{z^4+7 z^3+21 z^2+24 z+9}{z^2 (2 z+3)^4}.
\end{equation}
We notice that the poles are located at $z=0$ and $z=-3/2$, which are the zeroes of $\mathrm{d}x$. However, starting from \eqref{eq:w11-z-variable-equation} one can only infer that the poles of $w_{1,1}(z)$ are located at $z=0, -3/2, -1$. Indeed, one can easily obtain from the analytic properties of $x(z), y(z)$ and $\tilde w_{0,2}(z,z)$ that the first term on the right hand side of \eqref{eq:w11-z-variable-equation} can have poles only at $z=0, -3/2$, and rule out singularities at $z=-1, \infty$. However, when considering the derivative term, that is the second term of equation \eqref{eq:w11-z-variable-equation}, one cannot rule out poles at $z=-1$. The explicit computation shows that the coefficient of these poles is zero.
\begin{remark}
Note that we can also produce a guess for the coefficients $c^{[1]}_n$. However, in order to prove it using the Schwinger-Dyson equations, we would first need to prove the guess of Remark \ref{rem:guess_2pt-coeff} for the $c_{i,j}^{[0]}$. We provide our guess for purely informative purposes,
\begin{equation}
c^{[1]}_n=\frac{(n-1)^2n}{6(3n-1)}\binom{3n}{n}.
\end{equation}
\end{remark}
~\\
\noindent{\bf Expression for higher correlations.} Using the loop equations \eqref{eq:loop-eq-general-expanded} we can compute any $n$-point resolvent recursively at any order. We illustrate this claim by providing the first few resolvents of higher order.\\
\noindent\textit{One-point case.}
\begin{align}
&w_{0,1}(z)=-\frac{2 z+3}{z+1}\\
\label{eq:W11} &w_{1,1}(z)=\frac{z^4+7 z^3+21 z^2+24 z+9}{z^2 (2 z+3)^4}\\
\label{eq:W21} &w_{2,1}(z)=\frac{9 z^9+153 z^8+1284 z^7+4227 z^6+7626 z^5+9246 z^4+8280 z^3+5220 z^2+1971 z+324}{z^3 (2 z+3)^{10}}.
\end{align}
\noindent\textit{Two-point case.}
\begin{align}
\label{eq:W02}&\tilde{w}_{0,2}(z_1,z_2)=\frac{z_2^2 z_1^2+2 (z_2 z_1^2+ z_2^2 z_1) + z_1^2 +z_2^2 +4 z_2 z_1}{(z_2 z_1^2+z_2^2
z_1+z_1^2+z_2^2+z_2 z_1)^2}\\
\label{eq:W12}&w_{1,2}(z_1,z_2)=\frac{pol(z_1,z_2)}{z_1^2 \left(2 z_1+3\right){}^6 z_2^2 \left(2 z_2+3\right){}^6},
\end{align}
with $pol(z_1,z_2)$ a symmetric polynomial of $z_1,z_2$ of degree $12$,
\begin{multline}
pol(z_1,z_2)=128 z_2^6 z_1^6+1280 z_2^5 z_1^6+6144 z_2^4 z_1^6+12288 z_2^3 z_1^6+12480 z_2^2 z_1^6+6912 z_2 z_1^6+1728 z_1^6\\
+1280 z_2^6 z_1^5+12800 z_2^5 z_1^5+55680 z_2^4 z_1^5+108672 z_2^3 z_1^5+111168 z_2^2 z_1^5+62208 z_2 z_1^5+15552 z_1^5\\
+6144 z_2^6 z_1^4+55680 z_2^5 z_1^4+215352 z_2^4 z_1^4+405000 z_2^3 z_1^4+414234 z_2^2 z_1^4+233280 z_2 z_1^4+58320 z_1^4\\
+12288 z_2^6 z_1^3+108672 z_2^5 z_1^3+405000 z_2^4 z_1^3+768312 z_2^3 z_1^3+809838 z_2^2 z_1^3+466560 z_2 z_1^3+116640 z_1^3\\
+12480 z_2^6 z_1^2+111168 z_2^5 z_1^2+414234 z_2^4 z_1^2+809838 z_2^3 z_1^2+888165 z_2^2 z_1^2+524880 z_2 z_1^2+131220 z_1^2\\
+6912 z_2^6 z_1+62208 z_2^5 z_1+233280 z_2^4 z_1+466560 z_2^3 z_1+524880 z_2^2 z_1+314928 z_2 z_1+78732 z_1\\
+1728 z_2^6+15552 z_2^5+58320 z_2^4+116640 z_2^3+131220 z_2^2+78732 z_2+19683.
\end{multline}
\noindent\textit{Three-point case.}
\begin{align}
\label{eq:W03}w_{0,3}(z_1,z_2,z_3)=\frac{24}{\left(2 z_1+3\right){}^2 \left(2 z_2+3\right){}^2 \left(2 z_3+3\right){}^2}.
\end{align}
For all the computed $w_{g,n}$ with $(g,n)\neq (0,1), (0,2)$, the poles are located at $z=0$ and $z=-3/2$. Therefore we can expect that the poles of $w_{g,n}$, for $2g-2+n>0$, are always located at $z=0$ and $z=-3/2$; however, this remains to be proven.
\begin{remark}
The computed $w_{g,n}$ are rational functions of the $z_i$. We notice that the numerators of these rational functions seem to be polynomials with positive integer coefficients. If this property holds for every $w_{g,n}$, it would be interesting to understand whether these positive integers have an enumerative (combinatorial or geometric) meaning.
\end{remark}
\section{Conclusion}
In this first paper on loop equations for matrix product ensembles, we have shown how to obtain loop equations for all resolvents of a random matrix defined as a product of two square complex Ginibre matrices, without resorting to an eigenvalue or singular value reformulation of the problem. Indeed, the eigenvalue reformulation does not yet give access to these observables. We used these loop equations to compute several terms of the expansion of the resolvents $W_n$. In particular we accessed $W_{0,2}$, giving us information on the fluctuations of linear statistics, as well as the first correction $W_{1,1}$ to $W_{0,1}$. We expect a similar technique to apply to the more general case of a product of $p\ge 2$ rectangular Ginibre matrices (complex or real), as well as to some other product ensembles, for instance the ensembles introduced in \cite{ForIps18} that are closely related to the Hermite Muttalib-Borodin ensemble. \\
Several questions are suggested by this work. The most straightforward one concerns the establishment of a topological recursion formula for the $w_{g,n}$. In the present case this topological recursion formula is certainly similar to the ones devised by Bouchard et al. and Bouchard and Eynard \cite{Bouchard-and-al, Bouchard-Eynard}. We postpone the construction of such a formula to future work. Another interesting question, oriented towards enumerative geometry, concerns the application of the same technical means to the matrix model introduced by Ambj\o rn and Chekhov in \cite{A-C2014, A-C2018}, which generates hypergeometric Hurwitz numbers. In these works the spectral curve is obtained, however this is done \textit{via} a matrix-chain approach that requires $p-1$ of the $p$ matrices to be invertible, thus ruling out the fully general case of rectangular matrices. We hope this fully general case can be tackled using our \emph{higher} derivatives technique.\\
Yet another related question is the following. Free probability provides us with tools to determine the equation satisfied by the large $N$ limit of the resolvent of a product of matrices, knowing the large $N$ limits of the resolvents of the members of the product. These tools have been generalized to some extent to the $2$-point resolvent in the work of Collins et al. \cite{collins2007second}, in order to access the fluctuations of linear statistics more systematically. One question is then the following: can we devise similar tools that would allow one to construct the full set of loop equations for a product matrix, knowing the loop equations satisfied by the members of the product (or, more realistically, the large $N$ sector of the loop equations)? \\
Finally, the loop equations can be interpreted as Tutte equations \cite{countingsurfaces, tutte1962, tutte1968}. The loop equations described in this paper can also be interpreted combinatorially, and it would be interesting to understand the more general case of maps with an arbitrary number of black vertices in such a combinatorial setting. Moreover, one would also like to understand whether it is possible to merge two sets of Tutte equations for two independent sets of maps with one type of edge, in order to obtain Tutte equations for maps with two types of edges. The combinatorial interpretation of the free multiplicative convolution described in \cite[section 3.3]{DLN} may be a useful starting point.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:introduction}
The nomenclature ``B[e] stars'' was first used by \citet{conti-1976} to designate B-type stars that present forbidden emission lines in the optical spectrum. Later, \citet{Lamers-1998} suggested the expression ``stars with the B[e] phenomenon'' to describe these objects. This phenomenon was revised by \citet{Zickgraf-1999}, who associated it with the presence in the optical spectrum of B-type stars
of: (i) intense Balmer emission lines, and (ii) permitted and forbidden emission lines of neutral and singly ionized metals, such as O\,{\sc i} and Fe\,{\sc ii}. In addition, these stars also present a strong excess in the near-IR and mid-IR, due to circumstellar (CS) dust.
However, these spectral characteristics are associated with the circumstellar medium and not with the object itself. \citet{Lamers-1998} noted a great heterogeneity among these objects, suggesting the existence of four classes of stars with the B[e] phenomenon, based on their evolutionary stage: pre-main sequence intermediate-mass stars, or Herbig Ae/B[e] stars (HAeB[e]); massive supergiant stars, or B[e] supergiants (sgB[e]); compact planetary nebulae (cPNB[e]); and symbiotic stars (SymB[e]). Thus, an important question that needs to be answered is how such different objects can have similar spectroscopic features. A possible answer is linked to the presence of a complex circumstellar environment, composed of a disk, as confirmed by polarimetric \citep{Magalhaes-1992} and interferometric measurements \citep{Domiciano-de-Souza-2011, Borges-Fernandes-2011}, or of rings \citep{Kraus-2016}. The effect of binarity cannot be discarded either.
On the other hand, there is a large number of objects whose evolutionary stage is still unknown or poorly known, due to the absence of reliable stellar parameters, including distance and interstellar extinction. This group of objects is usually called simply unclassified B[e] stars, or unclB[e] \citep{Lamers-1998}. \citet{Miroshnichenko_2007} proposed a new group of stars associated with the B[e] phenomenon, called FS CMa stars, which is mainly formed by unclB[e] objects that would be close to or still on the main sequence, in binary systems with mass exchange.
\begin{table*}
\caption{Our sample of unclassified B[e] stars and candidates observed with FEROS (our own spectra and also public ones retrieved from the ESO Science Archive Facility). }
\label{table:objects}
\centering
\begin{tabular}{cccccccccc}
\hline
&Name & IRAS ID & R.A. & Dec. & Date & JD & t$\sb{\rm exp}$ (s) & N & {\it S/N}\\
&(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) &(9) \\ \hline \hline
\multicolumn{10}{c}{\textbf{First Group}}\\\hline
\textbf{Galaxy} &
Hen 3-938 & IRAS 13491-6318 & 13 52 42.8 & -63 32 49.2 & 2005-04-18 & 2453479.3 & 600 & 1 &15\\
& & & & & & & 3600 & 1 &60\\
& & & & & 2016-06-14 & 2457554.1 & 2000 & 2 &40\\
&SS 255 & IRAS 14100-6655 & 14 13 59.0 & -67 09 20.6 & 2016-06-14 & 2457554.1 & 2400 & 2 &10\\
&Hen 2-91 & IRAS 13068-6255 & 13 10 04.8 & -63 11 30.0 & 2016-04-12 & 2457491.3 & 1100 & 1 &6\\
& & & & & 2016-08-14 & 2457615.0 & 1800 & 2$^*$ &11\\
& & & & & 2016-08-15 & 2457616.0 & 1800 & 1$^*$ &13\\
& & & & & 2016-08-16 & 2457617.0 & 1800 & 1$^*$ &15\\
& & & & & 2016-08-17 & 2457618.0 & 1800 & 1$^*$ &12\\ \hline
\textbf{SMC} &
LHA 115-N 82 & $\cdots$ & 01 12 19.7 & -73 51 26.0 & 2008-12-24 & 2454825.6 & 1800 & 1 &60\\
& & & & & 2015-07-06 & 2457210.3 & 3000 & 1$^*$ &50 \\ \hline
\textbf{LMC} &
ARDB 54 & $\cdots$ & 04 54 43.4 & -70 21 27.5 & 2014-11-24 & 2456986.3 & 900 & 2 &20\\
& & & & & 2015-12-01 & 2457358.1 & 1500 &2 &35 \\
&LHA 120-S 59 & $\cdots$ & 05 45 29.5 & -68 11 45.9 & 2015-12-06 & 2457363.2 & 2400 &2 &40\\
& & & & & 2016-12-04 & 2457727.1 &3400 &1 &35\\
& & & & & 2016-12-05 & 2457728.3 &3400 &1 &42\\ \hline \hline
\multicolumn{10}{c}{\textbf{Second Group}}\\\hline
\textbf{Galaxy} &
TYC 175-3772-1 & IRAS 07080+0605 & 07 10 43.9 & +06 00 07.9 & 2015-12-06 & 2457363.3 & 1500 & 2 &128 \\
&SS 147 & IRAS 07377-2523 & 07 39 48.0 & -25 30 28.2 & 2008-12-20 & 2454821.2 & 1800 & 1 &75\\
&CD-31 5070 & IRAS 07455-3143 & 07 47 29.3 & -31 50 40.3 & 2008-12-20 & 2454821.3 & 1500 & 2 &97 \\
& & & & & 2015-12-05 & 2457362.3 & 1200 & 2 &110 \\
& & & & & 2016-03-13 & 2457461.0 & 1200 & 2 &93 \\
& & & & & 2016-04-12 & 2457491.1 & 1100 & 2 &62\\
&V* FX Vel & IRAS 08307-3748 & 08 32 35.8 & -37 59 01.5 & 2008-12-21 & 2454822.3 & 900 & 2 &128\\
& & & & & 2015-10-12 & 2457308.3 & 400 & 2 &132\\
& & & & & 2016-03-20 & 2457468.1 & 500 & 1 &135\\
& & & & & 2016-04-12 & 2457491.1 & 400 & 2 &161\\
&BD+23 3183 & IRAS 17449+2320 & 17 47 03.3 & +23 19 45.3 & 2016-04-12 & 2457491.3 & 500 & 1 &72\\
& & & & & & & 1100 & 1 & 121\\\hline
\textbf{SMC} &
{[}MA93{]} 1116 & $\cdots$ & 00 59 05.9 & -72 11 27.0 & 2007-10-03 & 2454377.2 & 1800 & 2 &6\\
& & & & & 2007-10-04 & 2454378.2 & 1800 & 2 &6\\
\hline
\end{tabular}
\begin{tablenotes}
\item \textbf{Notes 1.} Column information: (1) name of the object; (2) IRAS identifier; (3) and (4) right ascension and declination from epoch 2000 obtained from CDS; (5) date of observation; (6) Julian Date (JD) of the start of the first exposure; (7) exposure time of each spectrum in seconds; (8) number of spectra at each observation (the public spectra obtained from ESO Science Archive Facility have an asterisk); (9) signal-to-noise ({\it S/N}) around 5500~\AA\, of each spectrum.
\item \textbf{Notes 2.} Our sample is divided in 2 groups (see the text).
\end{tablenotes}
\end{table*}
\begin{table*}
\caption{Photometric data of our sample collected from the literature.}
\label{table:objectsphotometry}
\centering
\begin{tabular}{ccccccc}
\hline
&Name & Year & U & B & V & References \\
&(1) & (2) & (3) & (4) & (5) & (6) \\ \hline \hline
\multicolumn{7}{c}{\textbf{First Group}} \\ \hline
\textbf{Galaxy} &
Hen 3-938 & 1990-1998 & 15.33 & 15.03 & 13.50 & 1 \\
& & 1990-1995 & 15.02 & 14.92 & 13.40 & 2 \\
& & 2009-10-01 & $\cdots$ & 14.78$\pm$0.04 & 13.40 & 3\\
&SS 255 & 2004-11-04 & $\cdots$ & 15.16 & 14.83 & 4 \\
&Hen 2-91 & 1980-03-01 & $\cdots$ & 15.30 & $\cdots$ & 5 \\
& & 1989-1993 & $\cdots$ & 15.20 & 14.38 & 6\\ \hline
\textbf{SMC} &
LHA 115-N 82 & 1989-07-5,7,8 & 14.24$\pm$0.02 & 14.37$\pm$0.02 & 14.25$\pm$0.02 & 7 \\
& & 1995-11-13 to 23 & 14.78$\pm$0.05 & 14.75$\pm$0.03 & 14.75$\pm$0.03 & 8 \\
& & 1999-01-08 & 14.43$\pm$0.01 & 14.35$\pm$0.01 & 14.24$\pm$0.01 & 9 \\ \hline
\textbf{LMC} &
ARDB 54 & 1968 & 12.79 & 12.93 & 12.71 & 10$^*$ \\
& & 1995-11-13 to 23 & 12.81$\pm$0.01 & 13.02$\pm$0.01 & 12.77$\pm$0.01 & 11 \\
& & 1999-01-08 & 12.80 & 12.96 & 12.71$\pm$0.01 & 9 \\
&LHA 120-S 59 & 1991-12-01 & 13.62 & 14.62 & 14.41$\pm$0.03 & 12 \\
& & 1995-11-13 to 23 & 13.37$\pm$0.03 & 14.51$\pm$0.03 & 14.02$\pm$0.02 & 11 \\ \hline \hline
\multicolumn{7}{c}{\textbf{Second Group}} \\ \hline
\textbf{Galaxy} &
IRAS 07080+0605 & 2007 & 12.30 & 12.31 & 12.15 & 13$^*$ \\
& & 1991-04-02 & $\cdots$ & 12.345 & 12.741 & 14\\
&IRAS 07377-2523 & 2007 & $\cdots$ & $\cdots$ & 12.8 & 13$^*$ \\
& & 2009-10-01 & $\cdots$ & 13.27$\pm$0.06 & 12.90$\pm$0.02 & 3\\
&IRAS 07455-3143 & 1971-1979 & 12.33 & 12.45 & 11.53 & 8 \\
& & 2009-10-01 &$\cdots$ & 12.41$\pm$0.04 & 11.51$\pm$0.02 & 3\\
& & 1989-1993 &$\cdots$ & 12.119 & 11.51 & 6\\
&V* FX Vel & 1994-02-18 & 10.898 & 10.973 & 10.795 & 15 \\
& & 2009-10-01 & $\cdots$ & 10.1$\pm$0.2 & 10.0$\pm$0.1 & 3\\
& & 1989-1993 & $\cdots$ & 9.776 & 9.724 & 6\\
&IRAS 17449+2320 & 2007 & 10.05 & 10.06 & 10.00 & 13$^*$ \\ \hline
\textbf{SMC} &
{[}MA93{]} 1116 & 1985-11-26 & 15.64 & 16.18 & 15.91 & 16 \\
& & 1999-01-08 & 14.91$\pm$0.07 & 15.56$\pm$0.07 & 15.01$\pm$0.07 & 9 \\
\hline
\end{tabular}
\begin{tablenotes}
\item \textbf{Notes.} Column information: (1) name of the object; (2) date of observation; (3)-(5) photometric data in the U, B, and V-bands; (6) references. Based on the literature, each set of data was taken at the same night.
\item \textbf{References.} (1) \citet[][Pico dos Dias Survey]{Vieira_2003}; (2) \citet{Torres_1995}; (3) \citet[][AAVSO Photometric All Sky Survey (APASS) DR9]{Henden-et-al-2015}; (4) \citet[][SPM 4.0 Catalog]{Girard-et-al-2011}; (5) \citet[][2MASS Catalog]{Cutri-et-al-2003}; (6) \citet[][The NOMAD-1 Catalog]{Zacharias-et-al-2004}; (7) \citet{Heydari-Malayeri-1990}; (8) \citet{Orsatti_1992}; (9) \citet{Massey_2002}; (10) \citet{Ardeberg_1972}; (11) \citet{Zaritsky_2004}; (12) \citet*{Gummersbach_1995}; (13) \citet{Miroshnichenko_2007}; (14) \citet[][ASCC-2.5 V3]{Kharchenko-2001}; (15) \citet{de_Winter_2001}; (16) \citet*{Massey_1989}. The asterisk means that no information about the exact dates of the photometric observations is provided, thus, we decided to assume the year of the publication.
\end{tablenotes}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{Hen3-938-balmer-art2}
\includegraphics[width=0.99\textwidth]{SS255-balmer-art2}
\includegraphics[width=0.99\textwidth]{Hen2-91-balmer-art2}
\includegraphics[width=0.99\textwidth]{N82-balmer-art2}
\includegraphics[width=0.99\textwidth]{ARDB54-balmer-art2}
\includegraphics[width=0.99\textwidth]{S59-balmer-art2}
\caption{Balmer line profiles observed in the FEROS spectra of our sample (group 1). The first five columns show H$\epsilon$, H$\delta$, H$\gamma$, H$\beta$, and H$\alpha$, respectively. The last column zooms in the H$\alpha$ wings.}
\label{fig:Balmer-lines}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{IRAS07080+0605-balmer-art2}
\includegraphics[width=1\textwidth]{IRAS07377-2523-balmer-art2}
\includegraphics[width=1\textwidth]{IRAS07455-3143-balmer-art2}
\includegraphics[width=1\textwidth]{FX-Vel-balmer-art2}
\includegraphics[width=1\textwidth]{IRAS17449+220-balmer-art2}
\includegraphics[width=1\textwidth]{MA93-1116-balmer-art2}
\caption{Continued (group 2).}
\end{figure*}
Nowadays, around 150 stars with the B[e] phenomenon have been identified in the Galaxy, Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), M31, M33, and M81 \citep{Lamers-1998, Miroshnichenko_2007, Kraus-et-al-2014, Levato_2014, Kamath_2014, Miszalski-Mikolajewska-2014a, Kamath-at-al-2017, Humphreys-et-al-2017, 2018MNRAS.480.3706K, 2019AJ....157...22H}. Just a few of them have been studied in depth and have their evolutionary stage confirmed. Thus, in this paper we decided to study a sample of 12 unclB[e] stars or candidates to the B[e] phenomenon from the Galaxy, LMC and SMC, through the analysis of photometric and high-resolution spectroscopic data.
In Sect.~\ref{sec:sample}, we describe our observations and the public data used in this study.
In Sect.~\ref{sec:Spectral-description}, we present a general description of the main spectral features identified in our sample. In Sect.~\ref{sec:Physical parameters}, we present the methodology used to derive the physical parameters of each star, the kinematics of the circumstellar environment, and the period analysis of light curves of some objects. In Sect.~\ref{sec:Discussion of the nature of our objects}, we discuss the possible nature of our objects and in Sect.~\ref{sec:Conclusions}, we summarize our conclusions.
\section{Our Sample and Observations}
\label{sec:sample}
Our sample is composed of 12 objects: 8 from the Galaxy, 2 from the LMC and 2 from the SMC, which exhibit or may exhibit the B[e] phenomenon, as seen in Table~\ref{table:objects}. We analysed photometric and high-resolution spectroscopic data of these objects in a homogeneous way. The sample can be divided in 2 groups: the first group is composed of Hen\,3-938, SS\,255, Hen 2-91, LHA 115-N82, ARDB\,54, and LHA 120-S59, for which the analysis of high-resolution spectra (public or ours) is done here for the first time; the second is composed of IRAS 07080+0605, IRAS 07377-2523, IRAS 07455-3143, V* FX Vel, IRAS 17449+2320, and [MA93] 1116, which were already studied by different authors using high-resolution spectroscopy, but for which we provide more details about their nature.
\subsection{High-Resolution Spectroscopy}
For our analysis, we obtained high-resolution spectra using the \textit{Fiber-fed Extended Range Optical Spectrograph} \citep[FEROS,][]{Kaufer-et-al-1999} attached to the 2.2-m ESO-MPI telescope, at La Silla Observatory (Chile). FEROS is a bench-mounted echelle spectrograph, which provides a resolution of 0.03~\AA/pixel (R$\sim$48000) and a spectral coverage from 3600 to 9200~\AA.
The spectra of our sample were observed in 12 different epochs between 2005 and 2016 (Table\,\ref{table:objects}). The data obtained by us was reduced with the ESO/FEROS pipeline\footnote{\textsf{https://www.eso.org/sci/facilities/lasilla/instruments/feros/tools/ DRS.html}}, except for the spectra taken in 2005, which were reduced using MIDAS routines developed by our group, following standard echelle reduction procedures. The public data obtained from the ESO Science
Archive Facility were reduced via ESO Phase 3\footnote{\textsf{http://archive.eso.org/cms/eso-archive-news/feros-pipeline-processed-data-available-through-phase-3.html}}. All spectra were corrected for heliocentric velocity, and the {\it S/N} ratio is between 6 and 160 around 5500 \AA. We co-added the spectra of stars that did not show variability during the night, in order to increase the {\it S/N}. We used standard \href{http://iraf.noao.edu/}{IRAF}\footnote{IRAF is distributed by the National
Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. See \textsf{http://iraf.noao.edu/}} tasks for normalization, cosmic ray removal and equivalent width measurements. We also used the TelFit code\footnote{\textsf{https://pypi.python.org/pypi/TelFit/1.3.2}} \citep*{Gullikson-et-al-2014} for telluric correction of our FEROS spectra.
\subsection{Photometry}
\label{sec:Photometry}
In addition to the spectroscopic data, we searched for public photometric data, in order to derive the light curve (LC) of some of our objects and identify any possible photometric variation and periodicity. We collected data from \textit{\href{http://vizier.u-strasbg.fr/viz-bin/VizieR}{VizieR}}\footnote{http://vizier.u-strasbg.fr/viz-bin/VizieR}, \textit{\href{http://www.astrouw.edu.pl/asas/}{All Sky Automated Survey}\footnote{http://www.astrouw.edu.pl/asas/}} \citep[ASAS,][]{Pojmanski-2003}, the \textit{\href{http://ogledb.astrouw.edu.pl/~ogle/CVS/}{Optical Gravitational Lensing Experiment}\footnote{http://ogledb.astrouw.edu.pl/$\sim$ogle/CVS/}} \citep[OGLE III,][]{Udalski-Szymanski-2008}, and also from the literature, as can be seen in Table \ref{table:objectsphotometry}.
\section{Spectral description}
\label{sec:Spectral-description}
From the high-resolution FEROS spectra, we described the spectral features and derived the radial velocities for all stars of our sample.
For the identification of the spectral lines, we have used the line lists provided by \citet{Moore_1945}, \citet{Thackeray_1967}, \citet*{Landaberry_2001},
NIST Atomic Spectra Database Lines Form\footnote{\textsf{http://physics.nist.gov/cgi-bin/AtData/lines/form}} and The Atomic Line List v2.04\footnote{\textsf{http://www.pa.uky.edu/$\sim$peter/atomic/}}. We also used the SpecView\footnote{\textsf{http://www.stsci.edu/institute/software$\_$hardware/specview}} identification tool for 1-D spectral visualization and analysis \citep{Busko_2000, Busko_2002, Busko_2002b, Busko_2012}.
Fig. \ref{fig:Balmer-lines} shows the Balmer lines (from H$\epsilon$ to H$\alpha$) present in the FEROS spectra. We note the different line profiles for each object, whose morphologies can be divided into four groups: (i) broad absorptions, probably of photospheric origin, as those for IRAS 07080+0605; (ii) broad absorptions superimposed with double- or triple-peaked emissions of circumstellar origin in IRAS 07080+0605, IRAS 07377-2523 and IRAS 17449+2320; (iii) pure double- or triple-peaked emissions in IRAS 07455-3143, Hen 2-91 and SS 255; and (iv) P-Cygni profiles in Hen 3-938, [MA93] 1116 and ARDB 54.
As typically seen for stars with the B[e] phenomenon, Fe\,{\sc ii} lines (permitted and forbidden ones) are the most numerous in the spectra of our stars. They also show different line profiles, like single- and double-peaked emission, shell-type, P-Cygni and inverse P-Cygni profiles.
The [O\,{\sc i}] lines are one of the main defining characteristics of the B[e] phenomenon. These lines display single- or double-peaked emission profiles in our sample, except for IRAS 07455-3143, which has no detectable [O\,{\sc i}] emission. Hence, for all objects of our sample except IRAS 07455-3143, we confirmed the presence of the B[e] phenomenon.
We identified [Ca\,{\sc ii}] lines in some stars of our sample (IRAS 07080+0605, IRAS 07377-2523, IRAS 07455-3143, Hen 3-938, [MA93] 1116, LHA 115 N-82, and ARDB 54). Together with the [O\,{\sc i}] lines, they are excellent tracers of the kinematics of the circumstellar medium, as will be described in Sect.~\ref{subsec:fl}.
Notably, some of our objects display absorption lines of He\,{\sc i}, Mg\,{\sc ii} and Si\,{\sc ii}, which are used in empirical relations to derive the spectral type of B- and A-type stars (see Sect.~\ref{subsec:Spectral Classification}).
In agreement with the literature, we confirmed the presence of Li\,{\sc i} and Ca\,{\sc i} lines in V* FX Vel and of one Ca\,{\sc i} line in IRAS 07455-3143, indicating a possible companion, as will be discussed in Sect.~\ref{subsubsec:FXVel} and ~\ref{subsubsec:IRAS07455}, respectively.
For some objects, we identified variability in comparison to the literature. In addition, based on multiple FEROS spectra, a clear variability is seen for four stars of our sample: IRAS 07455-3143, V* FX Vel, LHA 115 N-82, and LHA 120 S-59 (Fig. \ref{fig:Balmer-lines}).
The radial velocities of stars with no noticeable spectral variability were derived from [Fe\,{\sc ii}] and [O\,{\sc i}] lines (see Table~\ref{table:velocities}). These lines were chosen because they have, in general, symmetric emission line profiles. For stars with clear variability, we also measured radial velocities from detected absorption lines of He\,{\sc i}, Mg\,{\sc ii}, Si\,{\sc ii}, Ca\,{\sc i} and Li\,{\sc i} (see Table~\ref{table:velocities2}). The different radial velocities derived from permitted absorption and forbidden emission lines may also indicate binarity.
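For reference, a minimal sketch of how a radial velocity can be extracted from a single symmetric emission line via a Gaussian fit (our illustration; the actual measurements may instead rely on standard IRAF tasks):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458

def gauss(x, a, mu, sig, c):
    return a*np.exp(-(x - mu)**2/(2*sig**2)) + c

def radial_velocity(wave, flux, lab_wavelength):
    # Doppler shift of one emission line from a Gaussian fit
    # (wavelengths in Angstrom, velocity in km/s)
    p0 = [flux.max() - np.median(flux), wave[np.argmax(flux)],
          0.5, np.median(flux)]
    popt, _ = curve_fit(gauss, wave, flux, p0=p0)
    return C_KMS*(popt[1] - lab_wavelength)/lab_wavelength

# e.g. for the [O I] 6300.304 A line: rv = radial_velocity(w, f, 6300.304)
\end{verbatim}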
A detailed description of the spectral features present in the FEROS spectra of each star of our sample can be found in Appendix \ref{Apendix:Spectral Descriptions} and Table~\ref{table:atlas}.
\section{Physical parameters}
\label{sec:Physical parameters}
\begin{table}
\caption{The radial velocity of stars from our sample that do not present noticeable spectral variability or for which we have just one spectrum. The radial velocities are the average of the values obtained from [O\,{\sc i}] and [Fe\,{\sc ii}] lines.}
\label{table:velocities}
\centering
\begin{tabular}{llccc}
\hline
& Name & Date & [O\,{\sc i}] and [Fe\,{\sc ii}] \\
& & & (km~s$^{-1}$) \\ \hline\hline
\textbf{Galaxy} &
Hen 3-938 & 2005-04-18 & -22$\pm$3 \\
& & 2016-06-14 & -22$\pm$2 \\
& SS 255 & 2016-06-14 & 90$\pm$1 \\
& Hen 2-91 & 2016-04-12 & -47$\pm$4 \\
& & 2016-08-14 & -47$\pm$3 \\
& & 2016-08-15 & -46$\pm$2 \\
& & 2016-08-16 & -47$\pm$2 \\
& IRAS 07080+0605 & 2015-12-06 & 10$\pm$2 \\
& IRAS 07377-2523 & 2008-12-20 & 90$\pm$3 \\
& IRAS 17449+2320 & 2016-04-12 & -16$\pm$2 \\ \hline
\textbf{SMC} &
{[}MA93{]} 1116 & 2007-10-03 & 166$\pm$1 \\
& & 2007-10-04 & 166$\pm$1 \\ \hline
\textbf{LMC} &
ARDB 54 & 2014-11-24 & 240$\pm$6 \\
& & 2015-12-01 & 235$\pm$4 \\ \hline
\end{tabular}
\end{table}
\begin{table*}
\caption{Radial velocities, derived from permitted absorption and forbidden emission lines, for the objects with strong variability.}
\label{table:velocities2}
\centering
\begin{tabular}{llccccccc}
\hline
&Name & Date & [O\,{\sc i}] and [Fe\,{\sc ii}] & He\,{\sc i} &Mg\,{\sc ii} & Si\,{\sc ii} & Ca\,{\sc i} & Li\,{\sc i} \\
& & & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) \\ \hline\hline
\textbf{Galaxy} &
IRAS 07455-3143 & 2008-12-20 & 106$\pm$6 &140$\pm$5 &146$\pm$1 &151$\pm$3 &120$\pm$1 &$\cdots$ \\
& & 2015-12-05 & 107$\pm$7 &111$\pm$5 &122$\pm$1 &122$\pm$1 &100$\pm$1 &$\cdots$\\
& & 2016-03-13 & 103$\pm$7 &77$\pm$6 &84$\pm$1 &84$\pm$3 &54$\pm$1 & $\cdots$\\
& & 2016-04-12 & 97$\pm$8 &37$\pm$7 &42$\pm$1 &44$\pm$5 &22$\pm$1 & $\cdots$\\
&V* FX Vel & 2008-12-21 & 22$\pm$1 &44$\pm$1 &51$\pm$1 &52$\pm$2 &24$\pm$1 &28$\pm$1\\
& & 2015-10-12 & 21$\pm$4 &22$\pm$1 &29$\pm$3 &31$\pm$3 &32$\pm$1 &30$\pm$1\\
& & 2016-03-20 & 16$\pm$4 &40$\pm$1 &42$\pm$2 &43$\pm$2 &23$\pm$1 &24$\pm$1\\
& & 2016-04-12 & 19$\pm$4 &37$\pm$1 &41$\pm$2 &42$\pm$2 &$\cdots$ &30$\pm$1\\ \hline
\textbf{SMC} &
LHA 115-N82 & 2008-12-24 & 206$\pm$2 &213$\pm$7 &220$\pm$3 &224$\pm$2 &$\cdots$ &$\cdots$\\
& & 2015-07-06 & 206$\pm$2 &179$\pm$4 &175$\pm$1 &181$\pm$3 &$\cdots$ &$\cdots$ \\ \hline
\textbf{LMC} &
LHA 120-S59 & 2015-12-07 & 293$\pm$4 &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
& & 2016-12-05 & 298$\pm$3 &301$\pm$6 &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
& & 2016-12-06 & 295$\pm$3 &292$\pm$4 &$\cdots$ &$\cdots$ &$\cdots$ &$\cdots$ \\
\hline
\end{tabular}
\end{table*}
\subsection{Spectral Classification}
\label{subsec:Spectral Classification}
The determination of the spectral type and luminosity class for objects with the B[e] phenomenon is rather complicated due to the absence, in general, of photospheric lines and the contamination by circumstellar emission. Thus, we need to deal with indirect methods, which have different levels of uncertainty.
\begin{table*}
\centering
\caption{Spectral type, luminosity class, intrinsic color index, and effective temperature for some stars of our sample, obtained using the different methods described in the text.}
\label{table:SpectralType}
\begin{tabular}{llcccccc}
\hline \hline
\multicolumn{7}{c}{\textbf{Method 1 } } \\
Star& \multicolumn{3}{l}{Sp.type: L.C.} & $(B-V)_0$ & $T_{\rm eff}$ [K] & \\ \hline
Hen 3-938 &\multicolumn{3}{l}{B0-B1: I} & -0.21$\pm$0.02 &23400$\pm$2600 & \\
IRAS 07080+0605 & \multicolumn{3}{l}{A0-A1: II} & -0.01$\pm$0.02 &9700$\pm$400 & \\
IRAS 07455-3143 & \multicolumn{3}{l}{B0-B1: II/III/V} &-0.28$\pm$0.02 &26000$\pm$4000 & \\
V* FX Vel & \multicolumn{3}{l}{B8-B9: III/V} &-0.09$\pm$0.02 &11500$\pm$900 & \\
IRAS 17449+2320 &\multicolumn{3}{l}{A1-A2: II/III} &0.03$\pm$0.02 &9200$\pm$300 & \\
{[}MA93{]} 1116 &\multicolumn{3}{l}{B1-B2: II/III/V} & -0.25$\pm$0.01 &21600$\pm$3000 & \\
LHA 115-N82 &\multicolumn{3}{l}{B8-B9: II/V} & -0.09$\pm$0.02 &11200$\pm$700 & \\
& \multicolumn{3}{l}{A0-A2: III} & 0.06$\pm$0.09 &9100$\pm$1000 & \\
ARBD 54 &\multicolumn{3}{l}{A0-A1: I} & 0.01$\pm$0.02 &9500$\pm$200 & \\ \hline \hline
\multicolumn{7}{c}{\textbf{Method 2}} \\
Star & \multicolumn{3}{l}{Date} & Mg\,{\sc ii}~4482\,\AA / He\,{\sc i}~4471\,\AA & Sp.type & $T_{\rm eff}$ [K] \\ \hline
IRAS 07377-2523 &\multicolumn{3}{l}{2008-12-21} &1.08$\pm$0.05 &B8-B9 & 12000$\pm$1000\\
IRAS 07455-3143 &\multicolumn{3}{l}{2016-04-13} &0.97$\pm$0.03 &$\sim$B8 & 12500$\pm$500\\
V* FX Vel &\multicolumn{3}{l}{2008-12-22} &5.57$\pm$1.04 &$\leq$A2 & $\leq$9000 \\
&\multicolumn{3}{l}{2015-12-06} &6.53$\pm$0.31 &$<$A2 & $<$9000 \\
&\multicolumn{3}{l}{2016-03-21} &4.34$\pm$0.26 &A0-A2 & 9500$\pm$500 \\
&\multicolumn{3}{l}{2016-04-13} &4.85$\pm$1.28 &$\sim$A2 & $\sim$9000 \\
IRAS 17449+2320 &\multicolumn{3}{l}{2016-04-13} &4.29$\pm$0.73 &A0-A2 & 9500$\pm$500 \\
\hline \hline
\multicolumn{7}{c}{\textbf{Method 3}} \\
Star & \multicolumn{3}{l}{Date} & He\,{\sc i}~ 4713\,\AA / Si\,{\sc ii}~6347\,\AA & He\,{\sc i}~5875\,\AA / Si\,{\sc ii}~6347\,\AA & $T_{\rm eff}$ [K] \\ \hline
IRAS 07080+0605 &\multicolumn{3}{l}{2015-12-07} &$\cdots$ &0.62$\pm$0.30 & 10500$\pm$1000 \\
IRAS 07377-2523 &\multicolumn{3}{l}{2008-12-21} &0.23$\pm$0.09 &1.15$\pm$0.20 & 12000$\pm$1000 \\
IRAS 17449+2320 &\multicolumn{3}{l}{2016-04-13} &0.11$\pm$0.02 &0.75$\pm$0.05 &10700$\pm$1000\\
\hline
\end{tabular}
\end{table*}
One of these methods was described by \citet{Borges-Fernandes-2009}: from the observed color indices, it is possible to derive the intrinsic ones, such as $(U-B)_0$ and $(B-V)_0$, and the total extinction of each object (hereafter Method 1).
Based on empirical spectroscopic criteria, using equivalent width ratios of photospheric lines, we can also estimate the spectral classification for B- and A-type stars in the Galaxy and in the Magellanic Clouds. We chose the relation that associates the spectral type to the Mg\,{\sc ii}~4482\,\AA / He\,{\sc i}~4471\,\AA \ equivalent width ratio (hereafter Method 2), as done by \citet{Lennon-1997}, \citet{Evans-2003} and \citet[][their fig. 3]{Kraus-2008}. In order to estimate the effective temperature, we also used the He\,{\sc i}~4713\,\AA / Si\,{\sc ii}~6347\,\AA \ and He\,{\sc i}~5875\,\AA / Si\,{\sc ii}~6347\,\AA \ equivalent width ratios, as in \citet*[][their fig. 3]{Khokhlov-et-al-2017} (hereafter Method 3). Both Methods 2 and 3 are only used if these lines are of photospheric origin, i.e., they are in absorption without contamination from the wind or the circumstellar emission. The results from these different methods can be seen in Table~\ref{table:SpectralType} and a detailed analysis for each star is seen in Sect.~\ref{sec:Discussion of the nature of our objects}.
\subsection{Interstellar, circumstellar and total extinction}
\label{subsec:interstellar-extinction}
Stars with the B[e] phenomenon have a complex circumstellar structure, making it difficult to disentangle the interstellar and circumstellar contributions from the total extinction.
Therefore, in order to determine the interstellar extinction or color excess, $E(B-V)_{\text{IS}}$, of our Galactic objects, we used the diffuse interstellar band (DIB) at 5780~\AA \ and the empirical relation described by \citet{Herbig-1993}. For the objects that do not present this DIB and objects located in the SMC and LMC, we used values from \href{http://irsa.ipac.caltech.edu/applications/DUST/}{IRSA/Galactic Dust Reddening and
Extinction}\footnote{\textsf{http://irsa.ipac.caltech.edu/applications/DUST/}} \citep{Schlafly-2011}. In addition, for objects with declination of $\delta \gtrsim-30^{\circ}$, we also used \href{http://argonaut.skymaps.info/}{3D dust mapping}\footnote{\textsf{http://argonaut.skymaps.info/}} \citep{Green-et-al-2018}.
In order to derive the visual interstellar extinction, $A_V$, we assumed $A_V/E(B-V)_{IS}=$ 3.1 for Galactic stars \citep*[e.g.,][]{Cardelli-et-al-1989}, 2.74 for SMC and 2.76 for LMC stars \citep[e.g.,][]{Gordon-et-al-2003}; see Table~\ref{table:interstellar-extinction}. For the total extinction of each object, we used the relation $E(B-V)_{\text{T}} = (B-V)-(B-V)_0$, where these color indices are obtained from Tables~\ref{table:objectsphotometry} and~\ref{table:SpectralType}, respectively. For the circumstellar extinction, we used the relation $E(B-V)_{\text{CS}} = E(B-V)_{\text{T}} - E(B-V)_{\text{IS}}$. Our results can be seen in Table~\ref{table:interstellar-extinction} and in Sect.~\ref{sec:Discussion of the nature of our objects}.
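As a worked example of this bookkeeping (a sketch using the Hen 3-938 values of Tables~\ref{table:objectsphotometry} and \ref{table:SpectralType}, not part of the pipeline itself):
\begin{verbatim}
# Hen 3-938: B, V from Table 2; (B-V)_0 from Method 1; E(B-V)_IS from the DIB
B, V = 15.03, 13.50
BV0 = -0.21
EBV_IS = 1.64

EBV_T = (B - V) - BV0     # total color excess: 1.74
EBV_CS = EBV_T - EBV_IS   # circumstellar contribution: 0.10
A_V = 3.1*EBV_IS          # visual interstellar extinction: 5.08
\end{verbatim}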
\subsection{Modeling optical forbidden emission lines}
\label{subsec:fl}
Optical forbidden lines have been used to describe the kinematics of the circumstellar medium of stars
with the B[e] phenomenon \citep[e.g.,][]{2005A&A...441..289K, Aret_2016}. These lines are
optically thin, and their profiles mirror the kinematics within the line-forming regions.
But in contrast to the forbidden emission lines that are usually seen in low-density nebulae, the lines
of [O\,{\sc i}] $\lambda\lambda$5577,6300,6363 and [Ca\,{\sc ii}] $\lambda\lambda$7291,7323 are often
found associated with the high-density (quasi-)Keplerian circumstellar or circumbinary
rings or disks of the B[e] stars \citep{2010A&A...517A..30K, Kraus-2016, 2017AJ....154..186K,
2012MNRAS.423..284A, 2018A&A...612A.113T, 2018MNRAS.480..320M}. To extract the information about the
dynamics of the atomic and ionized gas around our objects and to search for indication of
circumstellar disks/rings, we model the profiles of the forbidden emission lines, focusing on the
[Ca\,{\sc ii}] and [O\,{\sc i}] lines.
Five of our objects display both sets of disk-tracing lines (Fig.\,\ref{fig:fits-OI-CaII}) whereas in
six objects only the [O\,{\sc i}] lines were detected (Fig.\,\ref{fig:fits-OI}), and one object only displays the [Ca\,{\sc ii}] lines (Fig.\,\ref{fig:fits-CaII}). The absence of the
[Ca\,{\sc ii}] lines in half of our sample could indicate that the density in their environments is
lower than in the other objects. This conclusion is also in line with the absence of [O\,{\sc i}] $\lambda$5577 in these stars, a line which requires higher densities than the [O\,{\sc i}] $\lambda\lambda$6300,6363 lines, though not as high as the [Ca\,{\sc ii}] lines, to generate measurable amounts of emission. No trend is seen regarding the presence or absence of individual sets of lines with respect to the lower metallicity of the Magellanic Cloud stars. This implies that the density structure within the circumstellar environment of these stars is not a direct consequence of the stellar mass-loss rate via a smooth wind, which is known to be metallicity dependent.
The shapes of the profiles of the forbidden lines are either single-, double-, or multiple-peaked.
For the single-peaked lines, a pure Gaussian component cannot fit the shape. The profiles require a
non-Gaussian component, which might indicate that the gas revolves around the central object.
For simplicity we utilize a pure kinematic model to reproduce the profile shapes and hence
the kinematics of the circumstellar gas. We assume that the emission originates from a thin ring
of material revolving around the central star. To compute the profile function, we need to specify two
velocity components: the component of the rotational velocity projected to the line of sight
$v_{\rm rot, los}$, and a Gaussian component $v_{\rm gauss}$. The latter combines the broadening
contributions from thermal motion, which is on the order of 1--2\,km\,s$^{-1}$, and from possible
turbulent motion of the gas, which can be on the order of a few km\,s$^{-1}$. The resulting line profile is
convolved to the spectral resolution of 6.5\,km\,s$^{-1}$ of FEROS.
In cases where a single ring is insufficient to reproduce the observed profile shape,
we add one or more rings.
In some cases where the profiles are very asymmetric, we allow for gaps in the
rings. We note, however, that especially those multi-component models might not provide unique
solutions, and other scenarios might result in similar profiles.
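A minimal numerical sketch of such a single-ring profile (our illustration; treating the width parameters as Gaussian sigmas added in quadrature is an assumption, since the exact convention is not specified here):
\begin{verbatim}
import numpy as np

def ring_profile(v_grid, v_rot_los, v_gauss, resolution=6.5):
    # thin rotating ring, broadened by a Gaussian component and by the
    # FEROS resolution (all velocities in km/s)
    phi = np.linspace(0.0, 2.0*np.pi, 2000, endpoint=False)
    v_ring = v_rot_los*np.cos(phi)         # line-of-sight ring velocities
    sigma = np.hypot(v_gauss, resolution)  # assumed added in quadrature
    prof = np.exp(-(v_grid[:, None] - v_ring[None, :])**2
                  / (2.0*sigma**2)).sum(axis=1)
    return prof/prof.max()

v = np.linspace(-100.0, 100.0, 501)
profile = ring_profile(v, v_rot_los=30.0, v_gauss=5.0)  # double-peaked
\end{verbatim}
Summing several such rings, possibly with azimuthal gaps, reproduces the multi-component fits described above.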
Our results are included in Figs.\,\ref{fig:fits-OI-CaII} - \ref{fig:fits-CaII} and the parameters
needed for the model fits are listed in Tables\,\ref{tab:velocities-1} and \ref{tab:velocities-2}. For
an easier comparison with the models, we centered the observed line profiles around zero by correcting
for the radial velocities listed in Tables\,\ref{table:velocities} and \ref{table:velocities2}. A detailed discussion for each star is provided in Sect.~\ref{sec:Discussion of the nature of our objects}.
\subsection{Period analysis} \label{sec:45}
Six stars from our sample (IRAS\,07080+0605, V* FX~Vel, [MA93] 1116, LHA 115-N 82, LHA~120-S 59, and IRAS 07377-2523) were investigated in photometric surveys (Sect.~\ref{sec:Photometry}). Two of them were excluded from the present period analysis: IRAS 07377-2523 has very poor data coverage, while the LHA 115-N 82 light curve is dominated by pronounced long-term variability on timescales exceeding the total time span (Sect.~\ref{subsubsec:N82}).
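For unevenly sampled survey photometry such as the ASAS and OGLE data, a standard tool is the Lomb-Scargle periodogram; a minimal astropy sketch (an assumption about methodology and file format, for illustration only):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# hypothetical three-column light curve: JD, magnitude, error
t, mag, err = np.loadtxt('lightcurve.dat', unpack=True)
freq, power = LombScargle(t, mag, err).autopower(
    minimum_frequency=1.0/500.0,  # periods up to ~500 d
    maximum_frequency=1.0/0.5)    # periods down to ~0.5 d
best_period = 1.0/freq[np.argmax(power)]
\end{verbatim}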
\begin{landscape}
\begin{table}
\caption{Interstellar, circumstellar, and total color excess, $E(B-V)$, and visual interstellar extinction, $A_V$, for the objects of our sample.}
\label{table:interstellar-extinction}
\centering
\begin{tabular}{llccccccccc}
\hline
&Star &EW(DIBs) &$E(B-V)_{\text{IS}}^{\text{DIBs}}$ &$E(B-V)_{\text{IS}}^{\text{IRSA}}$ &$E(B-V)_{\text{IS}}^{\text{3D}}$ &$E(B-V)_{\text{IS}}$ &$E(B-V)_{\text{CS}}$ &$E(B-V)_{\text{T}}$ &$A_V$ &$E(B-V)_{\text{lit}}$ \\
&(1)&(2) &(3) &(4) &(5) &(6) &(7) &(8) &(9) & (10)\\
\hline \hline
\textbf{Galaxy} &
Hen 3-938 &0.85 &1.64$\pm$0.02 &2.36$\pm$0.08 &$\cdots$ &1.64$\pm$0.02 &0.10$\pm$0.04 &1.74$\pm$0.02 &5.08$\pm$0.06 &0.45$^\text{a}$\\
&SS 255 &$\cdots$ &$\cdots$ &0.45$\pm$0.01 &$\cdots$ &0.45$\pm$0.01 &$\cdots$ &$\cdots$ &1.40$\pm$0.03 &\\
&Hen 2-91 &1.54 &2.92$\pm$0.02 &6.56$\pm$1.34 &$\cdots$ &2.92$\pm$0.02 &$\cdots$ &$\cdots$ &9.05$\pm$0.06
&2.34$^\text{b}$, 1.87$^\text{c}$\\
&IRAS 07080+0605 &$\cdots$ &$\cdots$ &0.14$\pm$0.01 &0.05$\pm$0.02 & 0.05$\pm$0.02 &0.11$\pm$0.04 &0.16$\pm$0.02 &0.16$\pm$0.06 & $\sim$0.10$^\text{d}$\\
&IRAS 07377-2523 &0.50 &0.98$\pm$0.02 &0.85$\pm$0.03 &0.50$\pm$0.03 &0.50$\pm$0.03 &$\cdots$ &$\cdots$ &1.55$\pm$0.09 &$\sim$0.63$^\text{e}$\\
&IRAS 07455-3143 &0.70 &1.37$\pm$0.02 &0.93$\pm$0.01 &$\cdots$ &1.15$\pm$0.22 &0.05$\pm$0.23 &1.20$\pm$0.02 &3.56$\pm$0.68 &$\sim$1.13$^\text{a}$, 1.17$^\text{f}$\\
&V* FX Vel &0.02 &0.05$\pm$0.03 &1.16$\pm$0.03 &$\cdots$ &0.05$\pm$0.03 &0.22$\pm$0.05 &0.27$\pm$0.08 &0.15$\pm$0.09 &\\
&IRAS 17449+2320$^{*}$ &0.07 &0.14$\pm$0.03 &0.07$\pm$0.01 &0.05$\pm$0.02 &0.05$\pm$0.02 &$\cdots$ & $\cdots$ &0.16$\pm$0.06&\\
\hline
\textbf{SMC} &
{[}MA93{]} 1116&$\cdots$ &$\cdots$ &0.42$\pm$0.07$^{**}$ &$\cdots$ &0.42$\pm$0.07 &0.10$\pm$0.08 &0.52$\pm$0.01 &1.15$\pm$0.22 &0.35$^\text{g}$\\
&LHA 115-N82 &$\cdots$ &$\cdots$ &0.04$\pm$0.01 &$\cdots$ &0.04$\pm$0.01 &0.17$\pm$0.03 &0.21$\pm$0.02 &0.11$\pm$0.03 &0.03$^\text{h}$, 0.12$^{\text{i}}$\\
\hline
\textbf{LMC} &
ARDB 54 &$\cdots$ &$\cdots$ &0.11$\pm$0.01 &$\cdots$ &0.11$\pm$0.01 &0.13$\pm$0.03 &0.24$\pm$0.02 &0.30$\pm$0.03 &0.15$^\text{j}$\\
&LHA 120-S59 &$\cdots$ &$\cdots$ &0.40$\pm$0.01 &$\cdots$ &0.40$\pm$0.01 &$\cdots$ &$\cdots$ &1.10$\pm$0.03 & 0.15$^\text{j}$, 0.05$^\text{k}$\\
\hline
\end{tabular}
\begin{tablenotes}
\item \textbf{Notes 1.} Column information: (1) name of the object; (2) equivalent width of the DIB at 5780~\AA~ in m\AA; (3) interstellar color excess derived from the DIB; (4) interstellar color excess taken from \href{http://irsa.ipac.caltech.edu/applications/DUST/}{IRSA}; (5) interstellar color excess taken from \href{http://argonaut.skymaps.info/}{3D dust mapping}; (6) interstellar color excess adopted in this work; (7) circumstellar color excess; (8) total color excess; (9) visual interstellar extinction, $A_V$; (10) interstellar color excess from the literature.
\item \textbf{Notes 2.} ($^*$) the total color excess is negative using the colors from Table\,\ref{table:objectsphotometry}, thus, we decided to consider only its interstellar extinction; ($^{**}$) minimum value of interstellar color excess from IRSA.
\item \textbf{References.} (a) \citet{Vieira_2011};
(b) \citet{Pereira_2003};
(c) \citet{Cidale_2001};
(d) \citet{Miroshnichenko_2007};
(e) \citet*{Chen-et-al-2016};
(f) \citet{Orsatti_1992};
(g) \citet{Wisniewski_2007};
(h) \citet{Kamath_2014};
(i) \citet{Heydari-Malayeri-1990};
(j) \citet{Levato_2014};
(k) \citet{Gummersbach_1995}.
\end{tablenotes}
\end{table}
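As a consistency note (our inference from the tabulated numbers, not a statement made elsewhere in the text), the $A_V$ values in Table~\ref{table:interstellar-extinction} follow from the adopted interstellar color excess with the standard Galactic total-to-selective extinction ratio,
\begin{equation}
A_V = R_V\,E(B-V)_{\rm IS}, \qquad R_V = 3.1,
\end{equation}
e.g.\ $3.1\times1.64 = 5.08$ for Hen 3-938.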
\begin{table}
\centering
\caption{Velocities needed for fitting the line profiles for the stars with both [O\,{\sc i}] and
[Ca\,{\sc ii}] forbidden lines. Multiple rows per line list the multiple fitting components. Parameters for the [O\,{\sc i}] $\lambda\lambda$6300,6364 lines are identical. All velocities are in units of km\,s$^{-1}$.}
\label{tab:velocities-1}
\begin{tabular}{lccccccccccccccc}
\hline
\hline
Line & & \multicolumn{2}{c}{IRAS\,07080+0605} & & \multicolumn{2}{c}{IRAS\,07377-2523} & & \multicolumn{2}{c}{Hen\,3-938} & & \multicolumn{2}{c}{LHA\,115-N\,82} & & \multicolumn{2}{c}{ARDB\,54}\\
& & $v_{\rm rot, los}$ & $v_{\rm gauss}$ & & $v_{\rm rot, los}$ & $v_{\rm gauss}$ & & $v_{\rm rot, los}$ & $v_{\rm gauss}$ & & $v_{\rm rot, los}$ & $v_{\rm gauss}$ & & $v_{\rm rot, los}$ & $v_{\rm gauss}$ \\
\hline
\protect{[O\,{\sc i}] $\lambda$5577} & & 25$\pm$0.5$^{a}$ & 10$\pm$1 & & 13$\pm$0.5 & 7.5$\pm$0.5 & & 9$\pm$0.5 & 6.0$\pm$0.5 & & --- & --- & & --- & ---\\
\protect{[O\,{\sc i}] $\lambda$6300} & & 25$\pm$0.5$^{a}$ & 10$\pm$1 & & 10$\pm$0.5 & 6.0$\pm$0.5 & & 9$\pm$0.5 & 6.0$\pm$0.5 & & 22$\pm$0.5 & 7.5$\pm$0.5 & & 13$\pm$0.5 & 2.5$\pm$0.5\\
& & & & & & & &
& & & 13$\pm$0.5 & 2.5$\pm$0.5 & & 3$\pm$0.5 & 2.5$\pm$0.5\\
\protect{[Ca\,{\sc ii}] $\lambda$7291} & & 25$\pm$0.5$^{a}$ & 10$\pm$1 & & 55$\pm$1 & 2.5$\pm$0.5 & & 7$\pm$0.5 & 6.0$\pm$0.5 & & 22$\pm$0.5 & 2.5$\pm$0.5 & & 16$\pm$0.5 & 2.5$\pm$0.5\\
& & --- & 2.5$\pm$0.5 & & & & &
& & & & & & & \\
\protect{[Ca\,{\sc ii}] $\lambda$7324} & & 25$\pm$0.5$^{a}$ & 10$\pm$1 & & 55$\pm$1 & 2.5$\pm$0.5 & & 7$\pm$0.5 & 6.0$\pm$0.5 & & 22$\pm$0.5 & 2.5$\pm$0.5 & & 16$\pm$0.5 & 2.5$\pm$0.5\\
& & --- & 2.5$\pm$0.5 & & & & &
& & & & & & & \\
\hline
\end{tabular}
\\
$^{a}$ Ring with a gap. For details see text.
\end{table}
\end{landscape}
\begin{figure*}
\centering
\includegraphics[scale=0.8]{fits_OI_Ca}
\caption{Fit (red) to the observed (black) profiles for stars that have both sets of forbidden lines,
[O\,{\sc i}] and [Ca\,{\sc ii}].}
\label{fig:fits-OI-CaII}
\end{figure*}
\begin{table}
\centering
\caption{Velocities needed for fitting the line profiles for the stars with only the [O\,{\sc i}]
forbidden lines. Multiple rows per object list the multiple fitting components. Parameters for the
[O\,{\sc i}] $\lambda\lambda$6300,6364 lines are identical. All velocities are in units of km\,s$^{-1}$.}
\label{tab:velocities-2}
\begin{tabular}{lccc}
\hline
\hline
Object & & $v_{\rm rot, los}$ & $v_{\rm gauss}$ \\
\hline
V*\,FX\,Vel & & 43$\pm$0.5 & 1$\pm$0.5 \\
& & 29$\pm$0.5 & 1$\pm$0.5 \\
& & 19$\pm$0.5 & 1$\pm$0.5$^{a}$ \\
& & 8$\pm$0.5 & 1$\pm$0.5$^{a}$ \\
\hline
SS\,255 & & 7.5$\pm$0.5 & 4.5$\pm$0.5 \\
\hline
IRAS\,17449+2320 & & 27$\pm$0.5 & 1$\pm$0.5 \\
& & 15$\pm$0.5 & 1$\pm$0.5 \\
& & --- & 1$\pm$0.5 \\
\hline
Hen\,2-91 & & 31$\pm$0.5 & 1$\pm$0.5$^{a}$ \\
& & 19$\pm$0.5 & 1$\pm$0.5$^{a}$ \\
& & 7$\pm$0.5 & 1$\pm$0.5 \\
\hline
\protect{[MA93]\,1116} & & 15$\pm$0.5 & 1$\pm$0.5$^{a}$ \\
& & --- & 2.5$\pm$0.5 \\
\hline
LHA\,120-S\,59 & & 45$\pm$0.5 & 2.5$\pm$0.5 \\
& & 28$\pm$0.5 & 2.5$\pm$0.5 \\
& & 16$\pm$0.5 & 2.5$\pm$0.5 \\
\hline
\end{tabular}
\smallskip
$^{a}$ Ring with a gap. For details see text.
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.92]{fits_OI}
\caption{Fit (red) to the observed (black) profiles for stars with only [O\,{\sc i}] forbidden lines.}
\label{fig:fits-OI}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.92]{fits_Ca}
\caption{Fits (red) to the observed (black) [Ca\,{\sc ii}] profiles for IRAS\,07455-3143, using a ring model with one (solid) and two (dashed) gaps.}
\label{fig:fits-CaII}
\end{figure}
In order to find periodic variabilities, we used the Lomb-Scargle algorithm \citep{Lomb76,Scar82}, designed for period analysis of unequally spaced data, in the implementation provided by the \textsc{astropy} package\footnote{\textsf{http://www.astropy.org/}} \citep{VaCo12,VaIz15}. With this tool, we computed periodograms for the time series in the filter with the best data coverage. The frequency powers were normalised to unity by the residuals of the least-squares fit of the data around their mean value (generalised Lomb-Scargle periodogram).
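A minimal sketch of this procedure with \textsc{astropy} (illustrative only; the array names \textsf{t}, \textsf{y}, and \textsf{dy} are placeholders for the survey time series, not data reproduced from this work):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# t: epochs (days); y: magnitudes; dy: photometric uncertainties
ls = LombScargle(t, y, dy, fit_mean=True)   # floating mean -> generalised L-S
freq, power = ls.autopower(normalization='standard')
best_period = 1.0 / freq[np.argmax(power)]  # most probable period (days)
phase = (t % best_period) / best_period     # for the phased light curve
\end{verbatim}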
The most reliable results of this study (owing to the quality of the original data) are collected in Fig.~\ref{f:main}.
As there are no previous variability studies of these stars, we decided to compare the results of our period analysis with the period reported for each star in VizieR, which is likely a period inferred automatically from the survey data. A discussion for each star can be found in Sect.~\ref{sec:Discussion of the nature of our objects}.
\begin{figure}\centering
\includegraphics[scale=.73,clip,trim=3mm 3mm 3mm 0mm]{pg00-4s2g}
\caption{The results of the period analysis of three stars, one per row. The star, filter, and photometric survey are indicated at the top of the left panels, which show the light curves. The most probable period (labelled at the bottom of the right panels) was used to fit a harmonic curve to the data points (markers) on the left. The phased data points are plotted on the right.}\label{f:main}
\end{figure}
\section{The (possible) nature of our objects}
\label{sec:Discussion of the nature of our objects}
Combining the information obtained from the spectral features and their variabilities, the stellar parameters and extinction, the forbidden-line dynamics, and the period analysis, we are now in a position to discuss the possible nature of our sample stars. To support the most plausible evolutionary state of the individual objects, we plot in Fig.~\ref{fig:traks-solar-lmc-art} the HR diagrams including evolutionary tracks for solar, SMC, and LMC metallicities (top row), and pre-main sequence evolutionary tracks (bottom row) for solar and SMC metallicities. We present the objects following the outline of Table~\ref{table:objects}, embedding and combining our results with what is known from the literature. In addition, a summary of the physical parameters is provided in Table~\ref{table:Physical-parameters}, and the SEDs of the objects, displaying clear infrared excess emission, are compiled in Fig.~\ref{fig:SEDs-model}.
\subsection*{First group}
\subsection{Galactic stars}
\subsubsection{Hen 3-938}
Hen 3-938 (Hen 938, PDS 67, IRAS 13491-6318) was catalogued by \citet{Allen-Swings-1976} as a peculiar Be star, because it exhibited NIR excess and forbidden emission lines in the optical spectrum. These authors also reported the presence of TiO bands in absorption. \citet{Gregorio-Hetem_1992}, through the analysis of medium-resolution spectra (0.4~\AA/pixel), classified it as a probable Herbig Ae/Be star, also reporting the presence of P-Cygni profiles in the Balmer lines. Later, \citet{Miroshnichenko_1999} analysed photometric and spectroscopic data (R$\sim$1000) and determined a B0 spectral type for this object, also reporting the presence of [O\,{\sc i}] and Fe\,{\sc ii} emission lines, and He\,{\sc i} lines with P-Cygni profiles, but no TiO bands. These authors also suggested that Hen 3-938 has more similarities with B[e] supergiants than with Herbig Ae/Be stars, but that it might be a star evolving towards the planetary nebula stage, due to its similarities with HD 51585, a post-AGB star \citep{Arkhipova-1992}.
From Method 1, we classified Hen 3-938 as a \mbox{B0-1\,I} star, probably being a B[e] supergiant, in agreement with \citet{Miroshnichenko_1999}. This classification is reinforced by the high extinction that we derived from our DIB, and the high luminosity, determined from the distance of $\sim 6.2$\,kpc, obtained from Gaia DR2, even considering the high uncertainty (see Table~\ref{table:Physical-parameters}). In addition, from the HR diagram (left panel, Fig.~\ref{fig:traks-solar-lmc-art}), we derived $M_{\rm ZAMS} \sim$ 20 M$_\odot$.
This scenario of a hot and luminous star is also favoured by our FEROS spectra, where we could identify the presence of Balmer, Fe\,{\sc ii}, and especially He\,{\sc i} lines showing P-Cygni profiles (Sect.~\ref{sec:35}).
The narrow single-peaked profiles of the forbidden lines might contain a slight rotation component. For the [O\,{\sc i}] lines we find $v_{\rm rot, los} = 9\pm 0.5$\,km\,s$^{-1}$, whereas for the [Ca\,{\sc ii}] lines it is, at $v_{\rm rot, los} = 7\pm 0.5$\,km\,s$^{-1}$, slightly lower but still comparable. A Gaussian component of $v_{\rm gauss} = 6\pm 0.5$\,km\,s$^{-1}$ is needed for all lines (see Table~\ref{tab:velocities-1} and Fig.~\ref{fig:fits-OI-CaII}). The lines from 2016 are very similar, but less intense (see Fig.\,\ref{fig:Hen-3-938-Bep-art}). If the star has a Keplerian disk or ring, the system might be seen close to pole-on.
Our spectra taken in 2005 and 2016 show a variation in the line intensities, which has also been reported in the literature. Such variability is not common in B[e] supergiants, but we could not find any signature of a companion. On the other hand, we cannot completely discard a pre-main sequence scenario for this object, as proposed by \citet{Gregorio-Hetem_1992} and \citet{Vieira_2003}, who also suggested that Hen 3-938 could be associated with a star-forming region in Centaurus.
\subsubsection{SS 255}
Not much is known about SS 255 (IRAS 14100-6655, 2MASS J14135896-6709206). It was discovered as an H$\alpha$ emission-line star by \citet{Stephenson-and-Sanduleak-1977}, who listed it as number 255 in their catalog. Recently, \citet{Miszalski-Mikolajewska-2014a} suggested that SS 255 strongly resembles a B[e] star; however, its NIR colors show no evidence of hot dust. Thus, due to its similarities with SS73 24, these authors suggested classifying SS 255 as a Be star.
Due to the scarcity of photometric measurements, we could not apply Method 1 to derive its spectral type and luminosity class. However, based on the presence of He\,{\sc i} lines in emission, its spectral type can be B2 or earlier \citep[e.g.,][]{Zickgraf-1986,Miroshnichenko-2007A}. Thus, for our analysis we assume a B2 spectral type and a mean effective temperature of 19500$\pm$2500\,K.
Due to the lack of knowledge about its luminosity class, we assume a mean value for the bolometric correction ($-1.68$), considering the values provided by \citet{Humphreys-McElroy-1984}.
SS 255 is the most distant Galactic object of our sample, at $\sim$10.3\,kpc (although this value is not very reliable, due to the high uncertainty of its parallax), but it has a relatively low extinction.
In addition, the [O\,{\sc i}] lines are narrow, symmetric, and extremely intense, and are modeled with a single rotating ring ($v_{\rm rot, los} = 7.5\pm 0.5$\,km\,s$^{-1}$, $v_{\rm gauss} = 4.5\pm 0.5$\,km\,s$^{-1}$), see Table~\ref{tab:velocities-2} and Fig.~\ref{fig:fits-OI}. If these lines originate from a gaseous disk, it must be seen close to pole-on.
\begin{landscape}
\begin{figure}
\centering
\begin{tabular}{@{}ccc@{}}
\includegraphics[width=78mm]{traks-solar-artV3} &
\includegraphics[width=78mm]{traks-smc-art} &
\includegraphics[width=78mm]{traks-lmc-art} \vspace{-5.2mm} \\
\includegraphics[width=78mm]{pre-main-sequence-solar}&
\includegraphics[width=78mm]{pre-main-sequence-smc}
\end{tabular}
\caption{Position of the stars of our sample in the HR diagram, considering (a) evolutionary tracks with (dashed lines) and without (solid lines) rotation for solar (left panel) and SMC metallicities (middle panel) from \citet{Georgy-2013}, and without rotation for LMC metallicity (right panel) from \citet{Schaerer-et-al-1993}; and (b) pre-main sequence tracks for similar metallicities to the solar and SMC ones from \citet{Bernasconi-Maeder-1996} in the left and middle panels, respectively.}
\label{fig:traks-solar-lmc-art}
\end{figure}
\end{landscape}
Even considering the high uncertainties for SS 255, we estimated its parameters for the first time. Based on its position in the HR diagram, this star may have a ZAMS mass between 5 and 7 M$_\odot$, being at the end of the main sequence or close to it (left panel, Fig.~\ref{fig:traks-solar-lmc-art}). However, based on the intense emission lines in our spectra, especially nebular lines like [O\,{\sc ii}], [S\,{\sc ii}], and [N\,{\sc ii}] (Sect.~\ref{sec:36}), a scenario in which SS 255 is a post-AGB star not hot enough to excite [O\,{\sc iii}] lines seems very favourable.
\subsubsection{Hen 2-91}
The nature of Hen 2-91 (SS73 39, IRAS 13068-6255, MN7, THA 17-18) is very uncertain. Some articles have classified it either as a planetary nebula \citep*{Webster-1966,Henize-1967, Allen-1973a,Frew-2013}, an M star with emission \citep{MacConnell-1983, Bidelman-1998}, an emission-line star \citep{The-1962, Weaver-1974}, a peculiar Be star \citep{Allen-Swings-1976,Allen-1982}, a B[e] star \citep{Lamers-1998}, or as a FS CMa candidate \citep{Miroshnichenko-2007A}. Hen 2-91 is also in the catalogues of OB stars \citep{Reed-2003}, and evolved massive stars \citep*{Gvaramadze-2010}.
The very intense and double-peaked Balmer lines may indicate an extended nebula or even a circumstellar disk. The FEROS spectra taken on five different nights in 2016 (over a period of 4 months) do not show noticeable variations (Sect.~\ref{sec:38}).
No significant variations are seen in the [O\,{\sc i}] line profiles of Hen\,2-91 observed between April
and August 2016. The wiggly profiles suggest at least three ring components
($v_{\rm rot, los} = 31\pm 0.5$; 19$\pm$0.5; and 7$\pm$0.5\,km\,s$^{-1}$) of which two
are asymmetric. We implemented a large gap, symmetric around the blue peak, in the high-velocity ring,
excluding velocities smaller than $-5.4$\,km\,s$^{-1}$, and a gap, symmetric around the red peak, in
the medium-velocity ring, excluding velocities larger than 17.2\,km\,s$^{-1}$. No noticeable Gaussian
component is needed for the fit (Table~\ref{tab:velocities-2} and Fig.~\ref{fig:fits-OI}).
Hen 2-91 is located at $\sim$ 5\,kpc, but this measurement is very uncertain. Its interstellar extinction, based on the DIB present in our spectra, is high. However, the value provided by IRSA is much higher and seems to be very imprecise. Thus, we decided to adopt the value obtained from our spectra.
Unfortunately, there are only a few photometric measurements available in the literature, and no diagnostic absorption line is visible in our FEROS spectra. Thus, we could not apply any of the three methods to obtain the physical parameters of Hen 2-91.
Based on the BCD method \citep{Barbier-Chalonge1941, Chalonge-Divan_1952}, \citet{Cidale_2001} derived $T_{\rm eff} =$ 32500$\pm$2600\,K and B0 type for Hen 2-91, which are not in agreement with the spectral features that we have identified, especially the presence of He\,{\sc i} lines in absorption, and the absence of He\,{\sc ii}, Si\,{\sc iv} and other high-ionization lines.
Thus, given the high uncertainty in its distance and extinction, and the lack of any reliable stellar parameter, a deeper discussion about the nature of this star is not possible.
\begin{table*}
\centering
\caption{Physical parameters of the stars of our sample. }
\label{table:Physical-parameters}
\begin{tabular}{llcccccc}
\hline \hline
& Star &Distance$^{*}$ &BC$^{**}$ &$M_{\rm bol}$ &$T_{\rm eff}$ &$\log (L/L_\odot$) &$R/R_\odot$ \\
& &(pc) &(mag) &(mag) &(K) & & \\
\hline \hline
\multicolumn{8}{c}{\textbf{First group}} \\ \hline
\textbf{Galaxy} &
Hen 3-938 &6228$^{+1409}_{-1010}$ &-2.20 &-7.75$\pm$0.58 &23400$\pm$2600 &5.00$\pm$0.20 &19$\pm$3\\
&SS 255 &10321$^{+2524}_{-1818}$ &-1.68 &-3.31$\pm$0.45 &19500$\pm$2500 &3.22$\pm$0.33 &$\sim$4\\
\hline
\textbf{SMC} &
LHA 115-N82 & $18.95\pm0.07${$^{***}$} &-0.13 &-4.69$\pm$0.30 &9100$\pm$1000 &3.77$\pm$0.28 &31$\pm$7\\
\hline
\textbf{LMC} &
ARDB54 &$18.22\pm0.05${$^{***}$} &-0.27 &-6.08$\pm$0.13 &9500$\pm$200 &4.33$\pm$0.13 &54$\pm$6\\
&LHA 120-S59 & &-1.55 &-6.85$\pm$0.07 &17500$\pm$500$^\text{L}$ &4.63$\pm$0.15 &23$\pm$3\\
\hline \hline
\multicolumn{8}{c}{\textbf{Second group}} \\ \hline
\textbf{Galaxy} &
IRAS 07080+0605 &535$^{+15}_{-14}$ &-0.29 &3.06$\pm$0.12 &10100$\pm$700 &0.67$\pm$0.18 & $\sim$1\\
&IRAS 07377-2523 &4100$^{+ 521}_{-418}$ &-0.59 &-2.40$\pm$0.33 &12000$\pm$1000 &2.86$\pm$0.21 &6$\pm$1\\
&IRAS 07455-3143 &12262$^{+3154}_{-2327}$ &-0.59 &-8.07$\pm$1.12 &12500$\pm$1000 &5.12$\pm$0.20 &78$\pm$12 \\
&V*FX Vel &353$^{+6}_{-6}$ &-0.17 &2.73$\pm$0.13 &9500$\pm$500 &0.81$\pm$0.13 &$\sim$1\\
&IRAS 17449+2320 &740$^{+22}_{-21}$ &-0.14 &0.35$\pm$0.07 &9350$\pm$400 &1.75$\pm$0.11 &$\sim$3\\
\hline
\textbf{SMC} &
{[}MA93{]} 1116 &$18.95\pm0.07${$^{***}$} &-1.80 &-5.99$\pm$0.27 &21600$\pm$3000 &4.29$\pm$0.26 &10$\pm$2\\
\hline
\end{tabular}
\begin{tablenotes}
\item \textbf{Notes.} ($^{*}$) Distances from Gaia DR2 for Galactic stars provided by \citet{Bailer-Jones}. We caution that the values for objects further away than 4 kpc have high uncertainties. These values might considerably change with Gaia DR3; ($^{**}$) Bolometric correction ($BC$) from \cite{Humphreys-McElroy-1984};
($^{***}$) distance modulus for SMC taken from \cite{Graczyk-et-al-2014}, and for LMC taken from \cite{Udalski-et-al-1998}. $^\text{(L)}$ from \cite{Levato_2014}.
\end{tablenotes}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\linewidth, clip]{LHA_155_N82_ASAS-OGLEII_band_V_band_I}
\caption{Light curves of LHA 115-N82 obtained from ASAS and OGLE III surveys, taken from 2001 until 2010, in the V- and I-bands. The vertical dashed lines indicate the FEROS spectra taken in 2008.}
\label{fig:LHA_155_N82_ASAS-OGLEII_band_V_band_I}
\end{figure}
\subsection{SMC star}
\subsubsection{LHA 115-N82}
\label{subsubsec:N82}
LHA 115-N 82 (LIN 495, N82, 2dFS 2837) was originally identified as a nebula with H$\alpha$ in emission by \citet{Henize-1956}, who gave the designation N82 to this object. In the Catalog of Stars with Emission Lines and Planetary Nebulae of \citet{Lindsay-1961}, LHA 115-N 82 was listed as LIN 495. Its evolutionary stage was first suggested by \citet{Heydari-Malayeri-1990}, who, based on photometric and spectroscopic data, classified it as a B[e] supergiant of spectral type B7-8. Later, \citet{Evans_2004}, in a low-resolution (R$\backsimeq$1500) spectroscopic survey of the SMC (the 2dF survey of the SMC), assigned this star a possibly composite spectrum: AF/B[e]. Recently, \citet{Kamath_2014}, based on the analysis of low-resolution spectra, classified this object as a post-AGB/RGB candidate.
Our FEROS spectra taken in 2008 and 2015 show strong variability in the line profiles, radial velocities and {\it V/R} (for the profiles with double peaks), as described in Sect.~\ref{sec:310}.
Based on Method 1, using photometric measurements taken in 1989-1999 (see Table~\ref{table:objectsphotometry}), we could derive two possible sets of parameters for LHA 115-N82, a late-B or an early-A type star. Based on the very weak or even absent He\,{\sc i} lines and the strong Mg\,{\sc ii} lines in absorption seen in our FEROS spectra taken in 2008 and 2015, the classification as an early-A star seems to be more favourable. Thus, assuming the distance of the SMC and the low extinction obtained from IRSA, we derived the parameters for LHA 115-N82 (see Table~\ref{table:Physical-parameters}).
From the HR diagram, considering the evolutionary tracks for SMC stars, we note the post-main sequence nature of this star, with $M_{\rm ZAMS}$ of 7$-$9\,M$_\odot$ (middle panel, Fig.~\ref{fig:traks-solar-lmc-art}).
On the other hand, the light curves of LHA 115-N82 in the V- and I-band (Fig.~\ref{fig:LHA_155_N82_ASAS-OGLEII_band_V_band_I}) show a long-term increase of brightness. In the V-band, there is an increase from $\sim$14 mag in 2002 to $\sim$13.2 mag in 2010. In the I-band, the star goes from $\sim$13.5 mag around 2001 to $\sim$12.75 mag in 2009. In addition, due to the low dispersion of the data in the I-band, we can see two minima around 2003 and 2005, followed by two brightness increases. This behaviour of the light curves is similar to that seen in Luminous Blue Variables (LBVs) during their eruptions, especially as recently reported for R40, also an SMC star \citep{Campagnolo-et-al-2018}.
The effect of these eruptions is seen in the spectra of LBVs as a transition from a B type during quiescence (between eruptions) to an A or even late-F type, depending on the strength of the eruption and the amount of ejected matter, which forms a pseudo-photosphere.
For Method 1, we used photometric data taken $\sim$10 years before the ASAS and OGLE III data, indicating an even lower brightness for LHA 115-N82 (14.25-14.75 mag in the V-band) and probably a higher effective temperature during the quiescent stage, as a B-type star. However, our spectrum taken in 2008, during its brightest phase, does not show a noticeable variation, remaining that of an early A-type star, which indicates that the 2005 eruption was not particularly strong. Due to the absence of photometric data later than 2010, we cannot confidently say at which stage (quiescence or eruption) the 2015 spectra were observed. However, based on their characteristics, it seems that this star is still under the effect of an eruption. In addition, the presence of Paschen lines in absorption may also indicate a cool and dense photosphere, typical of LBVs after eruption \citep{Mehner-et-al-2013}.
The behaviour of the Balmer and Fe\,{\sc ii} lines is also interesting, showing a more intense blue emission and inverse P-Cygni profiles in 2008, and a more intense red emission and P-Cygni profiles in 2015 (see Fig.~\ref{fig:Bep-LHA 115-N82-a}). The absorption component of these lines shows radial velocity variations, being blueshifted in 2015 and redshifted in 2008. This is also seen in the He\,{\sc i} absorption profiles (Fig.~\ref{fig:Bep-LHA 115-N82-a}), indicating the presence of rotating absorbing material around the star.
The two sets of observations of LHA\,115-N\,82 reveal similar profiles of the forbidden lines with no significant
variability, indicating a stable emitting region. Due to their better quality, we show the fits to the 2015 data. The [Ca\,{\sc ii}] lines
suffer from a low S/N and possible remnants of telluric contamination. The profiles might be modeled
with a single ring ($v_{\rm rot, los} = 22\pm 0.5$\,km\,s$^{-1}$, $v_{\rm gauss} = 2.5\pm
0.5$\,km\,s$^{-1}$). The same ring, but with a higher Gaussian component of $v_{\rm gauss} = 7.5\pm
0.5$\,km\,s$^{-1}$ is seen in the profiles of the [O\,{\sc i}] lines. However, these lines require a
second, lower velocity ring component ($v_{\rm rot, los} = 13\pm 0.5$\,km\,s$^{-1}$, $v_{\rm gauss} =
2.5\pm 0.5$\,km\,s$^{-1}$) for a reasonable fit, see Table~\ref{tab:velocities-1} and Fig.~\ref{fig:fits-OI-CaII}. The decreasing velocity with decreasing density is typical for Keplerian disks or rings.
This scenario is puzzling, because LHA 115-N82 is not massive enough to be an LBV. In addition, LBVs do not show [O\,{\sc i}] lines, but LHA 115-N82 does. Thus, we may be observing a post-main sequence B[e] star showing instabilities that cause eruptions, like an ``LBV impostor''. More simultaneous spectroscopic and photometric data are necessary for a better comprehension of this star.
\subsection{LMC stars}
\subsubsection{ARDB 54}
ARDB 54 (SOI 720) was first observed by \citet{Ardeberg_1972}, who, based on UBV fluxes, suggested that it might be a multiple star or an emission-line object. Later, \citet*{Stock_1976}, from an objective prism spectrum, classified ARDB 54 as a B9 Ib star. Recently, \citet{Levato_2014} analysed medium-resolution spectra and photometric data and classified it as a B[e] supergiant with an effective temperature of 10000\,K and $\log$($L$/L$_\odot$) of 4.57.
Our FEROS spectra taken in 2014 and 2015 do not show variability (Sect.~\ref{sec:311}). However, in comparison with the spectra taken by \citet{Levato_2014} in 2011, there is a noticeable spectral variation.
The absence of Paschen lines, weak forbidden lines and weak IR excess probably indicate a small amount of ionized circumstellar material.
The spectra of ARDB\,54 are very noisy and we limit our model attempts to the profiles seen in 2015.
The [Ca\,{\sc ii}] $\lambda$7291 line appears double-peaked, which can be approximated with a single
ring with $v_{\rm rot, los} = 16\pm 0.5$\,km\,s$^{-1}$. The same model is used for the [Ca\,{\sc ii}]
$\lambda$7324 line, but this line is very noisy and contaminated on its blue edge, possibly by a cosmic
ray, so that this fit can only be regarded as suggestive. The triangular shape of the [O\,{\sc i}] lines
implies multiple components. We achieved a reasonable fit using two rings with $v_{\rm rot, los} = 13\pm
0.5$\,km\,s$^{-1}$ and $v_{\rm rot, los} = 3\pm 0.5$\,km\,s$^{-1}$. All lines have the same Gaussian
contribution of $v_{\rm gauss} = 2.5\pm 0.5$\,km\,s$^{-1}$. In total, this object has three rings traced
by the forbidden lines (Table~\ref{tab:velocities-1} and Fig.~\ref{fig:fits-OI-CaII}). The decrease in rotation velocity with decreasing density hints at a Keplerian disk scenario.
In our analysis, using Method 1, we concluded that ARDB 54 is actually an A0-1 I ($T_{\rm eff} \sim$ 9500\,K) star, which is in agreement with the spectral features of our FEROS spectra, especially the weakness or absence of He\,{\sc i} lines. Assuming the LMC distance and the low extinction derived from IRSA, we derived the physical parameters of ARDB 54. From the HR diagram, using the evolutionary tracks for LMC stars from \citet{Schaerer-et-al-1993}, we confirm that this star is an A[e] supergiant with $M_{\rm ZAMS}$ = 10$-$12 M$_\odot$ (Fig.~\ref{fig:traks-solar-lmc-art}). Thus, ARDB 54 is the third A[e] supergiant identified so far, and the first in the LMC. The other two A[e] supergiants are the SMC star LHA 115-S23 \citep{Kraus-2008} and the Galactic object HD 62623 \citep{Meilland-et-al-2010}.
\subsubsection{LHA 120-S59}
LHA 120-S59 (S59, AL 415, OGLE LMC-LPV-83573) was first identified by \citet{Henize-1956} as an H$\alpha$ emission star. \citet{Gummersbach_1995}, based on spectroscopic and photometric data, suggested that LHA 120-S 59 is a B[e] supergiant of B5II spectral type and an effective temperature of 14000\,K. Recently, \citet{Levato_2014}, based on the analysis of medium-resolution spectra and OGLE photometric data, also suggested a B[e] supergiant classification, but with a B2-3 spectral type, an effective temperature of 19000\,K, and $\log(L/$L$_\odot$) of 4.64. These authors also reported radial velocity variations associated with a variable ($B-V$) color and UV excess, suggesting the presence of a companion. From our FEROS spectra taken in 2015 and 2016, we also noticed line profile and radial velocity variations (Sect.~\ref{sec:312}).
If this scenario is correct, the orbital period might be the period of 83.6 d inferred from period analysis (Fig.~\ref{f:main}), which is very close to the VizieR value (83.4 d).
Concerning the physical parameters of this star, we decided to assume a mean value for the effective temperature of 17500$\pm$500\,K, as provided in the literature \citep{Levato_2014}, due to the absence of diagnostic lines in absorption (Mg\,{\sc ii} and Si\,{\sc ii}) and the impossibility of converging on a set of parameters using Method 1. Assuming the distance of the LMC and the extinction for this object as derived by IRSA, we obtained its bolometric correction, bolometric magnitude, luminosity, and radius, as listed in Table~\ref{table:Physical-parameters}. From the HR diagram, considering the evolutionary tracks for LMC stars from \citet{Schaerer-et-al-1993}, we classify LHA 120-S59 as a B[e] supergiant with $M_{\rm ZAMS} =$ 12$-$15 M$_\odot$ (Fig.~\ref{fig:traks-solar-lmc-art}, right panel).
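For transparency, the chain just described uses the standard relations (sketched here for reference; the solar zero points $M_{\rm bol,\odot}=4.74$ and $T_{\rm eff,\odot}=5772$\,K are our assumed calibration, consistent with the tabulated values):
\begin{equation}
M_{\rm bol}=m_V-A_V-\mu+BC, \quad
\log\frac{L}{L_\odot}=\frac{4.74-M_{\rm bol}}{2.5}, \quad
\frac{R}{R_\odot}=\left(\frac{L}{L_\odot}\right)^{1/2}\left(\frac{T_{\rm eff}}{5772\,\mathrm{K}}\right)^{-2},
\end{equation}
where $\mu$ is the distance modulus. For LHA 120-S59, $M_{\rm bol}=-6.85$ indeed gives $\log(L/L_\odot)\approx4.6$ and $R\approx23\,R_\odot$, matching Table~\ref{table:Physical-parameters}.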
The high temperature, IR excess and broad Balmer lines (the broadest of our sample) suggest that this star probably has a large amount of ionized circumstellar gas.
The [O\,{\sc i}] lines can be modeled with a combination of at least three rotating rings ($v_{\rm rot, los} = 45\pm0.5$; $28\pm0.5$; and $16\pm0.5$\,km\,s$^{-1}$), each with a Gaussian component of $v_{\rm gauss} = 2.5\pm 0.5$\,km\,s$^{-1}$ (see Table~\ref{tab:velocities-2}), supporting the picture of a Keplerian disk around LHA 120-S59. We show the fit to the
data taken in 2016 in Fig.~\ref{fig:fits-OI}, but the same model reproduces the line profiles in the spectra taken in 2015.
In addition, molecular emission has also been detected in its environment, which is in line with the classification of this star as a B[e] supergiant \citep{2013A&A...558A..17O}.
\subsection*{Second group}
\subsection{Galactic stars}
\subsubsection{IRAS 07080+0605}
IRAS 07080+0605 (TYC 175-3772-1, HBHA 717-01) was detected by \citet{Kohoutek-1999} in a survey for stars with H$\alpha$ in emission. Later, \citet{Miroshnichenko_2007} analysed high-resolution spectra (R$\sim$70000) and identified the presence of the B[e] phenomenon, classifying it as a FS CMa A-star with low luminosity. These authors also identified a strong IR excess, suggesting a binary nature with mass transfer. However, no direct evidence of a companion was found, especially due to the absence of radial velocity variations.
It is the second closest star (535 pc) of our sample. This is in agreement with the low interstellar extinction obtained from the 3D dust mapping (Table~\ref{table:interstellar-extinction}), which we consider more reliable than the value provided by IRSA.
From Method 1, an A0-1II type was derived, which is, in principle, corroborated by the absence of the He\,{\sc i} $\lambda$4471 line in our spectrum. However, the presence of other He\,{\sc i} lines may weaken this classification. Unfortunately, we could not use other common diagnostic lines for A-type stars, such as Ca\,{\sc ii} H \& K lines and H$\epsilon$, because they are contaminated by wind emission.
Assuming this possible classification and a mean effective temperature of 10100$\pm$700\,K, we derived the bolometric magnitude (M$_{\text{bol}}$), luminosity and radius of IRAS 07080+0605 (Table~\ref{table:Physical-parameters}) in agreement with the results of \citet{Miroshnichenko_2007}.
The spectral variability, as described in Sect.~\ref{sec:31}, is not sufficient to confirm a binary nature for this object, as proposed by \citet{Miroshnichenko_2007}. The presence of inverse P-Cygni profiles, seen in some Ca\,{\sc ii} and Fe\,{\sc ii} lines, may indicate that an accretion process is ongoing. The triple-peaked components and asymmetries seen in some line profiles, also reported by \citet{2018PASP..130k4201A}, imply a complex circumstellar environment, possibly composed of a disk and a nebular component.
The [O\,{\sc i}] lines in IRAS\,07080+0605 are clearly double-peaked although the 5577\,\AA \ line is
rather noisy. All three lines can be modeled with a single ring with a rotational velocity,
projected to the line-of-sight, of $v_{\rm rot, los} = 25\pm 0.5$\,km\,s$^{-1}$ and an additional
Gaussian component of $v_{\rm gauss} = 10\pm 0.5$\,km\,s$^{-1}$ (Table~\ref{tab:velocities-1}). This extra broadening might be either
ascribed to some turbulence related to the accretion flow of the gas or, alternatively, might be interpreted
as an indication for a certain width of the emitting ring and hence a slight variation of the rotation
velocity, in contrast to the infinitesimally thin ring with constant rotation velocity used in the model.
The slight depression of the red peak (Fig.~\ref{fig:fits-OI-CaII}) can be achieved if we allow for a gap in the ring around the
maximum radial velocity ($>$24.8\,km\,s$^{-1}$). The [Ca\,{\sc ii}] lines appear composite. While the
same ring model can be used to approximate the broad component, an additional pure Gaussian component with $v_{\rm gauss} = 2.5\pm 0.5$\,km\,s$^{-1}$ is needed to account for the narrow central peak (Table~\ref{tab:velocities-1}).
Our period analysis, excluding periods nearly equal to our total time span, indicates that the most powerful period is 248.2 d, very close to the VizieR value of 248.7 d. Nonetheless, the data phased with this period leave $\sim50\%$ of this potential variability cycle empty. Given that the five highest powers have similar values (Fig.~\ref{f:app}), we suggest that the most probable period is that of 72 d, noting however the scarcity of data.
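A compact way to shortlist candidate periods under these caveats (our own illustration, reusing \textsf{t}, \textsf{freq}, and \textsf{power} from the periodogram sketch above; the 0.9 cut is an arbitrary choice):
\begin{verbatim}
span = t.max() - t.min()
ok = 1.0 / freq < 0.9 * span            # drop periods close to the time span
top = np.argsort(power[ok])[-5:]        # indices of the five highest powers
candidates = (1.0 / freq[ok])[top]      # candidate periods (days)
\end{verbatim}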
These results, in association with the strong IR excess seen in the Spitzer spectrum of IRAS 07080+0605 \citep{2004ApJS..154...18H}, with the presence of intense PAH bands (Fig.~\ref{fig:SEDs-model}), and the presence of a cold molecular cloud along the line-of-sight detected in the $K$-band spectrum of IRAS 07080+0605 \citep{2018PASP..130k4201A}, may reinforce a young nature for this object. This scenario seems to be favoured by the position of IRAS 07080+0605 in the HR diagram (left panel, Fig.~\ref{fig:traks-solar-lmc-art}).
\subsubsection{IRAS 07377-2523}
IRAS 07377-2523 (SS 147) was detected by \citet{Stephenson-and-Sanduleak-1977} in their survey searching for H$\alpha$-emitting stars. \citet*{Parthasarathy-2000}, based on low resolution spectroscopy, classified it as a B8 III-IVe star. It was also selected as a massive young stellar object (YSO) candidate by \citet{Mottram-2007}. In the same year, based on the analysis of high-resolution spectra, IRAS 07377-2523 was classified as a FS CMa star by \citet{Miroshnichenko_2007}, who suggested a B8/A0 spectral type.
Through the analysis of the FEROS spectra, we could also derive a B8-B9 spectral type with an effective temperature of 12000$\pm$1000 K, based on different equivalent width ratios (Methods 2 and 3).
The presence of shell-type profiles seen in the Balmer and Fe\,{\sc ii} lines may indicate a circumstellar environment seen edge-on (Sect.~\ref{sec:32}).
From the modeling of the forbidden lines, IRAS 07377-2523 seems to be surrounded by at least three rotating rings, one for each density tracer (Table~\ref{tab:velocities-1}). The [Ca\,{\sc ii}] lines, though very noisy (Fig.~\ref{fig:fits-OI-CaII}), display the highest rotation velocity ($v_{\rm rot, los} = 55\pm 1$\,km\,s$^{-1}$, $v_{\rm gauss} = 2.5\pm 0.5$\,km\,s$^{-1}$), followed by the [O\,{\sc i}] $\lambda$5577 line ($v_{\rm rot, los} = 13\pm 0.5$\,km\,s$^{-1}$, $v_{\rm gauss} = 7.5\pm 0.5$\,km\,s$^{-1}$) and the [O\,{\sc i}] $\lambda\lambda$6300,6364 lines ($v_{\rm rot, los} = 10\pm 0.5$\,km\,s$^{-1}$, $v_{\rm gauss} = 6\pm 0.5$\,km\,s$^{-1}$). This trend of decreasing velocity with decreasing density is what is typically seen in Keplerian disks.
Its distance is around 4.1\,kpc; combining this with an interstellar color excess of $E(B-V)_{\rm IS} = 0.5$, obtained from the 3D dust mapping, we derived some physical parameters of IRAS 07377-2523 (Table~\ref{table:Physical-parameters}). Placing it in the HR diagram reveals a post-main sequence scenario, as suggested by \citet{Parthasarathy-2000}, for a star with roughly 5$\pm$1 M$_\odot$. However, a pre-main-sequence nature cannot be discarded (see left panel of Fig.~\ref{fig:traks-solar-lmc-art}).
\subsubsection{IRAS 07455-3143}
\label{subsubsec:IRAS07455}
IRAS 07455-3143 (CD-31 5070, ALS 782, Hen 3-78) was classified by \citet{Orsatti_1992}, based on UBV photometry, as an early B-type star. Later, \citet{Miroshnichenko_2007} classified it as FS CMa star of spectral type B7/B8 based on an analysis of their high-resolution spectra. Due to the presence of Li\,{\sc i} and Ca\,{\sc i} lines, they also suggested the presence of a late-type companion (K-type).
Based on our FEROS spectra, this is the only object of our sample for which we could not confirm the presence of the B[e] phenomenon, due to the absence of [O\,{\sc i}] lines (Sect.~\ref{sec:33}).
From Method 1, we classified IRAS 07455-3143 as a B0-1 II/III/V star with $T_{\rm eff} \sim$ 25500\,K, in agreement with \citet{Orsatti_1992}, who suggested an early B-type star. However, this classification is hampered by the absence of He\,{\sc ii} and Si\,{\sc iv} lines in our spectra, as well as of He\,{\sc i} lines in emission. On the other hand, from Method 2, we derived a B8 type with $T_{\rm eff} \sim$ 12500\,K, in agreement with the spectral features seen in our spectra and with \citet{Miroshnichenko_2007}.
IRAS 07455-3143 exhibits an intense spectral variability, as seen in our spectra taken on four different nights in 2008, 2015, and 2016 (Sect.~\ref{sec:33}), which may indicate a binary scenario for this star.
In our spectra, only the Ca\,{\sc i} line at 6717.7~\AA \ was identified. The presence of the Li\,{\sc i} line at 6707.7~\AA\ is doubtful. Thus, the presence of the Ca\,{\sc i} line, associated with radial velocity variations (Table~\ref{table:velocities2}), may indicate a complex scenario, in which the absorption lines from He\,{\sc i}, Mg\,{\sc ii}, and Si\,{\sc ii} may come from the primary B star, the Ca\,{\sc i} line may come from a cool companion, and the stable forbidden emission lines may come from a circumbinary disk or rings.
IRAS\,07455-3143 displays only the [Ca\,{\sc ii}]
lines but lacks [O\,{\sc i}]. The profiles of the [Ca\,{\sc ii}] lines are double-peaked with the red
peak more intense than the blue one (Fig.\,\ref{fig:fits-CaII}). If we interpret the double-peaks as due
to rotation, then this star might be surrounded by a very compact, high-density ring of gas. To model
the profile shape, we apply a ring with $v_{\rm rot, los} = 32\pm 0.5$\,km\,s$^{-1}$ and a negligible
Gaussian component ($v_{\rm gauss} = 1\pm 0.5$\,km\,s$^{-1}$). To suppress the blue peak, we implement
an asymmetric gap that excludes velocities from $-31.8$\,km\,s$^{-1}$, across the maximum projected
velocity of $-32$\,km\,s$^{-1}$, to $-24.5$\,km\,s$^{-1}$. However, the central part of the profile is then not fit well, and the implementation of a
second gap, symmetric around zero velocity (from $-18.35$\,km\,s$^{-1}$ to 18.35\,km\,s$^{-1}$), reduces
the intensity too much (Fig.\,\ref{fig:fits-CaII}). Therefore, we conclude that the ring does
not necessarily have gaps, but displays density inhomogeneities. Such inhomogeneities might cause
variability of the [Ca\,{\sc ii}] profiles, which change in line with the [Fe\,{\sc ii}] lines shown
in Fig.\,\ref{fig:Bep-IRAS07455-3143}.
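For orientation, the first gap described above maps onto the azimuthal convention of the \textsf{ring\_profile} sketch given earlier as follows (illustrative only; the gap edges are taken from the text, the rest is our bookkeeping):
\begin{verbatim}
import numpy as np
v_rot = 32.0
phi1 = np.arccos(-31.8 / v_rot)            # ~3.03 rad, one side of the blue extreme
phi2 = 2*np.pi - np.arccos(-24.5 / v_rot)  # ~3.84 rad, the other side
v = np.linspace(-50.0, 50.0, 501)
caii = ring_profile(v, v_rot, 1.0, gap=(phi1, phi2))
\end{verbatim}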
In addition, the Fe\,{\sc ii} lines in our spectra show shell-type profiles that may suggest an edge-on orientation of this circumbinary environment.
IRAS 07455-3143 is the most distant object of our sample; its distance of 12.3\,kpc is highly uncertain due to the so far very imprecise parallax measurement (0.008$\pm$0.026 mas). Taking this uncertainty into account, and assuming a B8 type and the mean interstellar extinction obtained from the DIB and from IRSA, we estimated the physical parameters of this star (Table~\ref{table:Physical-parameters}). According to the HR diagram (Fig.~\ref{fig:traks-solar-lmc-art}, left panel), it can be a post-main sequence (supergiant) star with $M_{\rm ZAMS} \sim$ 20 M$_\odot$.
\subsubsection{V* FX Vel}
\label{subsubsec:FXVel}
V* FX Vel (IRAS 08307-3748, WRAY 15-231) was classified by \citet*{Strohmeier-1968}, \citet{Kukarkin-1972}, and \citet{Malkov-2006} as an eclipsing binary. However, \citet{Eggen-1978} questioned the eclipsing nature and classified it as a B9 III-IV star. Later, V* FX Vel was classified as a FS CMa star by \citet{Miroshnichenko_2007} who, based on the analysis of high-resolution spectra, suggested a binary scenario composed of an A and a K star. Recently, \citet{Tisserand-2013}, based on the analysis of low-resolution (R$\sim$3000$-$7000) spectra, suggested an A3III type. \citet*{Avvakumova-2013} classified V*~FX~Vel as an eclipsing variable again.
We classified this star as a B8-9 III/V ($T_{\rm eff} \sim$ 11500\,K) from Method 1, and as an A0-2 ($T_{\rm eff} \sim$ 9500\,K or even lower) from Method 2. However, the B8-9 classification can be discarded, as the Mg\,{\sc ii} line at 4481\,\AA\ is much stronger than the He\,{\sc i} line at 4471\,\AA. This is typical for A-type stars, further favouring the A0-2 classification, in agreement with \citet{Miroshnichenko_2007}.
Through the analysis of our FEROS spectra from 2008, 2015, and 2016, we noticed a strong variability in the line profiles and radial velocities (Sect.~\ref{sec:34} and Table~\ref{table:velocities2}). These variations, associated with the identification of Li\,{\sc i} and Ca\,{\sc i} lines in just one of the four spectra (Fig.~\ref{fig:Bep-FXVel}), reinforce the eclipsing binary scenario for this object, as suggested in the literature.
If the variations are indeed due to binarity, the photometric period inferred from the period analysis could be the binary period. We found that the highest-power period is 387.9 d. The VizieR-inferred period (286 d) also appears in the periodogram of V*\,FX\,Vel, but with a much smaller power (by one order of magnitude), i.e.\ it is less likely to be the main variability period (Fig.~\ref{f:app}).
Actually, the complex spectra of V* FX Vel suggest that the absorption lines from He\,{\sc i}, Mg\,{\sc ii}, and Si\,{\sc ii} may come from a primary A star, the Ca\,{\sc i} and Li\,{\sc i} lines may come from a cool companion and the forbidden emission lines and shell-type Fe\,{\sc ii} lines from a circumbinary disk or rings.
Only the [O\,{\sc i}] lines, and no [Ca\,{\sc ii}] lines, are displayed in the spectra of V*\,FX\,Vel. The
spectrum from 2008 has the highest quality (see Fig.\,\ref{fig:Bep-FXVel}), and we limit our modeling
to the lines from that year. The profiles are clearly asymmetric, implying multiple components. Our best
fit model (Table\,\ref{tab:velocities-2} and Fig.\,\ref{fig:fits-OI}) consists of four rings with velocities $v_{\rm rot, los} =
43\pm 0.5$; $29\pm0.5$; $19\pm0.5$; and $8\pm0.5$\,km\,s$^{-1}$. For all rings, the Gaussian component
is negligible ($v_{\rm gauss} = 1\pm 0.5$\,km\,s$^{-1}$). To account for the asymmetry, we need to
implement a gap in the two rings with the lowest velocities. For the ring with 19\,km\,s$^{-1}$, this gap is
symmetric around the red peak, excluding velocities $>$17.2\,km\,s$^{-1}$, whereas the ring with
8\,km\,s$^{-1}$ requires a large ($>$ one quarter of the ring) gap to partly suppress the red peak and the central
region. This gap corresponds to a lack of velocities extending from 7.88\,km\,s$^{-1}$, across the maximum of the
red peak, down to $-1.4$\,km\,s$^{-1}$. In a scenario with a close companion, these rings might originate from previous interaction phases.
From its SED and Spitzer spectrum (Fig.~\ref{fig:SEDs-model}), we confirm an intense IR excess and the presence of the silicate band at 10 $\mu$m.
V* FX Vel is the closest star of our sample ($\sim$ 353\,pc), in agreement with the low interstellar extinction derived from the DIB present in our spectra. From its parameters and position in the HR diagram (Fig.~\ref{fig:traks-solar-lmc-art}, left panel), we note that it is a low-mass star, with $M_\text{ZAMS} <$ 2 M$_\odot$. However, we cannot discard a pre-main sequence nature for V* FX Vel.
\subsubsection{IRAS 17449+2320}
IRAS 17449+2320 (BD+23 3183) was detected for the first time by \citet{Stephenson-1986A} as a new H$\alpha$ emission line star. \citet{Downes-Keyes-1988} obtained a low-resolution spectrum and classified it as a Be star. Later \citet{Miroshnichenko_2007}, based on the analysis of their high-resolution spectra, identified the presence of the B[e] phenomenon, and classified it as an A0V star. The B[e] phenomenon in IRAS 17449+2320 was furthermore confirmed by the medium resolution (R$=$13000$-$18000) spectra of \citet*{Aret_2016} as well as by our FEROS spectra.
From Methods 1, 2, and 3, we also derived a spectral type A0-A2, with a mean effective temperature of \mbox{$9350\pm400$}~K, in agreement with \citet{Miroshnichenko_2007}.
We noticed, from the comparison with the literature, a high spectral variability on a time scale of a few days \citep{Sestito-2017}. However, we could not confirm this, because our spectra were taken on just one night.
The broad absorption, associated with a central double-peaked emission, seen in the Balmer lines, and the shell-type profiles of the Fe\,{\sc ii} lines (Sect.~\ref{sec:37}) may indicate the presence of a circumstellar disk seen edge-on. On the other hand, the [O\,{\sc i}] emission lines are broad and almost flat-topped. This kind of profile might be
reproduced with a combination of three components, of which only two contain a rotation velocity
($v_{\rm rot, los} = 27\pm 0.5$ and $15\pm0.5$\,km\,s$^{-1}$). No significant Gaussian component is
needed ($v_{\rm gauss} = 1\pm 0.5$\,km\,s$^{-1}$), see Table~\ref{tab:velocities-2} and Fig.~\ref{fig:fits-OI}. In the absence of a second density indicator, it remains open whether these rings represent rotating or equatorially outflowing gas.
IRAS 17449+2320 also displays an IR excess with a weak silicate band in emission at 10\,$\mu$m, as seen in its Spitzer spectrum (Fig.~\ref{fig:SEDs-model}).
IRAS 17449+2320 is one of the closest stars of our sample ($\sim$ 740\,pc), in agreement with the very low extinction obtained from the 3D dust mapping (Table~\ref{table:interstellar-extinction}). From the parameters obtained by us (Table~\ref{table:Physical-parameters}) and the position in the HR diagram, IRAS 17449+2320 has $M_{\rm ZAMS} = 2-3$ M$_\odot$, being still in the main-sequence or close to its end (left panel, Fig.~\ref{fig:traks-solar-lmc-art}).
\subsection{SMC star}
\subsubsection{[MA93] 1116}
[MA93] 1116 (Cl* NGC 346 KWBBE 200, 2MASS J00590587-7211270) belongs to the SMC and was classified as a compact H\,{\sc ii} source by \citet{Meyssonnier-Azzopardi-1993}. Based on a photometric survey, [MA93] 1116 was later classified as a classical Be star located in the open cluster NGC 346\footnote{NGC 346 is an open cluster with intense star formation \citep{Nota_2006, Sabbi_2007}.} by \citet*{Keller-1999}.
The presence of the B[e] phenomenon was mentioned for the first time by \citet{Wisniewski_2007}, based on the analysis of high-resolution spectra. The same authors, using photometric data and Kurucz atmospheric models, derived its physical parameters: $T_{\rm eff}\sim$19000~K, $\log$($L$/L$_\odot$)$\sim$4.4, and $R_{\rm star}\sim$14 R$_\odot$. From these characteristics, \citet{Wisniewski_2007} suggested that [MA93] 1116 would be a B[e] supergiant. Later, \citet{Whelan-2013} analysed the Spitzer spectrum and identified silicate emission and strong PAH bands at 6.2\,$\mu$m, 7.7\,$\mu$m, 8.6\,$\mu$m, and 11.3\,$\mu$m. Due to these characteristics, [MA93] 1116 was classified as having a class A PAH spectrum, which is typical of non-isolated Herbig Ae/Be stars. \citet{Kamath_2014} analysed low-resolution spectra and identified the presence of forbidden lines, classifying [MA93] 1116 as a planetary nebula candidate. Recently, \citet{Paul-et-al-2017}, based on the spectral energy distribution and using theoretical spectral templates, derived a B0.5 spectral type, $T_{\rm eff}\sim$29000~K, $\log(L/$L$_\odot)\sim$4.41, and an age of 2.5 Myr. Due to these physical parameters, these authors suggested a classification as a HAeBe star for [MA93] 1116.
Our spectra are rich in emission lines, and no absorption lines were identified (Sect.~\ref{sec:39}). From the literature, a noticeable variability in the line profiles is apparent.
The [O\,{\sc i}] lines in [MA93]\,1116 display a narrow, symmetric central component which can be fit
with a pure Gaussian ($v_{\rm gauss} = 2.5\pm 0.5$\,km\,s$^{-1}$). This Gaussian is superimposed on a
broader asymmetric component, for which we find a good fit using a ring ($v_{\rm rot, los} = 15\pm
0.5$\,km\,s$^{-1}$, $v_{\rm gauss} = 1\pm 0.5$\,km\,s$^{-1}$) with a symmetric gap around the red peak,
excluding velocities higher than 14.1\,km\,s$^{-1}$ (see Table~\ref{tab:velocities-2} and Fig.~\ref{fig:fits-OI}).
From its light curve, the period of 573.6 d for [MA93] 1116 is reported in the present paper for the first time (Fig.~\ref{f:main}).
Based on Method 1, we classified it as a B1-2 star with $T_{\rm eff}$ = 21600$\pm$3000~K. This classification is in agreement with the spectral features identified in our FEROS spectra, especially the presence of He\,{\sc i}, O\,{\sc ii}, and [O\,{\sc ii}] lines in emission and the absence of He\,{\sc ii} lines. Thus, considering the SMC distance and the minimum interstellar extinction obtained from IRSA, we derived $\log(L_*/$L$_\odot) = 4.29\pm0.35$ and a radius of $10\pm3$\,$R_\odot$ (see Table~\ref{table:Physical-parameters}). From the HR diagram, considering the evolutionary tracks for the SMC \citep{Georgy-2013}, the classification of [MA93] 1116 as a B[e] supergiant with $M_{\rm ZAMS} =$ 9$-$12\,M$_\odot$ (middle panel, Fig.~\ref{fig:traks-solar-lmc-art}) seems more favourable, in good agreement with the results of \citet{Wisniewski_2007}.
On the other hand, the classification as a Herbig Ae/B[e] star cannot be discarded, due to the presence of silicates and PAHs bands in the Spitzer spectrum (Fig.~\ref{fig:SEDs-model}), as cited by \citet{Whelan-2013}. Thus, from the HR diagram, considering the pre-main sequence tracks with SMC metallicity from \citet{Bernasconi-Maeder-1996}, [MA93] 1116 has $M_{\rm ZAMS}$ around 15 \,M$_\odot$ (middle panel, Fig.~\ref{fig:traks-solar-lmc-art}).
\section{Conclusions}
\label{sec:Conclusions}
We analysed photometric and high-resolution spectroscopic data for a sample of 12 unclassified B[e] stars and candidates: eight from the Galaxy, two from the LMC, and two from the SMC. For six of them (Hen\,3-938, Hen 2-91, SS\,255, LHA 115-N82, ARDB\,54, and LHA 120-S59), the analysis of high-resolution spectra was performed for the first time. For the other six (IRAS 07080+0605, IRAS 07377-2523, IRAS 07455-3143, IRAS 17449+2320, V* FX Vel, and [MA93] 1116), our analysis of new high-resolution data provided more information about their nature, variability, and/or binarity.
We confirmed the presence of the B[e] phenomenon for all objects, except for IRAS 07455-3143. For eight stars, we obtained spectra taken on more than one night, making it possible to identify, for most of them, variability in the line profiles, radial velocities, and {\it V/R}. Even for stars observed on just one night, it was possible to identify variability by comparison with the literature.
Based on different methods and considering the distance provided by Gaia DR2, we derived the effective temperature (spectral type), bolometric magnitude, luminosity, radius (luminosity class), and interstellar extinction for most of our stars. For LHA 120-S 59, we assumed the effective temperature from the literature and derived the other parameters. For Hen 2-91, due to the absence of reliable parameters, we could obtain no further information on its nature.
Based on the SED, we identified that all stars of our sample have IR excess. From the Spitzer spectra of some of them, we found indications of dust, hinting at a dense and complex circumstellar environment. Our analysis of the [Ca\,{\sc ii}] and [O\,{\sc i}] line profiles reveals that all stars show indications of one or more gaseous rings in (quasi-)Keplerian rotation around the central star or binary system. It is important to note that no trend was seen as a function of metallicity, indicating that the circumstellar density structure of these stars is not metallicity dependent.
From the period analysis of the light curves of four objects, we found that for two of them, namely V* FX Vel and LHA 120-S59, the estimated photometric periods are in good agreement with the literature. For IRAS 07080+0605, we found a period different from the literature value, although this is dubious due to the scarcity of data. For [MA93] 1116, its period was obtained for the first time. The presence of these periodic variabilities may indicate binarity; however, except for V* FX Vel, our spectroscopic analysis found no evidence of companions.
By comparing the position of our stars in the HR diagram to evolutionary tracks for solar, SMC, and LMC metallicities, we found that: (i) IRAS 07080+0605 and V*~FX~Vel are A[e] stars with uncertain classification, probably either main sequence or pre-main sequence objects; (ii) IRAS 07377-2523 is a B[e] star, either in a post- or a pre-main sequence phase; (iii) IRAS 17449+2320 is a B[e] star probably on the main sequence or close to its end; (iv) SS\,255 can be another B[e] star in a similar stage as IRAS 17449+2320, but based on its spectral features, a post-AGB nature seems more favourable; (v) LHA 120-S59 is a B[e] supergiant; (vi) Hen\,3-938 and [MA93] 1116 can be B[e] supergiants or HAeB[e] stars; and (vii) IRAS 07455-3143 is a B supergiant. However, our most remarkable results are the identification of ARDB\,54 as the third A[e] supergiant, the first one in the LMC, and of LHA 115-N82 as an intermediate-mass, post-main sequence B[e] star whose light curve shows eruptions similar to those of LBVs, i.e.\ an ``LBV impostor''.
More observations using different techniques, including interferometry and polarimetry, combined with simultaneous high-resolution spectroscopy and photometry, are necessary to confirm the nature of these peculiar objects, their variability, and their binary fraction. This will certainly allow a better comprehension of the B[e] and also the A[e] phenomena in environments with different metallicities.
\section*{Acknowledgements}
We thank the anonymous referee for his/her very constructive comments that helped us to improve this paper.
CAHC acknowledges financial support from Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior (Brazil-CAPES) through a PhD grant. MK acknowledges financial support from GA\,\v{C}R (grant number 17-02337S). The Astronomical Institute Ond\v{r}ejov is supported by the project RVO:67985815. DP acknowledges financial support from Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq - Brazil) through grant 300235/2017-8.
This study was financed in part by the Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior - Brasil (CAPES) - Finance Code 001. Parts of the observations obtained with the MPG 2.2-m telescope were also supported by the Ministry of Education, Youth and Sports of the Czech Republic, project LG14013 (Tycho Brahe: Supporting Ground-based Astronomical Observations). We would like to thank the observers (S. Ehlerova and A. Kawka) for obtaining the data.
Part of this project has received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sk\l{}odowska-Curie Grant Agreement No. 823734.
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. This research has also made use of Astropy, a community-developed core Python package for Astronomy \citep{aspy13}.
This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\addcontentsline{toc}{section}{Acknowledgements}
\section{Introduction}
A central goal of modern high-energy theoretical physics is a unified
description of all particles and all interactions -- a Theory of
Everything (abbr. TOE): a hypothetical combined physical and
mathematical theory describing all known fundamental interactions,
from processes at modern accelerators to processes in the Universe.
The main problem in building a TOE is that quantum mechanics and
general relativity have different domains of applicability: quantum
mechanics is mainly used to describe the microworld, while general
relativity applies to the macroworld. This does not mean, however,
that such a theory cannot be constructed.
A TOE must unify the four fundamental interactions: \\
$\bullet $ gravitational interaction; \\
$\bullet $ electromagnetic interaction; \\
$\bullet $ strong nuclear interaction; \\
$\bullet $ weak nuclear interaction.
The first step in this direction was the unification of the
electromagnetic and weak interactions in the theory of the electroweak
interaction, created in 1967 by Steven Weinberg, Sheldon Glashow and
Abdus Salam. In 1973, the theory of the strong interaction was proposed.
The main candidate for a TOE is F-theory, which operates with a large
number of dimensions. Thanks to the ideas of Kaluza and Klein, it
became possible to construct theories with large extra dimensions.
The use of extra dimensions suggested an answer to the question of why
the effect of gravity appears much weaker than that of the other
interactions: gravity propagates in the extra dimensions, so its
effect on observable measurements is weakened.
F-theory is a twelve-dimensional string theory defined at an energy
scale of about 10$^{19}$~GeV \cite{1.}. Compactification of F-theory
leads to new types of vacua, so to study supersymmetry we must
compactify F-theory on Calabi-Yau manifolds. Since there are many
Calabi-Yau manifolds, we are dealing with a large number of new models
realised in the low-energy approximation. Studying the singularities
of the Calabi-Yau manifold determines the physical characteristics of
the topological solitonic states which play the role of particles in
high-energy physics. Compactification of F-theory on different
Calabi-Yau manifolds allows one to calculate topological invariants.
Let us consider in more detail the compactification of F-theory on
Calabi-Yau threefolds.
\section{Calabi-Yau threefold compactification}
We compactify the twelve-dimensional space describing space-time and
the internal degrees of freedom as follows:
\[R^6 \times X^6 \ ,\]
where $R^6$ is six-dimensional space-time, on which the conformal
group SO(4, 2) acts, and $X^6$ is a threefold, i.e. a three-dimensional
complex Calabi-Yau manifold \cite{2.}.
\subsection{Toric representation of threefolds}
Consider the weighted projective space defined as follows:
\[P^4_{\omega_1,\ldots,\omega_5 }=P^4/Z_{\omega_1}\times \ldots \times Z_{\omega_5}\ , \]
where $P^4$ is four-dimensional projective space and
$Z_{\omega_i}$ is the cyclic group of order $\omega_i$.
On the weighted projective space $P^4_{\omega_1,\ldots,\omega_5 }$
a polynomial $W(\varphi_1, \ldots, \varphi_5)$, called the
superpotential, is defined, which satisfies the homogeneity condition
\[W(x^{\omega_1}\varphi_1, \ldots, x^{\omega_5}\varphi_5)=x^d W(\varphi_1, \ldots, \varphi_5)\ ,\]
where $d=\sum\limits_{i=1}^5\omega_i$ and $\varphi_1, \ldots, \varphi_5 \in P^4_{\omega_1,\ldots,\omega_5 }$.
The set of points $p\in P^4_{\omega_1,\ldots,\omega_5 }$
satisfying the condition $W(p)=0$ forms the Calabi-Yau threefold
$X_d(\omega_1, \ldots, \omega_5)$.
The simplest examples of toric varieties \cite{3.} are projective spaces.
Consider $P^{2}$, defined as follows:
\[P^{2} = \frac{C^{3}\setminus\{0\}}{C\setminus\{0\}}, \]
where dividing by $C\setminus\{0\}$ means identifying points related
by the equivalence relation
\[(x, y, z)\sim(\lambda x, \lambda y, \lambda z), \quad \lambda \in C\setminus\{0\}, \]
where $x, y, z$ are homogeneous coordinates. An elliptic curve in $P^{2}$
is described by the Weierstrass equation
\[y^2z = x^3 + axz^2 + bz^3. \]
In general, a Calabi-Yau manifold can be described by the Weierstrass form
\[y^2=x^3+xf+g,\]
which describes an elliptic fibration (parametrised by $(y, x)$) over
a base, where $f, g$ are functions defined on the base. Over certain
divisors $D_i$ the fibers degenerate. Such divisors are the zeros of
the discriminant
\[\Delta=4f^3+27g^2.\]
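As a quick cross-check, $\Delta$ agrees, up to an overall sign convention, with the polynomial discriminant of the Weierstrass cubic in $x$ (a minimal sketch using the sympy library):
\begin{verbatim}
import sympy as sp

x, f, g = sp.symbols('x f g')
# Discriminant of the depressed cubic x^3 + f*x + g;
# sympy returns -4*f**3 - 27*g**2, i.e. -Delta in the
# sign convention used above.
print(sp.discriminant(x**3 + f*x + g, x))
\end{verbatim}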
The singularities of the Calabi-Yau manifold are the singularities of
its elliptic fibration. These singularities are encoded in the
polynomials $f, g$, and their type determines the gauge group and
matter content of the compactified F-theory.
The classification of singularities of elliptic fibrations was given by Kodaira and is presented in Table 1.\\
\begin{center}
\emph{\textbf{Table 1.}} {\it Kodaira classification of singularities of elliptic fibrations}
\begin{tabular}{|c|c|c|}\hline
$ord(\Delta)$&Type of fiber & Type of singularity \\ \hline
0&smooth&no\\ \hline
n&$I_n$&$A_{n-1}$ \\ \hline
2&$II$&no \\ \hline
3&$III$&$A_{1}$ \\ \hline
4&$IV$&$A_{2}$ \\ \hline
n+6&$I^*_n$&$D_{n+4}$ \\ \hline
8&$IV^*$&$E_{6}$ \\ \hline
9&$III^*$&$E_{7}$ \\ \hline
10&$II^*$&$E_{8}$ \\ \hline
\end{tabular}
\end{center}
\vspace*{3mm}
The classification of elliptic fibers is presented in Figure 1.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.38\textwidth]{1.eps}}
\begin{center}\bf{Fig.1. The classification of elliptic fibers.}\end{center}
\end{figure}
\section{Calculation of topological invariants}
\subsection{Ramond-Ramond charges}
One of the most interesting problems of modern high-energy physics is the calculation of topological invariants -- analogues of observables in high-energy physics. In this respect, symmetries and the apparatus of algebraic geometry play an indispensable role. We consider orbifolds as the simplest non-flat constructions. For D3-branes on such an internal space
$C^n/\Gamma$, the representations are characterised by gauge groups
$G=\oplus_iU(N_i)$. In this case the superpotential is that of $N=4$ $U(N)$ super
Yang-Mills,
\[W_{N=4}=\mbox{tr}\,X^1[X^2,X^3],\]
where the $X^i$ are chiral matter fields in the product of fundamental
representations $V^i\cong C^{N_i}$ of the groups $U(N_i)$.
Blow-up modes of orbifold singularities can be considered as coordinates on the complexified K\"ahler moduli space. Quiver diagrams are used for describing
D-branes near the orbifold point.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.38\textwidth]{quiv.eps}}
\begin{center}\bf{Fig.2. The $C^3/Z_3$ quiver.}\end{center}
\end{figure}
In this case it is possible to calculate the Euler character, defined as
\[\chi(A,B)=\sum_i(-1)^i\mbox{dimExt}^i(A,B),\]
where $\mbox{Ext}^0(A,B)\equiv \mbox{Hom}(A,B)$ and
$A, B$ are coherent sheaves over projective space $P^N$ (in the general case),
which represent the orbifold space after the blow-up procedure.
Since we will deal with the orbifold $C^3/Z_3$ in what follows, we emphasise the equivalence relation
\[(x_1, x_2, x_3)\sim(e^{2i\pi/3}x_1, e^{2i\pi/3}x_2, e^{2i\pi/3}x_3), \ e^{2i\pi/3}\in Z_3 \ .\]
The orbifold is not a manifold, since it has a singularity at the point $(0, 0, 0)$. Blowing up the singularity of the orbifold $C^3/Z_3$, we obtain the sheaf
${\cal{O}}_{P^2}(-3)$, with which we will work below.
In particular, the Euler matrix for the sheaves ${\cal{O}}_{P^2}$,
${\cal{O}}_{P^2}(1)$, ${\cal{O}}_{P^2}(2)$
over the projective space $P^2$ reads
\[ \chi({\cal{O}}_{P^2}(1), {\cal{O}}_{P^2}(2)) = \left( \begin{array}{ccc}
1 & 3& 6 \\
0& 1 &3 \\
0 & 0 & 1 \end{array} \right).\]
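These entries follow from the identity
$\mbox{Ext}^i({\cal{O}}_{P^2}(a),{\cal{O}}_{P^2}(b))\cong H^i(P^2,{\cal{O}}_{P^2}(b-a))$:
for $b\geq a$ only $H^0$ is nonzero, so
\[\chi({\cal{O}}_{P^2}(a),{\cal{O}}_{P^2}(b))=\dim H^0(P^2,{\cal{O}}_{P^2}(b-a))=\frac{(b-a+1)(b-a+2)}{2}\ ,\]
which gives $1, 3, 6$ for $b-a=0, 1, 2$, while for $b-a=-1, -2$ all cohomology of ${\cal{O}}_{P^2}(b-a)$ vanishes, producing the zeros below the diagonal.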
\vspace{2mm}
The transposed matrix has the form
\bigskip
\[ \left( \begin{array}{ccc}
1 & 3& 6 \\
0& 1 &3 \\
0 & 0 & 1 \end{array} \right)
\Rightarrow
\left( \begin{array}{ccc}
1 & 0& 0 \\
3& 1 &0 \\
6& 3 & 1 \end{array} \right).\]
\vspace{2mm}
The rows of these matrices are the RR charges characterising the sheaves:
\begin{equation}
{\cal{O}}_{P^2}(-3)=(6\ 3\ 1), {\cal{O}}_{P^2}(-2)=(3\ 1\ 0),
{\cal{O}}_{P^2}(-1)= (1\ 0\ 0),
\end{equation}
\begin{equation}
{\cal{O}}_{P^2}=(0\ 0\ 1), {\cal{O}}_{P^2}(1)=(0\ 1\ 3),
{\cal{O}}_{P^2}(2)= (1\ 3\ 6),
\end{equation}
\vspace{2mm}
which can be written in terms of the large-volume charges $(Q_4, Q_2, Q_0)$:
\[Q_4=n_1-2n_2+n_3, \ \ Q_2=-n_1+n_2, \ \ Q_0=\frac{n_1+n_2}{2}\]
entering the definition of the Chern character $ch(n_1n_2n_3)$,
\[ch(n_1n_2n_3)=Q_4+Q_2w+Q_0w^2,\]
where $w$ is the generator of $H^2(P^2)$ (the hyperplane class).
Then the sheaves (1), (2) describe the fractional branes \cite{4.}
\[{\cal{O}}_{P^2}(-3)=(1\ -3\ \ \frac{9}{2}), {\cal{O}}_{P^2}(-2)=(1\ -2\ \ \frac{4}{2}), {\cal{O}}_{P^2}(-1)= (1\ -1\ \ \frac{1}{2}), \]
\[{\cal{O}}_{P^2}=(1\ 0\ 0), {\cal{O}}_{P^2}(1)=(1\ 1\ \ \frac{1}{2}),
{\cal{O}}_{P^2}(2)= (1\ 2\ \ \frac{4}{2}),\]
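For example, the charge vector $n=(6\ 3\ 1)$ of ${\cal{O}}_{P^2}(-3)$ gives
\[Q_4=6-2\cdot3+1=1, \ \ Q_2=-6+3=-3, \ \ Q_0=\frac{6+3}{2}=\frac{9}{2}\ ,\]
reproducing the first fractional brane above.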
The general formula for the Chern character of a bundle $E$ is
\[ch(E)=k+c_1(E)+\frac{1}{2}(c_1(E)^2-2c_2(E))+\ldots ,\]
where the $c_i(E)$ are the Chern classes of the bundle $E$.
In our case of the line bundle ${\cal{O}}_{P^2}(k)$, only the first Chern class is nonzero, and therefore the formula for the Chern character becomes
\begin{equation}
ch(E)=k+c_1(E)+\frac{1}{2}c_1^2
\end{equation}
Since
\[1+c_1(E)+\ldots + c_n(E)=\prod\limits_{i=1}^{n}(1+w_i),\]
where the $w_i$ are the Chern roots, we have $c_1(E)=w_1=w$, and formula (3) can be rewritten as
\begin{equation}
ch(n_1n_2n_3)=Q_4+Q_2w+Q_0w^2,
\end{equation}
where the Ramond-Ramond charges $(n_1n_2n_3)$
characterise the bundle $E$; comparing formulas (3) and (4), $Q_4=1$ is the rank of the line bundle, $Q_2=c_1$ along the fundamental cycle, and $Q_0=\frac{c_1^2}{2}$.
Thus the fractional sheaves ${\cal{O}}_{P^2}(k)$ are characterised by the Ramond-Ramond charges $Q_0, Q_2, Q_4$, which take the special form calculated above for the $C^3/Z_3$ case.
\subsection{BPS central charge}
Since we are interested in moduli spaces, we first give an intuitive definition.
Suppose we have a cubic curve with parameter $\lambda$,
\begin{equation}
y^2-x(x-1)(x-\lambda)=0 \ .
\end{equation}
Since $\lambda$ is a variable quantity, equation (5) describes a continuous family of cubic curves. Parameter spaces describing continuous families of manifolds are called moduli spaces. We form the differential $\frac{dx}{y}$, where $y$ is determined from equation (5). It turns out that the periods
$\pi_1(\lambda),\ \pi_2(\lambda)$:
\[\pi_1(\lambda)=2\int\limits_0^1\frac{dx}{[x(x-1)(x-\lambda)]^{1/2}}, \ \
\pi_2(\lambda)=2\int\limits_1^{\lambda}\frac{dx}{[x(x-1)(x-\lambda)]^{1/2}}\]
satisfy Picard-Fuchs equation
\begin{equation}
\frac{1}{4}\pi_i+(2\lambda -1)\frac{d\pi_i}{d\lambda}+\lambda(\lambda -1)
\frac{d^2\pi_i}{d\lambda^2}=0\ .
\end{equation}
Periods satisfying equation (6) describe the moduli space of cubic curves.
For the moduli space of the line bundle ${\cal{O}}_{P^2}(-3)$, the Picard-Fuchs equation and its solutions are
\[
\Biggl(z\frac{d}{dz}\Biggr)^3+27z\Biggl(z\frac{d}{dz}\Biggr)\Biggl(z\frac{d}{dz}+\frac{1}{3}\Biggr)\Biggl(z\frac{d}{dz}+\frac{2}{3}\Biggr)\Pi=0
\]
\[
\Pi_0=1,
\]
\[\Pi_1=\frac{1}{2i\pi}\log z=t=w_0,\]
\[\Pi_2=t^2-t-\frac{1}{6}=-\frac{2}{3}(w_0-w_1) \ .\]
The BPS central charge \cite{5.} associated with a D-brane over $C^3/Z_3$ with Ramond-Ramond charge $n=(n_1n_2n_3)$ and Picard-Fuchs period vector
$\Pi=(\Pi_0\ \Pi_1\ \Pi_2)$ is given by
\[Z(n)=n\cdot\Pi \ .\]
The central charge associated with the sheaf ${\cal{O}}_{P^2}(k)$ is given by
\[Z({\cal{O}}_{P^2}(k))=-(k+\frac{1}{3}w_0)+\frac{1}{3}w_1+\frac{1}{2}k^2 +
\frac{1}{2}k+\frac{1}{3}\ .
\]
\section{Conclusion}
In the framework of F-theory we presented the ideology of extra-dimensional spaces and stressed the exceptional role of topological invariants of Calabi-Yau manifolds. We considered a special type of extra-dimensional space, the orbifold $C^3/Z_3$. Using the blow-up procedure for its singularity, we calculated a special type of topological invariant -- the Ramond-Ramond charges of fractional sheaves -- in which the information about the structure of the line bundles is encoded. Consideration of the moduli space of the orbifold led us to the Picard-Fuchs equation for the periods, through which we calculated the central charge for the sheaf ${\cal{O}}_{P^2}(k)$. This topological invariant is important because it carries information about the stability of D-branes as bound states of the fractional branes, or sheaves, presented in this paper.
\section{Introduction}
\label{sec:intro}
Since their discovery by \citet{Kurtz197812, Kurtz1982Rapidly}, only 70 rapidly oscillating Ap (roAp) stars have been found \citep{Smalley2015KIC, Joshi2016NainitalCape, Cunha2019Rotation, Balona2019Highfrequencies}. Progress in understanding their pulsation mechanism, abundance, and the origin of their magnetic fields has been hindered by the relatively small number of known roAp stars. A key difficulty in their detection lies in the rapid oscillations themselves, requiring dedicated observations at a short enough cadence to properly sample the oscillations. In this paper, we show that the {\em Kepler}\ long-cadence data can be used to detect roAp stars, despite their pulsation frequencies being greater than the Nyquist frequency of the data.
As a class, the chemically peculiar A type (Ap) stars exhibit enhanced features of rare earth and other elements, such as Sr, Cr and Eu, in their spectra \citep{Morgan1933Evidence}. This enhancement is the result of a stable magnetic field of the order of a few to tens of kG \citep{Mathys2017Ap}, which typically allows for the formation of abundance `spots' on the surface, concentrated at the magnetic poles \citep{Ryabchikova2007Pulsation}. In most, but not all, Ap stars, photometric and spectral variability over the rotation cycle can be observed \citep{Abt1995Relation}. Such characteristic spot-based modulation manifests as a low-frequency modulation of the light curve which is readily identified, allowing for the rotation period to be measured \citep[e.g.][]{Drury2017Large}.
The roAp stars are a rare subclass of the Ap stars that exhibit rapid brightness and radial velocity variations with periods between 5 and 24 min and amplitudes up to 0.018 mag in Johnson $B$ \citep{Kurtz2000Introduction, Kochukhov2009Asteroseismology}. They oscillate in high-overtone, low-degree pressure (p) modes \citep{Saio2005Nonadiabatic}. The excitation of high-overtone p-modes, as opposed to the low-overtones of other pulsators in the classical instability strip is suspected to be a consequence of the strong magnetic field -- on the order of a few to tens of kG -- which suppresses the convective envelope at the magnetic poles and increases the efficiency of the opacity mechanism in the region of hydrogen ionisation \citep{Balmforth2001Excitation,Cunha2002Theoretical}. Based on this, a theoretical instability strip for the roAp stars has been published by \citet{Cunha2002Theoretical}. However, discrepancies between the observed and theoretical red and blue edges have been noted, with several roAp stars identified to be cooler than the theoretical red edge.
A further challenge to theoretical models of pulsations in magnetic stars are oscillations above the so-called acoustic cutoff frequency \citep{Saio2013Pulsation,Holdsworth2018LCO}. In non-magnetic stars, oscillations above this frequency are not expected. However, in roAp stars the strong magnetic field guarantees that part of the wave energy is kept inside the star in every pulsation cycle, for arbitrarily large frequencies \citep{sousaandcunha2008}. For that reason, no theoretical limit exists to the frequency of the modes. Nevertheless, for a mode to be observed, it has to be excited. Models show that the opacity mechanism is capable of exciting modes of frequency close to, but below, the acoustic cutoff frequency. The excitation mechanism for the oscillations above the acoustic cutoff is thought to be turbulent pressure in the envelope regions where convection is no longer suppressed \citep{Cunha2013Testing}.
The magnetic axis of roAp stars is closely aligned with the pulsation axis, with both being inclined to the rotation axis. Observation of this phenomenon led to the development \citep{Kurtz1982Rapidly} and later refinement \citep{Dziembowski1985Frequency,Shibahashi1985Rapid,Shibahashi1985Rotational,Shibahashi1993Theory,Takata1994Selection,Takata1995Effects,Bigot2011Theoretical} of the oblique pulsator model. The roAp stars present a unique testbed for models of magneto-acoustic interactions in stars, and have been widely sought with both ground and space-based photometry.
The launch of the \textit{Kepler} Space Telescope allowed for the detection of oscillations well below the amplitude threshold for ground-based observations, even for stars fainter than 13th magnitude. The vast majority of stars observed by {\em Kepler}\ were recorded in long-cadence (LC) mode, with exposures integrated over 29.43\,min. A further allocation of 512 targets at any given time were observed in the short-cadence (SC) mode, with an integration time of 58.85\,s. These two modes correspond to Nyquist limits of 283.21 and 8496.18\,\mbox{$\muup$Hz}, respectively \citep{Borucki2010Kepler}. In its nominal mission, \textit{Kepler} continuously observed around 150\,000 stars in LC mode for 4 yr.
The \textit{Kepler} SC data have been used to discover several roAp stars \citep{Kurtz2011First,Balona2011Kepler,Balona2013Unusual,Smalley2015KIC}, and to detect pulsation in previously known roAp stars with the extended K2 mission \citep{Holdsworth2016HD,Holdsworth2018K2}. Until now, only SC data have been used for identification of new roAp stars in the \textit{Kepler} field. However, with the limited availability of SC observation slots, a wide search for rapid oscillators has not been feasible. Although ground-based photometric data have been used to search for roAp stars \citep[e.g.][]{Martinez1991Cape,Joshi2005NainitalCape,Paunzen2012Hvar,Holdsworth2014Highfrequency}, most previous work in using {\em Kepler}\ to identify such stars has relied solely on SC observations of targets already known to be chemically peculiar. The number of targets in the {\em Kepler}\ field that possess LC data far outweigh those with SC data, but they have been largely ignored in the search for new roAp stars.
The key difficulty in searching for rapid oscillations in the LC data is that each pulsation frequency in the Fourier spectrum is accompanied by many aliases, reflected around integer multiples of the sampling frequency. Despite this, it has previously been shown by \citet{Murphy2013SuperNyquist} that the Nyquist ambiguity in the LC data can be resolved as a result of the barycentric corrections applied to {\em Kepler}\ time stamps, leading to a scenario where Nyquist aliases can be reliably distinguished from their true counterparts even if they are well above or below the nominal Nyquist limit. The barycentric corrections modulate the cadence of the photometric observations, so that all aliases above the Nyquist limit appear as multiplets split by the orbital frequency of the {\em Kepler}\ telescope ($1/372.5$\,d, 0.03\,\mbox{$\muup$Hz}). Furthermore, the distribution of power in Fourier space ensures that in the absence of errors, the highest peak of a set of aliases will always be the true one. An example of distinguishing aliases is shown in Fig.~\ref{fig:10195926} for the known roAp star KIC\,10195926. The true pulsation is evident as the highest peak in the LC data, and is not split by the {\em Kepler}\ orbital frequency.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/10195926_example_plot.pdf}
\caption{Amplitude spectra of the \textbf{a)} long- and \textbf{b)} short-cadence {\em Kepler}\ data of KIC\,10195926, a previously known roAp star \citep{Kurtz2011First}. The primary oscillation is detectable even though it lies far above the Nyquist frequency (shown in integer multiples of the Nyquist frequency as red dotted lines) for the LC data. The green curves show the ratio of measured to intrinsic amplitudes in the data, showing the effects of apodization. \textbf{c)} shows the aliased signal at 406.2\,\mbox{$\muup$Hz}\ of the true oscillation at \textbf{d)}, 972.6\,\mbox{$\muup$Hz}, distinguishable by both the {\em Kepler}\ orbital separation frequency (dashed blue lines) and maximum amplitudes. The Python code P{\sc{yquist}} has been used to plot the apodization \citep{Bell2017Destroying}.}
\label{fig:10195926}
\end{figure*}
This technique, known as super-Nyquist asteroseismology, has previously been used with red giants and solar-like oscillators on a case-by-case basis \citep{Chaplin2014SuperNyquist,Yu2016Asteroseismology,Mathur2016Probing}, as well as in combinations of LC data with ground-based observations for compact pulsators \citep{Bell2017Destroying}. Applications in the context of roAp stars have been limited only to frequency verification of SC or other data \citep{Holdsworth2014KIC, Smalley2015KIC}. Our approach makes no assumption about the spectroscopic nature or previous identification of the target, except that its effective temperature lies in the observed range for roAp stars. We further note that super-Nyquist asteroseismology is applicable to the Transiting Exoplanet Survey Satellite \citep[TESS;][]{Ricker2014Transiting} and future space-based missions \citep{Murphy2015Potential, Shibahashi2018SuperNyquist}.
In this paper, we report six new roAp stars whose frequencies are identified solely from their LC data; KIC\,6631188, KIC\,7018170, KIC\,10685175, KIC\,11031749, KIC\,11296437, and KIC\,11409673. These are all found to be chemically peculiar A/F stars with enhanced Sr, Cr, and/or Eu lines.
\section{Observational data \& Analysis}
\subsection{Target selection}
We selected \textit{Kepler} targets with effective temperatures between 6000 and 10\,000\,K according to the `input' temperatures of \citet{Mathur2017Revised}. We significantly extended the cooler edge of our search since few roAp stars are known to lie close to the red edge of the instability strip. We used the \textit{Kepler} LC light curves from Quarters 0 through 17, processed with the PDCSAP pipeline \citep{Stumpe2014Multiscale}, yielding a total sample of 69\,347 stars. We applied a custom-written pipeline to all of these stars that have LC photometry for at least four quarters. Nyquist aliases in stars with time-bases shorter than a full Kepler orbital period (4 quarters) have poorly defined multiplets and were discarded from the sample at run-time \citep[see][for details]{Murphy2013SuperNyquist}.
In addition to the automated search, we manually inspected the light curves of the 53 known magnetic chemically peculiar (mCP) stars from the list of \citet{Hummerich2018Kepler}. These stars have pre-existing spectral classification, requiring only a super-Nyquist oscillation to be identified as a roAp star.
\subsection{Pipeline}
The pipeline was designed to identify all oscillations between 580 and 3500\,\mbox{$\muup$Hz}\ in the {\em Kepler}\ LC data, by first applying a high-pass filter to the light curve, removing both the long-period rotational modulation between $0$ and $50$\,\mbox{$\muup$Hz}\ and low-frequency instrumental artefacts. The high-pass filter reduced all power in this given range to noise level. The skewness of the amplitude spectrum values, as measured between 0 and 3500\,\mbox{$\muup$Hz}, was then used as a diagnostic for separating pulsators and non-pulsators, following \citet{Murphy2019Gaiaderived}. Stars with no detectable pulsations, either aliased or otherwise, tend to have a skewness lower than unity, and were removed from the sample.
After filtering out non-pulsators, each frequency above 700\,\mbox{$\muup$Hz}\ at a signal-to-noise ratio (SNR) greater than 5 was then checked automatically for sidelobes. These sidelobes are caused by the uneven sampling of \textit{Kepler}'s data points once barycentric corrections to the time stamps have been made, as seen in Fig.~\ref{fig:10195926}. A simple peak-finding algorithm was used to determine the frequency of highest amplitude for each frequency in the set of aliases, which was further refined using a three-point parabolic interpolation to mitigate any potential frequency drift. The frequencies above an SNR of 5 were then deemed aliases if their sidelobes were separated by the {\em Kepler}\ orbital frequency within a tolerance of $\pm$0.002\,\mbox{$\muup$Hz}. Frequencies that did not display evidence of Nyquist aliasing were then flagged for manual inspection.
The general process for the pipeline can be summarised as follows:
\begin{enumerate}
\item High-pass filter the light curve and calculate the skewness of the amplitude spectrum between 0 and 3500\,\mbox{$\muup$Hz}; if skewness is less than unity, move to next star.
\item For each peak greater than 700\,\mbox{$\muup$Hz}\ with a SNR above 5, identify all sidelobes and determine whether they are separated by the {\em Kepler}\ orbital frequency.
\item If at least one peak is not an alias, flag the star for manual inspection.
\end{enumerate}
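The core of steps (i) and (ii) can be sketched in Python as follows (a minimal illustration rather than the production pipeline; the thresholds are the values quoted above):
\begin{verbatim}
import numpy as np
from scipy.stats import skew

F_ORB = 1e6 / (372.5 * 86400.0)  # Kepler orbital frequency, ~0.031 uHz

def is_candidate(amplitudes):
    # Step (i): amplitude spectra of pulsators (0-3500 uHz,
    # high-pass filtered) are skewed above unity
    return skew(amplitudes) > 1.0

def is_nyquist_alias(peak_freq, sidelobe_freqs, tol=0.002):
    # Step (ii): an alias has sidelobes offset from it by the
    # Kepler orbital frequency, within the quoted tolerance (uHz)
    return any(abs(abs(s - peak_freq) - F_ORB) < tol
               for s in sidelobe_freqs)
\end{verbatim}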
\subsection{Apodization}
The high-pass filter was designed to remove all signals between 0 and 50\,\mbox{$\muup$Hz}. This had the additional effect of removing the reflected signals at integer multiples of the sampling frequency, regardless of whether they are aliased or genuine. As a result, the pipeline presented here cannot reliably identify oscillations close to integer multiples of the sampling frequency (2$\nu_{\rm Nyq}$).
However, we note that if any of these stars are indeed oscillating in these regions, or even above the Nyquist frequency, the measured amplitude will be highly diminished as a result of the non-zero duration of {\em Kepler}\ integration times, a phenomenon referred to as apodization \citep{Murphy2015Investigating,Hekker2017Giant} or phase smearing \citep{Bell2017Destroying}. The amplitudes measured from the data ($A_{\rm measured}$) are smaller than their intrinsic amplitudes in the {\em Kepler}\ filter by a factor of $\eta$,
\begin{eqnarray}
\eta = \dfrac{A_{\rm measured}}{A_{\rm intrinsic}} = {\rm sinc}\Big[\dfrac{\nu}{2\nu_{\rm Nyq}} \Big],
\end{eqnarray}
where $\nu$ and $\nu_{\rm Nyq}$ are the observed and Nyquist frequencies, respectively. This equation shows that frequencies lying near integer multiples of the sampling frequency are almost undetectable in \textit{Kepler} and other photometric campaigns. The factor $\eta$ is shown as the green curves in Fig.~\ref{fig:10195926}. For the results in Sec.~\ref{sec:results}, both measured and intrinsic amplitudes are provided.
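The attenuation factor is straightforward to evaluate numerically; a short sketch (note that numpy's \texttt{sinc} already includes the factor of $\pi$):
\begin{verbatim}
import numpy as np

def apodization(nu, nu_nyq=283.21):
    # eta = sinc(nu / (2 nu_Nyq)), with np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(nu / (2.0 * nu_nyq))

# e.g. the 1493.52 uHz pulsation of KIC 6631188 in the LC data:
eta = apodization(1493.52)   # ~0.11
intrinsic = 0.123 / eta      # measured amplitude (mmag) / eta ~ 1.12
\end{verbatim}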
\section{Results}
\label{sec:results}
Each star found to have non-alias high-frequency pulsations by the pipeline was manually inspected. Of the flagged candidates, 4 were previously identified roAp stars in the {\em Kepler}\ field, KIC\,10195926 \citep{Kurtz2011First}, KIC\,10483436 \citep{Balona2011Rotation}, KIC\,7582608 \citep{Holdsworth2014KIC}, and KIC\,4768731 \citep{Smalley2015KIC}. The fifth previously known {\em Kepler}\ roAp star KIC\,8677585 \citep{Balona2013Unusual}, was not identified by the pipeline, due to the primary frequency of 1659.79\,\mbox{$\muup$Hz}\ falling just within range of the filtered region. We further identified one more high-frequency oscillator during manual inspection of the 53 stars in the mCP sample of \citet{Hummerich2018Kepler}.
For all six newly identified stars, we calculated an amplitude spectrum in the frequency range around the detected pulsation, following the method of \citet{Kurtz1985Algorithm}. The frequencies were then optimised by non-linear least-squares. The signal-to-noise ratio (SNR) of the spectrum was calculated for the entire light curve by means of a box kernel convolution of frequency width 23.15\,\mbox{$\muup$Hz}\ (2\,\mbox{d$^{-1}$}), as implemented in the L{\sc{ightkurve}} Python package \citep{lightkurve}.
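The SNR spectrum can be reproduced with a simple box-kernel convolution; a sketch mirroring the L{\sc{ightkurve}} approach, assuming a regularly gridded amplitude spectrum:
\begin{verbatim}
import numpy as np

def snr_spectrum(freq, amp, width=23.15):
    # Divide the amplitude spectrum by a box-kernel-smoothed
    # noise estimate; freq and width in the same units (here uHz)
    df = np.median(np.diff(freq))
    n = max(int(round(width / df)), 1)
    background = np.convolve(amp, np.ones(n) / n, mode='same')
    return amp / background
\end{verbatim}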
\subsection{Stellar properties}
The properties of the six new high-frequency oscillators examined in this work are provided in Table~\ref{tab:sample}. Temperatures were obtained from LAMOST DR4 spectroscopy \citep{Zhao2012LAMOST}. Since the temperatures of roAp stars are inherently difficult to measure as a result of their anomalous elemental distributions \citep{Matthews1999Parallaxes}, we inflated the low uncertainties in the LAMOST catalogue ($\sim$40\,K) to a fixed 300\,K. For the one star with an unusable spectrum in LAMOST, KIC\,6631188, we took the temperature from the stellar properties catalogue of \cite{Mathur2017Revised}.
We derived apparent magnitudes in the SDSS $g-$band by re-calibrating the KIC $g-$ and $r-$bands following equation~1 of \citet{Pinsonneault2012Revised}. Distances were obtained from the Gaia DR2 parallaxes using the normalised posterior distribution and adopted length scale model of \citet{Bailer-Jones2018Estimating}. This produced a distribution of distances for each star, from which Monte Carlo draws could be sampled. Unlike \citet{Bailer-Jones2018Estimating}, no parallax zero-point correction has been applied to our sample, since it has previously been shown by \citet{Murphy2019Gaiaderived} to not be appropriate for \textit{Kepler} A stars.
\begin{table}
\centering
\caption{Properties of the 6 new roAp stars.}
\begin{tabular}{lcccc}
\hline
KIC & $g$ Mag & T$_{\rm eff}$ (K) & $\log{\rm L/\rm L_\odot}$ & $\rm M/\rm M_\odot$ \\
\hline
6631188 & 13.835 & 7700\,$\pm$\,300 & 1.124\,$\pm$\,0.034 & 1.83\,$\pm$\,0.25 \\
7018170 & 13.335 & 7000\,$\pm$\,300 & 0.987\,$\pm$\,0.026 & 1.69\,$\pm$\,0.25 \\
10685175 & 12.011& 8000\,$\pm$\,300 & 0.896\,$\pm$\,0.022 & 1.65\,$\pm$\,0.25 \\
11031749 & 12.949 & 7000\,$\pm$\,300 & 1.132\,$\pm$\,0.041 & 1.78\,$\pm$\,0.25 \\
11296437 & 11.822 & 7000\,$\pm$\,300 & 1.055\,$\pm$\,0.018 & 1.73\,$\pm$\,0.25 \\
11409673 & 12.837 & 7500\,$\pm$\,300 & 1.056\,$\pm$\,0.031 & 1.75\,$\pm$\,0.25 \\
\hline
\end{tabular}
\label{tab:sample}
\end{table}
Standard treatment of bolometric corrections \citep[e.g.][]{Torres2010Use} are unreliable for Ap stars, due to their anomalous flux distributions. Working in SDSS $g$ minimises the bolometric correction, since the wavelength range is close to the peak of the spectral energy distribution of Ap stars. We obtained $g$-band bolometric corrections using the {\sc IsoClassify} package \citep{Huber2017Asteroseismology}, which interpolates over the {\sc mesa} Isochrones \& Stellar Tracks (MIST) tables \citep{Dotter2016MESA} using stellar metallicities, effective temperatures, and surface gravities obtained from \citet{Mathur2017Revised}. Extinction corrections of \citet{Green2018Galactic} as queried through the {\sc Dustmaps} python package \citep{Green2018Dustmaps}, were applied to the sample. The corrections were re-scaled to SDSS $g$ following table~A1 of \citet{Sanders2018Isochronea}. To calculate luminosities, we followed the methodology of \citet{Murphy2019Gaiaderived}, using a Monte Carlo simulation to obtain uncertainties. Masses were obtained via an interpolation over stellar tracks, and are discussed in more detail in Sec.~\ref{sec:modelling}.
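The luminosity calculation can be sketched as follows (illustrative only: the adopted M$_{\rm bol,\odot}=4.74$ and the Gaussian error on $g$ are assumptions here, and \texttt{d\_pc} stands for distance samples drawn from the \citet{Bailer-Jones2018Estimating} posterior):
\begin{verbatim}
import numpy as np

MBOL_SUN = 4.74  # assumed solar bolometric magnitude

def log_luminosity(g, d_pc, bc_g, a_g, sigma_g=0.01, seed=0):
    # Monte Carlo samples of log10(L/Lsun) from the apparent g
    # magnitude, distance samples (pc), bolometric correction
    # and extinction
    rng = np.random.default_rng(seed)
    g_s = rng.normal(g, sigma_g, size=len(d_pc))
    m_bol = g_s - a_g + bc_g - 5.0 * np.log10(np.asarray(d_pc) / 10.0)
    return 0.4 * (MBOL_SUN - m_bol)
\end{verbatim}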
\section{Discussion of individual stars}
\label{sec:pulsations}
\subsection{KIC~6631188}
\begin{table*}
\centering
\caption{Non-linear least squares fits of the rotation frequency and the pulsation multiplets. The zero-points of the fits were chosen to be the centre of each light curve. Rotational frequencies, \mbox{$\nu_{\rm rot}$}, are calculated from the low-frequency portion of the unfiltered light curve when available, whereas the oscillation frequencies are calculated on the high-pass filtered light curve. $\delta \nu$ is the difference in frequency between the current and previous row. $^\dagger$Rotation has been calculated from amplitude modulation of its frequencies (cf. Sec.~\ref{sec:amp_variability}).}
\begin{tabular}{
l
l
r@{$\,\pm\,$}l
r@{$\,\pm\,$}l
r@{$\,\pm\,$}l
r@{$\,\pm\,$}l
c
r
}
\hline
KIC & Label & \multicolumn{2}{c}{Frequency} & \multicolumn{2}{c}{Amplitude$_{\rm \,\,measured}$} & \multicolumn{2}{c}{Amplitude$_{\rm \,\,intrinsic}$} & \multicolumn{2}{c}{Phase} & $\delta \nu$ & $\delta \nu / \nu_{\rm rot}$\\
& & \multicolumn{2}{c}{(\mbox{$\muup$Hz})} & \multicolumn{2}{c}{(mmag)} & \multicolumn{2}{c}{(mmag)} &\multicolumn{2}{c}{(rad)} & (\mbox{$\muup$Hz}) \\
\hline
6631188 & \mbox{$\nu_{\rm rot}$} & 2.300\,47&0.000\,04 & 2.053&0.016 & 2.053&0.016
\vspace{0.05cm} \\
&$\nu_1$ - 2\mbox{$\nu_{\rm rot}$} & 1488.918\,89&0.000\,25 & 0.018&0.001 & 0.163&0.009 & 2.171&0.058 &\\
&$\nu_1$ - \mbox{$\nu_{\rm rot}$} & 1491.219\,21&0.000\,42 & 0.011&0.001 & 0.100&0.009 & -2.601&0.095 & 2.300&1.000\\
&$\nu_1$ & 1493.519\,47&0.000\,04 & 0.123&0.001 & 1.119&0.009 & 0.672&0.009 &2.300&1.000\\
&$\nu_1$ + \mbox{$\nu_{\rm rot}$} & 1495.819\,89&0.000\,74 & 0.006&0.001 & 0.058&0.009 & 0.906&0.169 &2.300&1.000\\
&$\nu_1$ + 2\mbox{$\nu_{\rm rot}$} & 1498.121\,02&0.000\,33 & 0.014&0.001 & 0.130&0.009 & -2.092&0.074 &2.301&1.000\\
\hline
7018170 & \mbox{$\nu_{\rm rot}$} & 0.1591&0.0054{$^\dagger$}
\vspace{0.05cm} \\
&$\nu_1$ - 2\mbox{$\nu_{\rm rot}$} & 1944.982\,10&0.001\,23 & 0.008&0.001 & 0.092&0.013 & 3.128&0.322 & \\
&$\nu_1$ - \mbox{$\nu_{\rm rot}$} & 1945.142\,81&0.000\,67 & 0.015&0.001 & 0.170&0.013 & -3.038&0.176 & 0.161&1.010\\
&$\nu_1$ & 1945.301\,73&0.000\,22 & 0.047&0.001 & 0.513&0.013 & -2.687&0.058 & 0.159&0.999\\
&$\nu_1$ + \mbox{$\nu_{\rm rot}$} & 1945.454\,78&0.003\,06 & 0.003&0.001 & 0.037&0.013 & -4.674&0.806 & 0.153&0.962\\
&$\nu_1$ + 2\mbox{$\nu_{\rm rot}$} & 1945.621\,68&0.003\,33 & 0.003&0.001 & 0.034&0.013 & -1.768&0.874 & 0.167&1.049\\
\vspace{0.05cm} \\
&$\nu_2$ - \mbox{$\nu_{\rm rot}$} & 1920.120\,72&0.001\,41 & 0.007&0.001 & 0.082&0.013 & 0.738&0.369 & \\
&$\nu_2$ & 1920.278\,31&0.000\,95 & 0.011&0.001 & 0.123&0.013 & 0.504&0.249 & 0.158&0.991\\
&$\nu_2$ + \mbox{$\nu_{\rm rot}$} & 1920.439\,44&0.002\,47 & 0.004&0.001 & 0.047&0.013 & 0.268&0.648 & 0.161&1.013\\
\vspace{0.05cm} \\
&$\nu_3$ - \mbox{$\nu_{\rm rot}$} & 1970.165\,86&0.001\,69 & 0.006&0.001 & 0.066&0.013 & -1.657&0.444 & \\
&$\nu_3$ & 1970.324\,09&0.001\,47 & 0.007&0.001 & 0.077&0.013 & -2.795&0.385 & 0.158&0.995\\
&$\nu_3$ + \mbox{$\nu_{\rm rot}$} & 1970.483\,30&0.001\,83 & 0.006&0.001 & 0.061&0.013 & -2.552&0.481 & 0.159&1.001\\
\hline
10685175 & \mbox{$\nu_{\rm rot}$} & 3.731\,18&0.000\,01 & 4.951&0.010 & 4.951&0.010
\vspace{0.05cm} \\
&$\nu_1$ - 2\mbox{$\nu_{\rm rot}$} & 2775.545\,47&0.003\,38 & 0.004&0.003 & 0.191&0.150 & 1.077&0.800 &\\
&$\nu_1$ - \mbox{$\nu_{\rm rot}$} & 2779.225\,90&0.002\,03 & 0.006&0.003 & 0.337&0.161 & -1.803&0.481 & 3.680&0.986\\
&$\nu_1$ & 2783.008\,00&0.000\,96 & 0.013&0.003 & 0.765&0.173 & 2.953&0.228 &3.782&1.014\\
&$\nu_1$ + \mbox{$\nu_{\rm rot}$} & 2786.689\,62&0.003\,00 & 0.004&0.003 & 0.266&0.187 & 0.437&0.708 &3.682&0.987\\
&$\nu_1$ + 2\mbox{$\nu_{\rm rot}$} & 2790.983\,95&0.002\,84 & 0.005&0.003 & 0.310&0.206 & -2.863&0.671 &4.294&1.151\\
\hline
11031749 & $\nu_1$ & 1372.717\,24&0.000\,16 & 0.0261&0.0006 &0.205&0.005 & 0.899 &0.021\\
\hline
11296437 & \mbox{$\nu_{\rm rot}$} & 1.624\,58&0.000\,01 & 1.705&0.002 & 1.705&0.002
\vspace{0.05cm} \\
&$\nu_1$ - \mbox{$\nu_{\rm rot}$} & 1408.152\,04&0.000\,44 & 0.0026&0.0003 & 0.020&0.002 & -2.229&0.100 \\
&$\nu_1$ & 1409.776\,71&0.000\,02 & 0.0450&0.0003 & 0.352&0.002 & 2.437&0.006 &1.625&1.000\\
&$\nu_1$ + \mbox{$\nu_{\rm rot}$} & 1411.402\,13&0.000\,49 & 0.0023&0.0003 & 0.018&0.002 & 0.823&0.112 &1.625&1.001\\
\vspace{0.05cm} \\
&$\nu_2$ & 126.791\,38&0.000\,05 & 0.0222&0.0003 & 0.0242&0.0003 & -1.716&0.012 \\
&$\nu_3$ & 129.151\,22&0.000\,04 & 0.0317&0.0003 & 0.0345&0.0003 & 2.080&0.008 \\
\hline
11409673 & \mbox{$\nu_{\rm rot}$} & 0.940\,16&0.000\,02 & 0.865&0.006 & 0.865&0.006
\vspace{0.05cm} \\
&$\nu_1$ - \mbox{$\nu_{\rm rot}$} & 2499.985\,30&0.001\,01 & 0.021&0.001 & 0.307&0.018 &-1.571&0.057& \\
&$\nu_1$ & 2500.926\,65&0.003\,21 & 0.007&0.001 & 0.097&0.018 &0.667&0.199 &0.941&1.001 \\
&$\nu_1$ + \mbox{$\nu_{\rm rot}$} & 2501.866\,33&0.001\,11 & 0.019&0.001 & 0.280&0.018 &-1.757&0.069 &0.940&0.999 \\
\hline
\end{tabular}
\label{tab:pulsators}
\end{table*}
KIC\,6631188 has previously been identified as a rotational variable with a period of 5.029\,d \citep{Reinhold2013Rotation} or 2.514\,d \citep{Reinhold2015Rotation}. The unfiltered light curve of KIC\,6631188 shows a series of low-frequency harmonic signals beginning at multiples of 2.30\,\mbox{$\muup$Hz}\ (Fig.~\ref{fig:6631188}). Although the highest amplitude signal corresponds to a rotational period of 2.514\,d, the true rotation period was confirmed by folding the light curve on the 2.30\,\mbox{$\muup$Hz}\ frequency, yielding a period of 5.03117\,$\pm$\,0.00004\,d. The folded light curve shows clear double-wave spot-based modulation, implying that both magnetic poles are observed.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/6631188_shared.pdf}
\caption{\textbf{a)} Amplitude spectrum of KIC\,6631188 out to the Nyquist frequency of 283.2\,\mbox{$\muup$Hz}. The inset shows the low-frequency region with peaks due to rotation. \textbf{b)} Light curve folded at the rotation period of 5.03 d and binned at a factor of 50:1. \textbf{c)} Amplitude spectrum of KIC\,6631188 after a high-pass filter has removed the low-frequency signals -- the true oscillation frequency of 1493.52\,\mbox{$\muup$Hz}\ has the highest amplitude (green). All other peaks flagged as aliases above the 5 SNR are marked in blue. The red dashed lines denote integer multiples of the Nyquist frequency. \textbf{d)} Zoomed region of the primary frequency before, and \textbf{e)}, after pre-whitening $\nu_1$. The residual power in $\nu_1$ is due to frequency variability. The four sidelobes are due to rotational modulation of the pulsation amplitude (see text). The top x-axis, where shown, is the corresponding frequency in d$^{-1}$.}
\label{fig:6631188}
\end{figure*}
After high-pass filtering the light curve, the primary pulsation frequency of 1493.52\,\mbox{$\muup$Hz}\ is observable in the super-Nyquist regime. We see evidence for rotational splitting through the detection of a quintuplet, indicating an $\ell=2$ or distorted $\ell=1$ mode. It seems likely that the star is a pure quadrupole pulsator, unless an $\ell=1$ mode is hidden at an integer multiple of the sampling frequency -- where its amplitude would be highly diminished as a result of apodization. It is also possible that other modes are of low intrinsic amplitude, making their detection in the super-Nyquist regime difficult. We can measure the rotational period of KIC\,6631188 from the sidelobe splitting as 5.0312\,$\pm$\,0.0003\,d in good agreement with the low-frequency signal. We list the pulsation and rotational frequencies in Table~\ref{tab:pulsators}.
We are able to provide further constraints on the geometry of the star by assuming that the rotational sidelobes are split from the central peak by exactly the rotation frequency of the star. We chose a zero-point in time such that the phases of the sidelobes were equal, and then applied a linear least squares fit to the data. For a pure non-distorted mode, we expect the phases of all peaks in the multiplet to be the same. We find that the phases are not identical, implying moderate distortion of the mode (Table~\ref{tab:6631188_forcefit}).
\begin{table}
\centering
\caption{Linear least squares fit to the pulsation and force-fitted sidelobes in KIC\,6631188. The zero-point for the fit is BJD 2455692.84871, and has been chosen as such to force the first pair of sidelobe phases to be equal.}
\label{tab:6631188_forcefit}
\begin{tabular}{
lccc
}
\hline
ID & {Frequency} & {Amplitude$_{\rm \,\,intrinsic}$} & {Phase} \\
& {(\mbox{$\muup$Hz})} & {(mmag)} & {(rad)}\\
\hline
$\nu_1-2$\mbox{$\nu_{\rm rot}$} & 1488.9185 & 0.163\,$\pm$\,0.009 & 1.579\,$\pm$\,0.058\\
$\nu_1-$\mbox{$\nu_{\rm rot}$} & 1491.2190 & 0.100\,$\pm$\,0.009 & 1.337\,$\pm$\,0.095\\
$\nu_1$ & 1493.5195 & 1.121\,$\pm$\,0.009 & 2.845\,$\pm$\,0.009\\
$\nu_1+$\mbox{$\nu_{\rm rot}$} & 1495.8199 & 0.058\,$\pm$\,0.009 & 1.337\,$\pm$\,0.167\\
$\nu_1+2$\mbox{$\nu_{\rm rot}$} & 1498.1204 & 0.130\,$\pm$\,0.009 & 2.859\,$\pm$\,0.075\\
\hline
\end{tabular}
\end{table}
The oblique pulsator model can also be applied to obtain geometric constraints on the star's magnetic obliquity and inclination angles, $\beta$ and $i$, respectively. The frequency quintuplet strongly suggests that the pulsation in KIC\,6631188 is a quadrupole mode. We therefore consider the axisymmetric quadrupole case, where $\ell=2$ and $m=0$ and apply the relation of \citet{Kurtz1990Rapidly} for a non-distorted oblique quadrupole pulsation in the absence of limb-darkening and spots:
\begin{eqnarray}
\tan{i}\tan{\beta}= 4 \dfrac{A_{+2}^{(2)}+A_{-2}^{(2)}}{A_{+1}^{(2)}+A_{-1}^{(2)}}.
\label{eqn:quint}
\end{eqnarray}
Here $i$ is the rotational inclination angle, $\beta$ is the angle of obliquity between the rotation and magnetic axes, and $A_{\pm1,2}^{(1,2)}$ are the amplitudes of the first and second sidelobes of the quadrupole pulsation. Using the values of Table~\ref{tab:6631188_forcefit}, we find that $\tan{i}\tan{\beta}=7.4\,\pm\,0.7$, and provide a summary of values satisfying this relation in Fig.~\ref{fig:ibeta_combo}. Since $i+\beta \geq 90^\circ$, both pulsation poles should be visible in the light curve over the rotation cycle of the star, a result consistent with observations of the double-wave light curve with spots at the magnetic poles.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/opm_sidelobes.pdf}
\caption{Possible $i + \beta$ combinations for the roAp stars where analysis of the multiplets allows us to set constraints on their geometry. The shaded region marks the uncertainty. In stars for which $i + \beta > 90 ^\circ$, both magnetic poles are observed. For KIC\,7018170, only the primary $\nu_1$ solution has been shown. KIC\,10685175 has been omitted as its uncertainty dominates the figure.}
\label{fig:ibeta_combo}
\end{figure}
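For a concrete evaluation, the constraint and the corresponding $(i, \beta)$ locus can be computed directly from the fitted intrinsic sidelobe amplitudes (a minimal sketch):
\begin{verbatim}
import numpy as np

def tan_i_tan_beta_quad(a_m2, a_m1, a_p1, a_p2):
    # Quadrupole relation: 4 (A_{+2} + A_{-2}) / (A_{+1} + A_{-1})
    return 4.0 * (a_p2 + a_m2) / (a_p1 + a_m1)

# KIC 6631188 intrinsic sidelobe amplitudes (mmag):
t = tan_i_tan_beta_quad(0.163, 0.100, 0.058, 0.130)  # ~7.4
i = np.linspace(1.0, 89.0, 500)                      # inclination (deg)
beta = np.degrees(np.arctan(t / np.tan(np.radians(i))))  # obliquity locus
\end{verbatim}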
\subsection{KIC~7018170}
The low-frequency variability of KIC\,7018170 exhibits no sign of rotational modulation, which is probably a result of the \mbox{PDCSAP} pipeline removing the long-period variability (Fig.~\ref{fig:7018170}). It is therefore unsurprising that KIC\,7018170 has not been detected as an Ap star in the {\em Kepler}\ data -- the automatic removal of low-frequency modulation causes it to appear as an ordinary non-peculiar star in the LC photometry.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/7018170_shared.pdf}
\caption{Same as in Fig.~\ref{fig:6631188} for KIC\,7018170. In \textbf{a)} however, the long-period rotational modulation has been largely removed by the PDCSAP flux pipeline leading to a jagged light curve (\textbf{b}). Panels \textbf{f)} and \textbf{g)} show the secondary frequencies $\nu_2$ and $\nu_3$ extracted after manual inspection of the filtered light curve.}
\label{fig:7018170}
\end{figure*}
The rotational signal is clearly present in the sidelobe splitting of the primary and secondary pulsation frequencies. The high-pass filtered light curve reveals the primary signal, $\nu_1$, at 1945.30\,\mbox{$\muup$Hz}, with inspection of the amplitude spectrum revealing two more modes, $\nu_2$ and $\nu_3$, at frequencies of 1920.28 and 1970.32\,\mbox{$\muup$Hz}, respectively. All three of these modes are split by 0.16\,\mbox{$\muup$Hz}, which we interpret as the rotational frequency. KIC\,7018170 exhibits significant frequency variability during the second half of the data, which destroys the clean peaks of the triplets. We therefore used only the first half of the data, where frequency variability is minimal, providing a good balance between frequency resolution and variability.
To estimate the large frequency separation, $\Delta \nu$, defined as the difference in frequency of modes of the same degree and consecutive radial order, we apply the general asteroseismic scaling relation,
\begin{eqnarray}
\dfrac{\Delta \nu}{\Delta\nu_{{\odot}}} = \sqrt{\dfrac{\rho}{\rho_{\odot}}} = \dfrac{(M/{\rm M}_{\odot})^{0.5} (T_{\rm eff} / {\rm T}_{\rm eff, \odot})^3}{(L/{\rm L}_{\odot})^{0.75}}
\label{eqn:dnu}
\end{eqnarray}
with adopted solar values $\Delta \nu_\odot$ = 134.88\,$\pm$\,0.04\,\mbox{$\muup$Hz}, and T$_{\rm eff, \odot} = 5777$\,K \citep{Huber2011Testing}. Using the stellar properties in Table~\ref{tab:sample}, we estimate the large separation as 55.19\,$\pm$\,7.27\,\mbox{$\muup$Hz}. The separations from the primary frequency $\nu_1$ to $\nu_2$ and $\nu_3$ are 24.86 and 25.18\,\mbox{$\muup$Hz}, respectively, indicating that the observed modes are likely of alternating even and odd degrees. While we cannot determine the degrees of the modes from the LC data alone, this suggests that the primary frequency $\nu_1$ is actually a quintuplet with unobserved positive sidelobes.
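Evaluating the scaling relation with the stellar properties of Table~\ref{tab:sample} reproduces this estimate (a sketch; the small difference from the quoted value reflects rounding and the Monte Carlo error propagation):
\begin{verbatim}
import numpy as np

def delta_nu(mass, teff, lum, dnu_sun=134.88, teff_sun=5777.0):
    # Scaling-relation estimate of the large separation (uHz)
    return dnu_sun * np.sqrt(mass) * (teff / teff_sun)**3 / lum**0.75

# KIC 7018170: M = 1.69 Msun, Teff = 7000 K, log(L/Lsun) = 0.987
print(delta_nu(1.69, 7000.0, 10**0.987))  # ~56 uHz
\end{verbatim}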
If $\nu_1$ is instead a triplet, it would have highly asymmetric rotational sidelobe peak amplitudes, which the other modes do not exhibit. Asymmetric sidelobe amplitudes are a signature of the Coriolis effect \citep{Bigot2003Are}, but whether this is the sole explanation for such an unequal distribution of power in the amplitude spectra cannot be known without follow-up observations. $\nu_1$ is thus more likely to be a quintuplet, as the large separation would then be 50.04\,\mbox{$\muup$Hz}, a value much closer to the expected result. The suspected positive sidelobes at frequencies of 1945.46 and 1945.62\,\mbox{$\muup$Hz}\ have an SNR of 1.16 and 1.78, respectively, well below the minimum level required for confirmation. To this end, we provide a fit to the full quintuplet in Table~\ref{tab:pulsators}, but note that the frequencies should be treated with caution.
Assuming that $\nu_2$ and $\nu_3$ are triplets, while $\nu_1$ is a quintuplet, we forced the rotational sidelobes to be equally separated from the pulsation mode frequency by the rotation frequency. Lacking the rotational signal from the low-frequency amplitude spectrum, we instead obtained the rotational frequency by examining the variability in amplitudes of the modes themselves. As the star rotates, the observed amplitudes of the oscillations will modulate in phase with the rotation. We provide a full discussion of this phenomenon in Sec.~\ref{sec:amp_variability}. We obtained a rotation frequency of 0.160\,$\pm$\,0.005\,\mbox{$\muup$Hz}, corresponding to a period of 72.7\,$\pm$\,2.5\,d. We used this amplitude modulation frequency to fit the multiplets by linear least squares to test the oblique pulsator model. By choosing the zero-point in time such that the phases of the $\pm\nu_{\rm rot}$ sidelobes of $\nu_1$ are equal, we found that $\nu_1$, does not appear to be distorted, and $\nu_3$ only slightly. $\nu_2$ is heavily distorted, as shown by the unequal phases of the multiplet in Table~\ref{tab:7018170_forcefit}.
\begin{table}
\centering
\caption{Linear least squares fit to the pulsation and force-fitted sidelobes in KIC\,7018170. The zero-point for the fit is BJD 2455755.69582, and has been chosen as such to force the sidelobe phases of $\nu_1$ to be equal.}
\label{tab:7018170_forcefit}
\begin{tabular}{lccc}
\hline
ID & Frequency & Amplitude$_{\rm \,\,intrinsic}$ & Phase \\
& (\mbox{$\muup$Hz}) & (mmag) & (rad) \\
\hline
$\nu_1$ - 2\mbox{$\nu_{\rm rot}$} & $1944.98350$ & $0.090\,\pm\,0.013$ & $-2.799\,\pm\,0.141$ \\
$\nu_1$ - \mbox{$\nu_{\rm rot}$} & $1945.14259$ & $0.172\,\pm\,0.013$ & $-3.080\,\pm\,0.074$ \\
$\nu_1$ & $1945.30169$ & $0.510\,\pm\,0.013$ & $-2.683\,\pm\,0.025$ \\
$\nu_1$ + \mbox{$\nu_{\rm rot}$} & $1945.46079$ & $0.031\,\pm\,0.013$ & $-3.080\,\pm\,0.403$ \\
$\nu_1$ + 2\mbox{$\nu_{\rm rot}$} & $1945.61988$ & $0.033\,\pm\,0.013$ & $-2.170\,\pm\,0.389$ \\
$\nu_2$ - \mbox{$\nu_{\rm rot}$} & $1920.11921$ & $0.081\,\pm\,0.013$ & $\phantom{-}0.407\,\pm\,0.161$ \\
$\nu_2$ & $1920.27831$ & $0.122\,\pm\,0.013$ & $\phantom{-}0.522\,\pm\,0.107$ \\
$\nu_2$ + \mbox{$\nu_{\rm rot}$} & $1920.43741$ & $0.046\,\pm\,0.013$ & $-0.109\,\pm\,0.283$ \\
$\nu_3$ - \mbox{$\nu_{\rm rot}$} & $1970.16500$ & $0.066\,\pm\,0.013$ & $-1.828\,\pm\,0.192$ \\
$\nu_3$ & $1970.32410$ & $0.076\,\pm\,0.013$ & $-2.786\,\pm\,0.165$ \\
$\nu_3$ + \mbox{$\nu_{\rm rot}$} & $1970.48320$ & $0.061\,\pm\,0.013$ & $-2.562\,\pm\,0.206$\\
\hline
\end{tabular}
\end{table}
We can again constrain the inclination and magnetic obliquity angles for the modes. In the case of a pure dipole triplet,
\begin{eqnarray}
\tan{i}\tan{\beta} = \dfrac{A_{+1}^{(1)}+A_{-1}^{(1)}}{A_{0}^{(1)}},
\label{eqn:dipole}
\end{eqnarray}
where again $A_{\pm1}^{(1)}$ are the dipole sidelobe amplitudes, and $A_{0}^{(1)}$ is the amplitude of the central peak. Using Table~\ref{tab:7018170_forcefit}, we find that $\tan{i}\tan{\beta} = 1.0\,\pm\,0.2$, and $\tan{i}\tan{\beta} = 1.7\,\pm\,0.4$ for $\nu_2$, and $\nu_3$, respectively. Using Eqn.\,\ref{eqn:quint}, we find $\tan{i}\tan{\beta} = 2.4\,\pm\,0.4$ for $\nu_1$, which agrees with $\nu_3$ within the large errors, while disagreeing with $\nu_2$, which appears to be $\pi$\,rad out of phase. We provide a summary of values satisfying these relations in Fig.~\ref{fig:ibeta_combo}.
\subsection{KIC~10685175}
KIC\,10685175 was detected by manual inspection of the mCP stars from \citet{Hummerich2018Kepler}, and was not flagged by the pipeline. The star shows obvious rotational modulation in the low-frequency region of the amplitude spectrum (Fig.~\ref{fig:10685175}). The period of rotation, 3.10198\,$\pm$\,0.00001\,d was determined from the low-frequency signal at 0.322\,\mbox{$\muup$Hz}.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/10685175_shared.pdf}
\caption{\textbf{a)} Amplitude spectrum of KIC\,10685175 out to the nominal Nyquist frequency. The inset shows the low-frequency region of the spectrum corresponding to the rotation frequency. \textbf{b)} Light curve folded at the rotation period of 3.10\,d and binned at a ratio of 50:1. \textbf{c)} Amplitude spectrum of KIC\,10685175 after pre-whitening -- the true oscillation frequency of 2783.00\,\mbox{$\muup$Hz}\ can be observed as the signal of maximum amplitude (green). All other peaks flagged as aliases above the 5 SNR threshold are marked in blue. The red dashed lines denote integer multiples of the Nyquist frequency. \textbf{d)} Zoomed region of the primary frequency before, and \textbf{e)}, after pre-whitening $\nu_1$.}
\label{fig:10685175}
\end{figure*}
To study the roAp pulsations, we subtracted the rotational frequency and its first 30 harmonics. Although the amplitude spectrum is too noisy to reveal {\em Kepler}\ orbital sidelobe splitting, the true peak is evident as the signal with the highest power: 2783.01\,\mbox{$\muup$Hz}. This frequency lies close to a multiple of the {\em Kepler}\ sampling rate, and thus has a highly diminished amplitude. The primary frequency appears to be a quintuplet split by the rotational frequency. However, the low SNR of the outermost rotational sidelobes necessitates careful consideration. The outer sidelobes, at frequencies of 2775.55 and 2790.47\,\mbox{$\muup$Hz}, have an SNR of 1.28 and 0.49, respectively. Similar to KIC\,7018170, we provide a fit to the full suspected quintuplet in Table~\ref{tab:pulsators}, but again note that the frequencies of the outermost sidelobes should be treated with caution. If we assume that the pulsation is a triplet, ignoring the outer sidelobes, then Eqn.\,\ref{eqn:dipole} yields a value of $\tan{i}\tan{\beta}=0.9\,\pm\,0.4$, implying that $i+\beta<90^\circ$ and that only one pulsation pole is observed. The large uncertainty, however, indicates that either one or two poles may be observed if the pulsation is modelled as a triplet. On the other hand, if we consider the star as a quadrupole pulsator, we obtain a value of $\tan{i}\tan{\beta}=1.7\,\pm\,1.6$, a result almost completely dominated by its uncertainty.
We again investigate the distortion of the mode by assuming that the multiplet is split by the rotation frequency, and find that all phases agree within error, implying minimal distortion of the mode (Table~\ref{tab:10685175_forcefit}). However, it should be noted that the low SNR of the spectrum greatly inflates the uncertainties on the amplitudes and phases of the fit.
\begin{table}
\centering
\caption{Linear least squares fit to the pulsation and force-fitted sidelobes in KIC\,10685175. The zero-point for the fit is BJD 2455689.78282, and has been chosen as such to force the first pair of sidelobe phases to be equal.}
\label{tab:10685175_forcefit}
\begin{tabular}{
lccc
}
\hline
ID & {Frequency} & {Amplitude$_{\rm \,\,intrinsic}$} & {Phase} \\
& {(\mbox{$\muup$Hz})} & {(mmag)} & {(rad)}\\
\hline
$\nu_1-2$\mbox{$\nu_{\rm rot}$} & 2775.54564 & 0.192\,$\pm$\,0.150 & 0.842\,$\pm$\,0.782\\
$\nu_1-$\mbox{$\nu_{\rm rot}$} & 2779.27682 & 0.432\,$\pm$\,0.161 & 0.324\,$\pm$\,0.373\\
$\nu_1$ & 2783.00800 & 0.764\,$\pm$\,0.173 & 0.585\,$\pm$\,0.226\\
$\nu_1+$\mbox{$\nu_{\rm rot}$} & 2786.73919 & 0.243\,$\pm$\,0.187 & 0.324\,$\pm$\,0.772\\
$\nu_1+2$\mbox{$\nu_{\rm rot}$} & 2790.47037 & 0.099\,$\pm$\,0.204 & 0.049\,$\pm$\,2.051\\
\hline
\end{tabular}
\end{table}
\subsection{KIC~11031749}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/11031749_shared.pdf}
\caption{Same as Fig.~\ref{fig:6631188} for KIC\,11031749. No rotation signal can be observed in either \textbf{a)} the low-frequency amplitude spectrum or \textbf{d)} rotational splitting. We provide the full 4 yr light curve in lieu of a folded light curve (\textbf{b}).}
\label{fig:11031749}
\end{figure*}
KIC\,11031749 does not appear to demonstrate spot-based amplitude modulation in its light curve (Fig.~\ref{fig:11031749}), nor does it show signs of rotational frequency splitting -- consistent with a lack of rotational modulation. Despite this, it is clear that it possesses unusual chemical abundances of Sr, Cr, and Eu from its spectrum (Sec.~\ref{sec:spectra}). We propose two possible explanations for the lack of observable modulation. If the angle of inclination or magnetic obliquity is close to $0^\circ$, no modulation would be observed, since the axis of rotation points towards {\em Kepler}. However, this assumes that the chemical abundance spots are aligned over the magnetic poles, which has been shown to not always be the case \citep{Kochukhov2004Multielement}. Another possibility is that the period of rotation could be much longer than the time-base of the {\em Kepler}\ LC data. While the typical rotational period for A-type stars is rather short \citep{Royer2007Rotational,Royer2009Rotation,Murphy2015Observational}, the rotation period of Ap stars can exceed even 10 yr due to the effects of magnetic braking \citep{Landstreet2000Magnetic}. Indeed, a non-negligible fraction of Ap stars are known to have rotational periods exceeding several centuries \citep{Mathys2015Very}, and so it is possible that the star is simply an extremely slow rotator.
The aliased signal of the true pulsation is visible even in the unfiltered LC data (Fig.~\ref{fig:11031749}). After filtering, we identified one pulsation frequency ($\nu_1$: 1372.72\,\mbox{$\muup$Hz}), and provide a fit in Table~\ref{tab:pulsators}. With no clear multiplet structure around the primary frequency, we are unable to constrain the inclination and magnetic obliquity within the framework of the oblique pulsator model. Indeed, without any apparent low-frequency modulation we are unable to even provide a rotation period. This represents an interesting, although not unheard of challenge for determining the rotation period. Since the photometric and spectral variability originate with the observed spot-based modulation, neither method can determine the rotation period without a longer time-base of observations. Regardless, we include KIC\,11031749 in our list of new roAp stars as it satisfies the main criterion of exhibiting both rapid oscillations and chemical abundance peculiarities.
\subsection{KIC~11296437}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/11296437_shared.pdf}
\caption{Same as Fig.~\ref{fig:6631188} for KIC\,11296437. Note that the power distribution in the super-Nyquist regime does not guarantee that the highest-amplitude frequency in the SNR periodogram is also the highest-SNR frequency.}
\label{fig:11296437}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/11409673_shared.pdf}
\caption{\textbf{a)} Amplitude spectrum of KIC\,11409673 out to the nominal Nyquist frequency. The inset shows the low-frequency region of the spectrum corresponding to the rotation frequency. \textbf{b)} Light curve folded and binned at a ratio of 50:1. \textbf{c)} Amplitude spectrum after a high-pass filter has removed the low-frequency signals -- the true oscillation frequency can be observed as the signal of maximum amplitude (green). All other peaks flagged as aliases above the 5 SNR are marked in blue. The red dashed lines denote integer multiples of the Nyquist frequency. \textbf{d)} Zoomed region of the primary frequency before, and \textbf{e)}, after pre-whitening $\nu_1$. The top x-axis, where available, is the corresponding frequency in d$^{-1}$.}
\label{fig:11409673}
\end{figure*}
KIC\,11296437 is a known rotationally variable star \citep{Reinhold2013Rotation}, whose period of rotation at 7.12433\,$\pm$\,0.00002\,d is evident in the low-frequency region of the amplitude spectrum (Fig.~\ref{fig:11296437}). Folding the light curve on this period shows only a single spot or set of spots, implying that $i$+$\beta < 90^\circ$.
The primary pulsation frequency was found to be 1409.78\,\mbox{$\muup$Hz}\ in the high-pass filtered light curve. This mode shows two sidelobes split by 1.625\,\mbox{$\muup$Hz}, in good agreement with the rotation frequency. Two low-frequency modes, $\nu_2$ and $\nu_3$, are also present below the Nyquist frequency, at 126.79 and 129.15\,\mbox{$\muup$Hz}\ respectively. These modes are clearly non-aliased pulsations, as they are not split by the {\em Kepler}\ orbital period. However, neither of them is split by the rotational frequency. We provide a fit to these frequencies in Table~\ref{tab:pulsators}.
We apply the oblique pulsator model to $\nu_1$ by assuming the mode is split by the rotation frequency, and find that all three phases agree within error, implying that $\nu_1$ is not distorted. The same test cannot be applied to $\nu_2$ and $\nu_3$ due to their lack of rotational splitting. We can further constrain the geometry of the star by again considering the sidelobe amplitude ratios (Eq.\,\ref{eqn:dipole}). Using the values in Table~\ref{tab:11296437_forcefit}, we find that $\tan{i}\tan{\beta} = 0.11\,\pm\,0.01$, and provide a summary of angles satisfying these values in Fig.~\ref{fig:ibeta_combo}, which demonstrates that only one pulsation pole should be observed over the rotation cycle ($i$+$\beta < 90^\circ$).
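As a cross-check of these numbers, the sidelobe ratio and a first-order error propagation can be computed directly from the amplitudes in Table~\ref{tab:11296437_forcefit}. The following short script is our own illustrative sketch (it assumes the standard dipole sidelobe relation of Eq.\,\ref{eqn:dipole} and uncorrelated uncertainties):
\begin{verbatim}
import numpy as np

# Intrinsic amplitudes (mmag) and uncertainties for KIC 11296437
A_m, sA_m = 0.020, 0.002   # nu_1 - nu_rot
A_0, sA_0 = 0.352, 0.002   # nu_1
A_p, sA_p = 0.018, 0.002   # nu_1 + nu_rot

# Dipole relation: tan(i)tan(beta) = (A_p + A_m) / A_0
ratio = (A_p + A_m) / A_0

# First-order propagation, assuming uncorrelated errors
s_ratio = np.sqrt((sA_p**2 + sA_m**2) / A_0**2
                  + ((A_p + A_m) * sA_0 / A_0**2)**2)

print(f"tan(i)tan(beta) = {ratio:.2f} +/- {s_ratio:.2f}")
# -> tan(i)tan(beta) = 0.11 +/- 0.01
\end{verbatim}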
\begin{table}
\centering
\caption{Linear least squares fit to the pulsation and force-fitted sidelobes in KIC\,11296437. The zero-point for the fit is BJD 2455690.64114.}
\label{tab:11296437_forcefit}
\begin{tabular}{lccc}
\hline
ID & Frequency & Amplitude$_{\rm \,\,intrinsic}$ & Phase \\
& (\mbox{$\muup$Hz}) & (mmag) & (rad) \\
\hline
$\nu_1 - $\mbox{$\nu_{\rm rot}$} & $1408.15213$ & $0.020\,\pm\,0.002$ & $-0.678\,\pm\,0.117$ \\
$\nu_1$ & $1409.77671$ & $0.352\,\pm\,0.002$ & $-0.681\,\pm\,0.007$ \\
$\nu_1 + $\mbox{$\nu_{\rm rot}$} & $1411.40129$ & $0.018\,\pm\,0.002$ & $-0.678\,\pm\,0.131$ \\
\hline
\end{tabular}
\end{table}
KIC\,11296437 is highly unusual in that it displays both high-frequency roAp pulsations and low-frequency $p$-mode pulsations that are typically associated with $\delta$ Scuti stars. A lack of rotational splitting in the low-frequency modes suggests that the star might be a binary composed of a $\delta$ Scuti and a roAp star component. If KIC\,11296437 is truly a single-component system, then it would pose a major challenge to current theoretical models of roAp stars. In particular, the low-frequency modes at 126.79 and 129.15\,\mbox{$\muup$Hz}\ are expected to be damped by the magnetic field according to previous theoretical modelling \citep{Saio2005Nonadiabatic}. KIC\,11296437 would be the first exception to this theory amongst the roAp stars.
On the other hand, it would also be highly unusual if KIC\,11296437 were a binary system. It is rare for Ap stars to be observed in binaries, and even more so for roAp stars. Currently, there is one known roAp star belonging to a spectroscopic binary \citep{Hartmann2015Radialvelocity}, with several other suspected binaries \citep{Scholler2012Multiplicity}. Stellar multiplicity in roAp stars is important for understanding their evolutionary formation and whether tidal interactions may inhibit their pulsations.
\subsection{KIC~11409673}
KIC\,11409673 is a peculiar case, as it has previously been identified as an eclipsing binary, and later a heartbeat binary \citep{Kirk2016Kepler}. We note that a radial velocity survey of heartbeat stars has positively identified KIC\,11409673 as a roAp star \citep{Shporer2016Radial}. Here we provide independent confirmation of this result through super-Nyquist asteroseismology, as their result has been overlooked in later catalogues of roAp stars. KIC\,11409673 has a clear low-frequency variation at 0.94\,\mbox{$\muup$Hz}, corresponding to a rotation period of 12.3107\,$\pm$\,0.0003\,d. Similar to KIC\,6631188, the low-frequency region is dominated by a higher amplitude signal at 2$\nu_{\rm rot}$, consistent with the double-wave nature seen in Fig.~\ref{fig:11409673}. The rotation period of 12.31\,d is found by folding the light curve, and is confirmed after high-pass filtering the light curve and examining the triplet centred on the primary frequency of 2500.93\,\mbox{$\muup$Hz}. Similar to KIC\,7018170, KIC\,11409673 exhibits strong frequency variation, which distorts the shape of the multiplets. We thus split the LC data into four equally spaced sections and analysed the multiplet separately in each section. This reduced the issues arising from frequency variation, albeit at the cost of frequency resolution. The results of the least-squares analysis for the first section of data are presented in Table~\ref{tab:pulsators}.
Again, applying the oblique pulsator model by assuming that the sidelobes are separated from the primary frequency by the rotation frequency, we fit each section of data via least squares. By choosing the zero-point in time such that the phases of the sidelobes are equal, we are able to show that the mode is not distorted, as the three phases agree within error. This is the case for all four separate fits. The results of this test for the first section of the data are shown in Table~\ref{tab:11409673_forcefit}, with the remaining sections in Appendix~\ref{tab11409673_pulsations_appendix}. We find that $\tan{i}\tan{\beta}=6.1\,\pm\,1.1$ using Eq.\,\ref{eqn:dipole} for the first section of data, implying that both pulsation poles are observed, in agreement with the light curve.
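The linear least-squares fits used throughout this work reduce to a design-matrix solve at fixed frequencies. The sketch below is our own illustration (the arrays \texttt{t} and \texttt{y} and the frequencies are placeholders, not the actual data):
\begin{verbatim}
import numpy as np

def fit_multiplet(t, y, freqs_muhz):
    # Least-squares amplitudes/phases of sinusoids at fixed frequencies.
    # t in days (BJD minus zero-point), y in mmag, freqs in microHz.
    cols = []
    for f in np.asarray(freqs_muhz) * 0.0864:   # microHz -> cycles/day
        cols += [np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)]
    A = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    amp = np.hypot(c[0::2], c[1::2])
    phase = np.arctan2(-c[1::2], c[0::2])       # y = A*cos(2*pi*f*t + phase)
    return amp, phase

# usage with placeholder inputs:
# amp, phase = fit_multiplet(t, y, [nu1 - nu_rot, nu1, nu1 + nu_rot])
\end{verbatim}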
\begin{table}
\centering
\caption{Linear least squares fit to the pulsation and force-fitted sidelobes in KIC\,11409673. The zero-point for the fit is BJD 2455144.43981. The data have been split into four equally spaced sets, with the sidelobes force-fitted in each set. We show only the results of the first set below and provide the rest in Appendix\,\ref{sec:app}. The results for each set are similar, and agree within the errors.}
\label{tab:11409673_forcefit}
\begin{tabular}{lccc}
\hline
ID & Frequency & Amplitude$_{\rm \,\,intrinsic}$ & Phase \\
& (\mbox{$\muup$Hz}) & (mmag) & (rad) \\
\hline
$\nu_1 - $\mbox{$\nu_{\rm rot}$} & $2499.98645$ & 0.306\,$\pm$\,0.018 & $1.0173\,\pm\,0.0579$ \\
$\nu_1$ & $2500.92665$ & 0.097\,$\pm$\,0.018 & $1.1551\,\pm\,0.1834$ \\
$\nu_1 + $\mbox{$\nu_{\rm rot}$} & $2501.86684$ & 0.280\,$\pm$\,0.018 & $1.0173\,\pm\,0.0632$ \\
\hline
\end{tabular}
\end{table}
\section{Discussion}
\subsection{Acoustic cutoff frequencies}
As discussed in Sec.~\ref{sec:intro}, several roAp stars are known to oscillate above their theoretical acoustic cutoff frequency ($\nu_{\rm ac}$). The origin of the pulsation mechanism in these super-acoustic stars remains unknown, and presents a significant challenge to theoretical modelling. We therefore calculate whether the stars presented here oscillate above their theoretical acoustic cutoff frequency following the relation
\begin{eqnarray}
\dfrac{\nu_{\rm ac}}{\nu_{\rm ac, \odot}} = \dfrac{{\rm M}/{\rm M}_\odot ({\rm T}_{\rm eff}/{\rm T}_{\rm eff, \odot})^{3.5}}{{\rm L}/{\rm L}_\odot},
\label{eqn:cutoff}
\end{eqnarray}
where $\nu_{\rm ac, \odot}$ = 5300\,\mbox{$\muup$Hz}\ is the theoretical acoustic cutoff frequency of the Sun \citep{Jimenez2011ACOUSTIC}. Using the values provided in Table~\ref{tab:sample}, we find that KIC\,7018170 and KIC\,11409673 both oscillate above their theoretical limit (1739.5 and 2053.7\,\mbox{$\muup$Hz}\ respectively). The remaining stars do not oscillate above their theoretical acoustic cutoff frequency. KIC\,11031749, however, lies, within the errors, almost exactly at its acoustic cutoff frequency.
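Eq.\,\ref{eqn:cutoff} is straightforward to evaluate numerically; the snippet below is a minimal sketch with purely illustrative stellar parameters (not the values of Table~\ref{tab:sample}):
\begin{verbatim}
NU_AC_SUN = 5300.0   # solar acoustic cutoff in microHz (Jimenez et al. 2011)
TEFF_SUN = 5777.0    # adopted solar effective temperature in K

def acoustic_cutoff(mass, teff, lum):
    # mass and lum in solar units, teff in K; returns nu_ac in microHz
    return NU_AC_SUN * mass * (teff / TEFF_SUN)**3.5 / lum

# illustrative A/F-star parameters only:
print(acoustic_cutoff(mass=1.8, teff=7500.0, lum=8.0))
\end{verbatim}
A star then oscillates above its cutoff when its observed frequency of maximum amplitude exceeds the returned value.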
\subsection{Spectral classification}
\label{sec:spectra}
The LAMOST \citep[Large Sky Area Multi-Object Fiber Spectroscopic Telescope;][]{Zhao2012LAMOST} survey has collected low-resolution spectra between 3800-9000\,\AA\ for objects in the \textit{Kepler} field.
We obtained LAMOST spectra from the 4th Data Release (DR4). All stars presented here have at minimum one low-resolution spectrum available from LAMOST. However, the spectrum of KIC\,6631188 is of unusable SNR. We thus obtained a high-resolution spectrum of KIC\,6631188 on 2019 April 17 using the HIRES spectrograph \citep{Vogt1994HIRES} at the Keck-I 10-m telescope on Maunakea, Hawai`i. The spectrum was obtained and reduced as part of the California Planet Search queue \citep[CPS,][]{Howard2010CALIFORNIA}. We obtained a 10-minute integration using the C5 decker, resulting in an S/N per pixel of 30 at $\sim$\,6000\,\AA\ with a spectral resolving power of $R\sim$\,60\,000. This spectrum has been down-sampled to match the MK standard spectrum.
Fig.~\ref{fig:lamost_spectra} presents the spectra of KIC\,6631188, KIC\,7018170, KIC\,11031749, KIC\,11296437, and KIC\,11409673, with MK standard stars down-sampled to match the resolution of either the HIRES or LAMOST via the SPECTRES package \citep{Carnall2017SpectRes}. KIC\,10685175 has a pre-existing spectral classification of A4\,V\,Eu \citep{Hummerich2018Kepler}, and thus is not re-classified in this work.
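The down-sampling itself is a single flux-conserving resampling call; a minimal sketch using SPECTRES (with placeholder wavelength/flux arrays, and assuming its usual three-argument interface) is:
\begin{verbatim}
import numpy as np
from spectres import spectres

# Placeholder arrays: a high-resolution standard spectrum and the
# coarser wavelength grid of the comparison (e.g. LAMOST) spectrum.
std_wavs = np.arange(3800.0, 5000.0, 0.1)     # Angstrom
std_flux = np.ones_like(std_wavs)             # normalised standard flux
target_wavs = np.arange(3810.0, 4990.0, 1.0)  # coarser target grid

# Flux-conserving resampling of the standard onto the target grid
std_resampled = spectres(target_wavs, std_wavs, std_flux)
\end{verbatim}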
In KIC\,6631188, there is a strong enhancement of Sr\,{\sc{ii}} at 4077\,\AA\ and 4215\,\AA. The 4111\,\AA\ line of Cr\,{\sc{ii}} is present, which is used to confirm a Cr peculiarity, but the Cr\,{\sc{ii}} 4172\,\AA\ line, normally the strongest, is not enhanced. The hydrogen lines look narrow for a main-sequence star, but the metal lines are well bracketed by A9\,V and F1\,V. We place this star at F0\,V\,Sr.
KIC\,7018170 shows evidence of mild chemical peculiarities, though the spectrum is of low SNR. The 4077\,\AA\ line is very strong, which is indicative of an overabundance of Sr or Cr or both, but matching peculiarities in other lines of these elements are less clear. Sr\,{\sc{ii}} at 4215\,\AA\ is marginally enhanced, and the 4111 and 4172\,\AA\ lines of Cr\,{\sc{ii}} are also only marginally enhanced. The Eu\,{\sc{ii}} line at 4205\,\AA\ is significantly enhanced and is matched with a small enhancement at $4128-4132$\,\AA. We classify KIC\,7018170 as an F2\,V\,(SrCr)Eu star.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/spectra.pdf}
\caption{Keck and LAMOST spectra of KIC\,6631188, KIC\,7018170, KIC\,11031749, KIC\,11296437, and KIC\,11409673, from top to bottom. MK standard spectra, obtained from \url{https://web.archive.org/web/20140717065550/http://stellar.phys.appstate.edu/Standards/std1_8.html}, have been down-sampled to match the LAMOST resolution.
}
\label{fig:lamost_spectra}
\end{figure*}
The Ca\,{\sc{ii}} K line in KIC\,11031749 is broad and a little shallow, typical of magnetic Ap stars. The 4077\,\AA\ line is very strong, suggesting enhancement of Sr and/or Cr. A mild enhancement of other Sr and Cr lines suggests both are contributing to the enhancement of the 4077\,\AA\ line. There is mild enhancement of the Eu\,{\sc ii} 4205\,\AA\ line and the 4128-4132\,\AA\ doublet suggesting Eu is overabundant. It is noteworthy that the Ca\,{\sc{i}} 4226\,\AA\ line is a little deep. We thus classify KIC\,11031749 as F1\,V\,SrCrEu.
In KIC\,11296437, there is a strong enhancement of Eu\,{\sc ii} at 4130\,\AA\ and 4205\,\AA. There is no clear enhancement of Sr\,{\sc ii} at 4216\,\AA, but a slightly deeper line at 4077\,\AA, which is also a line of Cr\,{\sc ii}. The 4111\,\AA\ line of Cr\,{\sc ii} is present, which is used to confirm a Cr peculiarity, but the 4172\,\AA\ line, normally the strongest, does not look enhanced. The hydrogen lines look narrow for a main-sequence star, but the metal lines are well bracketed by A7\,V and F0\,V. We place this star at A9\,V\,EuCr.
For KIC\,11409673, there is enhanced absorption at 4216\,\AA, which is a classic signature of a Sr overabundance in an Ap star. This is usually met with an enhancement in the 4077\,\AA\ line, but that line is also a line of Cr. The 4077\,\AA\ line is only moderately enhanced, but enough to support a classification of enhanced Sr. Since the 4172\,\AA\ line is normal, it appears that Cr is not enhanced; other Cr lines cannot be relied upon at this SNR. Since the Eu\,{\sc{ii}} 4205\,\AA\ absorption line is strong, it appears that Eu is overabundant, and there is otherwise no evidence for a Si enhancement. The Ca\,{\sc{ii}} K line is a little broad and shallow for A9, which suggests mild atmospheric stratification typical of magnetic Ap stars. The hydrogen lines are a good fit intermediate to A7 and F0. We thus classify this star as A9\,V\,SrEu.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/HRD.pdf}
\caption{Left panel: Positions of the previously known (blue) and new roAp stars (red) discussed in this paper. Uncertainties have only been shown on the new roAp stars for clarity. Although stellar tracks are computed in 0.05-M$_\odot$ intervals, only every second track has been displayed here. Right panel: Interpolated and observed frequencies from pulsation modelling of the roAp stars, coloured by effective temperature. The observed frequencies are taken as the signal of highest amplitude. Circles mark previously known roAp stars, whereas outlined circles mark the six new stars. Uncertainties on the interpolated frequencies are obtained through a Monte Carlo simulation.}
\label{fig:HRD}
\end{figure*}
\subsection{Positions in the H-R diagram}
\label{sec:modelling}
To place our sample on the HR diagram (Fig.~\ref{fig:HRD}), we derived new stellar tracks based on the models of \citet{Cunha2013Testing}. We performed linear, non-adiabatic pulsation calculations for a grid of models covering the region of the HR diagram where roAp stars are typically found. We considered models with masses between 1.4 and 2.5 M$_\odot$, in steps of 0.05 M$_\odot$, and fixed the interior chemical composition at $Y=0.278$ and $X=0.705$.
The calculations closely followed those described for the polar models discussed in \citet{Cunha2013Testing}. The polar models consider that envelope convection is suppressed by the magnetic field, a condition required for the excitation of high radial order modes in roAp stars by the opacity mechanism. Four different cases were considered for each fixed effective temperature and luminosity in the grid. The first case considered an equilibrium model with a surface helium abundance of $Y_{\rm surf}=0.01$ and an atmosphere that extends to a minimum optical depth of $\tau_{\rm min}=3.5\times 10^{-5}$; for this case the pulsations were computed with a fully reflective boundary condition. The other three cases were identical to the first, except that the above options were modified one at a time to: $Y_{\rm surf}=0.1$; $\tau_{\rm min}=3.5\times 10^{-4}$; a transmissive boundary condition \citep[see][for further details on the models]{Cunha2013Testing}. We provide a summary of these models in Table~\ref{tab:models}. We note that the impact of the choice of $Y$ and $X$ on the frequencies of the excited oscillations is negligible compared to the impact of changing the aspects of the physics described above.
\begin{table}
\centering
\caption{Model parameters of the non-adiabatic calculations. Shown are the surface helium abundance $Y_{\rm surf}$, minimum optical depth $\tau_{\rm min}$, and outer boundary condition in the pulsation code.}
\label{tab:models}
\begin{tabular}{lccc}
\hline
Model & $Y_{\rm surf}$ & $\tau_{\rm min}$ & Boundary condition \\
\hline
1 &$0.01$ & $3.5\times10^{-5}$& Reflective\\
2 & $0.1$ & $3.5\times10^{-5}$& Reflective \\
3 &$0.01$ & $3.5\times10^{-4}$& Reflective \\
4 &$0.01$ & $3.5\times10^{-5}$& Transmissive \\
\hline
\end{tabular}
\end{table}
For each fixed point along a track we calculated the frequency of the mode with maximum linear growth rate, for comparison with our observed frequencies; that is, the observed frequency was assumed to correspond to the mode of highest linear growth rate, although this need not necessarily be the case. Using these tracks, we performed linear interpolation to obtain estimates of the masses and frequencies derived from the modelling. Uncertainties on the masses have been artificially inflated to account for uncertainties in metallicity, temperature and luminosity (0.2, 0.1, and 0.05\,$\rm M/\rm M_\odot$ respectively), following \citet{Murphy2019Gaiaderived}. An extra error component of 0.1\,$\rm M/\rm M_\odot$ is included to account for unknown parameters in the stellar modelling, such as overshooting and mixing length. These four contributions are combined in quadrature, yielding a fixed uncertainty of 0.25\,$\rm M/\rm M_\odot$. Frequency interpolation is performed for each model (1 through 4), with the plotted value being the median of these results. The uncertainty in the interpolation of both frequency and mass is obtained from a Monte Carlo simulation sampled from the uncertainty in the temperature and luminosity of the stars. We show the results of the positions in the H-R diagram and the comparison of interpolated frequencies in Fig.~\ref{fig:HRD}.
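Schematically, the interpolation and its Monte Carlo uncertainty can be implemented as below; this is our own sketch with a toy model grid (the real grid comes from the non-adiabatic calculations above):
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Toy model grid: (Teff [K], log L/Lsun) -> predicted frequency [microHz]
pts = rng.uniform([6500.0, 0.7], [8500.0, 1.6], size=(200, 2))
nu_grid = 0.8 * pts[:, 0] - 1500.0 * pts[:, 1]   # toy surface only

def mc_interp(teff, s_teff, logl, s_logl, n=10000):
    # Sample the observational uncertainties and interpolate linearly
    samples = np.column_stack([rng.normal(teff, s_teff, n),
                               rng.normal(logl, s_logl, n)])
    nu = griddata(pts, nu_grid, samples, method='linear')
    nu = nu[np.isfinite(nu)]        # discard samples falling off the grid
    return np.median(nu), np.std(nu)

print(mc_interp(7500.0, 150.0, 1.1, 0.05))
\end{verbatim}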
For frequencies below $\sim$1800\,\mbox{$\muup$Hz}, the agreement between theory and observations is reasonable, albeit with discrepancies on a star-by-star basis that may be due to incomplete modelling of the physics of these complex stars. However, for stars with higher characteristic frequencies there seem to be two distinct groups, one lying below and the other clearly above the 1:1 line. When coloured by temperature, the group lying above the 1:1 line tends to be cooler in general. The suppression of envelope convection is key to the driving of roAp pulsations by the opacity mechanism. As it is harder for suppression to take place in the coolest evolved stars, one may question whether an additional source of driving is at play in these stars. In fact, it has been shown in previous work that the driving of the very high frequency modes observed in some well-known roAp stars cannot be attributed to the opacity mechanism. It was argued that they may, instead, be driven by the turbulent pressure if envelope convection is not fully suppressed \citep{Cunha2013Testing}. Whether that mechanism could also contribute to the driving of the modes observed in the stars lying clearly above the 1:1 line in the right panel of Fig.~\ref{fig:HRD} is something that should be explored in future non-adiabatic modelling of roAp stars.
\subsection{Intrinsic amplitude and phase variability}
\label{sec:amp_variability}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/AM_all3.pdf}
\caption{Time and frequency domain analysis of the primary frequencies for each of the roAp stars, where the colour shows the normalised amplitude of the signal. The amplitude spectra (right) are obtained by taking the periodogram of the wavelet at the primary frequency, and are arbitrarily normalised. All but KIC\,11031749 show strong frequencies in agreement with the spot-based rotational modulation of the light curve, confirming their oblique pulsating nature. The wavelet for KIC\,11409673 is taken from the sidelobe frequency $\nu_1$-\mbox{$\nu_{\rm rot}$}\ due to the central frequency being of low amplitude. The gaps in KIC\,10685175 are due to missing quarters in the {\em Kepler}\ photometry.}
\label{fig:modulation}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/70181_all_mod.pdf}
\caption{Amplitude modulation of the modes $\nu_1$, $\nu_2$, and $\nu_3$ in KIC\,7018170 from top to bottom. All three modes are modulated in phase with each other, in agreement with the oblique pulsator model.}
\label{fig:701_all_mod}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/11409673_crossing.pdf}
\caption{Amplitude and phase variations of KIC\,7018170 (left) and KIC\,11409673 (right). The top panels show the amplitude variations folded on the rotation period, whereas the bottom panels show the folded phase variations. The pulsation amplitude maximum coincides with the rotational light extremum. A $\pi$\,rad phase change is observed when the line-of-sight passes over pulsation nodes. For KIC\,7018170, the phase change occurs at a rotational phase of 0. The solid lines mark the binned data for clarity.}
\label{fig:11409673_crossing}
\end{figure*}
Many of the known roAp stars have shown significant variation in the amplitudes and phases of their pulsation frequencies over the observation period. To examine this variability in our sample, we conducted a time and frequency domain analysis in which a continuous amplitude spectrum was generated by sliding a rectangular window across the light curve. The Fourier amplitude and phase were then calculated within the window at each point. For each star, the window length was chosen to be $100/\nu_1$\,d (i.e.\ 100 pulsation cycles) to minimise phase and amplitude uncertainty whilst correctly sampling the frequency. Both rectangular and Gaussian windows were tested and found to make minimal difference to the resultant amplitudes. Before calculating the variability, the rotational frequency and corresponding harmonics were pre-whitened manually to minimise any potential frequency beating.
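The windowed analysis amounts to evaluating a single-frequency discrete Fourier transform at each window position; the following is a minimal sketch of this procedure (our own illustration, with placeholder light-curve arrays and a rectangular window):
\begin{verbatim}
import numpy as np

def sliding_amp_phase(t, y, nu, width, step):
    # Amplitude and phase at fixed frequency nu (cycles per unit of t)
    # inside a rectangular window slid across the light curve.
    centres, amps, phases = [], [], []
    t0 = t.min()
    while t0 + width <= t.max():
        m = (t >= t0) & (t < t0 + width)
        if m.sum() > 10:
            ft = np.sum(y[m] * np.exp(-2j * np.pi * nu * t[m]))
            amps.append(2.0 * np.abs(ft) / m.sum())
            phases.append(np.angle(ft))
            centres.append(t0 + 0.5 * width)
        t0 += step
    return np.array(centres), np.array(amps), np.array(phases)
\end{verbatim}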
Amplitude modulation of the normal modes provides a measure of the rotation period of these stars and confirms the nature of their oblique pulsations. If the pulsation amplitude goes to zero, then a node is crossing the line-of-sight, and (for a dipole mode) the pole is at $90^\circ$ to the line-of-sight. However, if the geometry is such that the amplitude never goes to zero (e.g.\ $\alpha$~Cir), then we must always see a pole. This leads to a periodic modulation of the pulsation amplitude, which is distinct from the spot-based rotational modulation seen in the light curves of Ap stars in general. Similarly, a $\pi$\,rad change should be observed in the phase of the frequency whenever a node crosses the line-of-sight. The continuous amplitude spectrum and corresponding rotational signal are shown in Fig.~\ref{fig:modulation}. The rotation signal was obtained by examining the amplitude spectrum of the modulation along the primary frequency of each star.
KIC\,6631188 has obvious low-frequency modulation in its light curve, making identification of the rotation period straightforward (Sec.~\ref{sec:pulsations}). Regardless, it makes for a useful test case for confirming modulation of its primary frequency. The amplitude spectrum of the modulation signal in Fig.~\ref{fig:modulation} has a curious peak at twice the rotational frequency, 4.60\,\mbox{$\muup$Hz}, corresponding to a period of 2.52\,d. This result implies that the spots are not necessarily aligned along the magnetic and pulsation axes, as was shown to be possible by \cite{Kochukhov2004Multielement}.
The analysis of KIC\,7018170 greatly benefits from amplitude modulation of its modes, as the \mbox{PDCSAP} flux pipeline removes any low-frequency content in the light curve. The modulation is calculated for all three modes present in the LC data, with the amplitude spectrum taken on the weighted average signal, which is thus dominated by $\nu_1$. The frequency components of each mode multiplet are in phase, in good agreement with the oblique pulsator model. Indeed, it is quite remarkable that the secondary modes ($\nu_2$, $\nu_3$) in KIC\,7018170 exhibit such clear amplitude modulation despite being of low SNR (Fig.~\ref{fig:701_all_mod}). The amplitude spectrum of the variation in the signal is found to peak at 0.160\,\mbox{$\muup$Hz}, agreeing with the rotation frequency identified from sidelobe splitting.
No evidence of amplitude modulation can be found in KIC\,11031749. We can, however, speculate on the lack of amplitude modulation and attribute it to two possibilities: either the position of the pulsation pole does not move relative to the observer, or the rotation period is much longer than the 4-yr {\em Kepler}\ data. Thus, no rotation period can be ascribed based on modulation of the principal frequency. KIC\,11296437 shows a periodic signal in its amplitude modulation corresponding to the rotation period for only the high-frequency ($\nu_1$) mode, with a frequency of 1.63\,\mbox{$\muup$Hz}. The low-frequency modes ($\nu_2$, $\nu_3$) show no evidence of amplitude modulation, suggesting that they could belong to an orbital companion. KIC\,11409673 has a clear low-frequency signal and harmonic present in the light curve beginning at 0.940\,\mbox{$\muup$Hz}. Amplitude modulation of its primary oscillation frequency shows evidence of rotation in good agreement with the low-frequency signal. The peak in the amplitude spectrum of the modulation confirms the rotation period of 12.310\,d derived in Sec.~\ref{sec:pulsations}.
Fig.~\ref{fig:11409673_crossing} shows the variation of the pulsation amplitude and phase over the rotation period for the two stars in this work found to have observable phase crossings, KIC\,7018170 and KIC\,11409673. The maximum amplitude coincides with light maximum, which is expected when the spots producing the light variations are closely aligned with the magnetic and pulsation poles. The amplitude does not reach zero in KIC\,7018170 -- indicating that the mode is distorted -- but almost does in KIC\,11409673.
\subsection{Phase modulation from binarity}
Both KIC\,6631188 and KIC\,11031749 show long-term phase variations independent of their rotation. If the frequency variability of KIC\,11031749 is modelled as modulation of its phase due to binary reflex motion induced by an orbital companion, its orbital properties can be derived following the phase modulation (PM) technique \citep{Murphy2014Finding}. To examine this, the light curve was separated into 5-d segments, and the phase of each segment at the primary frequency was calculated through a discrete Fourier transform. We then converted the phases into light arrival times ($\tau$) by dividing by the angular frequency, from which a map of the binary orbit was constructed. A Hamiltonian Markov chain Monte Carlo sampler was then applied to fit the time-delay curve using {\sc PyMC3} \citep{Salvatier2016Probabilistic}. The sampler was run simultaneously over 4 chains for 5000 draws each, with 2000 tuning steps. The resulting fit is shown in Fig.~\ref{fig:11031749_PM}, with the extracted orbital parameters in Table~\ref{tab:11031749_PM}. These parameters give a binary mass function $f(m_1, m_2, \sin i)$ = 0.000\,327 M$_\odot$. Assuming a primary mass $M_1$ of 1.78 M$_\odot$ from Table~\ref{tab:sample}, we obtain a lower limit on the companion mass of 0.105\,M$_\odot$, making the potential companion a low-mass M dwarf.
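Schematically, the orbit fit proceeds as follows. The sketch below is a simplified, circular-orbit version of the model (the actual fit includes eccentricity), using a toy time-delay signal in place of the measured one:
\begin{verbatim}
import numpy as np
import pymc3 as pm

# Toy time-delay data standing in for the 5-d segment measurements
t = np.linspace(0.0, 1400.0, 280)                  # time (d)
tau = 69.0 * np.sin(2 * np.pi * t / 1035.0)        # time delay (s)
tau_err = np.full_like(t, 5.0)

with pm.Model():
    P = pm.Uniform('P', 500.0, 2000.0)             # orbital period (d)
    A = pm.HalfNormal('A', sigma=200.0)            # (a1 sin i)/c (s)
    phi = pm.Uniform('phi', 0.0, 2.0 * np.pi)
    mu = A * pm.math.sin(2.0 * np.pi * t / P + phi)
    pm.Normal('obs', mu=mu, sigma=tau_err, observed=tau)
    trace = pm.sample(draws=5000, tune=2000, chains=4)
\end{verbatim}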
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/PM2.pdf}
\caption{Left panel: Observed time delay for KIC\,6631188. The time delay, $\tau$, is defined such that it is negative when the pulsating star is nearer to us than the barycenter of the system. The blue line is not an orbital-solution fit, but rather the time delay obtained by binning the signal, for clarity. If the observed signal is truly periodic, it appears to be on a timescale longer than the LC data. Right panel: Observed time delays (black dots) and the fitted model (green line) for KIC\,11031749. The phase modulation suggests a binary system of low eccentricity, whose orbital period is comparable to or longer than the time-base of the LC photometry.}
\label{fig:11031749_PM}
\end{figure*}
\begin{table}
\centering
\caption{Orbital parameters of KIC\,11031749 obtained through phase modulation. $\varpi$ is the angle from the nodal point to the periapsis, $i$ is the inclination angle, $a_1$ is the semimajor axis of the pulsator's orbit, $e$ is the eccentricity, and $\phi_p$ is the phase of periastron.}
\label{tab:11031749_PM}
\begin{tabular}{lcr}
\hline
Quantity & Value & Units\\
\hline
$P_{\rm orb}$ & 1035.5\,$\pm$\,0.5 & d\\
$(a_1 \sin{i})/c$ & 68.86\,$\pm$\,0.13 & s\\
$e$ & 0.203\,$\pm$\,0.004 & \\
$\phi_p$ & 0.80\,$\pm$\,0.03 & \\
$\varpi$ & 0.08\,$\pm$\,0.01 & rad \\
$f(m_1, m_2, \sin i)$ & 0.000\,327 & M$_\odot$ \\
$M_2 \sin{i}$ & 0.105\,$\pm$\,0.001 & M$_\odot$ \\
\hline
\end{tabular}
\end{table}
KIC\,6631188 also shows signs of frequency modulation (Fig.~\ref{fig:11031749_PM}). However, if this modulation is truly from a stellar companion, then its orbital period must be much longer than the time-base of the {\em Kepler}\ data. The PM method can only be used when at least one full binary orbit is observed in the time delay curve. Thus, no orbital solution can be presented here.
It is important to note the scarcity of Ap stars in binary systems. Indeed, stars in the much smaller subset of roAp stars have a low chance of being found in a binary \citep{Abt1973Binary, North1998Binaries,Folsom2014Candidate}; however, few techniques can adequately detect the kind of low-mass companion presented here. Frequency modulation in roAp stars has in the past been inferred to be a consequence of binary motion, and two of the six stars presented in this work show evidence of coherent frequency/phase modulation. Whether this modulation is a consequence of changes in the pulsation cavity or the magnetic field, or is externally caused by orbital perturbations from a companion, remains to be seen, and requires spectroscopic follow-up to rule out orbital motion via a radial velocity analysis.
\section{Conclusions}
We presented the results of a search for rapid oscillators in the \textit{Kepler} long-cadence data using super-Nyquist asteroseismology to reliably distinguish between real and aliased pulsation frequencies. We selected over 69\,000 stars whose temperatures lie within the known range of roAp stars and, based on a search for high-frequency non-alias pulsations, have detected unambiguous oscillations in six stars: KIC\,6631188, KIC\,7018170, KIC\,10685175, KIC\,11031749, KIC\,11296437, and KIC\,11409673. LAMOST or Keck spectra of five of these stars show that they exhibit unusual abundances of rare earth elements, the signature of an Ap star, with the final target, KIC\,10685175, already confirmed as chemically peculiar in the literature.
This research marks a significant step in our search for roAp stars, and indeed, all high-frequency pulsators. To the best of our knowledge, this is the first time super-Nyquist asteroseismology has been used solely for identification of oscillation modes to such a high frequency. Although we expect many new roAp stars to be found in the \textit{TESS} Data Releases, {\em Kepler}\ had the advantage of being able to observe stars of much fainter magnitude for a longer time-span, revealing pulsations of lower amplitude.
\section*{Acknowledgements}
We are thankful to the entire \textit{Kepler} team for such incredible data. DRH gratefully acknowledges the support of the Australian Government Research Training Program (AGRTP) and University of Sydney Merit Award scholarships. This research has been supported by the Australian Government through the Australian Research Council DECRA grant number DE180101104. DLH and DWK acknowledge financial support from the Science and Technology Facilities Council (STFC) via grant ST/M000877/1. MC is supported in the form of a work contract funded by national funds through Fundação para a Ciência e Tecnologia (FCT) and acknowledges the support by FCT through national funds and by FEDER through COMPETE2020 via these grants: UID/FIS/04434/2019, PTDC/FIS-AST/30389/2017 \& POCI-01-0145-FEDER-030389. DH acknowledges support by the National Science Foundation (AST-1717000).
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawai`ian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research was partially conducted during the Exostar19 program at the Kavli Institute for Theoretical Physics at UC Santa Barbara, which was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
\bibliographystyle{mnras}
\section{methods}
Beyond the calibration required for the \textit{projection}-SWAP, there are three additional constraints that must be satisfied to achieve high fidelity \textit{coherent}-SWAP gates. First, the resonant SWAP pulse must remain phase coherent with the qubits in their doubly-rotating reference frame between calibrations (i.e.\ for hours). Second, because of the constraint that the exchange interaction is always positive, the time-averaged exchange pulse necessarily has some static component, which leads to evolution under an Ising interaction \cite{SOM}. These Ising phases must be calibrated out. Finally, voltage pulses on any gate generally displace both electrons by some small amount. This movement induces phase shifts in both qubits, since they are located in a large magnetic field gradient. These phase shifts must be compensated for.
To satisfy these additional tuning requirements, we first ensure that our RF exchange pulse remains phase coherent. Each qubit's reference frame is defined by the microwave signal generator controlling it, so by mixing together the local oscillators of these signal generators, we obtain a beat frequency that is phase locked to the doubly-rotating two-qubit reference frame. We then amplitude modulate this signal to generate our exchange pulses. A detailed schematic is shown in the supplementary information \cite{SOM}.
To calibrate for the single and two-qubit Ising phases, we use state tomography on both qubits before and after applying a SWAP gate. In these measurements, we vary the input states to distinguish between errors caused by two-qubit Ising phases, and the single qubit phase shifts. We choose a SWAP time and amplitude such that the Ising phases cancel out, which for this particular configuration occurs for a 302 ns SWAP gate. The single qubit phase shifts were measured to be 180$^{\circ}$ for $Q_3$ and 140$^{\circ}$ for $Q_4$. By superimposing the SWAP pulse with a dc exchange pulse, one can compensate for the Ising phases at arbitrary SWAP lengths, leading to faster operation \cite{SOM}.
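To make the role of these phases concrete, the following toy simulation (our own sketch, with illustrative rather than experimental parameters) evolves two exchange-coupled spins in a large Zeeman gradient under a strictly positive dc-plus-RF exchange pulse; the resonant component drives the SWAP while the static component leaves behind Ising phases:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)

dEz = 2 * np.pi * 200e6     # Zeeman gradient (rad/s), illustrative
J0 = 2 * np.pi * 2e6        # mean exchange (rad/s), illustrative
T = 2 * np.pi / J0          # resonant SWAP duration for this drive

Hz = 0.25 * dEz * (np.kron(sz, i2) - np.kron(i2, sz))
Hex = 0.25 * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

dt = 2e-11
U = np.eye(4, dtype=complex)
for n in range(int(T / dt)):
    J = J0 * (1 + np.cos(dEz * n * dt))   # J(t) >= 0 by construction
    U = expm(-1j * dt * (Hz + J * Hex)) @ U

Urot = expm(1j * Hz * T) @ U              # doubly-rotating frame
print(np.round(np.abs(Urot), 2))          # ~1 on the |01><10| elements
print(np.round(np.angle(np.diag(Urot)), 2))  # residual Ising/1q phases
\end{verbatim}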
\section{Author Contributions}
AJS, MJG, and JRP designed and planned the experiments, AJS fabricated the device and performed the measurements, MJG provided theory support, LFE and MB provided the isotopically enriched heterostructure, AJS, MJG, and JRP wrote the manuscript with input from all the authors.
\section{Competing Interests}
The authors declare no competing financial interests.
\section{Data Availability}
The data supporting the findings of this study are available within the paper and its Supplementary Information \cite{SOM}. The data are also available from the authors upon reasonable request.
\begin{acknowledgments}
Funded by Army Research Office grant No.\ W911NF-15-1-0149, DARPA grant No.\ D18AC0025, and the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4535. Devices were fabricated in the Princeton University Quantum Device Nanofabrication Laboratory. The authors acknowledge the use of Princeton’s Imaging and Analysis Center, which is partially supported by the Princeton Center for Complex Materials, a National Science Foundation MRSEC program (DMR-1420541).
\end{acknowledgments}
\section{Introduction}
Hitchin \cite{Hit87S} introduced the map now named after him, and showed that it defines a completely integrable system in the complex-algebraic sense. Subsequently Beauville, Narasimhan and Ramanan \cite{BNR} constructed a correspondence --- indeed nowadays referred to as the BNR correspondence --- which among other things characterizes the generic fiber of a Hitchin map as a compactified Jacobian. Our paper is concerned with a parabolic version of these results in the setting of algebraic geometry. By this we mean that we work over an arbitrary algebraically closed field $k$.
To be concrete, let us fix a smooth projective curve $X$ over $k$ of genus $g(X)\geq 2$ and a finite subset $D\subset X$, which we shall also regard as a reduced effective divisor on $X$. We also fix a positive integer $r$ which will be the rank of vector bundles on $X$ that we shall consider (but if $k$ has characteristic $2$, we shall assume $r\geq 3$ in order to avoid issues involving ampleness) and we specify for each
$x\in D$ a finite sequence $m^{\scriptscriptstyle\bullet} (x)=(m^1(x), m^2(x), \dots, m^{\sigma_x}(x))$ of positive integers summing up to $r$. We refer to these data as a \emph{quasi-parabolic structure}; let us denote this simply by $P$. A \emph{quasi-parabolic vector bundle of type $P$}
is then a rank $r$ vector bundle $E$ on $X$ which for every $x\in D$ is endowed with a filtration $E|_{x}=F^0(x)\supset F^1(x)\supset \cdots \supset F^{\sigma_x}(x)=0$ such that $\dim F^{j-1}(x)/ F^{j}(x)=m^j(x)$.
A \emph{parabolic Higgs field} on such a bundle is an ${\mathcal O}_X$-homomorphism $\theta : E\to E\otimes_{{\mathcal O}_X} \omega_X(D)$ with the property that it takes each $F^j(x)$ to $F^{j+1}(x)\otimes_{{\mathcal O}_X}T^*_x(X)$. We call it a \emph{weak parabolic Higgs field} if it only takes
$F^j(x)$ to $F^{j}(x)\otimes_{{\mathcal O}_X}T^*_x(X)$. A weak parabolic Higgs field $\theta$ has a characteristic polynomial
whose coefficients define an element of $$\mathcal{H}:=\prod_{j=1}^rH^0 (X, (\omega(D))^{\otimes j}),$$ and the characteristic polynomial itself defines
the \emph{spectral curve}, a curve in the total space of $\omega_X(D)$ that is finite over $X$.
With the help of Geometric Invariant Theory one can construct moduli spaces of such objects, but this requires
``polarization data'', which in the present context take the form of a \emph{weight function $\alpha$} which assigns to every $x\in D$ a set of real numbers
$0= \alpha_0(x)<\alpha_1(x)<\cdots <\alpha_{\sigma_x}(x)=1$. As we will recall later, this then gives rise to notions of parabolic structures and corresponding stability conditions.
This leads to quasi-projective varieties parametrizing the isomorphism classes of $\alpha$-stable objects of type $P$: for the parabolic vector bundles we get ${\textbf{M}}_{P,\alpha}$, for weak parabolic Higgs bundles we get $\mathbf{Higgs}^W_{P,\alpha}$ and for ordinary parabolic Higgs bundles we get $\mathbf{Higgs}_{P,\alpha}$, the latter being contained in $\mathbf{Higgs}^W_{P,\alpha}$ as a closed subset.
If we choose $\alpha$ generic, then the notions of semistability and stability coincide, so that these have an interpretation as coarse moduli spaces, and
the varieties in question will be nonsingular. By assigning to a Higgs field the coefficients of its characteristic polynomial we obtain a (Hitchin) map $h^W_{P,\alpha}: \mathbf{Higgs}^W_{P,\alpha}\to \mathcal{H}$.
We prove that $h^W_{P,\alpha}$ is flat, show that each connected component of the generic fiber of $h^{W}_{P,\alpha}$ is a torsor under the Picard variety of the corresponding spectral curve, and compute the number of connected components.
But our main results concern the image $\mathcal{H}_{P}$ of $\mathbf{Higgs}_{P,\alpha}$ and the resulting morphism
$h_{P,\alpha}: \mathbf{Higgs}_{P,\alpha}\to \mathcal{H}_{P}$. We characterize $\mathcal{H}_{P}$ as an affine subspace of ${\mathcal H}$ (this was obtained earlier by Baraglia and Kamgarpour \cite{BK18}) and prove essentially that
$h_{P,\alpha}$ has all the properties that one would hope for.
We have a commutative diagram
\begin{displaymath}
\begin{diagram}
\mathbf{Higgs}_{P,\alpha}&\rTo^{h_{P,\alpha}}& \mathcal{H}_{P}\\
\dInto& &\dInto\\
\mathbf{Higgs}^W_{P,\alpha}&\rTo^{h^W_{P,\alpha}}& \mathcal{H}
\end{diagram}
\end{displaymath}
but beware that this is not Cartesian unless all the $m^j(x)$ are equal to $1$.
We give a concrete description of generic fibers of $h_{P,\alpha}$ and we also obtain the parabolic BNR correspondence in this setting,
which roughly speaking amounts to (see Theorem \ref{parabolic BNR}):
\begin{theorem}[Parabolic BNR Correspondence]\label{main02}
There is a one to one correspondence between:
$$\left\{ \begin{array}{c}\text{isomorphism classes of parabolic Higgs bundles}\\ \text{with prescribed characteristic polynomial}
\end{array} \right\} $$
and
\[ \left\{\begin{array}{c}\text{line bundles over the normalized spectral curve}\\ \text{with a fixed degree determined by the parabolic data}\end{array} \right\}.\]
In particular, generic fibers of $h_{P,\alpha}$ are connected.
\end{theorem}
Furthermore, we compute the dimension of the parabolic nilpotent cones and derive from this (see Theorem \ref{flat}):
\begin{theorem}\label{main01}
When $\mathbf{Higgs}_{P,\alpha}$ is smooth, the parabolic Hitchin map $h_{P,\alpha}$ is flat and surjective.
\end{theorem}
Let us now indicate how this relates to previous work.
After the fundamental work of Hitchin and Beauville-Narasimhan-Ramanan mentioned above, several papers investigated various properties of the Hitchin map over the complex field, for example \cite{Lau88}, \cite{Fal93}, \cite{Gin01}. Nitsure \cite{N91} constructed the moduli space of (semi-)stable Higgs bundles over an algebraically closed field and showed the properness of Hitchin maps.
In the parabolic setting, Yokogawa \cite{Yo93C,Yo95} constructed the moduli spaces of (semi-)stable parabolic Higgs bundles and of their weak counterparts, and proved that the weak parabolic Hitchin map is proper. His construction works over any algebraically closed field.
Logares and Martens \cite{LM10}, working over the complex field, studied the generic fibers and constructed a Poisson structure on $\mathbf{Higgs}^W_{P,\alpha}$ and proved that $h^W_{P,\alpha}$ is an integrable system in the Poisson sense.
Scheinost and Schottenloher \cite{SS95}, also working over $\mathbb{C}$, defined the parabolic Hitchin map $h_{P,\alpha}$ and proved by means of a non-abelian Hodge correspondence that $h_{P,\alpha}$ is an algebraically completely integrable system. Baraglia, Kamgarpour and Varma \cite{V16,BKV18,BK18} generalized this to $G$-parahoric Hitchin systems, where $G$ can be any simple simply connected algebraic group over $\mathbb{C}$.
\\
We close this section by describing how this paper is organized. In section 2, we recall the parabolic setting and review the properties of ${\textbf{M}}_{P,\alpha}$, $\mathbf{Higgs}_{P,\alpha}$ and $\mathbf{Higgs}^W_{P,\alpha}$. In section 3, we recall the construction of the Hitchin maps $h^W_{P,\alpha}$ and $h_{P,\alpha}$ and determine the corresponding parabolic Hitchin base space ${\mathcal H}_P$ as in \cite{BK18}. In section 4, we set up the parabolic BNR correspondence (Theorem \ref{main02}) and determine the generic fibers of a parabolic Hitchin map. In section 5, we do the same for a weak parabolic Hitchin map. And finally, in section 6, we compute the dimension of parabolic nilpotent cones and prove Theorem \ref{main01}. We also prove the existence of very stable parabolic vector bundles. As an application, we use a codimension estimate to give an embedding of conformal blocks into theta functions.
\\
\noindent\textbf{Acknowledgements} The authors thank Eduard Looijenga. Discussions with Eduard motivated our proof of the first main theorem and significantly affected the organization of this paper. The authors also thank Bingyi Chen, Yifei Chen, H\'el\`ene Esnault, Yi Gu, Peigen Li, Yichen Qin and Junchao Shentu, Xiaotao Sun and Xiaokui Yang for helpful discussions.
\section{Parabolic and weak parabolic Higgs Bundles}\label{sec 2}
\subsection{Parabolic vector bundles}
We use the notions and the notation introduced above. In particular, we fix $X$ and a set of quasi-parabolic data $P=(D, \{m^{\scriptscriptstyle\bullet}(x)\}_{x\in D})$.
We denote by $P_x\subseteq { GL}_r=G$ the standard parabolic subgroup with Levi type $\{m^j(x)\}$. We also fix a weight function $\alpha=\{\alpha_{{\scriptscriptstyle\bullet}}(x)\}_{x\in D}$ and call $(P, \alpha)$ a parabolic structure.
We fix a positive integer $r$ and let $E$ be a rank $r$ vector bundle over $X$ endowed with a quasi-parabolic structure of type $P$.
\begin{remark}\label{rem:convention}
From now on, we will use calligraphic letters ${\mathcal E}, {\mathcal F}, \ldots$ to denote parabolic bundles of a given type (with certain quasi-parabolic structure), and use the normal upright Roman letters $E, F,\ldots$ to denote underlying vector bundles. We will also consider a local version (where $X$ is replaced by the spectrum of a DVR). Then $D$ will be the closed point, and we will write $\sigma$, $\{m^{j}\}_{j=1}^{\sigma}$ and $\{\alpha_{j}\}_{j=1}^{\sigma}$ instead.
\end{remark}
Let be given a parabolic vector bundle ${\mathcal E}$ on $X$. Then every coherent ${\mathcal O}_X$-submodule $F$ of $E$ inherits from $E$ a quasi-parabolic structure so that it may be regarded as a parabolic vector bundle ${\mathcal F}$. Note that the weight function $\alpha$ for ${\mathcal E}$ determines one for ${\mathcal F}$. Similarly, for any line bundle $L$ on $X$ we have a natural parabolic structure on $E\otimes_{{\mathcal O}_X}L$, which we then denote by ${\mathcal E}\otimes_{{\mathcal O}_X}L$. For more details, please refer to \cite{Yo93C}.
An endomorphism of ${\mathcal E}$ is of course a vector bundle endomorphism of $E$ which preserves the filtrations $F^{\scriptscriptstyle\bullet}(x)$. We call this a \emph{strongly parabolic endomorphism} if it takes $F^i(x)$ to $F^{i+1}(x)$ for all $x\in D$ and $i$. We denote the resulting subspaces of ${ End}_{{\mathcal O}_X}(E)$ by
$$
ParEnd({\mathcal E}) \text{ resp.\ } SParEnd({\mathcal E}).
$$
Similarly we can define the sheaf of parabolic endomorphisms and sheaf of strongly parabolic endomorphisms, denoted by ${\mathcal P} ar{\mathcal E} nd({\mathcal E})$ and ${\mathcal S}{\mathcal P} ar{\mathcal E} nd({\mathcal E})$ respectively.
\begin{remark}
Following \cite{Yo95}, we have
\begin{equation}\label{(3.4)} {\mathcal P} ar {\mathcal E} nd({\mathcal E})^{\vee}={\mathcal S} {\mathcal P} ar{\mathcal E} nd({\mathcal E})\otimes_{{\mathcal O}_X}{\mathcal O}_X(D).
\end{equation}
\end{remark}
We now define the \emph{parabolic degree (or $\alpha$-degree)} of ${\mathcal E}$ to be
\[\text{par-}deg({\mathcal E}):=deg(E)+\sum_{x\in D}\sum_{j=1}^{\sigma_x}\alpha_{j}(x)m^j(x).\]
The \emph{parabolic slope or $\alpha$-slope} of ${\mathcal E}$ is given by \[\text{par-}\mu({\mathcal E})=\frac{\text{par-}deg({\mathcal E})}{r}.\]
\begin{definition}
A parabolic vector bundle ${\mathcal E}$ is said to be (semi-)stable if for every proper coherent ${\mathcal O}_X$-submodule $F\subsetneq E$ , we have
\[
\text{par-}\mu({\mathcal F})<\text{par-}\mu({\mathcal E}) \text{ resp. }(\leq),
\]
where the parabolic structure on ${\mathcal F}$ is inherited from ${\mathcal E}$.
\end{definition}
There exists a coarse moduli space for semistable parabolic vector bundles of rank $r$ with fixed quasi-parabolic type $P$ and weights $\alpha$. For the constructions and properties, we refer the interested readers to \cite{MS80,Yo93C,Yo95}. Denote the moduli space by ${\textbf{M}}_{P,\alpha}$ (the stable locus is denoted by ${\textbf{M}}_{P,\alpha}^s$). ${\textbf{M}}_{P,\alpha}$ is a normal projective variety of dimension
\begin{align*}
\dim({\textbf{M}}_{P,\alpha})&=(g-1)r^2+1+\sum\limits_{x\in D}\frac{1}{2}(r^2-\sum_{j=1}^{\sigma_x}(m^{j}(x))^2)\\
&=(g-1)r^2+1+\sum\limits_{x\in D}\dim(G/P_{x}).
\end{align*}
\subsection{Parabolic Higgs bundles}
Let us now define parabolic Higgs bundles. The guiding idea is that a parabolic Higgs bundle should correspond to a cotangent vector of the moduli space at a stable parabolic vector bundle.
Recall in (\ref{(3.4)}) that ${\mathcal P} ar{\mathcal E} nd({\mathcal E})$ is naturally dual to ${\mathcal S}{\mathcal P} ar{\mathcal E} nd({\mathcal E})(D)$.
Yokogawa \cite{Yo95} showed:
\[ T^*_{\left[{\mathcal E}\right]}{\textbf{M}}_{P,\alpha}^{s}
=(H^1(X,{\mathcal P} ar{\mathcal E} nd({\mathcal E})))^*
\cong H^0(X,{\mathcal S} {\mathcal P} ar {\mathcal E} nd({\mathcal E})\otimes_{{\mathcal O}_X}\omega_X(D)).
\]
So we define the parabolic Higgs bundles as follows:
\begin{definition} A \emph{parabolic Higgs bundle} on $X$ with fixed parabolic data $(P,\alpha)$ is a parabolic vector bundle ${\mathcal E}$ together with a Higgs field $\theta$,
\[
\theta:{\mathcal E}\to {\mathcal E}\otimes_{{\mathcal O}_X}\omega_X(D)
\] such that $\theta$ is a strongly parabolic map between ${\mathcal E}$ and ${\mathcal E}\otimes_{{\mathcal O}_X}\omega_X(D)$.
If $\theta$ is merely parabolic, we say that $({\mathcal E},\theta)$ is a \emph{weak parabolic Higgs bundle}.
\end{definition}
\begin{remark}\label{rem:abeliancat}
The category of (weak) parabolic filtered Higgs sheaves is an abelian category with enough injectives which contains the category of (weak) parabolic Higgs bundles as a full subcategory. See \cite[Definition 2.2]{Yo95}.
\end{remark}
One can similarly define the stability condition for (weak) parabolic Higgs bundles. A (weak) parabolic Higgs bundle $({\mathcal E},\theta)$ is called $\alpha$-semistable (resp.\ stable) if for every proper sub-Higgs bundle $({\mathcal F},\theta)\subsetneq ({\mathcal E},\theta)$ one has $\text{par-}\mu({\mathcal F})\leq \text{par-}\mu({\mathcal E})$ (resp.\ $<$). Similar to the vector bundle case, an $\alpha$-stable parabolic Higgs bundle $({\mathcal E},\theta)$ is simple, i.e.\ $ParEnd({\mathcal E},\theta)\cong k$.
As mentioned in the introduction, Geometric Invariant Theory shows that the $\alpha$-stable objects define moduli spaces $\mathbf{Higgs}^W_{P,\alpha}$ and $\mathbf{Higgs}_{P,\alpha}$ that are normal quasi-projective varieties (see \cite{MS80}, \cite{Yo95} and \cite{Yo93C}). We have
\[\dim(\mathbf{Higgs}^W_{P,\alpha} )= (2g-2+\deg(D))r^2 + 1.\]
and $\mathbf{Higgs}_{P,\alpha}$ is a closed subvariety of $\mathbf{Higgs}^W_{P,\alpha}$ (see \cite[Remark 5.1]{Yo95}) and
\[\dim(\mathbf{Higgs}_{P,\alpha}) =2(g-1)r^2 +2+ \sum_{x\in D}2\dim(G/P_{x})=2\dim({\textbf{M}}_{P,\alpha}).\]
For generic $\alpha$, a bundle (or pair) is $\alpha$-semistable if and only if it is $\alpha$-stable. In these cases, the moduli spaces ${\textbf{M}}_{P,\alpha}$, $\mathbf{Higgs}_{P,\alpha}$ and $\mathbf{Higgs}^W_{P,\alpha}$ are smooth. In what follows, we will always assume that \textbf{$\alpha$ is generic in this sense.} For simplicity, \emph{we will always drop the weight $\alpha$ in the subscripts and abbreviate the parabolic structure $(P,\alpha)$ as $P$.}
\section{The (weak) parabolic Hitchin Maps}
Weak parabolic Hitchin maps were defined by Yokogawa \cite[Page 495]{Yo93C}. According to \cite[Theorem 4.6]{Yo93C} and \cite[Remark 5.1]{Yo95}, $\mathbf{Higgs}^W_P$ is a geometric quotient by an algebraic group ${ PGL}(V)$ of some ${ PGL}(V)$-scheme $\mathcal{Q}$. On $X_{{\mathcal Q}}=X\times {\mathcal Q}$ one has a universal family of stable weak parabolic Higgs bundles $(\tilde{{\mathcal E}},\tilde{\theta})$ and a surjection $V\otimes_k {\mathcal O}_{X_{\mathcal Q}}\twoheadrightarrow \tilde{{\mathcal E}}$. Thus the coefficients of the characteristic polynomial of $\tilde{\theta}$
\begin{displaymath}
(a_1(\tilde{\theta}),\cdots, a_r(\tilde{\theta})):=({ tr}_{{\mathcal O}_{X_{\mathcal Q}}}(\tilde{\theta}),{ tr}_{{\mathcal O}_{X_{\mathcal Q}}}(\wedge^2_{{\mathcal O}_{X_{\mathcal Q}}}\tilde{\theta}), \cdots,{ tr}_{{\mathcal O}_{X_{\mathcal Q}}}(\wedge^r_{{\mathcal O}_{X_{\mathcal Q}}}\tilde{\theta}))
\end{displaymath}
determine a section of $\bigoplus_{i=1}^r(\pi_X^*\omega_X(D))^{\otimes i}$ over $X_{{\mathcal Q}}$. We write $\mathbf{H}^0(X,(\omega_X(D))^{\otimes i})$
for the affine variety underlying $H^0(X,(\omega_X(D))^{\otimes i})$.
Since
\[
H^{0}(X_{\mathcal Q},\bigoplus_{i=1}^r(\pi_X^*\omega_X(D))^{\otimes i}) ={ Hom}_{\operatorname{\mathbf{Sch}}}({\mathcal Q},\prod_{i=1}^r \mathbf{H}^0(X,(\omega_X(D))^{\otimes i})),
\]
the characteristic polynomial of $\tilde{\theta}$ defines a morphism of schemes
\[
{\mathcal Q}\to \prod_{i=1}^r \mathbf{H}^0(X,(\omega_X(D))^{\otimes i}).
\]
This map is equivariant under the ${ PGL}(V)$-action \cite[p.\ 495]{Yo93C} and hence factors through the moduli space $\mathbf{Higgs}^W_P$.
\begin{definition}
The \emph{Hitchin base space} for the pair $(X,D)$ is
\[
{\mathcal H}:=\prod_{i=1}^r \mathbf{H}^0(X,(\omega_X(D))^{\otimes i})
\]
and
\[
h_P^W: \mathbf{Higgs}^W_P\to {\mathcal H}
\]
is called the \emph{weak parabolic Hitchin map}.
\end{definition}
Note that $h_P^W$ is pointwise defined as $({\mathcal E},\theta)\mapsto (a_1(\theta),\cdots,a_r(\theta))\in {\mathcal H}$.
It is easy to see $$\dim ({\mathcal H}) =r^2(g-1)+\frac{r(r+1)\deg(D)}{2},$$ and in general a generic fiber of $h^W_{P}$ has smaller dimension than ${\mathcal H}$.
We shall now define an affine subspace ${\mathcal H}_P$ of ${\mathcal H}$ (which as the notation indicates depends on $P$) such that $h_{P}^W(\mathbf{Higgs}_P)\subset {\mathcal H}_P$.
Baraglia and Kamgarpour \cite{BK18} have already determined parabolic Hitchin base spaces for all classical groups\footnote{Their notation for ${\mathcal H}_P$ is ${\mathcal A}_{{\mathcal G},P}$.}. Moreover when $k=\mathbb{C}$, they show in \cite{BKV18} that $h_{P}$ is surjective by symplectic methods.
We here do the calculation for $G={ GL}_{r}$, not just for completeness, but also because it involves some facts about Young tableaux which we will need later.
Our proof is simple and direct. In Section \ref{sec bnr}, we will give a proof of surjectivity over general $k$.
\\
\subsection*{Intermezzo on partitions}
A partition of $r$ is a sequence of integers $n_{1}\geq n_{2}\geq\cdots\geq n_{\sigma}> 0$ with sum $r$. Its conjugate partition is the sequence of integers $\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{n_{1}}>0$ (also with sum $r$) given by
$\mu_{j}=\#\{\ell:n_{\ell}\geq j, 1\leq\ell\leq \sigma\}.$ It is customary to depict this as a Young diagram:
For example for $(n_{1},n_{2},n_{3})=(5,4,2)$, we have the Young diagram:
\begin{displaymath}
\large\yng(5,4,2)
\end{displaymath}
We can read the conjugate partition from the diagram: $$(\mu_{1},\mu_{2},\mu_{3},\mu_{4},\mu_{5})=(3,3,2,2,1).$$
Number the boxes as indicated:
\begin{center}
\small\begin{ytableau}
1&4&7&9&11\\
2&5&8&10\\
3&6
\end{ytableau}
\end{center}
For each partition of $r$, we assign a level function $j\mapsto \gamma_{j}$, $1\leq j\leq r$, such that $\gamma_{j}=l$ if and only if $$\sum_{t\leq l-1}\mu_{t}< j\leq\sum_{t\leq l}\mu_{t}.$$
For example, with the numbering of the Young tableau above, the level $\gamma_{j}$ of box $j$ is illustrated as follows:
\begin{center}
\small\begin{ytableau}
1&2&3&4&5\\
1&2&3&4\\
1&2
\end{ytableau}
\end{center}
It is clear that (with the convention $\mu_{n_{1}+1}=0$):
\begin{align}\label{comb fact 1}
&\sum_{j}\gamma_{j}=\sum_{t}t\mu_{t}=\sum_{i}\sum_{j\leq n_{i}}j=\sum_{i}\frac{1}{2}n_{i}(n_{i}+1).\\
\label{flagmui}&\sum_{i=1}^{\sigma}n_i^{2}=\sum_{t=1}^{n_{1}}t^{2}(\mu_{t}-\mu_{t+1})
=\sum_{t=1}^{n_{1}}(2t-1)\mu_{t}.
\end{align}
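The combinatorics above is easy to machine-check. The following short Python sketch (purely illustrative, not part of any proof; the variable names are ours) computes the conjugate partition and the level function for the running example and verifies (\ref{comb fact 1}) and (\ref{flagmui}):
\begin{verbatim}
# conjugate partition and level function for the running example (5, 4, 2)
n = [5, 4, 2]
r = sum(n)                                       # r = 11
mu = [sum(1 for ni in n if ni >= j) for j in range(1, n[0] + 1)]
gamma = []                                       # gamma_j = l iff box j sits in column l
for l, m in enumerate(mu, start=1):
    gamma.extend([l] * m)
assert mu == [3, 3, 2, 2, 1]
assert gamma == [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5]
# identity (1): sum_j gamma_j = sum_i n_i (n_i + 1) / 2
assert sum(gamma) == sum(ni * (ni + 1) // 2 for ni in n)
# identity (2): sum_i n_i^2 = sum_t (2t - 1) mu_t
assert sum(ni ** 2 for ni in n) == sum((2 * t - 1) * m
                                       for t, m in enumerate(mu, start=1))
\end{verbatim}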
In the following, we reorder the Levi type $\{m^{j}(x)\}_{j=1}^{\sigma_{x}}$ from large to small as $\{n_{j}(x)\}_{j=1}^{\sigma_{x}}$, so that
$n_1(x)\ge n_2(x)\ge \cdots \ge n_{\sigma_x}(x)>0$. This is a partition of $r$.
\begin{definition} The \emph{parabolic Hitchin base} for the parabolic data $P$ is
\[
\mathcal{H}_{P}:=\prod_{j=1}^r\mathbf{H}^{0}\Big(X,\omega_X^{\otimes j}\otimes{\mathcal O}_X\big(\sum_{x\in D}(j-\gamma_{j}(x))\cdot x\big) \Big)\subset {\mathcal H},
\]
where the right hand side is regarded as an affine space.
\end{definition}
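To illustrate the definition at the two extremes (say $D=x$; both cases are read off directly from the level function): if the flag at $x$ is trivial ($\sigma_x=1$, $n_1(x)=r$, so $\mu_t(x)=1$ for all $t$ and $\gamma_j(x)=j$), then ${\mathcal H}_{P}=\prod_{j=1}^r\mathbf{H}^{0}(X,\omega_X^{\otimes j})$, the classical Hitchin base without poles; for a full flag at $x$ (the Borel case, all $n_i(x)=1$, so $\mu_1(x)=r$ and $\gamma_j(x)=1$), we get ${\mathcal H}_{P}=\prod_{j=1}^r\mathbf{H}^{0}(X,\omega_X^{\otimes j}((j-1)\cdot x))$.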
\begin{lemma}\label{dim formula} $\dim \mathcal{H}_{P}=\frac{1}{2}\dim \mathbf{Higgs}_P$.
\end{lemma}
\begin{proof}
Recall that $\dim\mathbf{Higgs}_P=\dim T^{*}{\textbf{M}}_{P}=2\dim {\textbf{M}}_{P}$. By the Riemann--Roch theorem, we have
\begin{align*}
\dim\mathcal{H}_{P}&=\sum_{j=1}^{r}\dim H^{0}\Big(X,\omega_X^{\otimes j}\otimes{\mathcal O}_X\big(\sum\limits_{x\in D}(j-\gamma_j(x))\cdot x\big)\Big)\\
&=1+r(1-g)+\frac{r(r+1)}{2}(2g-2)+\sum_{j=1}^{r}\sum_{x\in D}(j-\gamma_j(x))\\
&=1+r^{2}(g-1)+\frac{r(r+1)\deg D}{2}-\sum_{x\in D}\sum_{j=1}^{r}\gamma_j(x)\\
&=\dim({\textbf{M}}_P)+\frac{1}{2}\sum\limits_{x\in D}\Big(r+\sum\limits_{l=1}^{\sigma_x}m^l(x)^{2}-2\sum\limits_{j=1}^{r}\gamma_j(x)\Big)\\
&=\frac{1}{2}\dim \mathbf{Higgs}_P.
\end{align*}
Here the second equality uses that the summand $j=1$ computes $h^{0}(X,\omega_X)=g$, which accounts for the leading $1$. The last equality follows from (\ref{comb fact 1}) and (\ref{flagmui}).
\end{proof}
\begin{theorem}\label{image parabolic}
For $({\mathcal E},\theta)\in\mathbf{Higgs}_P$ we have $h_{P}^W({\mathcal E},\theta)\in {\mathcal H}_P$; i.e.,
$$
a_{j}(\theta)\in H^{0}\Big(X,\omega_{X}^{\otimes j}\otimes{\mathcal O}_X\big(\sum_{x\in D}(j-\gamma_{j}(x))\cdot x\big)\Big).
$$
\end{theorem}
Without loss of generality, we may assume $D=x$. We write the characteristic polynomial of $\theta$ as $\lambda^{r}+a_{1}\lambda^{r-1}+\cdots+a_{r}$, where $a_{i}={ tr}(\wedge^{i}\theta)$.
We denote the formal local ring at $x$ by ${\mathcal O}$, with natural valuation $v$ and fraction field $\mathcal{K}$. We fix a local coordinate $t$ in a formal neighborhood of $x$ and choose the local section $\frac{dt}{t}$ to trivialize $\omega_{X}(x)$ near $x$. Then the characteristic polynomial around $x$ becomes
\[f(t,\lambda):=\lambda^{r}+b_{1}\lambda^{r-1}+\cdots+b_{r},\]
where $b_i\in {\mathcal O}$.
\begin{proof}
Following the above argument, we only need to show that $$v(b_{i})\geq\gamma_{i}, \quad 1\leq i\leq r.$$
It amounts to proving the following statement:
\begin{claim}
Let ${\mathcal E}$ be a free ${\mathcal O}$-module of rank $r$ and let $F^{\bullet}$ be a filtration of ${\mathcal E}\otimes_{{\mathcal O}} k$. Denote by $n_{1}\geq n_{2}\geq\cdots\geq n_{\sigma}>0$ the partition of $r$ with $n_{i}=\dim_{k}F^{i-1}/F^{i}$, $1\leq i\leq \sigma$. Then for every $\theta\in{ End}_{{\mathcal O}}({\mathcal E})$ which strongly respects $F^{\bullet}$ we have $$ v({ tr}(\wedge_{{\mathcal O}}^{i}\theta))\geq \gamma_{i}.$$
\end{claim}
Now we prove the claim. Lift $F^{\bullet}$ to a filtration ${\mathcal F}^{\bullet}$ on ${\mathcal E}$. This induces a filtration of $\wedge_{{\mathcal O}}^{i}{\mathcal E}$ with associated graded ${\mathcal O}$-module
$$\bigoplus_{\delta_{1}+\cdots+\delta_{\sigma}=i}\wedge_{{\mathcal O}}^{\delta_{1}}({\mathcal F}^{0}/{\mathcal F}^{1})\otimes\cdots\otimes\wedge_{{\mathcal O}}^{\delta_{\sigma}}({\mathcal F}^{\sigma-1}/{\mathcal F}^{\sigma}).$$
Any $\theta$ as above induces a map on each summand, and the trace of this map has valuation no less than $\min\{\delta_{1},\cdots,\delta_{\sigma}\}$. Since ${ tr}(\wedge_{{\mathcal O}}^{i}\theta)$ is the sum of these traces, our claim follows from the Intermezzo above. \end{proof}
Yokogawa \cite[Corollary 5.12, Corollary 1.6]{Yo93C} showed that $h^{W}_{P}$ is projective and $\mathbf{Higgs}_P\subset\mathbf{Higgs}^{W}_{P}$ is a closed sub-variety. By Theorem \ref{image parabolic}, the image of $\mathbf{Higgs}_P$ under $h^W_P$ is contained in ${\mathcal H}_{P}\subset {\mathcal H}$.
We denote this restriction
$$
h_P=h_P^W|_{\mathbf{Higgs}_P}:\mathbf{Higgs}_P\to{\mathcal H}_P
$$
and refer to it as the \emph{parabolic Hitchin map}. We conclude that:
\begin{propdef}\label{hproper}
The parabolic Hitchin map $h_P=h_P^W|_{\mathbf{Higgs}_P}:\mathbf{Higgs}_P\to{\mathcal H}_P$ is proper.
\end{propdef}
\subsection*{Spectral curves} In the next two sections, we determine the generic fibers of the (weak) parabolic Hitchin map. As in \cite{BNR}, we introduce the spectral curve to realize the Hitchin fibers as moduli of a particular kind of sheaves on the spectral curve.
One observes that ${\mathcal H}$ is also the Hitchin base for $\omega_{X}(D)$-valued Higgs bundles. So for $a\in {\mathcal H}$, one has the spectral curve $X_a\subset \P({\mathcal O}_X\oplus \omega_X(D))$ cut out by the characteristic polynomial $a$. Denote the projection by $\pi_a:X_a\to X$. One can compute the arithmetic genus as:
$$P_{a}(X_{a})=1-\chi(X,\pi_{*}{\mathcal O}_{X_{a}})
=1+r^{2}(g-1)+\frac{r(r-1)}{2}\deg(D).
$$
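For the reader's convenience, here is a sketch of this computation, using $\pi_{a*}{\mathcal O}_{X_{a}}\cong\bigoplus_{i=0}^{r-1}(\omega_X(D))^{\otimes -i}$ and Riemann--Roch:
\begin{align*}
\chi(X,\pi_{a*}{\mathcal O}_{X_{a}})&=\sum_{i=0}^{r-1}\big(-i(2g-2+\deg D)+1-g\big)\\
&=r(1-g)-\frac{r(r-1)}{2}(2g-2+\deg D),
\end{align*}
so that $1-\chi(X,\pi_{a*}{\mathcal O}_{X_{a}})=1+r^{2}(g-1)+\frac{r(r-1)}{2}\deg(D)$.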
When we work
in the weak parabolic case, $X_a$ is smooth for generic $a\in {\mathcal H}$. On the other hand, for any $a\in {\mathcal H}_P$, the spectral curve $X_a$ is singular (except in the Borel case). Yet for a generic $a\in{\mathcal H}_{P}$, $X_{a}$ is integral, totally ramified over $x\in D$ and smooth elsewhere; we refer to the Appendix.
\section{Generic Fiber of Parabolic Hitchin Map}\label{sec bnr}
In this section, we determine the generic fibers of the parabolic Hitchin map. We start from a local analysis, and then derive from it the parabolic correspondence stated in Theorem \ref{main02}. The local analysis is also of independent interest.
\subsection{Local case}
Suppose we are given a triple $(V,F^{\bullet},\theta)$ as follows:
\begin{itemize}
\item[(a)] $V$ is a free ${\mathcal O}=k[[t]]$-module of rank $r$, with a filtration $F^{\bullet}V$: $$V=V^{0}\supset V^1\supset\cdots\supset V^{\sigma}=t\cdot V,$$
with $\dim_k V^{i}/V^{i+1}=m_{i+1}$. As before we rearrange $(m_{i})$ as $(n_{i})$ to obtain a partition of $r$.
\item[(b)] $\theta:V\rightarrow V$ is a $k[[t]]$-module morphism with $\theta(V^{i})\subset V^{i+1}$.
\item[(c)] $\text{char}_\theta=f(\lambda,t)=\prod_{i=1}^{n_{1}}f_{i}$, where each $f_{i}$ is an Eisenstein polynomial with $\deg(f_{i})=\mu_{i}$; here $(\mu_{1},\mu_{2},\cdots)$ is, as before, the conjugate partition. Moreover, the constant terms of the $f_{i}$ lie in $t\cdot k[[t]]$ and are pairwise distinct modulo $t^{2}$.
\end{itemize}
Let $A:={\mathcal O}[\lambda]/(f)$ and $A_i=k[[t]][\lambda]/(f_i(t,\lambda))$; each $A_{i}$ is a DVR. Set $\tilde{A}=\prod_{i=1}^{n_{1}}A_i$. We then have a natural injection $A\hookrightarrow \tilde{A}$, and $\tilde{A}$ is the normalization of $A$. Then:
\begin{claim}
$V$ is a principal $\tilde{A}$-module.
\end{claim}
From the Intermezzo, we know $\sigma=\mu_{1}$. It is easy to see that $\theta^{\sigma}(v)\in tV$ for all $v\in V$. We define ${ Ker} f_{i}:=\{v\in V\,|\,f_{i}(\theta)(v)=0\}$; it is easy to see that ${ Ker} f_{i}$ is a direct summand of $V$.
\begin{proposition}\label{first step direct summand}
The image of
$${ Ker} f_{1}\rightarrow V\rightarrow V/{ Ker} f_{i}$$
is ${ Ker}\bar{f}_{1}$ for $1< i\leq n_{1}$.
Here $\bar{f}_{1}$ denotes the map induced by $f_{1}$ on $V/{ Ker} f_{i}$. In particular, ${ Ker} f_{1}\oplus { Ker} f_{i}$ is a direct summand of $V$.
\end{proposition}
\begin{proof}
We denote the natural quotient map $V\rightarrow V/{ Ker} f_{i}$ by $q_{i}$. Then
$$q_{i}^{-1}({ Ker}\bar{f}_{1})=\{v\in V\,|\,f_{1}(v)\in { Ker} f_{i}\}.$$
For simplicity we write $f_{1}$ as $\lambda^{\mu_{1}}+\alpha_{1}$, where $\alpha_{1}\in tk[[t]]\backslash t^{2}k[[t]]$ by the genericity condition. Write $f_{1}(v)=w\in { Ker} f_{i}$. By definition, to show $q_{i}({ Ker} f_{1})={ Ker}\bar{f}_{1}$, it suffices to show that there exists $w^{'}\in { Ker} f_{i}$ with $f_{1}(w^{'})=w$.
This amounts to solving the following linear equations:
\[\left\{\begin{array}{rcl}
(\theta^{\mu_{1}}+\alpha_{1})w^{'}&=&w\\
(\theta^{\mu_{i}}+\alpha_{i})w^{'}&=&0
\end{array}\right.\]
i.e. $ (-\alpha_{i}\theta^{\mu_{1}-\mu_{i}}+\alpha_{1})w^{'}=w$.
It is easy to see that the $\theta$-action on $V$ is continuous with respect to the $t$-adic topology on $V$, so $V$ can be regarded as a $k[[t]][[\lambda]]$-module. We rewrite
$$(-\alpha_{i}\theta^{\mu_{1}-\mu_{i}}+\alpha_{1})=t\phi_{t}(\theta);$$ then $\phi_{t}(\theta)^{-1}$ is a well-defined operator on $V$, since we assume that $f_1$ and $f_i$ have constant terms which are distinct modulo $t^{2}$, i.e. $t^{2}\nmid(\alpha_{i}-\alpha_{1})$.
Notice that $f_{1}(v)=w$ implies $\theta^{\mu_{1}}(v)\equiv w \ (\text{mod } t)$. Since $\theta^{\mu_{1}}v\in tV$, we know that $w\in tV$; we can then find a (unique) $$w^{'}=\phi_{t}(\theta)^{-1}(w/t),$$ such that $q_{i}(v-w^{'})=q_{i}(v)$ and $v-w^{'}\in { Ker} f_{1}$. Thus $q_{i}:{ Ker} f_{1}\rightarrow { Ker}\bar{f}_{1}$ is surjective.
Since ${ Ker}\bar{f}_{1}$ is a direct summand of $V/{ Ker} f_{i}$,
${ Ker} f_{1}\oplus { Ker} f_{i}$ is a direct summand of $V$.
\end{proof}
\begin{proposition}\label{decomposition we want}
We have the following decomposition:
$$V\simeq \bigoplus_{i=1}^{n_{1}}{ Ker} f_{i};$$
that is, $V$ is a principal $\tilde{A}$-module.
\end{proposition}
\begin{remark}
It is obvious that one cannot, in general, lift a principal $A$-module structure to an $\tilde{A}$-module structure. The reason is that a principal $A$-module need not carry a filtration $F^{\bullet}$ of type $(m_1, m_2,\ldots,m_{\sigma})$ which is strongly preserved by $\theta$. This proposition thus shows the effect of the parabolic condition on the local structure of Higgs bundles.
\end{remark}
\begin{proof}
We prove this by induction on both the rank of $V$ and the number of irreducible factors of $\text{char}_\theta$. From Proposition \ref{first step direct summand}, we know that ${ Ker} f_{1}\oplus { Ker} f_{i}$ is a direct summand of $V$ for all $i$ with $2\leq i\leq n_{1}$.
Consider the map:
$$q_{1}:V\rightarrow V/{ Ker} f_{1}$$
Since ${ Ker} f_1\oplus{ Ker} f_i$ is a direct summand of $V$ by Proposition \ref{first step direct summand}, $q_{1}({ Ker} f_{i})$ is a direct summand and is contained in ${ Ker}\bar{f_{i}}\subset V/{ Ker} f_{1}$.
Because ${ Ker} f_{i}\cap { Ker} f_{1}=0$, $q_{1}$ is injective when restricted to ${ Ker} f_{i}$. Passing to $V\otimes_{{\mathcal O}} \mathcal{K}$ and using the obvious decomposition
$$V\otimes_{{\mathcal O}} \mathcal{K}=\bigoplus_{i=1}^{n_{1}}{ Ker} f_{i}\otimes_{{\mathcal O}}\mathcal{K},$$
we see that ${ rk}({ Ker} f_{i})={ rk}({ Ker}\bar{f}_{i})$, and hence
$$q_{1}({ Ker} f_{i})={ Ker}\bar{f}_{i}.$$
Thus we only need to prove that:
$$V/{ Ker} f_{1}=\bigoplus_{i=2}^{n_{1}}{ Ker}\bar{f}_{i}$$
$\theta$ acts on $V/{ Ker} f_{1}$ with characteristic polynomial $\prod_{i=2}^{n_{1}}f_{i}$. The filtration on $V$ induces a filtration on $V/{ Ker} f_{1}$.\footnote{Because ${ Ker} f_{1}\cap V^{j}$ is a direct summand of $V^{j}$.} To apply induction, we only need to show that the length of this filtration is $\mu_{2}$. This follows from the fact that ${ Ker} f_{1}$ is a rank-one module over $A_{1}$, which is a DVR.
By induction, we then have a decomposition of $V/{ Ker} f_{1}$, i.e.
\begin{equation}
V/{ Ker} f_{1}\simeq \bigoplus_{i=2,\ldots,n_{1}}{ Ker} \bar{f}_i
\end{equation}
As $q_{1}:{ Ker} f_{i}\rightarrow { Ker}\bar{f}_i$ is bijective, we obtain the desired decomposition.
\end{proof}
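For instance, when $r=2$ and the Levi type is $(2)$ (trivial flag, $\sigma=1$), we have $\mu=(1,1)$ and $f=(\lambda-\alpha_{1})(\lambda-\alpha_{2})$ with $v(\alpha_{1})=v(\alpha_{2})=1$ and $v(\alpha_{1}-\alpha_{2})=1$; the proposition then simply recovers the eigenmodule decomposition $V={ Ker}(\theta-\alpha_{1})\oplus{ Ker}(\theta-\alpha_{2})$.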
In the following, we fix a generic $a\in {\mathcal H}_{P}$. For simplicity, we assume $D=x$. Our first goal is to normalize the singular spectral curve $X_{a}$ and to analyse the local properties of its normalization.
\subsection{Normalization of spectral curves}
We denote by $N:\tilde{X}_{a}\to X_a$ the normalization of $X_{a}$, and by $\tilde{\pi}$ the composite map
$$\tilde{\pi}: \tilde{X}_{a}\xrightarrow{N} X_{a}\xrightarrow{\pi} X.$$
As in Theorem \ref{image parabolic}, $f\in {\mathcal O}[\lambda]\cong k[[t]][\lambda]$ defines the spectral curve locally, so that the formal completion of the local ring of $X_a$ at $x$ is $A:={\mathcal O}[\lambda]/(f)$.
Notice that ${ Spec} (A)$ and $X_a-\pi^{-1}(x)$ form an fpqc covering of $X_a$. Since $X_a-\pi^{-1}(x)$ is smooth, we only need to construct the normalization of ${ Spec}(A)$. For a generic choice of $a\in {\mathcal H}_P$, we may assume that the coefficient $b_i\in {\mathcal O}$ has valuation exactly $\gamma_i$. We denote the Newton polygon of $f$ by $\Gamma$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Figure1.pdf}
\caption{Newton Polygon of Characteristic Polynomial}
\label{Huawei}
\end{figure}
Figure \ref{Huawei} shows the Newton polygon of the characteristic polynomial corresponding to the example in the Intermezzo.
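As an illustrative sketch (again not part of the argument; the code and names are ours), the edges and slopes of Figure \ref{Huawei} can be recovered from the valuations $v(b_{i})=\gamma_{i}$ alone, by computing the lower convex hull of the points $(r-i,\gamma_{i})$, $1\le i\le r$, together with $(r,0)$; for the Intermezzo example the edges have slopes $-1$, $-1/2$, $-1/3$, matching $\mu=(3,3,2,2,1)$:
\begin{verbatim}
from fractions import Fraction

gamma = [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5]   # valuations v(b_i) for the example
r = len(gamma)
pts = sorted([(r, 0)] + [(r - i, gamma[i - 1]) for i in range(1, r + 1)])

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

hull = []                                   # lower convex hull (monotone chain)
for p in pts:
    while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
        hull.pop()
    hull.append(p)

for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
    print((x1, y1), "->", (x2, y2), "slope", Fraction(y2 - y1, x2 - x1))
# prints edges (0,5)->(1,4)->(5,2)->(11,0) with slopes -1, -1/2, -1/3
\end{verbatim}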
We define $C=\Gamma+{\mathbb R}_{\ge 0}^2$, so that $\Gamma$ determines $C$, a closed convex subset of ${\mathbb R}_{\ge 0}^2$. Let $p_0, p_1, \dots ,p_s$ be the `singular' points of $\partial C$:
points where it has an angle $<\pi$ (so that $p_s$ lies on the $x$-axis). The standard theory of toric modifications was developed in the 1970s and is due to several people; for the construction we refer the reader to the introductory paper \cite{Oka09} and the references therein. It assigns to $\Gamma$ a toric modification $\pi: T_\Gamma\to \mathbb{A}^2$ of $\mathbb{A}^2$, where $T_\Gamma$ is a normal variety.
The morphism $\pi$ is proper and is an isomorphism over $\mathbb{A}^2\backslash \{(0,0)\}$. The exceptional locus $\pi^{-1}(0,0)$ is the union of irreducible components $\{D_e\}$, where $e$ runs over the edges of $\Gamma$.
For every edge $e$ of $\Gamma$, denote by $f_e$ the subsum of $f$ over $e\cap {\mathbb Z}^2$.
\begin{assumption}\label{generic on boud}
We impose the genericity condition that, for every edge $e$ of $\Gamma$, the roots of $f_{e}$
are nonzero and pairwise distinct. This is the notion of `non-degeneracy' in \cite{Oka09}.
\end{assumption}
Under this assumption, according to \cite[Theorem 22]{Oka09}, the strict transform $\hat Z(f)$ of $Z(f)\subset \mathbb{A}^{2}$ in $T_\Gamma$ is the normalization of $Z(f)$, and it meets $D_e$ transversally in a set that can be effectively indexed by the connected components of $e\backslash {\mathbb Z}^2$. In particular, $Z(f)$ has as many branches at the origin as there are connected components of $\Gamma\backslash \mathbb{Z}^2$.
In our case, the slope of $e$ is $-1/\mu_{e}$ for some $\mu_{e}\in\{\mu_{1},\ldots,\mu_{n_1}\}$; one can then check that each branch of $Z(f)$ whose strict transform meets $D_e$ is formally given by an Eisenstein equation of degree $\mu_{e}$. To conclude:
\begin{proposition}\label{decomposition of char}
Under Assumption \ref{generic on boud}, $f$ decomposes in $k[[t,\lambda]]$ into a product of Eisenstein polynomials $f=\prod_{i=1}^{n_1} f_i$. Exactly $\#\{i\,|\,\mu_{i}=\mu_{e}\}$ of them are of degree $\mu_{e}$ (and the constant terms of any two such factors differ by an element not divisible by $t^2$), and $\prod_i k[[t]][\lambda]/(f_i)$ is the normalization of $k[[t]][\lambda]/(f)$.
\end{proposition}
\begin{remark}
This is a stronger conclusion than that in \cite[Chapter 2, Proposition 6.4]{Neu}, because of our Assumption \ref{generic on boud}.
\end{remark}
\begin{corollary}\label{ramification}
For generic $a\in{\mathcal H}_{P}$, there are $n_{1}$ (the number of parts of the conjugate partition) points of $\tilde{X}_{a}$ over $x\in X$. The ramification degrees are $(\mu_{1},\mu_{2},\ldots,\mu_{n_{1}})$. The geometric genus of $\tilde{X}_a$ is $$ P_g(\tilde{X}_a)=r^2(g-1)+1+\dim(G/P_x).$$
\end{corollary}
\begin{proof}
The ramification degrees come from the degrees of the Eisenstein polynomials defining the strict transforms of the local branches. The geometric genus $P_g(\tilde{X}_a)$ then follows from the ramification degrees.
\end{proof}
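For the reader's convenience, here is a sketch of the genus count via the delta invariant of the singular point (a consistency check, assuming the standard formula $\dim(G/P_x)=\frac{1}{2}(r^{2}-\sum_{i}n_{i}^{2})$). Each local branch is smooth, being the spectrum of a DVR, and two branches cut out by Eisenstein polynomials of degrees $\mu_{i}\geq \mu_{j}$ meet with intersection multiplicity $\min(\mu_{i},\mu_{j})=\mu_{j}$, so
\[
\delta_{x}=\sum_{i<j}\min(\mu_{i},\mu_{j})=\sum_{j=1}^{n_{1}}(j-1)\mu_{j}=\frac{\sum_{i}n_{i}^{2}-r}{2},
\]
where the last equality uses (\ref{flagmui}) together with $\sum_{j}\mu_{j}=r$. Since $D=x$, we get
\[
P_g(\tilde{X}_a)=P_{a}(X_{a})-\delta_{x}=1+r^{2}(g-1)+\frac{r(r-1)}{2}-\frac{\sum_{i}n_{i}^{2}-r}{2}=1+r^{2}(g-1)+\dim(G/P_x).
\]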
\subsection{Parabolic BNR correspondence}
This subsection is devoted to building the parabolic BNR correspondence (also stated as Theorem \ref{main02}):
\begin{theorem}[Parabolic BNR Correspondence for ${ GL}_{r}$]\label{parabolic BNR}
For generic $a\in{\mathcal H}_{P}$, there is a one-to-one correspondence between: $$\left\{\begin{array}{c}\text{Parabolic Higgs bundles } (\mathcal{E},\theta)\\
\text{ with }\ \text{char}_\theta=a,\ \deg(E)=d
\end{array} \right\}\leftrightarrow\{\text{degree } \delta \text{ line bundles over } \tilde{X}_a\}$$
where $\delta=(r^2-r)(g-1)+\dim(G/P_x)+d$.
\end{theorem}
By the classical BNR correspondence, a parabolic Higgs bundle $({\mathcal E},\theta)$ corresponds to a torsion-free rank-one ${\mathcal O}_{X_a}$-module $V$ together with a filtration on $V_{\pi^{-1}(x)}$. To prove the theorem, we then only need to check that $V$ with its filtration carries an induced $N_*{\mathcal O}_{\tilde{X}_a}$-module structure. Since the normalization map is finite and an isomorphism over $X_a-\pi^{-1}(x)$, we reduce to the local problem near $x$. By the arguments of the previous subsection, when we
specialize $({\mathcal E},\theta)\in h^{-1}(a)$ at the marked point $x$, we get exactly a triple as in the local case. Hence $E$ carries a locally free rank-$1$ $\tilde{\pi}_*{\mathcal O}_{\tilde{X}_a}$-module structure induced by $({\mathcal E},\theta)$. Now we can prove Theorem \ref{parabolic BNR}:
\begin{proof}[Proof of Theorem \ref{parabolic BNR}]
Firstly, given a parabolic Higgs bundle $(\mathcal{E},\theta)$ with $\text{char}_\theta=a$ and $\deg(E)=d$, by Proposition \ref{decomposition we want} and the discussion above we have a line bundle $L$ of degree $\delta$ over $\tilde{X}_a$ such that $\tilde{\pi}_*L=E$. There is an action of $\theta$ on $\tilde{\pi}_{*}L$ induced by the $\tilde{\pi}_{*}{\mathcal O}_{\tilde{X}_a}$-module structure on $\tilde{\pi}_{*}L$, and $\text{char}_\theta=a$ since $X_a$ is integral. Hence $(\tilde{\pi}_{*}L, \theta)=({\mathcal E},\theta)$.
Conversely, given a degree $\delta$ line bundle $L$ over $\tilde{X}_a$, a Young tableau argument shows that there exists a unique filtration $$L=L_0\supset L_1\supset \cdots \supset L_{\sigma}=L(-\tilde{\pi}^{-1}(x))$$ such that the graded terms have the same dimensions as the Levi type of $P_x$. The pushed-forward filtration on $\tilde{\pi}_*L$ and the $\tilde{\pi}_*{\mathcal O}_{\tilde{X}_a}$-module structure induce a parabolic Higgs bundle structure on $(\tilde{\pi}_{*}L, \theta)$ with $\text{char}_\theta=a$ and $\deg=d$. As before, this gives us the correspondence.
The degree $\delta$ can be calculated using the Riemann--Roch theorem, since $P_g(\tilde{X}_a)=r^2(g-1)+1+\dim(G/P_x)$ by Corollary \ref{ramification}.
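Explicitly, since the Euler characteristic is preserved by the finite morphism $\tilde{\pi}$, we have $\chi(\tilde{X}_{a},L)=\chi(X,\tilde{\pi}_{*}L)$, i.e.
\[
\delta+1-P_g(\tilde{X}_a)=d+r(1-g),
\]
whence $\delta=d+(r^{2}-r)(g-1)+\dim(G/P_x)$.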
\end{proof}
\begin{corollary}
Under the same assumptions as in Theorem \ref{parabolic BNR}, for generic $a\in \mathcal{H}_{P}$ the parabolic Hitchin fiber $h_{P}^{-1}(a)$ is isomorphic to $\operatorname{Pic}^{\delta}(\tilde{X}_{a})$.
\end{corollary}
\begin{proof}
By Theorem \ref{parabolic BNR}, we only need to check the stability of $(\tilde{\pi}_*L,\theta)$ for a line bundle $L$ over $\tilde{X}_a$. But the spectral curve $X_a$ is integral, so there is no proper sub-Higgs bundle of $(\tilde{\pi}_*L,\theta)$; hence it is a stable parabolic Higgs bundle.
\end{proof}
\begin{remark}
Scheinost and Schottenloher \cite{SS95} proved a similar result over $\mathbb{C}$ by uniquely extending the eigenline bundle on $X_{a}-\pi^{-1}(D)$ to $\tilde{X}_{a}$; the existence of this extension is announced there. We use a different strategy, similar to that of \cite{BNR}, to prove the correspondence.
\end{remark}
Notice that we only impose a genericity condition on the characteristic polynomial of the Higgs field $\theta$; but due to the decomposition, for all $({\mathcal E},\theta)\in h^{-1}(a)$ we have:
\begin{corollary}
The Jordan blocks of $\theta\bmod t$ are of sizes $(\mu_{1},\mu_{2},\ldots,\mu_{n_1})$.
\end{corollary}
\begin{proof}
By Proposition \ref{decomposition we want}, $V$ has a natural $\tilde{A}$-module structure. Since each $A_{i}$ is a DVR, we may find $v_{i}\in { Ker} f_{i}$ such that ${ Ker} f_{i}$ is a free $k[[t]]$-module with basis
$$\{v_{i},\theta v_{i},\ldots,\theta^{\mu_{i}-1}v_{i}\}.$$
After reduction modulo $t$, the matrix of $\theta$ on ${ Ker} f_{i}$ with respect to this basis is a Jordan block of size $\mu_{i}$.
\end{proof}
\begin{remark}
This means that, given a sufficiently general characteristic polynomial, every global Higgs field $\theta$ with this prescribed characteristic polynomial has the same Jordan normal form after reduction at the marked point $x\in D$. These are in fact the so-called Richardson elements; we refer the reader to \cite{Ba06} for more details.
\end{remark}
If we replace ${ GL}_{r}$ by ${ SL}_{r}$, we also have coarse moduli spaces ${\textbf{M}}_{P,\alpha}^{\circ}$ and $\mathbf{Higgs}_{P,\alpha}^{\circ}$, and the parabolic Hitchin base is:
\[
{\mathcal H}_{P}^{\circ}:=\prod_{j=2}^r\mathbf{H}^{0}\left(X,\omega_X^{\otimes j}\otimes{\mathcal O}_X(\sum_{x\in D}(j-\gamma_{j}(x))\cdot x)
\right)
\]
We use `$\circ$' to emphasize the trace-zero condition. We denote the corresponding parabolic Hitchin map by $h_{P,\alpha}^{\circ}$. Consider the following commutative diagram:
\[
\begin{diagram}
\mathbf{Higgs}^{\circ}_{P,\alpha}&\rTo^{h^{\circ}_{P,\alpha}}& \mathcal{H}^{\circ}_{P}\\
\dInto& &\dInto\\
\mathbf{Higgs}_{P,\alpha}&\rTo^{h_{P,\alpha}}& \mathcal{H}_{P}
\end{diagram}
\]
It follows that $h^{\circ}_{P,\alpha}$ is proper. By our parabolic BNR correspondence (Theorem \ref{parabolic BNR}), the generic fiber of $h^{\circ}_{P,\alpha}$ is a Prym variety inside $\operatorname{Pic}(\tilde{X}_{a})$. Then by a dimension argument and properness, $h^{\circ}_{P,\alpha}$ is surjective.
\section{Generic fiber of weak parabolic Hitchin maps}
In this section, we give a concrete description of generic fibers of the weak parabolic Hitchin map.
In what follows, we fix $a\in {\mathcal H}$ such that $X_a$ is smooth and $\pi_a$ is unramified over $x$, and abbreviate $\pi_a$ as $\pi$. To simplify notation, we omit $\delta$ and use $Pic(X_{a})$ to denote a fixed connected component of the Picard variety.
Choose a marked point $q\in X_{a}$; this gives an embedding $\tau:X_{a}\rightarrow Pic(X_{a})$. We then have a universal line bundle over $Pic(X_{a})\times X_{a}$, the pull-back of a Poincar\'e line bundle, which we denote by ${\mathscr P}$.
Consider the following projection:
\begin{equation}\label{Univ Higgs from spectra} Pic(X_a)\times X_a \xrightarrow{id\times \pi} Pic(X_a)\times X
\end{equation}
We denote $V:=(id\times \pi)_{*}{\mathscr P}$, which is a rank $r$ vector bundle over $Pic(X_{a})\times X$. The $(id\times\pi)_*{\mathcal O}_{Pic(X_a)\times X_a}$-module structure then induces a Higgs field $$\theta_{Pic}:V\to V\otimes_{{\mathcal O}_X}\pi_X^*\omega_X(x),$$ and $(V,\theta_{Pic})$ can be viewed as the universal family of Higgs bundles on $X$ with characteristic polynomial $a$. For simplicity, we use $V|_{x}$ to denote the restriction of $V$ to $Pic(X_{a})\times\{x\}$.
\begin{proposition}\label{Torus over Pic}
There is a group scheme ${\mathfrak T}$ over $Pic(X_{a})$ such that, for any point $z\in Pic(X_a)$, the fiber ${\mathfrak T}(z)$ is the centralizer $T$ of $\theta_{Pic}|_x$ at $z$.
\end{proposition}
\begin{proof}
Restricting $\theta_{Pic}$ to $Pic(X_{a})\times\{x\}$ gives $$\theta_{Pic}|_x:V|_x\to (V\otimes_{{\mathcal O}_X}\pi_X^*\omega_X(x))|_x\cong V|_x,$$ which is regular semi-simple everywhere because $\pi$ is unramified over $x$.
Denote by ${\mathcal A} ut(V|_x)$ the group scheme of local automorphisms of the vector bundle $V|_x$. We then consider the centralizer of $\theta_{Pic}|_x:V|_x\to V|_x$ in ${\mathcal A} ut(V|_x)$ over $Pic(X_{a})\times \{x\}$. This gives us a group scheme ${\mathfrak T}$ over $Pic(X_{a})$; fiber-wise it is a maximal torus of $G$.
\end{proof}
In the following we construct a flag bundle $\mathfrak{Fl}$ on $Pic(X_a)$ which classifies all possible filtrations at $x$; fiber-wise it is isomorphic to $G/P_x$. We then show that ${\mathfrak T}$ acts on it naturally.
\begin{definition}
Denote by $\text{Fr}(V|_x)$ the frame bundle of the vector bundle $V|_x$. We define the (partial) flag bundle $\mathfrak{Fl}$ over $Pic(X_{a})$ as the associated bundle
$$\text{Fr}(V|_x)\times_G G/P_x.$$ Here $P_x$ is the parabolic subgroup given by the parabolic structure at $x$.
\end{definition}
By definition, $\mathfrak{Fl}$ parametrizes all filtrations of $V|_x$ by subbundles of the type given by $P_x$. Denoting by $W_x$ the Coxeter subgroup corresponding to the parabolic subgroup $P_x$, we have
\begin{lemma}\label{compatible with theta} $\mathfrak{T}$ acts on $\mathfrak{Fl}$ over $Pic(X_a)$, and the fixed-point locus $\mathfrak{Fl}^{\mathfrak{T}}$ is a $W_x$-torsor over $Pic(X_a)$.
\end{lemma}
\begin{proof}
We know that ${\mathfrak T}\subset{\mathcal A} ut(V|_{x})$ as a sub-group scheme, thus ${\mathfrak T}$ acts on $\mathfrak{Fl}$.
Since, fiber-wise, the fixed points of $G/P_x$ under the action of $T\subset P_x$ are in bijection with $W_{x}$, we are done.
\end{proof}
Now we can give a description of the weak parabolic Hitchin fiber $(h^{W}_P)^{-1}(a)$.
\begin{theorem}\label{main5} For general $a\in {\mathcal H}$, we have $(h^{W}_P)^{-1}(a)\cong\mathfrak{Fl}^{\mathfrak T}$.
\end{theorem}
\begin{remark}
The intuition behind this theorem is that filtrations coming from a parabolic structure must be compatible with the Higgs field at $x$; thus they correspond to the fixed points of the ${\mathfrak T}$-action on $\mathfrak{Fl}$.
\end{remark}
\begin{proof}
Fiber-wise, the fixed points are those parabolic subgroups $P_{i}$, viewed as points of $G/P_{x}$, such that $P_{i}\supset Z_{G}(\theta_{x})=T$. Since then $\theta_{x}\in\mathfrak{p}_{i}$, the filtration determined by $P_{i}$ is compatible with $\theta_{x}$.
Conversely, if $({\mathcal E},\theta)$ lies in $(h^{W}_{P})^{-1}(a)$, then $E$ is the push-forward of a line bundle over $X_{a}$ and has a filtration at $x$ compatible with $\theta_{x}$. A parabolic subgroup bundle $P'\subset {\mathcal A} ut(V|_{x})$ determines a filtration of $V|_x$, and this filtration is compatible with $\theta_{x}$ if and only if $\theta_{x}\in\mathfrak{p}'$. Since $\theta_{x}$ is regular semi-simple, this implies that $P'\supset \mathfrak{T}$; thus it is a fixed point of the $\mathfrak{T}$-action on $\mathfrak{Fl}$.
\end{proof}
Since in our case $G={ GL}_{r}$, we can give a more explicit description. First, we write $\pi^{-1}(x)=\{y_{1},\ldots,y_{r}\}\subset X_{a}$. We then restrict the universal line bundle ${\mathscr P}$ to each $Pic(X_a)\times \{y_i\}$ and denote the restriction by ${\mathscr P}|_{y_i}$. One has
\begin{equation}\label{vxdecomp} V|_x\cong \bigoplus_{i=1}^r {\mathscr P}|_{y_i},\end{equation} since $\pi^{-1}(x)$ consists of $r$ distinct reduced points.
Indeed, the factors in the decomposition (\ref{vxdecomp}) are the eigenspaces of $\theta_{Pic}|_x$. So under this decomposition, $\theta_{Pic}|_x$ is a direct sum of maps $\theta_{y_i}:{\mathscr P}|_{y_i}\to{\mathscr P}|_{y_i}$, and ${\mathfrak T}$ preserves the decomposition. To conclude:
\begin{corollary}
The set of connected components $\pi_{0}((h^{W}_P)^{-1}(a))$ is in bijection with the Coxeter group $W_{x}$ associated with $P_x$.
\end{corollary}
\section{Global Nilpotent Cone of the Parabolic Hitchin Maps}\label{sec 4}
In this section, we study global properties of (weak) parabolic Hitchin maps, i.e. flatness and surjectivity.
\begin{definition}
We call $h_{P}^{-1}(0)$ (resp. $(h^{W}_{P})^{-1}(0)$) the parabolic global nilpotent cone (resp. the weak parabolic global nilpotent cone). We denote $h_{P}^{-1}(0)$ (resp. $(h^{W}_{P})^{-1}(0)$) by ${\mathcal N} il_{P}$ (resp. ${\mathcal N} il_{P}^{W}$).
\end{definition}
By Lemma \ref{dim formula}, we have
\begin{align}&\label{dimestimate0.0} \dim(\text{fiber of }h_{P}) \geq \dim({\textbf{M}}_P)\\
&\label{dimestimate0.1} \dim(\text{fiber of }h_P^W) \geq r^2(g-1)+1+\frac{r(r-1)\deg(D)}{2}.
\end{align}
\subsection{$\mathbb{G}_m$-actions on $\mathbf{Higgs}^W_P$ and $\mathbf{Higgs}_P$}
There is a natural $\mathbb{G}_m$-action given by $({\mathcal E},\theta)\mapsto ({\mathcal E},t\theta)$, $t\in \mathbb{G}_{m}$. It preserves stability and leaves Hilbert polynomials invariant, so it descends to the moduli spaces $\mathbf{Higgs}_{P}$ and $\mathbf{Higgs}^{W}_{P}$. This action was first studied by Simpson in \cite{Sim90} and \cite{Sim92}; it encodes a great deal of information about both the moduli spaces and the Hitchin maps.
There is also a natural $\mathbb{G}_m$-action on ${\mathcal H}$ and ${\mathcal H}_{P}$: \[(a_1,a_2,\cdots,a_r)\mapsto (ta_1,t^2a_2,\cdots,t^ra_r),\]
and $h_{P}$, $h^{W}_{P}$ are equivariant under these $\mathbb{G}_m$-actions. This can be used to show the flatness of the Hitchin map \cite{Gin01}, provided one has a dimension estimate for the global nilpotent cones. We will use deformation theory to estimate the dimension of the global nilpotent cones in the next subsection.
\subsection{The dimension of the global nilpotent cone}
The study of infinitesimal deformations of parabolic Higgs bundles was done in \cite{Yo95} and \cite{BR94}.
\subsubsection{Parabolic global nilpotent cone}
In this sub-subsection, we use an infinitesimal method to calculate the dimension of the parabolic global nilpotent cone.
Recall that in \cite[Theorem 4.6]{Yo93C},\cite[Remark 5.1]{Yo95}, $\mathbf{Higgs}_P$ is a geometric quotient by an algebraic group ${ PGL}(V)$ of some ${ PGL}(V)$-scheme ${\mathcal Q}$. Moreover, one has a universal family of framed stable parabolic Higgs bundles $$(\tilde{{\mathcal E}},\tilde{\theta}) \text{ with surjection }V\otimes_k {\mathcal O}_{X_{\mathcal Q}}\twoheadrightarrow \tilde{{\mathcal E}}.$$
Denote the quotient map by $q:{\mathcal Q} \to \mathbf{Higgs}_P$. Restricting the universal family $(V\otimes_k {\mathcal O}_{X_{{\mathcal Q}}}\twoheadrightarrow\tilde{{\mathcal E}},\tilde{\theta})$ to $q^{-1}({\mathcal N} il_{P})$, we get:
$$\mathbb{U}_{{\mathcal N} il_P}:=(V\otimes_k {\mathcal O}_{X_{q^{-1}({\mathcal N} il_{P})}}\twoheadrightarrow\tilde{{\mathcal E}},\tilde{\theta}).$$
For any scheme $S$ and flat family $(V\otimes_k {\mathcal O}_S\twoheadrightarrow {\mathcal E}_{S},\theta_S)$ of parabolic Higgs bundles with $\text{char}_{\theta_{S}}=0$ on $S$, there is a map $\phi:S\to q^{-1}({\mathcal N} il_{P})$ such that $$({\operatorname{id}}_X\times\phi)^*\mathbb{U}_{{\mathcal N} il_P}\cong (V\otimes_k {\mathcal O}_S\twoheadrightarrow {\mathcal E}_{S},\theta_S).$$
To determine the dimension of ${\mathcal N} il_P$, it suffices to calculate the dimension of each irreducible component with its reduced structure. Restricting to any generic point $\eta$ of ${\mathcal N} il_P$, $\theta_\eta:=\tilde{\theta}|_{q^{-1}(\eta)^\text{red}}$ gives a filtration $\{{ Ker}(\theta_\eta^i)\}$ of $\tilde{E}|_{q^{-1}(\eta)^\text{red}}$ by vector bundles (i.e. the graded terms are all vector bundles), because $X\times \eta$ is a curve. We spread this out:
\begin{lemma}\label{irrW} There exists an irreducible open subset $W\subset{\mathcal N} il^{\text{red}}_P$ with generic point $\eta$, such that $\theta_W:=\tilde{\theta}|_{q^{-1}(W)^\text{red}}$ gives a filtration ${ Ker}(\theta_W^i)$ of vector bundles of $E_{W}:=\tilde{E}|_{q^{-1}(W)^\text{red}}$ over $X\times W$.
\end{lemma}
We fix some notation for filtered bundle maps. Let $E^1$, $E^2$ and $E$ be vector bundles on $X$ with decreasing filtrations by subbundles $K_1^\bullet$, $K_2^\bullet$ and $K^\bullet$ respectively.
We denote by \[{\mathcal H} om^{\text{fil}}(E^1,E^2)\text{ and }{\mathcal E} nd^{\text{fil}}(E)\] the coherent subsheaf of ${\mathcal H} om(E^1,E^2)$ (resp. ${\mathcal E} nd(E)$) consisting of those local homomorphisms preserving the filtrations, while ${\mathcal H} om^{\text{s-fil}}(E^1,E^2)$ (resp. ${\mathcal E} nd^{\text{s-fil}}(E)$) consists of the local homomorphisms $\phi$ such that $\phi(K^{j}_1|_{U})\subset K^{j+1}_2|_U$ (resp. $\phi(K^{j}|_{U})\subset K^{j+1}|_U$).
Let us denote by
\[K^\bullet_W:E_{W}=K^0_W\supset K^1_W\supset\cdots\supset K^{r'}_W=0\]
the decreasing filtration induced by $\{{ Ker}(\theta_W^i)\}$ on $E_{W}$. For $x\in D$ we also write $x$ for the closed immersion $\{x\}\times W\to X\times W$.
\begin{lemma}\label{finer flag} At each marked point $x:\{x\}\times W\rightarrow X\times W$, the flag $K^{\bullet}_W|_{x}$ is coarser than the parabolic structure on ${\mathcal E}_W|_{x}$, so \[\begin{array}{r}SParHom({\mathcal E}_{W},{\mathcal E}_{W}\otimes\omega_X(D))\cap Hom^{\text{s-fil}} (E_W,E_W\otimes\omega_X(D))\\
=Hom^{\text{s-fil}} (E_W,E_W\otimes\omega_X(D))
\end{array}\]
and \[\theta_W\in Hom^{\text{s-fil}} (E_W,E_W\otimes\omega_X(D)).\]
\end{lemma}
\begin{proof} $\theta_W$ is a strongly parabolic map, thus $F^{i}(x)\subset K_W^i$; in other words, ${\mathcal E}_{W}|_{x}$ is a finer flag than $K^{\bullet}_W|_{x}$. Moreover, $\theta_W\in SParHom({\mathcal E}_{W},{\mathcal E}_{W}\otimes\omega_X(D))\cap Hom^{\text{s-fil}} (E_W,E_W\otimes\omega_X(D))$ by definition.
\end{proof}
Thus the nilpotent parabolic bundle ${\mathcal E}_{W}$ has a filtration by vector bundles which does not depend on the surjection $V\otimes_k {\mathcal O}_{q^{-1}(W)^{\text{red}}}\twoheadrightarrow E_{W}$.
\begin{theorem}\label{main 1} The space of infinitesimal deformations in $W$ of a nilpotent parabolic Higgs bundle $({\mathcal E},\theta)$, is canonically isomorphic to $\H^1(X,{\mathcal A}^\bullet)$. Here ${\mathcal A}^\bullet$ is the following complex of sheaves on $X$:
$$0\to {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap{\mathcal E} nd^{\text{fil}}(E)\xrightarrow{ad(\theta)}
({\mathcal S}{\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{s-fil}}(E) )\otimes\omega_X(D)\to 0$$
which is isomorphic to
$$0\to {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{fil}}(E)\xrightarrow{ad(\theta)}
{\mathcal E} nd^{\text{s-fil}}(E) \otimes\omega_X(D)\to 0.$$
\end{theorem}
\begin{proof} An infinitesimal deformation of a parabolic pair $({\mathcal E},\theta)=u\in W$ is a flat family $(\boldsymbol{{\mathcal E}},\boldsymbol{\theta})$ with $\text{char}_{\boldsymbol{\theta}}=0$ parametrized by ${ Spec}(k[\epsilon]/\epsilon^2)$, together with a given isomorphism of $({\mathcal E}, \theta)$ with the specialization of $(\boldsymbol{{\mathcal E}},\boldsymbol{\theta})$. By the local universal property of $\mathbb{U}_{{\mathcal N} il_P}$, $(\boldsymbol{{\mathcal E}},\boldsymbol{\theta})$ is the pull back of $(\tilde{{\mathcal E}},\tilde{\theta})$ by a map $\phi:{ Spec}(k[\epsilon]/\epsilon^2)\to {\mathcal N} il_P$. Moreover, if the deformation is inside $W$, then $\phi$ factors through $q^{-1}(W)^{\text{red}}$ and $(\boldsymbol{{\mathcal E}},\boldsymbol{\theta})$ is a pull back of $({\mathcal E}_W,\theta_W)$.
Thus $$\boldsymbol{K}^{\bullet}:=(id_X\times\phi)^*K^{\bullet}_W$$ is a filtration on $\boldsymbol{E}$ such that $$\boldsymbol{\theta}\in SParHom(\boldsymbol{{\mathcal E}},\boldsymbol{{\mathcal E}}\otimes\omega_X(D))\cap Hom^{\text{s-fil}} (\boldsymbol{E},\boldsymbol{E}\otimes\omega_X(D)).$$ Since $K^{\bullet}_W$ does not depend on the surjection $V\otimes_k {\mathcal O}_{q^{-1}(W)^{red}}\twoheadrightarrow E_{W}$, the filtration $\boldsymbol{K}^{\bullet}$ is uniquely determined by $(\boldsymbol{{\mathcal E}},\boldsymbol{\theta})$.
Let us denote the projection by $\pi:X_{\epsilon}=X\times{ Spec}(k[\epsilon]/\epsilon^2)\to X$.
Tensoring $(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta})$ with $$0\to (\epsilon)\to k[\epsilon]/\epsilon^2 \to k\to 0 ,$$ we have an extension of filtered parabolic Higgs ${\mathcal O}_{X_\epsilon}$-modules
\begin{equation}\label{ext2} 0\to (\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta})(\epsilon)\to (\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta}) \to ({\mathcal E},K^{\bullet},\theta)\to 0 .\end{equation}
Pushing forward (\ref{ext2}) by $\pi$, we obtain an extension $$0\to ({\mathcal E},K^{\bullet},\theta)\to \pi_*(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta}) \to ({\mathcal E},K^{\bullet},\theta)\to 0$$ of locally free filtered parabolic Higgs ${\mathcal O}_X$-modules. The left inclusion recovers the ${\mathcal O}_{X_\epsilon}$-module structure of $\pi_*(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta})$. Thus $(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet},\boldsymbol{\theta})$ is formally determined by an element of \[{ Ext}_{\text{fil}-par-Higgs-{\mathcal O}_X}(({\mathcal E},K^{\bullet},\theta),({\mathcal E},K^{\bullet},\theta)).\]
One can reinterpret the extension class using \v Cech cohomology. Let ${\mathcal U}=\{U_i\}_{i}$ be a finite affine covering of $X$, trivializing $E$ and all the $K^j$. Then on each $U_i$ there is a splitting
\[\phi_i:({\mathcal E},K^{\bullet})|_{U_i}\to(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet})|_{U_i}\]
preserving the two compatible filtrations. The Higgs fields induce a filtered map \[\psi_i=\boldsymbol{\theta}\phi_i-\phi_i\theta:({\mathcal E},K^{\bullet})|_{U_i}\to({\mathcal E},K^{\bullet})\otimes\omega_X(D)|_{U_i}.\]
Thus the extension $(\boldsymbol{{\mathcal E}},\boldsymbol{K}^{\bullet})$ is given by a \v Cech $1$-cochain $(\phi_{ij}:=\phi_i-\phi_j)$ with values in $${\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{fil}}(E)$$ and a \v Cech $0$-cochain $(\psi_i:=\boldsymbol{\theta}\phi_i-\phi_i\theta)$ with values in
$$ ({\mathcal S} {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{s-fil}}(E))\otimes\omega_X(D).$$
One has \begin{align*}\delta(\phi_{ij})_{abc}&=\phi_{bc}-\phi_{ac}+\phi_{ab}\\
&=\phi_b-\phi_c-\phi_a+\phi_c+\phi_a-\phi_b=0,
\end{align*}
and \begin{align*}\delta(\psi_i)_{ab}&=\boldsymbol{\theta}\phi_{ab}-\phi_{ab}\theta\\
&=\theta\phi_{ab}-\phi_{ab}\theta=ad(\theta)(\phi_{ab}).
\end{align*}
It means that $((\phi_{ij}),(\psi_i))$ is a \v Cech 1-cocycle of the following complex of sheaves ${\mathcal A}^\bullet$ on $X$:
\[0\to {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap{\mathcal E} nd^{\text{fil}}(E)\xrightarrow{ad(\theta)}
({\mathcal S}{\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{s-fil}}(E) )\otimes\omega_X(D)\to 0\]
If the extension is trivial, then $\phi_i=( 1, \phi_i')$ where $$\phi_i'\in {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap{\mathcal E} nd^{\text{fil}}(E)$$ and $$\psi_i=\theta\phi'_i-\phi'_i\theta=ad(\theta)(\phi'_i).$$ Thus trivial extensions correspond to \v Cech 1-coboundaries of ${\mathcal A}^\bullet$.
On the other hand, if we have a \v Cech 1-cocycle $((\phi_{ij}),(\psi_i))$, then use $\begin{bmatrix}I&\phi_{ij}\\ 0&I\end{bmatrix}$ to glue $$\{({\mathcal E},K^\bullet)|_{U_i}\oplus ({\mathcal E},K^\bullet)|_{U_i}\}$$ with the local Higgs fields $\begin{bmatrix}\theta&\psi_i\\0&\theta\end{bmatrix}$. One can check that the gluing condition for the local Higgs fields,
\[\begin{bmatrix}I&\phi_{ij}\\ 0&I\end{bmatrix}\begin{bmatrix}\theta&\psi_i\\0&\theta\end{bmatrix}=\begin{bmatrix}\theta&\psi_j\\0&\theta\end{bmatrix}\begin{bmatrix}I&\phi_{ij}\\ 0&I\end{bmatrix},\] is equivalent to the cocycle condition \[\delta(\psi_i)_{ab}=ad(\theta)(\phi_{ab}).\]
If $((\phi_{ij}),(\psi_i))$ is a coboundary, i.e. \[((\phi_{ij}),(\psi_i))=\left( (\phi_i'-\phi_j'),(ad(\theta)(\phi_i')) \right),\] then one can check that the maps
\[\begin{bmatrix}-\phi'_i\\I\end{bmatrix}:({\mathcal E},K^\bullet)|_{U_i}\to({\mathcal E},K^\bullet)|_{U_i}\oplus ({\mathcal E},K^\bullet)|_{U_i}\] glue to a global splitting of the filtered Higgs bundle.\end{proof}
The filtration $K^\bullet$ of $E$ by subbundles is equivalent to a ${\mathcal P}$-reduction, where ${\mathcal P}\subset { GL}_r$ is the corresponding parabolic subgroup. We denote the principal ${\mathcal P}$-bundle by $^{\mathcal P} E$, and let $U\subset {\mathcal P}$ be the unipotent radical, with Lie algebras $\mathfrak{n}$ and $\mathfrak{p}$ for $U$ and ${\mathcal P}$ respectively. Thus ${\mathcal E} nd^{\text{fil}}(E)\cong ad_{^{\mathcal P} E}(\mathfrak{p})$ and ${\mathcal E} nd^{\text{s-fil}}(E)\cong ad_{^{\mathcal P} E}(\mathfrak{n})$.
By Lemma \ref{finer flag}, we have $P_x\subset {\mathcal P}$ for all $x\in D$. Denoting by $\mathfrak{p}_x$ the Lie algebra of $P_x$, we have
\begin{equation}\label{n/u} 0\to {\mathcal P} ar{\mathcal E} nd({\mathcal E})\cap {\mathcal E} nd^{\text{fil}}(E)\to {\mathcal E} nd^{\text{fil}}(E)\to \bigoplus_{x\in D}i_{x*}\,\mathfrak{p}/\mathfrak{p}_x\to 0.\end{equation}
According to (\ref{n/u}) we have \[0\to{\mathcal A}^\bullet\to{\mathcal A}'^\bullet\to \bigoplus_{x}i_{x*}\,\mathfrak{p}/\mathfrak{p}_{x}\to 0,\] where ${\mathcal A}'^\bullet$ is
\[0\to {\mathcal E} nd^{\text{fil}}(E)\xrightarrow{ad(\theta)}
{\mathcal E} nd^{\text{s-fil}}(E) \otimes\omega_X(D)\to 0.\]
We also need the following lemmas.
\begin{lemma} Let ${\mathcal P}$ be a parabolic subgroup of ${ GL}_r$ and let $U$ be its unipotent radical. Denote by $\mathfrak{p}$, $\mathfrak{n}$ and ${\mathfrak g}$ the Lie algebras of ${\mathcal P}$, $U$ and ${ GL}_r$ respectively; ${\mathcal P}$ acts on them by conjugation. One has $\mathfrak{n}^\vee\cong \mathfrak{g}/\mathfrak{p}$ as ${\mathcal P}$-representations.
\end{lemma}
\begin{proof} For ${\mathfrak g}=\mathfrak{gl}_r$, the form $\beta:{\mathfrak g}\times{\mathfrak g}\to k$, $(A,B)\mapsto { tr}(AB)$, is a non-degenerate ${ GL}_r$-equivariant bilinear form. Under $\beta$ the annihilator of $\mathfrak{p}$ is exactly $\mathfrak{n}$, so $\beta$ induces a perfect ${\mathcal P}$-equivariant pairing $\mathfrak{n}\times\mathfrak{g}/\mathfrak{p}\to k$. Thus the isomorphism holds.
\end{proof}
\begin{lemma}\label{adlemma} Let $E$ be a finite-dimensional vector space over a field, and let $\theta:E\to E$ be a nilpotent endomorphism. Let $\mathfrak{p}$ be the parabolic subalgebra preserving the decreasing filtration given by $\{{ Ker}(\theta^{i})\}$, and let $\mathfrak{n}$ be its nilpotent radical. Then $ad(\theta):\mathfrak{p}\to \mathfrak{n}$ is surjective.
\end{lemma}
\begin{proof} Proceed by induction on the number of Levi factors of $\mathfrak{p}$.\end{proof}
\begin{proposition}\label{dimestimate1} We get the dimension estimate \begin{equation*}\dim_k(T_{u}W)=\dim_k(\H^1(X,{\mathcal A}^\bullet)) =\dim({\textbf{M}}_P).
\end{equation*}
Thus any irreducible component of ${\mathcal N} il_P$ has the same dimension as ${\textbf{M}}_P$. In particular, ${\mathcal N} il_P$ is equi-dimensional.
\end{proposition}
\begin{proof} One has \begin{align*}\chi(X,{\mathcal A}'^\bullet)&=\chi({\mathcal E} nd^{\text{fil}}(E))-\chi({\mathcal E} nd^{\text{s-fil}}(E) \otimes\omega_X(D))\\
&=\chi(ad_{^{\mathcal P} E}(\mathfrak{p}))-\chi(ad_{^{\mathcal P} E}(\mathfrak{n}) \otimes\omega_X)-\deg(D)\cdot \dim_k(\mathfrak{n})\\
&=\chi(ad_{^{\mathcal P} E}(\mathfrak{p}))+\chi(ad_{^{\mathcal P} E}(\mathfrak{g}/\mathfrak{p}))-\deg(D)\cdot \dim_k(\mathfrak{n})\\
&= r^2(1-g)-\deg(D)\cdot \dim_k(\mathfrak{n})
\end{align*}
Thus $
\chi(X,{\mathcal A}^\bullet)= \chi(X,{\mathcal A}'^\bullet)-\sum_{x\in D} \dim_k(\mathfrak{p}/\mathfrak{p}_{x})
= r^2(1-g)-\sum_{x\in D} \dim(G/P_x).$
$\H^0(X,{\mathcal A}^\bullet)$ consists of the endomorphisms of ${\mathcal E}$ commuting with $\theta$, so by stability of $({\mathcal E},\theta)$ we have $h^0(X,{\mathcal A}^\bullet)=1$. Base changing ${\mathcal A}^\bullet$ to the generic point $\xi$ of $X$, we get $${\mathcal A}^\bullet_\xi:{\mathcal E} nd^{\text{fil}}(E_\xi)\xrightarrow{ad(\theta)_{\xi}} ({\mathcal E} nd^{\text{s-fil}}(E) \otimes\omega_X(D))_{\xi}\cong{\mathcal E} nd^{\text{s-fil}}(E_{\xi}).$$ This map is surjective by Lemma \ref{adlemma}. Thus $\tau^{\geq 1}{\mathcal A}^{\bullet}$ is supported on finitely many closed points of $X$ and $\H^2(X,\tau^{\geq 1}{\mathcal A}^\bullet)=0$. By
$$\tau^{\leq 0}{\mathcal A}^\bullet\to{\mathcal A}^\bullet\to\tau^{\geq 1}{\mathcal A}^\bullet\xrightarrow{+1},$$ we have $h^2(X,{\mathcal A}^\bullet)=0$, thus
$$\dim_k(T_{u}W)=\dim_k(\H^1(X,{\mathcal A}^\bullet))=1-\chi(X,{\mathcal A}^\bullet) =\dim({\textbf{M}}_P).$$
\end{proof}
\begin{theorem}\label{flat} If $\mathbf{Higgs}_P$ is smooth, then the parabolic Hitchin map $h_P$ is flat and surjective.
\end{theorem}
\begin{proof} The proof is similar to \cite[Corollary 1]{Gin01}. For any $s\in {\mathcal H}_P-\{0\}$, the closure $\overline{\mathbb{G}_m\cdot s}$ contains $0$. Since $h_P$ is equivariant under the $\mathbb{G}_m$-action, for each point $t\in \mathbb{G}_m\cdot s$ we have $h_P^{-1}(t)\cong h_P^{-1}(s)$. Thus $$\dim(h_P^{-1}(s))=\dim(\text{generic fiber of }h_P|_{\overline{\mathbb{G}_m\cdot s}})\leq \dim h_P^{-1}(0).$$ Combined with (\ref{dimestimate0.0}) and Proposition \ref{dimestimate1}, this gives $\dim(h_P^{-1}(s))=\dim({\textbf{M}}_P)$ for any $s\in{\mathcal H}_P$. Since $\mathbf{Higgs}_{P}$ and ${\mathcal H}_P$ are both smooth and all fibers have the same dimension, $h_P$ is flat (miracle flatness).
Because all fibers are of dimension $\frac{1}{2}\dim\mathbf{Higgs}_P=\dim\mathcal{H}_{P}$, $h_{P}$ is dominant. Since $h_P$ is proper by Proposition \ref{hproper}, it is surjective.
\end{proof}
\subsubsection{Weak Parabolic global nilpotent cone}
Let us compute the dimension of the weak parabolic global nilpotent cone to show the flatness of $h^{W}_{P}$. However, for a weak parabolic Higgs bundle $({\mathcal E},\theta)$ in ${\mathcal N} il_{P}^{W}$, the filtration $\{\ker(\theta^{i})\}$ and the parabolic filtration need not be compatible, so it is not obvious how to construct a complex governing the deformations within ${\mathcal N} il^{W}_{P}$ as before.
We can still calculate $\dim{\mathcal N} il^{W}_{P}$ by dominating ${\mathcal N} il^W_{P,\alpha}$ by a finite union of cones ${\mathcal N} il^{W}_{B_i,\beta_i}$, where $B_i$ is a Borel quasi-parabolic structure refining $P$.
More precisely, for a Borel parabolic structure $(B,\beta)$ the weak parabolic nilpotent cone coincides with the parabolic nilpotent cone; thus by Proposition \ref{dimestimate1}, ${\mathcal N} il^W_{B,\beta}$ has the expected dimension $r^2(g-1)+1+\frac{r(r-1)\deg(D)}{2}$.
For any generic point $\eta$ of ${\mathcal N} il^W_{P,\alpha}$, by restricting the universal family to $D\times \{\eta\}$, it is not difficult to see that there exists a Borel refinement $B_\eta$ of $P$ such that, for general $({\mathcal E},\theta)$ in the irreducible component of ${\mathcal N} il^W_{P,\alpha}$ containing $\eta$, $\theta$ preserves the filtration given by $B_\eta$.
One can choose a parabolic weight $\beta_\eta$ for each $B_\eta$ such that stability is preserved after forgetting the parabolic structure from $(B_\eta,\beta_\eta)$ to $(P,\alpha)$. In other words, the forgetful map is well defined on the moduli spaces and restricts to $f_{\eta}:{\mathcal N} il_{B_\eta,\beta_\eta}^W \to {\mathcal N} il^W_{P,\alpha}$, which dominates the generic point $\eta$. Thus $\sqcup_{\eta}{\mathcal N} il_{B_\eta,\beta_\eta}^W$ dominates ${\mathcal N} il^W_{P,\alpha}$, and we conclude:
\begin{theorem} The weak parabolic nilpotent cone has dimension $$r^2(g-1)+1+\frac{r(r-1)\deg(D)}{2}.$$ Moreover, if $\mathbf{Higgs}_{P,\alpha}^W$ is smooth, then the weak parabolic Hitchin map $h_{P,\alpha}^W:\mathbf{Higgs}_{P,\alpha}^W \to{\mathcal H}$ is flat.
\end{theorem}
\subsection{Existence of very stable parabolic bundles}
\begin{definition} We recall that a system of parabolic Hodge bundles is a parabolic Higgs bundle $({\mathcal E},\theta)$ with a decomposition
$${\mathcal E}\cong \bigoplus_i{\mathcal E}^i$$
such that $\theta$ decomposes as a direct sum of maps $\theta_i:{\mathcal E}^i\to {\mathcal E}^{i+1}$. Here the ${\mathcal E}^{i}$ are subbundles with the induced parabolic structures.
\end{definition}
If a parabolic Higgs bundle $({\mathcal E},\theta)$ is a fixed point of the $\mathbb{G}_m$-action, then it has the structure of a system of parabolic Hodge bundles. We have the following lemma, similar to \cite[Lemma 4.1]{Sim92}, \cite[Theorem 8]{Sim90} and \cite[Theorem 5.2]{Yo95}:
\begin{lemma}\label{Gm fixed point} If the parabolic Higgs bundle $({\mathcal E},\theta)$ satisfies $({\mathcal E},\theta)\cong ({\mathcal E},t\cdot\theta)$ for some $t\in \mathbb{G}_m(k)$ which is not a root of unity, then ${\mathcal E}$ has a structure of system of parabolic Hodge bundles. In particular, if $\theta\neq 0$, then the decomposition ${\mathcal E}\cong \oplus{\mathcal E}^i $ given by the system of parabolic Hodge bundles is non-trivial.
\end{lemma}
\begin{remarks} One concludes that a parabolic Higgs bundle $({\mathcal E},\theta)$ with ${\mathcal E}$ stable and $\theta\neq 0$ cannot be fixed by the $\mathbb{G}_m$-action.
\end{remarks}
A section $s$ of ${\mathcal S} {\mathcal P} ar {\mathcal E} nd({\mathcal E})\otimes\omega_X(D)$ is called nilpotent if $({\mathcal E},s)\in {\mathcal N} il_P$.
\begin{definition}
A stable parabolic bundle ${\mathcal E}$ is said to be very stable if there is no non-zero nilpotent section in $H^0(X,{\mathcal S} {\mathcal P} ar {\mathcal E} nd({\mathcal E})\otimes\omega_X(D))$.
\end{definition}
\begin{theorem} The set of very stable parabolic bundles contains a non-empty Zariski open set in the moduli of stable parabolic bundles ${\textbf{M}}_P$.
\end{theorem}
\begin{proof} Denote by $N^0$ the open dense subset of $\mathbf{Higgs}_P$ consisting of pairs $({\mathcal E},\theta)$ such that ${\mathcal E}$ is a stable parabolic vector bundle. Then the map $\pi: N^0\to {\textbf{M}}_P$ forgetting the Higgs field is a well-defined projection.
$N^0$ is $\mathbb{G}_m$-stable in $\mathbf{Higgs}_P$, and $\pi$ is $\mathbb{G}_m$-equivariant. Denote by $Z_1$ the set of pairs $({\mathcal E},\theta)$ with ${\mathcal E}$ stable and $\theta$ nilpotent and nonzero. One observes that $Z_1\subset {\mathcal N} il_P$, that $Z_1$ is $\mathbb{G}_m$-stable, and that every stable parabolic bundle which is not very stable is contained in $\pi(Z_1)$.
Because ${\mathcal E}$ is stable (hence indecomposable) and $\theta$ is non-zero, Lemma \ref{Gm fixed point} implies that $\mathbb{G}_m$ acts freely on $Z_1$. Thus $Z_1/\mathbb{G}_m\twoheadrightarrow\pi(Z_1)$. One has $\dim(Z_1)\leq \dim({\mathcal N} il_P)=\dim({\textbf{M}}_P)$, so $$\dim(\pi(Z_1))\leq \dim(Z_1/\mathbb{G}_m)=\dim({\textbf{M}}_P)-1<\dim({\textbf{M}}_P).$$ Thus the set of very stable parabolic bundles contains a non-empty Zariski open set in ${\textbf{M}}_P$.
\end{proof}
\begin{corollary}
For a generic choice of $a\in\mathcal{H}_P$, the natural forgetful map $h_{P}^{-1}(a)\dashrightarrow {\textbf{M}}_P$ is a dominant rational map.
\end{corollary}
\begin{proof}
By Theorem \ref{image parabolic}, we know that the image of $\mathbf{Higgs}_P$ under $h_P^W$ is contained in $\mathcal{H}_P$.
Consider the following rational map:
\[\rho:\mathbf{Higgs}_P\dashrightarrow {\mathcal H}_P\times {\textbf{M}}_P\quad u\mapsto (h_P(u),\pi(u)).\]
By the existence of very stable parabolic vector bundles, there is a point $(0,{\mathcal E})\in \mathcal{H}_{P}\times {\textbf{M}}_P$ whose pre-image under $\rho$ is the single point $({\mathcal E},0)\in \mathbf{Higgs}_P$. Since $\dim \mathbf{Higgs}_P=\dim\mathcal{H}_P+\dim {\textbf{M}}_P$ by Lemma \ref{dim formula}, this means that $\rho$
is generically finite. Thus $h_{P}^{-1}(a)\dashrightarrow {\textbf{M}}_P$ is dominant for generic $a\in\mathcal{H}_P$.
\end{proof}
As an application, we can also show that the rational forgetful map $F:h_P^{-1}(a)\dashrightarrow {\textbf{M}}_{P}$ is defined on an open sub-variety $U\subset h_P^{-1}(a)$ such that $h_P^{-1}(a)\backslash U$ has co-dimension $\geq 2$. This can be proved using a method similar to that of \cite{Br85}. It is well known that there is a parabolic theta line bundle ${\mathcal L}_{P}$ (which is not canonically defined) over ${\textbf{M}}_{P}$. Then $F^{*}{\mathcal L}_{P}$ extends to a line bundle over $h_P^{-1}(a)$, which we still denote by $F^{*}{\mathcal L}_{P}$. To conclude:
\begin{corollary}
For any $\ell\in\mathbb{Z}$, there is an embedding:
\[
H^{0}({\textbf{M}}_{P},{\mathcal L}_{P}^{\otimes\ell})\hookrightarrow H^{0}(h^{-1}_P(a),F^{*}{\mathcal L}_{P}^{\otimes\ell})
\]
\end{corollary}
This generalizes the corresponding result of \cite{BNR} to the parabolic case. It is interesting that the vector space on the left-hand side is also known as the space of generalized parabolic theta functions of level $\ell$ (also referred to as conformal blocks), as in \cite{LS97}.
\section{Appendix}
In this appendix, we discuss singularities of generic spectral curves, along with their ramification. Since we may work in positive characteristic, a little more care is needed to apply the Jacobian criterion. We assume $D=x$, and if $\text{char}(k)=2$ we assume rank $r\geq 3$.
\begin{lemma}\label{generic integrality}
For a generic choice of $a\in \mathcal{H}_{P}$, the corresponding spectral curve $X_{a}$ is integral, totally ramified over $x$, and smooth elsewhere.
\end{lemma}
\begin{proof} Since being integral is an open condition, as in \cite[Remark 3.1]{BNR} we only need to exhibit one $a\in\mathcal{H}_P$ such that $X_{a}$ is integral.
Take $\text{char}_{\theta}=\lambda^{r}+a_{r}=0$ with $a_{r}\in H^{0}(X, \omega^{\otimes r}((r-\gamma_{r})\cdot x))$.
The spectral curve $X_a$ is integral if $a_{r}$ is not an $r$-th power of an element of $H^{0}(X,\omega_X(x))$, which holds for generic $a_{r}$.
Since smoothness outside $x$ is an open condition, it is likewise sufficient to find one such spectral curve.
When $\text{char}(k)\nmid r$, we take $\text{char}_{\theta}=\lambda^{r}+a_{r}=0$. By the weak Bertini theorem, we can choose $a_{r}$ with only simple zeros outside $x$. Applying the Jacobian criterion, $X_{a}$ is then as desired.
When $\text{char}(k)\mid r$, we take an equation of the following form:
$$\text{char}_{\theta}=\lambda^{r}+a_{r-1}\lambda+a_{r}=0.$$
Singular points are then governed by the following system:
\[\left\{\begin{array}{l}
\lambda^{r}+a_{r-1}\lambda+a_{r}=0\\
a_{r-1}=0\\
a'_{r-1}\lambda+a'_{r}=0.
\end{array}\right.\]
Since rank $r\geq 3$, by the weak Bertini theorem we can choose $a_{r-1}$ with only simple zeros outside $x$. Taking $s\in H^0(X,\omega((1+\gamma_r-\gamma_{r-1})x))$ with zeros outside $zero(a_{r-1})$, we set $a_{r}=a_{r-1}\otimes s$, so that $zero(a_{r})\supset zero(a_{r-1})$ and the points of $zero(a_{r-1})$ are simple zeros of $a_r$; then $X_{a}$ is smooth outside $x$.\footnote{At $x$, $a_{r}$ always has multiple zeros except in the Borel case.}
\end{proof}
Similarly, we have
\begin{lemma}
For generic $a\in \mathcal{H}$, the corresponding spectral curve $X_{a}$ is smooth.
\end{lemma}
Moreover, in this case we can also say something about the ramification:
\begin{lemma}
For generic choice $a\in {\mathcal H}$, we have $\pi_a:X_{a}\to X$ is unramified over $x$.
\end{lemma}
\begin{proof} The ramification divisor of $\pi_a$ is cut out by the resultant. It is a divisor in the linear system of the line bundle $R:=\omega_X(x)^{\otimes r(r-1)}$. Consider the morphism given by the resultant:
$$Res: {\mathcal H}\rightarrow H^{0}(X,R),\quad a\mapsto \text{Res}(a).$$ We have the codimension-$1$ subspace $$W:=H^{0}(X,R(-x))\subset H^{0}(X,R)$$ such that $Res(a)\in W$ if and only if $\pi_a$ is ramified over $x$.
$Res$ is a polynomial map, so the ramified locus is a sub-variety of ${\mathcal H}$; to prove our statement, we only need to find one particular $a$ such that $\pi_a$ is unramified over $x$.
Consider a characteristic polynomial of the form
$\lambda^r+a_r$. In a neighbourhood of $x$, we can write it as $\lambda^r+b_r\cdot(\frac{dt}{t})^{\otimes r}$, where $\frac{dt}{t}$ is the trivialization of $\omega(x)$ near $x$. By the Jacobian criterion, $\pi_a$ is unramified over $x$ if $r\cdot b_r(x)\neq 0$ in $k$. By Riemann--Roch, there is a global section $s$ of $\omega_X(x)^{\otimes r}$ whose local expansion has $b_r$ a unit; taking $a=\lambda^r+s$, we find that $\pi_a$ is unramified over $x$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
The Kitaev model~\cite{KITAEV20062} and related models
have attracted much interest in condensed matter physics
since the possibility of realizing their direction-dependent Ising interactions
in real materials was proposed~\cite{Jackeli}.
Among them, the low-temperature properties
of candidate materials such as
$A_2{\rm IrO_3}$ ($A={\rm Na, K}$)~\cite{PhysRevB.82.064412,PhysRevLett.108.127203,PhysRevLett.108.127204,PhysRevLett.109.266406,modic2014realization,PhysRevLett.114.077202,Kitagawa2018nature}
and $\alpha$-${\rm RuCl_3}$~\cite{PhysRevB.90.041112,Kubota,PhysRevB.91.144420,PhysRevB.91.180401,Kasahara}
have been examined extensively.
To clarify the experimental results,
the roles of the Heisenberg interactions~\cite{Chaloupka,Jiang,Singh},
off-diagonal interactions~\cite{Katukuri,Suzuki},
interlayer coupling~\cite{Tomishige,Seifert,Tomishige2}, and
the spin-orbit coupling~\cite{Nakauchi} have been theoretically investigated for both ground-state and finite-temperature properties.
One of the important issues characteristic of the Kitaev models is the fractionalization
of the spin degree of freedom.
In the Kitaev model with $S=1/2$ spins, the spins are shown exactly to fractionalize
into itinerant Majorana fermions and localized fluxes, which
manifest themselves in the ground-state and thermodynamic properties~\cite{Nasu1,Nasu2}.
This fractionalization has been observed as the half-quantized thermal Hall effect,
which is clear evidence of Majorana quasiparticles emerging from quantum spins~\cite{Kasahara}.
Recently, the Kitaev model with larger spins has theoretically
been examined~\cite{Baskaran,SuzukiYamaji,S1Koga,Oitmaa,Kee}.
In the spin-$S$ Kitaev model,
the specific heat exhibits a double-peak structure,
and a plateau appears in the temperature dependence of the entropy~\cite{S1Koga}.
This suggests the existence of the fractionalization
even in this generalized Kitaev model.
However, it is still hard to explain
how the spin degree of freedom is divided in the generalized Kitaev models
beyond the exactly solvable $S=1/2$ case~\cite{KITAEV20062,Nasu1,Nasu2}.
The key to understanding the ``fractionalization'' in the spin-$S$ Kitaev model should be
the multiple entropy-release phenomenon:
half of the spin entropy, $\sim \frac{1}{2}\ln (2S+1)$,
is released at higher temperatures, accompanied by a broad peak in the specific heat.
A question then arises as to how the plateau structure appears in the entropy
of a Kitaev model composed of multiple kinds of spins
(the mixed-spin Kitaev model).
In other words, how is the many-body state realized
as the temperature decreases?
The extension to mixed-spin models also has the potential
to exhibit intriguing ground-state properties.
In fact, the mixed-spin quantum Heisenberg model has been
examined~\cite{Mitsuru,Pati,Fukui,Tonegawa,KogaMix,KogaMix2,Kolezhuk,Takushima},
and the topological nature of spins and lattice plays an important role
in stabilizing the non-magnetic ground states.
Moreover, a mixed-spin Kitaev model could be realized
by replacing transition-metal ions with other ions
in the Kitaev candidate materials.
Therefore, it is desirable to study this model
in order to discuss the nature of spin fractionalization in Kitaev systems.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{mixed-model.pdf}
\caption{
(a) Mixed-spin Kitaev model on a honeycomb lattice.
Solid (open) circles represent spin $s$ ($S$).
Red, blue, and green lines denote $x$, $y$, and $z$ bonds
between nearest neighbor sites, respectively.
(b) A plaquette with sites labeled $1-6$,
on which the corresponding operator
$W_p$ of Eq.~(\ref{eq:Wp}) is defined.
}
\label{fig:model}
\end{figure}
In this manuscript, we investigate the mixed-spin Kitaev model,
where two distinct spins $(s, S)\;[s<S]$ are periodically arranged
on the honeycomb lattice (see Fig.~\ref{fig:model}).
First, we show the existence of the $Z_2$ symmetry in each plaquette in the system.
In addition, by considering another local symmetry,
we show that the macroscopic degeneracy exists in each energy level
when one of the spins is half-integer and the other integer.
Exact diagonalization (ED) of the system
with $(s,S)=(\frac{1}{2},1)$ reveals that
the ground state has a macroscopic degeneracy,
which is consistent with the presence of the two kinds of local symmetries.
Using thermal pure quantum (TPQ) state methods~\cite{TPQ1,TPQ2},
we find that at least a double-peak structure appears
in the specific heat and that
a plateau appears at intermediate temperatures in the entropy,
similar to the behavior of the spin-$S$ Kitaev models~\cite{S1Koga}.
From systematic calculations for the mixed-spin systems with $s, S\le 2$,
we clarify that the smaller spin-$s$ is responsible
for the high-temperature properties.
The deconfinement picture to explain the ``spin fractionalization''
in the Kitaev model is addressed.
We consider the Kitaev model on a honeycomb lattice,
described by the Hamiltonian
\begin{eqnarray}
{\cal H} &=&
-J\sum_{\langle i,j \rangle_x}s_i^x S_j^x
-J\sum_{\langle i,j \rangle_y}s_i^y S_j^y
-J\sum_{\langle i,j \rangle_z}s_i^z S_j^z,\label{eq:H}
\end{eqnarray}
where $s_i^\alpha(S_i^\alpha)$ is the $\alpha(=x,y,z)$ component
of a spin-$s(S)$ operator at the $i$th site.
$J$ is the exchange constant between the nearest neighbor spin pairs $\langle i,j \rangle_\gamma$.
The model is schematically shown in Fig.~\ref{fig:model}(a).
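For concreteness, a minimal sketch of how the Hamiltonian in Eq.~(\ref{eq:H}) can be built for a small cluster is given below (in Python; this is an illustration, not the code used in this work, and the site ordering and bond list are assumptions to be supplied by the user):
\begin{verbatim}
import numpy as np

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for spin quantum number s."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)                      # m = s, s-1, ..., -s
    sp = np.zeros((d, d))                     # raising operator S+
    for k in range(d - 1):
        mk = m[k + 1]
        sp[k, k + 1] = np.sqrt(s * (s + 1) - mk * (mk + 1))
    sx = 0.5 * (sp + sp.T)
    sy = -0.5j * (sp - sp.T)
    return sx, sy, np.diag(m)

def site_op(op, i, dims):
    """Embed a single-site operator at site i via Kronecker products."""
    out = np.array([[1.0 + 0j]])
    for k, d in enumerate(dims):
        out = np.kron(out, op if k == i else np.eye(d))
    return out

def kitaev_hamiltonian(spins, bonds, J=1.0):
    """spins: spin value per site; bonds: list of (i, j, 'x'|'y'|'z')."""
    dims = [int(round(2 * s)) + 1 for s in spins]
    ops = [spin_ops(s) for s in spins]
    comp = {'x': 0, 'y': 1, 'z': 2}
    D = int(np.prod(dims))
    H = np.zeros((D, D), dtype=complex)
    for i, j, g in bonds:
        a = comp[g]
        H -= J * site_op(ops[i][a], i, dims) @ site_op(ops[j][a], j, dims)
    return H
\end{verbatim}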
We consider here the following
local Hermitian operator defined on each plaquette $p$:
\begin{eqnarray}
W_p &=& \exp\Big[i\pi \left(S_1^x+s_2^y+S_3^z+s_4^x+S_5^y+s_6^z\right)-i\pi\eta \Big],\label{eq:Wp}
\end{eqnarray}
where $\eta=[3(s+S)]$ is a phase factor.
By using the following relation for the spin operators,
$ e^{i\pi S^\alpha}S^\beta e^{-i\pi S^\alpha}=(2\delta_{\alpha\beta}-1)S^\beta,$
we find $[{\cal H}, W_p]=0$ for each plaquette and $W_p^2=1$.
Therefore, the mixed-spin Kitaev system has a $Z_2$ local symmetry.
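This symmetry can be checked numerically on a single hexagon. The sketch below reuses \texttt{spin\_ops}, \texttt{site\_op} and \texttt{kitaev\_hamiltonian} from above and verifies $W_p^2=1$ and $[{\cal H},W_p]=0$ for $(s,S)=(1/2,1)$; treating an isolated hexagon with this particular bond labeling is our assumption, chosen so that the spin component at each corner differs from the labels of both adjacent bonds.
\begin{verbatim}
from scipy.linalg import expm

# Single hexagon, sites 0..5 alternating (S, s) = (1, 1/2).
spins = [1.0, 0.5, 1.0, 0.5, 1.0, 0.5]
bonds = [(0, 1, 'z'), (1, 2, 'x'), (2, 3, 'y'),
         (3, 4, 'z'), (4, 5, 'x'), (5, 0, 'y')]
dims = [int(round(2 * s)) + 1 for s in spins]
ops = [spin_ops(s) for s in spins]
comp = {'x': 0, 'y': 1, 'z': 2}

# A = S_1^x + s_2^y + S_3^z + s_4^x + S_5^y + s_6^z (0-indexed here)
corners = ['x', 'y', 'z', 'x', 'y', 'z']
A = sum(site_op(ops[i][comp[g]], i, dims) for i, g in enumerate(corners))
eta = 3 * (0.5 + 1.0)                            # eta = 3(s + S)
W = np.exp(-1j * np.pi * eta) * expm(1j * np.pi * A)

H = kitaev_hamiltonian(spins, bonds)
assert np.allclose(W @ W, np.eye(W.shape[0]))    # W_p^2 = 1
assert np.allclose(H @ W, W @ H)                 # [H, W_p] = 0
\end{verbatim}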
It is known that this local $Z_2$ symmetry is important to
understand ground state properties in the Kitaev model.
We wish to note that the local operator $W_p$ on a plaquette $p$
commutes with those on all other plaquettes
in the spin-$S$ Kitaev models,
while this commutation relation is not always satisfied
in the present mixed-spin Kitaev model.
In fact,
we obtain $[W_p, W_q]\propto [e^{i\pi(s_i^x+S_j^y)},e^{i\pi(s_i^y+S_j^x)}]\propto
\sin[\pi(s_i^y-S_j^x)]$
when the plaquettes $p$ and $q$ share the same $z$ bond $\langle ij\rangle_z$.
This means that the local operator does not commute with the adjacent ones
in the mixed-spin Kitaev model with one of the spins being half-integer and
the other integer.
Instead, we introduce another local symmetry specific to this case.
When either $s$ or $S$ is half-integer and the other is integer,
the Hilbert space is divided into subspaces specified by
the set of the eigenvalues $w_p(=\pm 1)$ of
the $N_p(\le N/6)$ local operators $W_p$
defined on the plaquettes $p\in {\cal P}$,
where ${\cal P}$ is a set of the plaquettes whose corners
are not shared with each other.
Now, we assume the presence of a local operator $R_p$ on a plaquette $p\in {\cal P}$
satisfying the conditions
$R_p^2=1$ and the (anti)commutation relations
$[{\cal H}, R_p]=0$, $[W_p, R_q]=0\; (p\neq q)$, and $\{W_p, R_p\}=0$.
In the case that half-integer and integer spins are mixed,
such an operator can be introduced so that
the spins located on the corners of the plaquette
are inverted as
${\bf S}_{2i-1}\rightarrow -{\bf S}_{2i-1},
{\bf s}_{2i}\rightarrow -{\bf s}_{2i}\;(i=1,2,3)$
and the signs of the six exchange constants are changed on the bonds connected to the corner sites of the plaquette,
shown as the dashed lines in Fig.~\ref{fig:model}(b).
When a wavefunction for the energy level $E$ is given by the set of $\{w_p\}$ as
$|\psi\rangle=|\psi;\{w_1, w_2,\cdots,w_p,\cdots\}\rangle$,
we obtain ${\cal H}|\psi'\rangle=E|\psi'\rangle$ with the wave function
$|\psi'\rangle=R_p|\psi\rangle=|\psi;\{w_1, w_2,\cdots,-w_p,\cdots\}\rangle$.
Since the operators $R_p$ for arbitrary plaquettes in ${\cal P}$ generate
degenerate states, the presence of $R_p$ results in
at least $2^{N/6}$-fold degenerate ground states.
This qualitative dependence on the spin magnitudes
$s$ and $S$ can be confirmed in small clusters.
By using the ED method, we obtain ground state properties
in the twelve-site systems, as shown in Table~\ref{tbl2}.
We clearly find that, as for the ground-state degeneracy, the mixed-spin systems
can be divided into three groups.
When both spins $s$ and $S$ are integer,
the ground state is always a singlet.
In the half-integer case,
the four-fold degenerate ground state is realized in the $N=12a$ system,
while
the singlet ground state is realized in the $N=12b$ system.
This feature is essentially the same as the ground-state properties
of the $S=1/2$ Kitaev model,
where the ground-state degeneracy depends on the topology of the boundary condition.
By contrast,
the eight-fold degenerate state is realized
in the system with one of the spins being half-integer and
the other integer,
which suggests the macroscopic degeneracy in the thermodynamic limit.
\begin{center}
\begin{table}
\caption{
Ground state energy $E_g$ and its degeneracy $N_d$
in the mixed-spin $(s, S)$ Kitaev models with the twelve-site clusters.
}
\begin{tabular}{cc|cc|cc}
\hline \hline
\multirow{2}{*}{$s$} & \multirow{2}{*}{$S$} &\multicolumn{2}{c|}{$N=12a$}&\multicolumn{2}{c}{$N=12b$}\\
&& $E_g/JN$ & $N_d$ & $E_g/JN$ & $N_d$\\
\hline
1/2 & 1/2 & -0.20417 & 4 & -0.21102 & 1 \\
1/2 & 1 & -0.33533 & 8 & -0.34235 & 8 \\
1/2 & 3/2 & -0.47208 & 4 & -0.47389 & 1 \\
1/2 & 2 & -0.60214 & 8 & -0.60260 & 8 \\
1 & 1 & -0.66487 & 1 & -0.67421 & 1 \\
1 & 3/2 & -0.92855 & 8 & -0.93437 & 8 \\
1 & 2 & -1.19271 & 1 & -1.19567 & 1 \\
3/2 & 3/2 & -1.37169 & 4 & -1.38840 & 1 \\
3/2 & 2 & -1.76901 & 8 & -1.77691 & 8 \\
2 & 2 & -2.33449 & 1 & -2.35306 & 1 \\
\hline \hline
\end{tabular}
\label{tbl2}
\end{table}
\end{center}
To confirm this,
we focus on the mixed-spin system with $(s, S)=(1/2, 1)$.
By using the ED method,
we obtain the ground-state energies for several clusters up to 24 sites
[see Fig.~\ref{fig:model}(a)].
The obtained results are shown in Table~\ref{tbl}.
We find that finite-size effects on
the ground-state energy are small, and
its value is deduced to be $E_g/JN=-0.335$.
We also find that the ground state is $N_{\cal S}(=N_d/2^{N_p})$-fold degenerate
in each subspace
and its energy is identical in all subspaces ${\cal S}[\{w_p\}]$
except for the $N=18a$ system~\cite{N18a}.
The large ground-state degeneracy $N_d\ge 2^{N/6}$ is consistent with
the above conclusion.
We also find that the first excitation energy $\Delta$ is much smaller than
the exchange constant $J$,
as shown in Table~\ref{tbl}.
These results imply the existence of multiple low-energy states in the system.
\begin{center}
\begin{table}
\caption{
Ground state profile for several clusters in the Kitaev model
with $(s, S)=(1/2, 1)$. $N_p$ is the number of plaquettes,
where the local operator $W_p$ is diagonal in the basis set.
$N_d$ is the degeneracy in the ground state.
}
\begin{tabular}{ccccc|ccccc}
\hline \hline
$N$ & $N_p$ & $E_g/JN$ & $\Delta/J$ & $N_d$ & $N$ & $N_p$ & $E_g/JN$ & $\Delta/J$ & $N_d$ \\
\hline
12a & 1 & -0.33981 & 0.0071 & 8 & 20a & 2 & -0.33550 & 0.0013 &20 \\
12b & 2 & -0.34235 & 0.0024 & 8 & 20b & 3 & -0.34210 & 0.0041 &32 \\
16a & 2 & -0.33543 & 0.0002 &20 & 22 & 2 & -0.33531 & 0.0016 &20 \\
16b & 2 & -0.33895 & 0.0019 &16 & 24a & 4 & -0.33525 & 0.0031 &64 \\
18a & 3 & -0.33533 & 0.0018 &8 & 24b & 4 & -0.33511 & 0.0010 &64 \\
18b & 2 & -0.33537 & 0.0015 &40 \\
\hline \hline
\end{tabular}
\label{tbl}
\end{table}
\end{center}
Next, we consider thermodynamic properties in the Kitaev model.
It is known that there exist two energy scales in the $S=1/2$ Kitaev model~\cite{KITAEV20062},
which clearly appear as
a double-peak structure in the specific heat
and a plateau in the entropy~\cite{Nasu1,Nasu2}.
Similar behavior has been reported in the spin-$S$ Kitaev model~\cite{S1Koga}.
These suggest the existence of the fractionalization
in the generalized spin-$S$ Kitaev model.
An important point is that the degrees of freedom of the high-energy part
depend on the magnitude of the spins as $\sim(2S+1)^{N/2}$.
On the other hand, in the mixed-spin case, it is unclear which spin is responsible
for the high-temperature properties.
Here, we calculate thermodynamic quantities for twelve-site clusters,
by diagonalizing the corresponding Hamiltonian.
Furthermore, we apply the TPQ state method~\cite{TPQ1,TPQ2} to larger clusters.
In this calculation, the thermodynamic quantities are deduced
from the statistical average of the results obtained from at least
25 independent TPQ states.
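As a rough illustration of the microcanonical TPQ construction of Refs.~\cite{TPQ1,TPQ2} (a schematic sketch, not our production code), one repeatedly applies $(l-{\cal H}/N)$ to a random vector, where the constant $l$ must exceed the largest eigenvalue of ${\cal H}/N$, and attaches to the $k$-th state the inverse temperature $\beta_k\simeq 2k/[N(l-u_k)]$:
\begin{verbatim}
import numpy as np

def mtpq_sweep(H, n_sites, l, k_max, seed=0):
    """Microcanonical TPQ: (beta_k, u_k) pairs from one random state."""
    rng = np.random.default_rng(seed)
    psi = rng.normal(size=H.shape[0]) + 1j * rng.normal(size=H.shape[0])
    psi /= np.linalg.norm(psi)
    h = H / n_sites                            # l must exceed max eigenvalue of h
    results = []
    for k in range(1, k_max + 1):
        psi = l * psi - h @ psi                # apply (l - H/N)
        psi /= np.linalg.norm(psi)
        u = np.real(np.vdot(psi, h @ psi))     # energy density u_k
        beta = 2 * k / (n_sites * (l - u))     # inverse-temperature estimate
        results.append((beta, u))
    return results                             # average over independent seeds
\end{verbatim}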
Here, we calculate specific heat $C(T)=dE(T)/dT$,
entropy $S(T)=S_\infty -\int_T^\infty C(T')/T' dT'$, and
the nearest-neighbor spin-spin correlation
$C_S(T)=\langle s_i^\alpha S_j^\alpha \rangle_\alpha = -2E(T)/(3J)$,
where $S_\infty= \frac{1}{2} \ln (2s+1)(2S+1)$ and $E(T)$ is the internal energy per site.
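For the twelve-site clusters, where the full spectrum is available from ED, these quantities follow directly from the canonical ensemble; a minimal sketch (with energies shifted by the ground-state energy for numerical stability) is:
\begin{verbatim}
import numpy as np

def thermodynamics(evals, temps, n_sites, J=1.0):
    """C(T), S(T) per site and C_S(T) from a full ED spectrum."""
    e = evals - evals.min()                    # shift for stability
    out = []
    for T in temps:
        w = np.exp(-e / T)
        Z = w.sum()
        E = (e * w).sum() / Z                  # shifted internal energy
        E2 = (e ** 2 * w).sum() / Z
        C = (E2 - E ** 2) / T ** 2 / n_sites   # specific heat per site
        S = (np.log(Z) + E / T) / n_sites      # -> S_infty for T >> J
        E_site = ((evals * w).sum() / Z) / n_sites
        Cs = -2.0 * E_site / (3.0 * J)         # C_S = -2E(T)/(3J)
        out.append((C, S, Cs))
    return np.array(out)
\end{verbatim}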
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{tpq2.pdf}
\caption{
(a) Specific heat, (b) entropy, and (c) spin-spin correlation
as functions of temperature.
Shaded areas stand for the standard deviation of the results
obtained from the TPQ states.
}
\label{fig:tpq}
\end{figure}
The results for the mixed-spin systems with $(s, S)=(1/2, 1)$
are shown in Fig.~\ref{fig:tpq}.
We clearly find the multiple-peak structure in the specific heat.
Note that
finite size effects appear only at low temperatures.
Therefore, our TPQ results for the 24 sites appropriately capture
the high temperature properties ($T\gtrsim 0.01J$) in the thermodynamic limit.
Then, we find a broad peak around $T_H\sim 0.6J$,
which is clearly separated from the structure at low temperatures ($T<0.01J$).
Now, we focus on the corresponding entropy,
which is shown in Fig.~\ref{fig:tpq}(b).
This indicates that, with decreasing temperature,
the entropy decreases monotonically and a plateau structure is found
around $T/J\sim 0.1$.
The released entropy is $\sim\frac{1}{2}\ln 2$,
which is related to the smaller spin $(s=1/2)$.
Therefore,
multiple temperature scales do not appear at high temperatures
although the system is composed of two kinds of spins $(s, S)$.
However, this does not imply that only the smaller spins are frozen
while the larger spins remain paramagnetic at this temperature,
since the spin-spin correlations develop around $T\sim T_H$ and
a quantum many-body spin state
is formed, as shown in Fig.~\ref{fig:tpq}(c).
We have also confirmed that local magnetic moments do not appear even
in the wavefunction constructed by the superposition of the ground states
with different configurations of $\{w_p\}$.
By contrast,
the value $\frac{1}{2}\ln 2$ reminds us of the high-temperature feature of
itinerant Majorana fermions in the spin-1/2 Kitaev model~\cite{Nasu1,Nasu2}.
One then expects that, in the mixed-spin $(s, S)$ Kitaev model,
the higher-temperature properties are described by the smaller spin-$s$ Kitaev model,
where degrees of freedom $\sim (2s+1)^{1/2}$ are frozen at each site~\cite{S1Koga}.
In this case, a peak structure appears in the specific heat and
a plateau appears at $\sim S_\infty - \frac{1}{2}\ln (2s+1)$ in the entropy.
These interesting properties at higher temperatures are examined systematically below.
A further decrease in temperature reduces the entropy,
until finally $S \sim S_\infty-\ln 2$ at low temperatures, as shown in Fig.~\ref{fig:tpq}(b).
This may suggest that thermodynamic properties in this mixed-spin Kitaev model
with $(s, S)=(1/2, 1)$ are governed by two kinds of fractional quasiparticles originating
from the smaller $s=1/2$ spin by analogy with the spin fractionalization
in the spin-1/2 Kitaev model.
In this case, the existence of the remaining entropy $S \sim S_\infty-\ln 2$
should be consistent with the macroscopic degeneracy of the ground state
discussed above.
However, our TPQ data show a large system-size dependence at low temperatures,
and conclusive results could not be obtained.
Therefore, a systematic analysis is desired to clarify the nature of
low temperature properties.
To clarify the role of the smaller spins in the mixed-spin Kitaev models,
we calculate the entropy in the systems with $s, S\le 2$ and $N=12a$
by means of the TPQ state methods.
The results are shown in Fig.~\ref{fig:tpqS}.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{tpqS.pdf}
\caption{
(a) $S-S_\infty$
in the generalized $(s, S)$ Kitaev model
at higher temperatures.
Squares with lines represent data for the $S=1/2$ Kitaev model
obtained from the Monte Carlo simulations~\cite{Nasu1,Nasu2}.
Circles, triangles, and diamonds with lines represent
the TPQ data for $S=1, 3/2$, and $2$ cases~\cite{S1Koga}.
(b) $(S-S_\infty)/\ln (2s+1)$ and $C_s/(sS)$ as a function of $T/JS$.
}
\label{fig:tpqS}
\end{figure}
The plateau structure is clearly observed
in the curve of the entropy in the mixed-spin Kitaev models.
In addition, we find that the plateau is located around
$S=S_\infty-\frac{1}{2}\ln (2s+1)$, as expected above.
Therefore, we can say that, with decreasing temperature,
half of the degrees of freedom of the smaller spin $s$ are released.
This may be explained by the deconfined-spin picture of the Kitaev model.
In this picture, each spin $S$ is divided into two kinds of quasiparticles
with distinct energy scales: $2S$ $L$-quasiparticles and $2S$ $H$-quasiparticles,
which are dominant at lower and higher temperatures, respectively.
In the exactly solvable $S=1/2$ Kitaev model,
$H$- ($L$-)quasiparticles are identical to
itinerant Majorana fermions (localized fluxes).
In addition, this should explain the double-peak structure
in the specific heat of the spin-$S$ Kitaev model,
each peak of which corresponds to the release of half of the entropy~\cite{S1Koga}.
In our mixed-spin $(s,S)$ system,
the entropy release at higher temperatures can be interpreted as follows:
$2s$ fractional $H$-quasiparticles are present
with an energy scale of $\sim J$.
On the other hand, the remaining $H$-quasiparticles originating from
the larger spin $S$ possess energies much smaller than $J$
due to the absence of a two-dimensional network.
Therefore, only $2s$ $H$-quasiparticles form the many-body state
at high temperatures,
resulting in the plateau structure in the entropy.
Interestingly, the temperature $T^*$ characteristic of the plateau
in the entropy,
which may be defined such that $S(T^*)=S_\infty-\frac{1}{2}\ln (2s+1)$,
depends on the magnitude of the larger spin.
In fact, we find that
$T^*$ scales with the larger spin as $T^*\sim JS$,
as shown in Fig.~\ref{fig:tpqS}(b).
This is in contrast to the conventional temperature scale
$T^{**}\sim J\sqrt{s(s+1)S(S+1)}$,
which is derived from the high-temperature expansion.
This discrepancy is common to the spin-$S$ Kitaev model~\cite{S1Koga},
implying that quantum fluctuations are essential even in this temperature range
in the mixed-spin Kitaev models.
As for the spin-spin correlation, with decreasing temperature
it develops around $T/JS\sim 1$ and is almost saturated around $T^*$,
as shown in Fig.~\ref{fig:tpqS}(b).
This means that the many-body spin state is indeed realized at this temperature.
We also find that at low temperatures, the normalized spin-spin correlation $C_s/(sS)\sim 0.4$
is less than unity when $s$ and $S$ are large.
This suggests that the quantum spin liquid state is, in general, realized
in the generalized mixed-spin Kitaev model,
which is consistent with the presence of
magnetic fluctuations even in the classical limit~\cite{SuzukiYamaji}.
In summary, we have studied the mixed-spin Kitaev model.
First, we have clarified the existence of the local $Z_2$ symmetry
at each plaquette.
We could introduce an operator $R_p$ on the plaquette $p$ that
commutes with the Hamiltonian and anticommutes with $W_p$,
which leads to
a macroscopic degeneracy of each energy level
in the mixed-spin system with one of the spins being half-integer and
the other integer.
Using the TPQ state methods for several clusters,
we have found a double-peak structure in the specific heat and
a plateau in the entropy, which suggests
the existence of fractionalization in the mixed-spin system.
Deducing the entropy in the mixed-spin system with $s, S\le 2$ systematically,
we have clarified that the smaller spin plays a crucial role
in the thermodynamic properties at higher temperatures.
We expect that the present mixed-spin Kitaev systems are realizable in real materials
by substituting the magnetic ions in the Kitaev candidate materials with other magnetic ions carrying larger spins,
and therefore the present work should stimulate materials research
on mixed-spin Kitaev systems.
\begin{comment}
The local $Z_2$ symmetry in the honeycomb lattice
exists even in the random-spin and random-bond Kitaev model,
$ {\cal H} =\sum_{\langle i,j\rangle_x}J_{ij}S_i^x S_j^x+\sum_{\langle i,j\rangle_y}J_{ij}S_i^y S_j^y+
\sum_{\langle i,j\rangle_z}J_{ij}S_i^z S_j^z$,
where $\langle i,j\rangle_\alpha$ is the nearest neighbor pairs on the $\alpha$ bond,
as shown in Fig.~\ref{fig:model}(a).
In fact, the Hamiltonian commutes with the following local operator
$ W_p = \exp\Big[i\pi \left(S_1^x+S_2^y+S_3^z+S_4^x+S_5^y+S_6^z\right)-i\pi\eta\Big],$
where $\eta(=\sum_{i=1}^6 S_i)$ is a phase factor.
Therefore, it is naively expected that a non-magnetic ground state is realized
even in the random-spin and random-bond Kitaev model.
It is also interesting to clarify how robust thermodynamic properties characteristic of
the Kitaev model are against the introduction of magnetic impurities,
which is left for future work.
\end{comment}
\begin{acknowledgments}
Parts of the numerical calculations were performed
in the supercomputing systems in ISSP, the University of Tokyo.
This work was supported by Grant-in-Aid for Scientific Research from
JSPS, KAKENHI Grant Nos. JP18K04678, JP17K05536 (A.K.),
JP16K17747, JP16H02206, JP18H04223 (J.N.).
\end{acknowledgments}
\section*{Introduction}
Moving in the right way at the right time can be a matter of life and death. Whether avoiding a predator or searching for food, choosing the correct movements in response to specific stimuli is a crucial part of how an organism interacts with its environment. The repetitive, highly coordinated movements that make up behavioral stereotypes have been shown to be entwined with survival strategies in a number of species, for example the incredible correlation in posture between prey capture events in raptors \cite{csermley1989} and the escape response of \emph{C. elegans} when exposed to extreme heat \cite{Ryu2008}. Understanding these stereotypes is vital to creating a full picture of a species' interactions with its environment. If stereotypes represent evolved, selection-driven behavior in animals, might the same not be true for single-celled organisms?
This point of view may be particularly useful in understanding chemotaxis, the guided movement of a cell in response to a chemical gradient. During chemotaxis, eukaryotic cells change their shape through the repeated splitting and extension of actin-rich structures called pseudopods \cite{AndrewAndInsall2007, Neilson2011, Haastert2010}. Though this behavior is well known, the study of chemotaxis has traditionally focused on the signaling events that regulate cytoskeletal remodeling. Even where pseudopods are acknowledged to be relevant, the focus is on the biochemical mechanisms that generate and regulate them \cite{Otsuji2010, Beta2013, Davidson2017}. These mechanisms are, however, staggeringly complex \cite{Devreotes2010} and the way chemotaxis emerges from these lower-level processes remains largely unknown. Rather than delving deeper into the network of biochemical interactions, we can instead learn from the shape changes and movements that this intricate machine has evolved to produce. Such an approach, also known as morphological profiling, shows great promise in biomedicine \cite{Marklein2017}.
Here, we explore this question using \emph{Dictyostelium discoideum}, a model for chemotaxis noted for its reproducible directional migration towards cyclic adenosine monophosphate (cAMP) \cite{vanPost2007, Tweedy2016}, which it senses using typical G-protein coupled receptors. To capture cell shape (or posture) at any given point in time, we employ Fourier shape descriptors, a rotationally invariant method of quantifying shapes used previously to show that cell shape and environment are intrinsically linked \cite{Tweedy2013} (Fig. 1A). These shape data are naturally of high dimensionality, making further analysis difficult. We reduce their dimensionality using principal component analysis (PCA), a method used previously to obtain the key directions of variability from the shapes of cells \cite{Tweedy2013, Keren2008, Bakal2013} and animals (Fig. 1B) \cite{Ryu2008, Broekmans2016, Gyenes2016}. Our final challenge (and the focus of this paper) is to quantify behavior, which we define as the movement between shapes. There are many potential ways to do so \cite{Valetta2017, Gomez2016}, however we have adapted the variational maximum caliber (MaxCal) approach \cite{Phillips2010, Dill2013} to this end. These methods have several advantages over conventional alternatives: Firstly, Fourier descriptors capture all available information on shape, and the subsequent PCA provides a natural quantitative means of discarding its less informative elements. Easier methods, such as measuring aspect ratio or eccentricity, require us to assume that they provide a useful description \emph{a priori}, and cannot inform us how much (or what) we have discarded for the sake of simplicity. Secondly, our chosen methods are blind to the researcher's preconceptions, as well as to previous descriptions of shape and behavior. Any behavioral modes identified have emerged from the data without our deciding on (and imposing) them, as we might if using supervised machine learning or fitting parameters to a preconceived biochemical model. Finally, the minimal model we construct using maximum caliber makes no reference to any underlying biochemistry and therefore cannot make potentially incorrect assumptions about it. We demonstrate the usefulness of these methods by showing that they successfully discriminate between the behavior of drug-treated or genetically altered cells and their parental strains.
\section*{Results}
\subsection*{Maximum caliber approach to behavioral classification}
Cells continuously change shape as they migrate, creating trajectories in the space of shapes that are specific to their situation. For example, we have previously shown that cells follow different shape trajectories in environments with low and high chemoattractant signal-to-noise ratios \cite{Tweedy2013}, here defined as the local gradient squared over the background concentration (Fig. 1C). In this example, it is important to note that the distributions of cell shape for each condition overlap significantly. This means that it is not always possible to accurately determine the cell's condition from a static snapshot of its shape. In contrast, the dynamics of shape change in each condition are clearly distinct. Our aim here is to quantify the details of these shape changes, making a small set of values that can act as a signature for a given mode of behavior. We can then use such signatures to quantitatively compare, or to discriminate between, various conditions or genotypes. To this end, we employ the MaxCal method (Fig. 2A).
MaxCal was originally proposed by E. T. Jaynes \cite{Jaynes1980} to treat dynamical problems in physics, and is much like his better-known maximum entropy (MaxEnt) method used in equilibrium physics. The motivation is the same for both; we wish to find the probability of a given structure for a system in a way that neither assumes something we do not know, nor contradicts something we do know, i.e. an observation of the system's behavior. In the case of MaxEnt, this is achieved by finding the maximum Shannon entropy with respect to the probabilities of the set of possible configurations the system can take \cite{Bialek2006}. MaxCal uses the probabilities of the possible trajectories a dynamical system can follow instead. In this case, the entropy is replaced by the caliber \(C\) \cite{Phillips2010}, so called because the flow rate in a pipe relates to its caliber, or internal diameter. In essence, the method extracts the degree to which different rates or events within the system are coupled, or co-occur beyond random chance. This method has previously been used to successfully predict the dynamics of neuronal spiking, the flocking behavior of birds and gene regulatory networks \cite{Cavagna2014,Vasquez2012, Firman2017}.
Generally, the caliber takes the form
\begin{equation}
C(\{ p_{j} \}) = -\sum_{j} p_{j}\ln\left(p_{j}\right) + \mu \sum_{j} p_{j} + \sum_{n} \lambda_{n} \sum_{j} S_{n,j} p_{j} ,
\label{eqn:Cal}
\end{equation}
where \(p_{j}\) is the (potentially time-dependent) probability of the \(j\)th trajectory. The first term on the right-hand side of Eq. \eqref{eqn:Cal} represents a Shannon-entropy like quantity and the second ensures that the \(p_{j}\) are normalized. The third constrains the average values of some properties \( S_{n, j} \) of the trajectories \( j \) to the values of some macroscopically observed quantities \( \langle S_{n} \rangle \), making sure we do not contradict any known information.
By maximizing the caliber, the probabilities of the trajectories
\begin{equation}
p_{j} = Q^{-1} \exp \left(\sum_{n} \lambda_{n} S_{n,j} \right)
\label{eqn:MCprobs}
\end{equation}
are found, where \( Q = \sum_{j} \exp(\sum_{n} \lambda_{n} S_{n,j}) \) is the dynamical partition function and \( \{\lambda_{n}\} \) are a set of Lagrange multipliers. This Boltzmann-type distribution fulfils detailed balance, even for non-equilibrium problems. Practically, the problem is to find these Lagrange multipliers (and hence, the partition function). To this end, we exploit their relations to the externally observed average values of some quantities
\begin{equation}
\langle S_{n} \rangle = \frac{\partial \ln Q}{\partial \lambda_{n}},
\label{eqn:Observe}
\end{equation}
where the values of the \( \langle S_{n} \rangle \) are determined from experiment. This training process is equivalent to maximum-likelihood fitting to observed data.
As our interest is in cell shape and motility, we derive our values for the \( \langle S_{n} \rangle \) from the shape dynamics of migrating \emph{Dictyostelium} cells. In order to effectively parameterise our model, we must constrain the continuum of possible shape changes to a much smaller set of discrete unit changes in our principal components (PCs). We therefore build our model from discretized values of the shape measures PC 1 and 2, assigning them to the variables \(N_{1}\) and \(N_{2}\), respectively. Their values are analogous to particle numbers in a chemical reaction. The switching between continuous and discrete variables is possible as \(\frac{\sigma_{x}}{\langle N_{x}\rangle}\approx 0.035\) is small with \(x=1,2\) for \(PC_{x}\) and \(\sigma_{x}\) the standard deviation. We reduce the size of the time-step \(\delta t\) until we no longer observe changes greater than 1 in a single \(\delta t\) (as in the derivation of master equations). As PC 2 accounts for less overall variation than PC 1 (see Fig. 1B), we naturally reach this minimal change for a much larger value of \(\delta t\), which is undesirable because by the time \(\delta t\) is small enough for PC 1, changes in PC 2 are almost never observed, making correlations between the two PCs difficult to detect. We therefore scale all changes in PC 2 by a factor of \(\sigma_1/\sigma_2\) in order that unit changes are observed in both PCs for a similar value of \(\delta t\). Practically, our training data yielded a \(\delta t\) of 0.1875s (as each 3s frame in the video data was divided into 16 sections, in which the intermediate PC values were linearly interpolated).
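As an illustration of this discretization step, the sketch below (in Python; the rounding convention is our assumption) interpolates the two PC time series onto sub-frames, rescales PC 2 by \(\sigma_1/\sigma_2\), and emits the unit-step events from which the observed rates are tallied:
\begin{verbatim}
import numpy as np

def discretize_pcs(pc1, pc2, dt_frame=3.0, n_sub=16):
    """Unit-step events per delta-t = dt_frame / n_sub (0.1875 s here)."""
    scale = pc1.std() / pc2.std()              # sigma_1 / sigma_2
    t = np.arange(len(pc1)) * dt_frame
    ts = np.linspace(0.0, t[-1], (len(pc1) - 1) * n_sub + 1)
    x1 = np.interp(ts, t, pc1)                 # linear interpolation
    x2 = np.interp(ts, t, pc2 * scale)
    n1 = np.round(x1 - x1.min()).astype(int)   # particle-number analogue N_1
    n2 = np.round(x2 - x2.min()).astype(int)
    # events in each delta-t should be -1, 0 or +1; check |diff| <= 1 holds
    return np.diff(n1), np.diff(n2)
\end{verbatim}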
We limit the possible macroscopic shape changes in \(\delta t\) to the following: an increase, a decrease, or no change in each PC. As changes in each PC can be considered independently, this gives us a total of 3x3 = 9 cases (that is, no change from the current position, or a move to any of the 8 neighbouring spaces, see Fig. 2A inset). These macroscopic cases are taken to be the observable effects of an underlying microscopic structure. From our analogy of a chemical reaction, we treat increases as particle creation and denote the microscopic variable for an increase in trajectory \(j\) as \( S^{x}_{+,j} \), where \(x\in\{1,2\}\) corresponds to PC 1 and 2, respectively. For small \(\delta t\) this variable is binary, taking the value \( 1 \) when \(N_{x}\) increases over a single time-step and taking the value \( 0 \) otherwise. Decreases will be treated as particle decay, with \(N_{x}\) separate variables \( \{S^{x,i}_{-,j}\}\) used to denote the decay of the \(i\)th particle, with \(1 \le i \le N_{x}\). These \( \{S^{x,i}_{-,j}\}\) are equal to \( 1 \) if the \(i\)th particle decays in \(\delta t\) and equal to \( 0 \) otherwise. Hence, in each \(\delta t\) there are \(N_{x}+2\) possible microtrajectories for each component; an increase, no change, or the removal of any of the \(N_{x}\) particles (Fig. 2B). We choose such a first-order decay over a zeroth-order decay in order to introduce a virtual force, bringing the system back toward the mean (see Fig. S2). As the two components may change independently, there are \((N_{1}+2)(N_{2}+2)\) possible microtrajectories in a single \(\delta t\) over PC 1 and 2. Applying Eq. 3, we constrain the probabilities of these microtrajectories such that they agree with the macroscopically observed rates \(\langle S^{x}_{\alpha} \rangle\), with \(\alpha\in\{+,-\}\) an increase or decrease in component \(x\), respectively.
We then expand the model to include a following time-step, allowing us to capture short-term correlations between events. This increases the number of possible trajectories substantially. The number of microtrajectories in a given time-step depends on \(N_{x}\) at time \(t+\delta t\), and this quantity is different dependent on the pathway taken in the first time-step, so we must include this history dependence. For example, a reduction in component \(x\) can happen in \(N_{x}\) ways, and will cause \(N_{x}\) to go down by one. This change is followed by \((N_{x}-1)+2\) possible microtrajectories in the following time-step. Multiplying the quantities for each time-step gives us \(N_{x}\big(N_{x}+1\big)\) microtrajectories in which there is a decrease in the first time-step. Accounting for the effect of the changing values of \(N_{1}\) and \(N_{2}\) over interval \({t,t+\delta t}\) in each microtrajectory on the interval starting at time \(t+\delta t\), the number of microtrajectories over \(2\delta t\) is \((N_{1}^{2}+3N_{1}+5)(N_{2}^{2}+3N_{2}+5)\). Each observable has a corresponding value in trajectory \(j\) of \(S^{xy}_{\alpha \beta, j}\), which is 1 if the correlation is observed and 0 otherwise. We can reduce this to 10 time-correlated observables by assuming symmetry under order-reversal, \emph{i.e.} \(S^{xy}_{\alpha \beta, j} \equiv S^{yx}_{\beta \alpha, j}\) (Fig. 2C). This assumption is justified: if we consider a negatively correlated movement between PC1 and PC2, we may see transitions in the order \(1+, 2-, 1+\). Here the two couplets \(1+,2-\) and \(2-,1+\) both represent the same phenomenon (see Fig S3).
This leads to an additional 16 observables \(\langle S_{xy}^{\alpha \beta} \rangle\), where \(x,y\in\{1,2\}\) are shape PCs and \(\alpha, \beta \in \{-,+\}\) denote a change in the component displayed above. We constrain our analysis to the first two shape components only, as further components account for relatively little residual variance in shape, whilst increasing computational complexity geometrically.
As an example, we show the partition function in a single shape component, in which there are 5 observables,
\(\{\langle S^{+} \rangle,\langle S^{++} \rangle, \langle S^{+-} \rangle, \langle S^{-} \rangle, \langle S^{--} \rangle\}\): \begin{align}
Q_{N} &= \gamma^{+}\big[\gamma^{+}\gamma^{++}+1+\big(N+1\big)\gamma^{-}\gamma^{+-}\big] \nonumber\\
&+N\gamma^{-}\big[\gamma^{+}\gamma^{+-}+1+\big(N-1\big)\gamma^{-}\gamma^{--}\big] \nonumber \\
&+ \gamma^{+}+1+N\gamma^{-},
\label{eq:exampleQ}
\end{align} where \(\gamma^{\alpha}=e^{\lambda^{\alpha}}\) corresponds to a rate (when divided by \(\delta t\)), with \(\lambda^{\alpha}\) the Lagrange multiplier associated with observable \( \langle S^{\alpha} \rangle \). The first line in Eq. \eqref{eq:exampleQ} shows all possible transitions that begin with an increase over the first time-step, and so the whole line shares the factor \(\gamma^{+}\), the rate of increase. A subsequent increase contributes a further \(\gamma^{+}\), as well as a coupling term \(\gamma^{++}\) which allows us to capture the likelihood of adjacent transitions beyond the naive probability \(\gamma^{+} \gamma^{+}\). A subsequent decrease can happen in \(N+1\) ways, each linked to the rate of decrease \(\gamma^{-}\). The term \(\gamma^{+-}\) is a coupling constant, controling the likelihood of an adjacent increase and decrease beyond the naive probability \(\gamma^{+} \gamma^{-}\). Finally, the +1 allows for the possibility of no transition occurring in the subsequent time-step. The second and third lines correspond to a decrease in the first time-step, and no transition occurring in the first time-step, respectively.
The Lagrange multipliers corresponding to observables are found using Eq. \eqref{eqn:Observe}, which yields a set of equations to be solved simultaneously (see supplementary material for details). In the case of a single component, these equations are
\begin{gather}
\refstepcounter{equation}
\begin{align}
\tag{\theequation a}
\langle S^{+} \rangle &= \gamma^{+}\bigg[2\gamma^{+}\gamma^{++} + 2 + (2N+1)\gamma^{-}\gamma^{+-}\bigg]\\ \tag*{}
\langle S^{-} \rangle &= \gamma^{-}\bigg[(2N+1)\gamma^{+}\gamma^{+-}+2N \\\tag{\theequation b} &\;\;\;\;\;\;\;\;\;\;\; + 2N(N-1)\gamma^{-}\gamma^{--}\bigg]
\end{align}\\ \begin{align}\tag{\theequation c}
\langle S^{++} \rangle &= \gamma^{+}\gamma^{+}\gamma^{++}\\ \tag{\theequation d}
\langle S^{+-} \rangle &= 2N\gamma^{+}\gamma^{-}\gamma^{+-}\\ \tag{\theequation e}
\langle S^{--} \rangle &= N(N-1)\gamma^{-}\gamma^{-}\gamma^{--}.
\end{align}
\end{gather}
The equations for the two-component partition function and Lagrange multipliers can be found in the SI.
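To illustrate the training step numerically, the sketch below enumerates the nine two-step trajectories of the one-component model, normalizes by \(Q_{N}\) following Eq. \eqref{eqn:Observe}, and solves for the \(\gamma\)'s by root finding; holding \(N\) fixed at a representative integer value is a simplifying assumption made here for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def two_step_trajectories(gam, N):
    """Weights and event counts (S+, S-, S++, S+-, S--) for Eq. (4)."""
    gp, gm, gpp, gpm, gmm = gam
    return [
        (gp * gp * gpp,               (2, 0, 1, 0, 0)),  # +,+
        (gp,                          (1, 0, 0, 0, 0)),  # +,0
        (gp * (N + 1) * gm * gpm,     (1, 1, 0, 1, 0)),  # +,-
        (N * gm * gp * gpm,           (1, 1, 0, 1, 0)),  # -,+
        (N * gm,                      (0, 1, 0, 0, 0)),  # -,0
        (N * (N - 1) * gm * gm * gmm, (0, 2, 0, 0, 1)),  # -,-
        (gp,                          (1, 0, 0, 0, 0)),  # 0,+
        (1.0,                         (0, 0, 0, 0, 0)),  # 0,0
        (N * gm,                      (0, 1, 0, 0, 0)),  # 0,-
    ]

def fit_gammas(observed, N, guess=(0.1, 0.1, 1.0, 1.0, 1.0)):
    """Solve <S_n> = d(ln Q)/d(lambda_n) for the five gammas."""
    def residual(g):
        traj = two_step_trajectories(g, N)
        Q = sum(w for w, _ in traj)
        means = [sum(w * c[k] for w, c in traj) / Q for k in range(5)]
        return np.array(means) - np.array(observed)
    return fsolve(residual, np.array(guess))
\end{verbatim}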
This method effectively allows us to build a map of the commonality of complex, correlated behaviors relative to basic rates of shape change (as quantified using principal components). For a given Lagrange multiplier governing a particular correlation, a value less than zero indicates a behavior that is less common than expected, and a value greater than zero represents a behavior that is more common.
\subsection*{Stereotypical behavior without biochemical details}
After training our model on \emph{Dictyostelium} shape trajectories, we confirmed that the method had adequately captured the observed correlations by using them to simulate the shape changes of untreated cells responding to cAMP. In order to illustrate the importance of the correlations, we also ran control simulations trained only on the basic rates of increase and decrease in each PC without these correlations. We compared the activities of the uncorrelated and correlated simulations against the observed data. The uncorrelated model acts entirely proportional to the observed rates (though, interestingly, did not match them; Fig. 2D). In contrast, individual cells from the experimental data show very strong anticorrelation, with increases in one component coupled with decreases in the other. This behavior is clearly replicated by the correlated simulations, in both cases appearing in the plot as a red diagonal from the bottom left to the top right. Furthermore, we see suppression of turning behavior in both PCs, with the most poorly represented activity (relative to chance) being a switch in direction in either PC (for example 1+ followed by 1-). This too is reflected in the correlated simulations.
The predictive power of MaxCal simulations goes beyond those correlations on which they were directly trained: We tested the simulations' ability to predict repetition of any given transition. These patterns took the form of \(N\) transitions in \(T\) time steps, e.g. five 1+ transitions in ten time-steps. The MaxCal model predicted frequencies of appearance for these patterns that closely resembled the real data (Fig. 3A, model in red, real data in black). In contrast, the uncorrelated model predicted patterns at a much lower rate, for example there are runs of 5 consecutive increases in PC 1 in the real data at a rate of around one in 1.35 minutes. The correlated model predicts this pattern rate to be one in every three minutes. The uncorrelated model predicts the same pattern at a rate of one in 6.67 hours. This result indicates that no higher-order correlations are required to recapitulate the data, allowing us to avoid the huge increase in model complexity that their inclusion would entail.
The greater predictive power of the MaxCal model is reflected by its lower Jensen-Shannon divergence from the observed data for these kinds of pattern (Fig. 3B). The MaxCal model also more closely matches the observed probabilities of generating a given number of transitions in a row, with predictions almost perfect up to 4 transitions in a row (twice the length-scale of the measured correlations), and far stronger predictive power than the uncorrelated model over even longer timescales (Fig. 3C).
\subsection*{A real world application of MaxCal methods to discriminate between genotypes}
We wondered whether the MaxCal methods would accurately discriminate between biologically relevant conditions. To investigate this, we used two comparisons. First, we compared shape data from control AX2 cells against the same cells treated with two drugs targeting cytoskeletal function: the phospholipase A2 inhibitor p-bromophenacyl bromide (BPB) and the phosphoinositide 3-kinase inhibitor LY294002 (LY) (for details, see \cite{Meier2011}). Second, we compared a stable myosin heavy-chain knockout against its parent strain (again AX2) (Fig. 4A). We first looked at the effects of these conditions on the distribution of cell shapes, to see whether their effect could be identified from shape, rather than behavior. The drug treatment caused a substantial change in the distribution within the population (Fig. 4B), but still left a substantial overlap. In contrast, the \( \emph{mhcA}^{-} \) cells showed no substantial difference from their parent in shape distribution (Fig. 4C). In both cases, the identification of a condition from a small sample of shape data would not be feasible.
We then compared the behavioral Lagrange multipliers of each condition, found by MaxCal, producing distributions for the estimated values of these by bootstrapping our data (sampling with replacement). The values of \(\gamma_{\:1\: 1}^{+-}\) and \(\gamma_{\:2\: 2}^{+-}\) are lower in the untreated condition than in the drug-treated condition, indicating the persistence of shape change in WT cells (Fig. 4D). The anticorrelation between PCs 1 and 2 through pseudopod splitting is reflected in \(\gamma_{\:1\: 2}^{+-}\) and \(\gamma_{\:1\: 2}^{-+}\), both of which have values greater than 1 in WT cells. In comparison, the drug-treated cells have only a moderate anticorrelation. In the \( \emph{mhcA}^{-} \) strain, the differences in the values of \(\gamma_{\:1\:1}^{+-}\), \(\gamma_{\:1\:2}^{+-}\) and \(\gamma_{\:1\:2}^{-+}\) when compared with their parent show similar changes to those observed in drug treatment (Fig. 4E). In both cases, the differences highlighted by these dynamical measurements are striking.
We then applied the MaxCal model to the task of classification. We settled upon classification using \emph{k}-nearest-neighbors (kNN). In order to see how the strength of our prediction improved with more data, we classified based on the preferred class of \emph{N} repeats, all known to come from the same data set. We estimated the classification power of our methods by cross validation, dividing the drug-treated data and its control into three sets containing different cells, and dividing the \( \emph{mhcA}^{-} \) and its parent by the day of the experiment. We first performed the classification by shape alone, taking small subsamples of frames from each cell and projecting them into
their shape PCs, with our distance measure for the kNN being the Euclidean distance in these PCs. With one, two or three PCs, we were able to achieve reasonable classification of the drug-treated cells against their parent as data set size increased, with the accuracy of classification leveling off at around 0.85 (with 1 being perfect and 0.5 being no better than random chance, Fig. 5A-C, blue). In contrast, classification of \( \emph{mhcA}^{-} \) cells was little better than random chance, even with relatively many data (Fig. 5A-C, green). This is unsurprising given the similarity of the distributions of these two conditions. We then calculated our MaxCal multipliers for subsamples of each of these groups, bootstrapping 100 estimates from \(20\%\) of each set. We then repeated our kNN classification, instead using for a distance measure the two MaxCal values that best separate our training classes. As the test data come from entirely separated sources (in the case of drug-treated cells coming from different cells, and in the case of the \( \emph{mhcA}^{-} \) being taken on different days), we can be confident that we do not conflate our training and test data. In both the drug-treated case and the \( \emph{mhcA}^{-} \) mutant, the dynamics differ very cleanly between our test and control data. As such, our classification is close to perfect even for only a few samples (Fig. 5D).
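A minimal sketch of this classification step (using scikit-learn's kNN; the feature matrices of bootstrapped multiplier estimates are assumed to be precomputed) is:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_condition(train_sigs, train_labels, test_sigs, k=5):
    """Majority vote of a kNN over bootstrapped MaxCal signatures."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_sigs, train_labels)          # rows: signatures, cols: gammas
    votes = knn.predict(test_sigs)             # one vote per bootstrap repeat
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
\end{verbatim}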
As the two Lagrange multipliers that best classified the data both encoded correlations between adjacent time-steps, we guessed that this short-term memory might be key to recapitulating the dynamic properties of cell shape change. A key aspect of the shape dynamics of AX2 cells is the anticorrelation between the first two PCs at the single-cell level (which is definitionally absent at the population level, as PCA produces uncorrelated axes). To see if memory is vital to recapitulating this dynamical aspect of cell shape change, we constructed two versions of the master equation for our MaxCal system (see SI for details). The first is Markovian (that is, at a given time the probabilities of each possible next event only depend on the current state of the system). We ran Gillespie simulations corresponding to this master equation, and compared the correlations of trajectories from these simulations with those from real data. The expected anticorrelation is clearly observed in the data (Fig. 5E, black line), but the trajectories of our Markovian Gillespie simulations fail to recapitulate it (Fig. 5E, blue line).
We then introduced a memory property to the simulations, allowing the probabilities of each possible event to depend on the nature of the previous event (with the very first event taken using the uncorrelated probabilities). The model has nine possible states (with each state containing its own set of event probabilities), corresponding to the nine possible events that might have preceded it. These are an increase, a decrease, or no change in each PC independently (3x3 = 9 events). These non-Markovian simulations recovered the distribution of correlations observed in the data (Fig. 5E, red line). This indicates that such features of cell shape change can only be addressed by methods that acknowledge a dependency on past events.
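The one-step memory can be implemented by letting the propensity table depend on the last event, as in the sketch below; the structure of the table is our assumption, consistent with the nine-state description above, and its entries would be built from the fitted rates.
\begin{verbatim}
import numpy as np

def gillespie_one_step_memory(rates, t_max, seed=0):
    """rates[prev_event] -> {event: propensity}; prev_event is None at start."""
    rng = np.random.default_rng(seed)
    t, prev, events = 0.0, None, []
    while t < t_max:
        table = rates[prev]                    # propensities depend on history
        names = list(table)
        a = np.array([table[n] for n in names], dtype=float)
        t += rng.exponential(1.0 / a.sum())    # exponential waiting time
        prev = names[rng.choice(len(names), p=a / a.sum())]
        events.append((t, prev))
    return events
\end{verbatim}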
\section*{Discussion}
Eukaryotic chemotaxis emerges from a vast network of interacting molecular species. Here, instead of examining the molecular details of chemotaxis in \emph{Dictyostelium discoideum}, we have inferred properties that capture cell behavior from observations of shape alone. For this purpose, we quantified shape using Fourier shape descriptors, reduced these shape data to a small, tractable dimensionality by principal component analysis, and built a minimal model of behavior using the maximum caliber variational method. Unlike conventional modeling approaches, such as master equations and their simplifications, our method is intrinsically non-Markovian, capturing memory effects and short-term history in the values of the behavioral signature it yields (see SI for further discussion of memory, and a comparison to the master equation). Our approach has the advantage of ease, requiring only the observation of what a cell naturally does \cite{Keren2008}, without tagging or genetic manipulation, as well as of generality, being independent from the specific and poorly understood biochemistry of any one cell type. This is important to understanding chemotaxis, as the biochemistry governing this process can vary greatly: for example, the spermatozoa of \emph{C. elegans} chemotax and migrate with no actin at all \cite{Nelson1982}, but strategies for accurate chemotaxis might be shared among biological systems and cell types.
A number of recent studies have demonstrated the importance of pairwise or short-scale correlations in determining complex behaviors both in space and time. The behavior of large flocks of starlings can be predicted from the interactions of individual birds with their nearest neighbors \cite{Toner1995, Bialek2012}, and the pairwise correlations between neurons can be used to replicate activity at much higher coupling orders, correctly reproducing the coordinated firing patterns of large groups of cells \cite{Bialek2006}. Furthermore, cells in many circumstances use short-range spatial interactions to organise macroscopically \cite{DePalo2017}. Interestingly, these systems appear to exhibit self-organized criticality \cite{Bialek2011, Dante2010}, in which the nature of their short-range interactions leads to periods of quiescence punctuated by sudden changes. This could indicate that the coupling strengths inherent to a system (such as the temporal correlations in our shape modes in \emph{Dictyostelium} cells) are crucial for complex behavior. Absence of this behavior could be an indicator of disease, as illustrated by both of our aberrant cell types.
Here, we employ a very simple classifier to demonstrate the usefulness of our MaxCal multipliers as a measurement by which we can classify cell behaviours. We choose MaxCal because it is a minimal, statistical approach to modelling a complex phenomenon, allowing high descriptive power with no assumptions made about the underlying mechanism.
As our understanding of the molecular biology controlling cell shape improves, an interesting alternative would be to use our data in training recurrent neural network (RNN) auto-encoders, a self-supervised method in which the neural network trains a model to accurately represent the input data. In particular, long short-term memory RNNs have recently been used to accurately identify mitotic events \cite{Phan2018} in cell video data and classes of random migration based on cell tracks \cite{Kimmel2019}. The two approaches are not mutually exclusive; MaxCal can provide a neat, compressed basis in which to identify behavioural states of cells, whilst RNNs could be used to learn time-series rules for transitions between behavioural states.
It is increasingly clear that cell shape is a powerful discriminatory tool \cite{Marklein2017}. For example, diffeomorphic methods of shape analysis have the power to discriminate between healthy and diseased nuclei \cite{Rohde2008}. Shape characteristics can also be used as an indicator of gene expression \cite{Bakal2013}: an automated, shape-based classification of \emph{Drosophila} haemocytes recently showed that shape characteristics correlate with gene expression levels, specifically that depletion of the tumor suppressor PTEN leads to populations with high numbers of rounded and elongated cells. Of particular note is the observation from this study that genes regulate shape transitions as opposed to the shapes themselves, illustrating the importance of tools to quantify behavior as well as shape. This may be an appropriate approach to take if, for example, creating automated assistance for pathologists when classifying melanocytic lesions (a task which has already proved tractable to automated image analyses \cite{Esteva2017}), as classes are few in number, predefined and extensive training data are available. A drawback of the method used by \cite{Bakal2013} is that their classes are decided in advance, and the divisions between them are arbitrary. This means that the method cannot find novel important features of shape by definition, as it can only pick between classes decided upon by a person in advance.
A stronger alternative would be to take some more general description of shape and behavior (such as the one we detail here), which could be used to give biopsied cells a quantitative signature. Training would then map these data not onto discrete classes, but onto measured outcomes based on the long-term survival of patients. It will be important for any such method to account for the heterogeneity of primary tissue samples as small sub-populations, lost in gross measurements, may be key determinants of patient outcomes. Such an approach would allow a classifier to identify signs of disease and metastatic potential not previously observed or conceived of by the researchers themselves. As machine learning advances, it will be vital to specify the problem without the insertion of our own biases. Then, behavioral quantification will become a powerful tool for medicine.
\section{Methods}
{\bf Cell culture.} The cells used in our experiments are either of the \emph{Dictyostelium discoideum} AX2 strain, or a stable myosin heavy-chain knockout (\( \emph{mhcA}^{-} \)) in an AX2 background. Cells are grown in a medium containing \(10 \mu g/mL\) Geneticin 418 disulfate salt (G418) (Sigma-Aldrich) and \(10 \mu g/mL\) Blasticidine S hydrochloride (Sigma-Aldrich). Cells are concentrated to \( c = 5 \times 10^{6}\) cells\(/mL\) in shaking culture (150 rpm). Five hours prior to the experiment, cells are washed with \(17 mM \) K-Na PBS pH 6.0 (Sigma-Aldrich). Four hours prior to the experiment, cells are pulsed every 6 minutes with 200nM cAMP, and are introduced into the microfluidic chamber at \(c = 2.5 \times 10^{5} \)cells\(/mL\). Measurements are performed with cells starved for 5-7 h. Drug-treated cells were exposed to \(200pM\) p-bromophenacyl bromide and \(50nM\) LY294002. The numbers of cells sampled in the AX2 control, drug-treated, AX2 parent, and \( \emph{mhcA}^{-} \) conditions are, respectively, 313, 23, 858 and 198.
{\bf Microfluidics and imaging.} The microfluidic device is made of a \(\mu\)-slide 3-in-1 microfluidic chamber (Ibidi) as described in (12), with three \(0.4 \times 1.0 mm^{2}\) inflows that converge under an angle of \(\alpha = 32^{\circ}\) to the main channel of dimension \(0.4 \times 3.0 \times 23.7 mm^{3}\). Both side flows are connected to reservoirs, built from two \(50 ml\) syringes (Braun Melsungen AG), separately connected to a customized suction control pressure pump (Nanion). Two micrometer valves (Upchurch Scientific) reduce the flow velocities at the side flows. The central flow is connected to an infusion syringe pump (TSE Systems), which generates a stable flow of \(1 ml/h\). Measurements were performed with an Axiovert 135 TV microscope (Zeiss), with LD Plan-Neofluar objectives \(20x/0.50 N.A.\) and \(40x/0.75 N.A.\) (Zeiss) in combination with a DV2 DualView system (Photometrics). A solution of \(1 \mu M\) Alexa Fluor 568 hydrazide (Invitrogen) was used to characterize the concentration profile of cAMP (Sigma-Aldrich) because of their comparable molecular weight.
{\bf Image preprocessing.} We extracted a binary mask of each cell from the video data using Canny edge detection, thresholding, and binary closing and filling. The centroid of each mask was extracted to track cell movement. Overlapping masks from multiple cells were discarded in order to avoid unwanted contact effects, such as distortions through contact pressure and cell-cell adhesion. For each binary mask, the coordinates with respect to the centroid of 64 points around the perimeter were encoded in a complex number, with each shape therefore recorded as a 64 dimensional vector of the form \({\bf S} = {\bf x} + i{\bf y}\). These vectors were passed through a fast Fourier transform in order to create Fourier shape descriptors. Principal component analysis was performed on the power spectra (with the power spectrum \(P(f) = |s(f)|^{2}\) for the frequency-domain signal \(s(f)\)) to find the dominant modes of variation. This approach is superior to simple descriptors such as circularity and elongation, as key directions of variability within the high-dimensional shape data cannot be known a priori. As we have previously reported \cite{Tweedy2013}, 90\% of \emph{Dictyostelium} shape variation can be accounted for using the first three principal components (PCs), corresponding to the degree of cell elongation (PC 1), pseudopod splitting (PC 2) and polarization in the shape (PC 3) (Fig. 1B), with around 85\% of variability accounted for in just two, and 80\% in one. \newline
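A condensed sketch of this pipeline (using scikit-image for contour extraction; the uniform resampling of boundary points is a simplification of the procedure described above) is:
\begin{verbatim}
import numpy as np
from skimage.measure import find_contours
from sklearn.decomposition import PCA

def fourier_descriptors(mask, n_points=64):
    """Power spectrum of the centroid-relative complex boundary signal."""
    contour = find_contours(mask.astype(float), 0.5)[0]   # (row, col) points
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    y, x = contour[idx, 0], contour[idx, 1]
    rows, cols = np.nonzero(mask)
    s = (x - cols.mean()) + 1j * (y - rows.mean())        # S = x + iy
    return np.abs(np.fft.fft(s)) ** 2                     # rotation-invariant

def shape_pcs(masks, n_components=3):
    spectra = np.array([fourier_descriptors(m) for m in masks])
    return PCA(n_components=n_components).fit_transform(spectra)
\end{verbatim}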
\section{Acknowledgements}
We are grateful to B\"{o}rn Meier for sharing his data, and to both Andr\'{e} Brown, Linus Schumacher and Peter Thomason for a critial reading of the manuscript. This work was supported by Cancer Research UK core funding (L.T.), the Deutsche Forschungs-gemeinschaft (DFG fund HE5958/2-1), the Volkswagen Foundation grant I/85100 (D.H), and the BBSRC grant BB/N00065X/1 (R.G.E.) well as the ERC Starting Grant 280492-PPHPI (R.G.E.).
\section*{Author contributions statement}
LT and RGE designed the study. LT and PW performed the experiments, and LT conducted
data analysis and modelling. All authors (LT, PW, DH, RHI, RGE) analyzed results and data,
and wrote the paper.
\section*{Additional information}
\textbf{Competing interests}
All authors declare that there is no conflict of interest, financial or non-financial.
\section{Introduction}
Word embeddings are fixed-length vector representations for words \cite{mikolov2013efficient,cui2018survey}. In recent years, the morphology of words has been drawing more and more attention \cite{cotterell2015morphological}, especially for Chinese, whose writing system is based on logograms\footnote{https://en.wikipedia.org/wiki/Logogram}.
\begin{CJK}{UTF8}{gbsn}
With the gradual exploration of the semantic features of Chinese, scholars have found that not only words and characters are important semantic carriers, but the stroke\footnote{https://en.wikipedia.org/wiki/Stroke\_(CJK\_character)} features of Chinese characters are also crucial for inferring semantics \cite{cao2018cw2vec}. A Chinese word usually consists of several characters, and each character can be further decomposed into a fixed and invariant stroke sequence, much like the letter sequence that builds an English word. In Chinese, a particular sequence of strokes can reflect inherent semantics. As shown in the upper half of Figure \ref{fig:example}, the Chinese character ``驾" (drive) can be decomposed into a sequence of eight strokes, where the last three strokes together correspond to a root character ``马" (horse), analogous to the root ``clar" of the English words ``declare" and ``clarify".
Moreover, Chinese is a language that originated from Oracle Bone Inscriptions (a kind of hieroglyphics). Its character glyphs have a graph-like spatial structure which can convey abundant semantics \cite{su2017learning}. The critical reason why Chinese characters are so rich in morphological information is that they are composed of basic strokes arranged in a 2-D spatial order, and different spatial configurations of strokes may lead to different semantics. As shown in the lower half of Figure \ref{fig:example}, the three Chinese characters ``入" (enter), ``八" (eight) and ``人" (man) share exactly the same stroke sequence, but they have completely different semantics because of their different spatial configurations.
\end{CJK}
\begin{figure*}[t]
\centering
\includegraphics[width=5.5in]{figure/example.pdf}
\begin{CJK}{UTF8}{gbsn}
\caption{The upper part is an example illustrating the inclusion relationship hidden in stroke order and character glyphs. The lower part reflects that a common stroke sequence may form different Chinese characters if their spatial configurations are different.}
\label{fig:example}
\end{CJK}
\vspace{-0.2cm}
\end{figure*}
In addition, some biological investigations have confirmed that there are actually two processing channels for the Chinese language. Specifically, Chinese readers not only activate the left brain, the dominant hemisphere in processing alphabetic languages \cite{springer1999language,knecht2000language,paulesu2000cultural}, but at the same time also activate areas of the right brain that are responsible for image processing and spatial information \cite{tan2000brain}.
Therefore, we argue that the morphological information of characters in Chinese consists of two parts, i.e., the sequential information hidden in the root-like stroke order, and the spatial information hidden in the graph-like character glyphs. Along this line, we propose a novel Dual-channel Word Embedding (DWE) model for Chinese to realize the joint learning of sequential and spatial information in characters. Finally, we evaluate DWE on two representative tasks, where the experimental results validate the superiority of DWE in capturing the morphological information of Chinese.
\section{Related Work}
\subsection{Morphological Word Representations}
Traditional methods for obtaining word embeddings are mainly based on the distributional hypothesis \cite{harris1954distributional}: words with similar contexts tend to have similar semantics. In the quest for more interpretable models, scholars have gradually noticed the importance of word morphology in conveying semantics \cite{luong2013better,qiu2014co}, and several studies have shown that morphology can indeed enrich the semantics of word embeddings \cite{sak2010morphology,soricut2015unsupervised,cotterell2015morphological}. More recently, Wieting et al. \shortcite{wieting2016charagram} proposed to represent words using character \textit{n}-gram count vectors. Further, Bojanowski et al. \shortcite{bojanowski2017enriching} improved the classic skip-gram model \cite{mikolov2013efficient} by taking subwords into account, which motivates us to treat certain stroke sequences in Chinese as the analogue of roots in English.
\subsection{Embedding for Chinese Language}
The complexity of Chinese itself has given rise to a large body of research on Chinese embeddings, including the utilization of character features \cite{chen2015joint} and radicals \cite{sun2014radical,yin2016multi,yu2017joint}. Considering the 2-D graphic structure of Chinese characters, Su and Lee \shortcite{su2017learning} creatively proposed to enhance word representations with character glyphs. Lately, Cao et al. \shortcite{cao2018cw2vec} proposed that a Chinese word can be decomposed into a sequence of strokes which correspond to subwords in English, and Wu et al. \shortcite{wu2019glyce} designed a Tianzige-CNN to model the spatial structure of Chinese characters from the perspective of image processing. However, these methods either adopt a rather loose stroke standard or are unable to capture the interactions between strokes and character glyphs.
\section{DWE Model}
As we mentioned earlier, it is reasonable and imperative to learn Chinese word embeddings from two channels, i.e., a sequential stroke \textit{n}-gram channel and a spatial glyph channel. Inspired by previous works~\cite{chen2015joint, dong2016character, su2017learning, wu2019glyce}, we propose to combine the representation of Chinese words with the representation of characters to obtain finer-grained semantics, so that unknown words can be identified and their relationship with known Chinese characters can be found by distinguishing the common stroke sequences or character glyphs they share.
\begin{CJK}{UTF8}{gbsn}
Our DWE model is shown in Figure~\ref{fig:framework}. An arbitrary Chinese word $w$, e.g., ``驾车", is first decomposed into its characters, e.g., ``驾" and ``车", and each character is further processed in a dual-channel character embedding sub-module to refine its morphological information. In the sequential channel, each character is decomposed into a stroke sequence according to the criteria of the Chinese writing system, as shown in Figure~\ref{fig:example}. After retrieving the stroke sequence, we add special boundary symbols $<$ and $>$ at its beginning and end and utilize the stroke \textit{n}-gram method~\cite{cao2018cw2vec}\footnote{We apply a richer standard of strokes (32 kinds of strokes) than they did (only 5 kinds of strokes).} to extract stroke-order information for each character. More precisely, we first scan each character throughout the training corpus and obtain a stroke \textit{n}-gram dictionary $G$, and we use $G(c)$ to denote the collection of stroke \textit{n}-grams of each character $c$ in $w$. In the spatial channel, to capture the semantics hidden in glyphs, we render the glyph $I_c$ of each character $c$ and apply a well-known CNN structure, LeNet \cite{lecun1998gradient}, to process it; this also helps to distinguish characters that are identical in stroke sequence.
\end{CJK}
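As a minimal illustration of the sequential channel, the following sketch extracts stroke \textit{n}-grams with the boundary symbols $<$ and $>$; the lookup table \texttt{strokes\_of}, mapping a character to its stroke sequence under the 32-stroke standard, is an assumed input.
\begin{verbatim}
def stroke_ngrams(char, strokes_of, n_min=3, n_max=6):
    # strokes_of[char]: the fixed stroke sequence of the character
    seq = ['<'] + list(strokes_of[char]) + ['>']
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(seq) - n + 1):
            grams.add(tuple(seq[i:i + n]))
    return grams  # G(c), the stroke n-gram collection of character c
\end{verbatim}
The window sizes $3 \leq n \leq 6$ match the setting used in our experiments.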
After that, we combine the representation of words with the representation of characters and define the word embedding for $w$ as follows:
\begin{equation}
\textbf{w} = \textbf{w}_{ID} \oplus \frac{1}{N_c}(\sum_{c \in w} {\sum_{g \in G(c)} \textbf{g} \ast CNN(I_c)}),
\label{eq:2}
\end{equation}
\noindent where $\oplus$ and $\ast$ are compositional operations\footnote{There are a variety of options for $\oplus$ and $\ast$, e.g., addition, item-wise dot product and concatenation. In this paper, we use the addition operation for $\oplus$ and the item-wise dot product operation for $\ast$.}. $\textbf{w}_{ID}$ is the word ID embedding and $N_c$ is the number of characters in $w$.
\begin{figure}[t]
\centering
\includegraphics[width=3in]{figure/framework.pdf}
\begin{CJK}{UTF8}{gbsn}
\vspace{-0.4cm}
\caption{An illustration of our Dual-channel Word Embedding (DWE) model.}
\label{fig:framework}
\end{CJK}
\vspace{-0.4cm}
\end{figure}
According to the previous work~\cite{mikolov2013efficient}, we compute the similarity between current word $w$ and one of its context words $e$ by defining a score function as $s(w, e) = \textbf{w} \cdot \textbf{e}$, where $\textbf{w}$ and $\textbf{e}$ are embedding vectors of $w$ and $e$ respectively. Following the previous works \cite{mikolov2013efficient,bojanowski2017enriching}, the objective function is defined as follows:
\begin{equation}
\small
\begin{split}
\mathcal{L} = \sum_{w \in D} \sum_{e \in T(w)} \log \sigma (s(w, e)) + \lambda \mathbb{E}_{e'\sim P} [\log \sigma(-s(w, e'))],
\end{split}
\label{eq:3}
\end{equation} where $\lambda$ is the number of negative samples, $\mathbb{E}_{e'\sim P}[\cdot]$ is the expectation term, and $\sigma$ is the sigmoid function. For each $w$ in the training corpus $D$, $T(w)$ denotes the set of its context words, while the negative samples $e'$ are drawn from the distribution $P$, which is usually set as the word unigram distribution.
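For clarity, a minimal numpy sketch of the score $s(w,e)=\textbf{w} \cdot \textbf{e}$ and of one term of the objective follows; the array names are illustrative, and the sampling of the context set $T(w)$ and of the negatives from $P$ is omitted.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ns_loss(w_vec, ctx_vec, neg_vecs):
    # negative log-likelihood for one (w, e) pair with lambda negatives
    pos = np.log(sigmoid(w_vec @ ctx_vec))            # log sigma(s(w, e))
    neg = np.sum(np.log(sigmoid(-neg_vecs @ w_vec)))  # log sigma(-s(w, e'))
    return -(pos + neg)
\end{verbatim}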
\begin{table*}[t]
\centering
\footnotesize
\caption{Performance on word similarity and word analogy task. The dimension of embeddings is set as 300. The evaluation metric is $\rho$ for word similarity and accuracy percentage for word analogy.}
\begin{tabular}{c|cc|ccc|ccc}
\hline
\multirow{3}{*}{Model} & \multicolumn{2}{c|}{\multirow{2}{*}{Word Similarity}} & \multicolumn{6}{c}{Word Analogy} \\
\cline{4-9} & \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{3CosAdd} & \multicolumn{3}{c}{3CosMul} \\
\cline{2-9} & wordsim-240 & wordsim-296 & Capital & City & Family & Capital & City & Family \\
\hline
Skipgram \cite{mikolov2013efficient} & 0.5670 & 0.6023 & \textbf{0.7592} & \textbf{0.8800} & 0.3676 & \textbf{0.7637} & \textbf{0.8857} & 0.3529 \\
CBOW \cite{mikolov2013efficient} & 0.5248 & 0.5736 & 0.6499 & 0.6171 & 0.3750 & 0.6219 & 0.5486 & 0.2904 \\
GloVe \cite{pennington2014glove} & 0.4981 & 0.4019 & 0.6219 & 0.7714 & 0.3167 & 0.5805 & 0.7257 & 0.2375 \\
sisg \cite{bojanowski2017enriching} & 0.5592 & 0.5884 & 0.4978 & 0.7543 & 0.2610 & 0.5303 & 0.7829 & 0.2206 \\
\hline
CWE \cite{chen2015joint} & 0.5035 & 0.4322 & 0.1846 & 0.1714 & 0.1875 & 0.1713 & 0.1600 & 0.1583 \\
GWE \cite{su2017learning} & 0.5531 & 0.5507 & 0.5716 & 0.6629 & 0.2417 & 0.5761 & 0.6914 & 0.2333 \\
JWE \cite{yu2017joint} & 0.4734 & 0.5732 & 0.1285 & 0.3657 & 0.2708 & 0.1492 & 0.3771 & 0.2500 \\
cw2vec \cite{cao2018cw2vec} & 0.5529 & 0.5992 & 0.5081 & 0.7086 & 0.2941 & 0.5465 & 0.7714 & 0.2721 \\
\hline
DWE (ours) & \textbf{0.6105} & \textbf{0.6137} & 0.7120 & 0.7486 & \textbf{0.6250} & 0.6765 & 0.7257 & \textbf{0.6140} \\
\hline
\end{tabular}
\label{tab:results}%
\vspace{-0.4cm}
\end{table*}%
\section{Experiments}
\subsection{Dataset Preparation}
We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP\footnote{https://github.com/brightmart/nlp\_chinese\_corpus}. For word segmentation and filtering the stopwords, we apply the jieba\footnote{https://github.com/fxsjy/jieba} toolkit based on the stopwords table\footnote{https://github.com/YueYongDev/stopwords}.
Finally, we obtain 11,529,432 segmented words. Following \cite{chen2015joint}, all items whose Unicode code points fall into the range between 0x4E00 and 0x9FA5 are treated as Chinese characters. We crawl the stroke information of all 20,402 characters from an online dictionary\footnote{https://bihua.51240.com/} and render each character glyph to a 28 $\times$ 28 1-bit grayscale bitmap using \textit{Pillow}\footnote{https://github.com/python-pillow/Pillow}.
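A one-line sketch of this character filter (the function name is ours):
\begin{verbatim}
def is_cjk(ch):
    # CJK Unified Ideographs range used above
    return 0x4E00 <= ord(ch) <= 0x9FA5

print(is_cjk('\u9a7e'), is_cjk('A'))  # True False
\end{verbatim}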
\subsection{Experimental Setup}
We choose \textit{adagrad}~\cite{duchi2011adaptive} as our optimizing algorithm, and we set the batch size as 4,096 and learning rate as 0.05. In practice, the slide window size $n$ of stroke $n$-grams is set as $3 \leq n \leq 6$.
The dimension of all word embeddings of the different models is consistently set to 300. We use two test tasks to evaluate the performance of the different models: one is \textit{word similarity}, and the other is \textit{word analogy}. A word similarity test consists of multiple word pairs with similarity scores annotated by humans. Good word representations should make the calculated similarities have a high rank correlation with the human-annotated scores, which is usually measured by Spearman's correlation $\rho$~\cite{zar1972significance}.
An analogy problem has the form ``king":``queen" = ``man":``?", and ``woman" is the most proper answer to ``?". That is, in this task, given three words $a$, $b$, and $h$, the goal is to infer the fourth word $t$ which satisfies ``$a$ is to $b$ as $h$ is to $t$". We use the $3CosAdd$ \cite{mikolov2013efficient} and $3CosMul$ \cite{levy2014linguistic} functions to retrieve the most appropriate word $t$. Using the same data as \cite{chen2015joint} and \cite{cao2018cw2vec}, we adopt two manually-annotated datasets for the Chinese word similarity task, i.e., wordsim-240 and wordsim-296~\cite{jin2012semeval}, and a three-group\footnote{capitals of countries, (China) states/provinces of cities, and family relations. } dataset for the Chinese word analogy task.
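A hedged sketch of both retrieval rules over row-normalized embeddings is given below; \texttt{E} (vocabulary $\times$ dimension), \texttt{index}, and \texttt{words} are assumed data structures, and the $(x+1)/2$ shift in the multiplicative rule follows \cite{levy2014linguistic}.
\begin{verbatim}
import numpy as np

def analogy(E, index, words, a, b, h, mul=False, eps=1e-8):
    sims = E @ np.stack([E[index[x]] for x in (a, b, h)]).T  # cosines
    s_a, s_b, s_h = sims[:, 0], sims[:, 1], sims[:, 2]
    if mul:  # 3CosMul, similarities shifted to [0, 1]
        s_a, s_b, s_h = (s_a + 1) / 2, (s_b + 1) / 2, (s_h + 1) / 2
        score = s_b * s_h / (s_a + eps)
    else:    # 3CosAdd
        score = s_b - s_a + s_h
    for w in (a, b, h):              # exclude the query words
        score[index[w]] = -np.inf
    return words[int(np.argmax(score))]
\end{verbatim}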
\subsection{Baseline Methods}
We use gensim\footnote{https://radimrehurek.com/gensim/} to implement both CBOW and Skipgram and apply the source codes published by the authors to implement CWE\footnote{https://github.com/Leonard-Xu/CWE}, JWE\footnote{https://github.com/hkust-knowcomp/jwe}, GWE\footnote{https://github.com/ray1007/GWE} and GloVe\footnote{https://github.com/stanfordnlp/GloVe}. Since Cao et al.~\shortcite{cao2018cw2vec} did not publish their code, we follow their paper and reproduce cw2vec in mxnet\footnote{https://mxnet.apache.org/}, which we also use to implement sisg~\cite{bojanowski2017enriching}\footnote{ http://gluon-nlp.mxnet.io/master/examples/word\_embedding/word\_embedding\_training.html} and our DWE. To encourage further research, we will publish our model and datasets.
\subsection{Experimental Results}
\begin{CJK}{UTF8}{gbsn}
The experimental results are shown in Table \ref{tab:results}. We can observe that our DWE model achieves the best results on both the wordsim-240 and wordsim-296 datasets in the similarity task, as expected given the particularity of Chinese morphology, but it only improves the accuracy for the \textit{family} group in the analogy task.
These results are not accidental: DWE has the advantage of distinguishing between morphologically related words, as the similarity task verifies. Meanwhile, in the word analogy task, words expressing family relations in Chinese are mostly compositional in their character glyphs. For example, in the analogy pair ``兄弟" (brother) : ``姐妹" (sister) = ``儿子" (son) : ``女儿" (daughter), ``兄弟" and ``儿子" share exactly the glyph component ``儿" (male relative of a junior generation), while ``姐妹" and ``女儿" share exactly the glyph component ``女" (female), and this kind of morphological pattern can be accurately captured by our model. However, most names of countries, capitals and cities are transliterated words, for which the relationship between morphology and semantics is minimal, consistent with the findings reported in \cite{su2017learning}. For instance, in the analogy pair ``西班牙" (Spain) : ``马德里" (Madrid) = ``法国" (France) : ``巴黎" (Paris), we cannot infer any relation among these four words literally because they are all translated by pronunciation.
In summary, since different words that are morphologically similar tend to have similar semantics in Chinese, simultaneously modeling the sequential and spatial information of characters from both stroke \textit{n}-grams and glyph features can indeed improve the modeling of Chinese word representations substantially.
\end{CJK}
\section{Conclusions}
In this article, we first analyzed the similarities and differences in terms of morphology between alphabetical languages and Chinese. Then, we delved deeper into the particularity of Chinese morphology and proposed our DWE model by taking into account the sequential information of strokes order and the spatial information of glyphs. Through the evaluation on two representative tasks, our model shows its superiority in capturing the morphological information of Chinese.
\section{Introduction}
Altruism or cooperativity \cite{tr71} describe behavior that is more in favor of others than of the actor herself. Alarm calls are an example of altruistic behavior: Increasing the risk of becoming prey itself first, one individual of a group warns the others of a predator approaching \cite{ta87}. At first glance, the observation of altruism sustained over generations appears incompatible with Darwin's theory of natural selection, featuring the survival of the fittest \cite{darwin1859,spencer1864}. If non-altruists acting only to their own benefit have an advantage over altruists in terms of reproductive success, altruistic traits eventually disappear.
The question of sustained altruism and cooperativity has been addressed in the framework of evolutionary game theory, in particular by work on the Prisoner's Dilemma and Public Goods Games \cite{da80,axha81}. In these and other games, the time evolution of the system is assumed to take place as a sequence of two elementary steps: (i) the combined behavioral choices of the participants lead to an assignment of a payoff to each player which (ii) determines the choice of their future strategies or roles. In the simplest case, with two possible strategies, cooperation and defection, the dilemma arises as follows. Regardless of the other agent's move, an agent's best (highest payoff) move is always defection. On the other hand, the sum of all players' payoffs is maximal when all cooperate. Therefore, natural selection always favors defection \cite{da80}, even though cooperation is the best global strategy.
The aforementioned social dilemma is frequently analyzed by means of the replicator equation \cite{ohno06,rocusa09,brpa18} describing the time evolution of the fraction of players holding one of the two strategies. If the fitness of an individual equals its payoff, the resulting replicator equation for the Prisoner's Dilemma has only two steady-state solutions, the only-defector and the only-cooperator solutions, the former being the only stable one. Nevertheless, the prevalence of cooperation is still possible within the context of evolutionary games, provided appropriate reciprocity mechanisms are included in the dynamics \cite{tr71,axha81,leke06,za14,wakojuta15}: Direct reciprocity, indirect reciprocity, kin selection, group selection, and network structure. If compared to the well-mixed situation, the new mechanisms include update rules that favor the interactions among cooperators.
The network structure mechanism was one of the first reciprocity mechanisms studied in the literature. It refers to the restriction of agents' interactions to their neighbors. In a two-dimensional regular network, the survival of altruists was explained in terms of their ability to prevent exploitation by defectors through the formation of clusters \cite{noma92,namaiw97,tamaiw97,iwnale98}. Further progress in the field considered births and deaths: The second step of the dynamics, the one that allows a change of strategy, is now interpreted as the death of a player followed by a birth. The new ecological perspective made it possible to assess the importance of new relevant issues, such as the fluctuation of the population density \cite{miwi00,lilihiiwboni13,Huang:2015,mcfrhawano18}, the movement of agents \cite{bara98,yiliwalulimali14,busz05,vasiar07,lero10}, and the spatial distribution of neighbors and their number \cite{ifkido04,sena09}, among others. Recent works also consider networks of interactions \cite{ohno06,asgola08,fuwanoha09,notaan10,pegoszflmo13,wakojuta15,kiyoki16}, focus on the critical properties of the system \cite{hasz05,szvusz05,vuszsz06}, include other novel dynamic rules \cite{raamnaoh10,pesz10,pisapa12,liliclgu15,pachadno15,iyki16,szpe17}, analyze the formation of patterns \cite{noma92,szfa07,lanoha08,funoha10,waha11,yabasa18}, and evaluate the effect on population growth as external pressure rises \cite{Sella:2000}. The latter aspect has been widely analyzed in the context of competing species \cite{gamere13,dodi13}, but has not received much attention in relation to the prevalence of altruism.
Although general considerations about the prevalence of altruism in the context of Public Goods Games can be inferred from the numerous studies on the topic \cite{ohhalino06,ko11}, the behavior of cooperation turns out to be very dependent on the specific dynamics considered \cite{ha06,rocusa09a}. This is the case when trying to evaluate the importance of spatial heterogeneity and the formation of clusters of cooperators: Many studies \cite{namaiw97,iwnale98,lefedi03,thel03,lilihiiwboni13} explain the coexistence of cooperation and defection using the so-called pair approximation, an approach that goes one step beyond mean field by tracking the dynamics of pairs of neighbors. However, the pair approximation still assumes spatial homogeneity of the system. Hence, within this approach, the formation of clusters of cooperators is not needed to explain their long-term survival.
Recent works on the evolution of cooperation suggest the need to give up certain common assumptions of evolutionary game theory \cite{gaferutacusamo12,grrosemitr12,pejorawabosz17,sa18}. In particular, some experiments on the dynamics of human cooperation show that people choose their strategy regardless of the payoff of others \cite{grgrmisetrcumosa14}. Similar conclusions have been drawn in the context of other living beings \cite{doissi17}. See also recent experimental and numerical works on related topics \cite{waszpe12,kudi13,nadrfo16,leahha17}.
Here we study the evolution of cooperation in the framework of interacting particle systems. We model birth and death in a spatially extended population as a contact process and ask the following: What is the phase diagram of the contact process with an additional, cooperative type of particle that supports survival at neighboring sites?
Our approach provides a natural framework to assess the effects of different mechanisms on the behavior of the system and on the survival of cooperativity, such as the dynamics of interactions, the fluctuation in the population size, the presence or absence of cooperation clusters, and the spatial variation of parameters, among others.
The organization of the work is as follows. In Sec.~\ref{sec:2} we introduce the agent-based model of a population of cooperators and defectors living on a generic network. For later sections the main focus is on the square lattice, where the system has only three relevant parameters: the total number of sites $N$, the death parameter $p$, and the cost-of-altruism parameter $\epsilon$. Section \ref{sec:3} includes stochastic simulations. We obtain the phase diagram in the parameter space $(p,\epsilon)$ showing the steady-state configurations of the system. The effect of $p$ being spatially dependent is also addressed. In Sec.~\ref{sec:4} the system is described theoretically. Three complementary formulations, using main-field or pair-approximation approaches, are given. They aim at describing the system under different physical conditions. Finally, a discussion and outlook of the main results are included in Sec.~\ref{sec:5}.
\section{Definition of the model}
\label{sec:2}
The model describes the evolution of a population on an arbitrary network with $N$ nodes. The set of neighbors of a node $i$ is denoted by $N_i$. The network is symmetric (undirected), so that $j \in N_i$ implies $i\in N_j$; also $i \notin N_i$ (no self-loops). Each agent in the population is either a cooperator $C$ or a defector $D$, with $c_i$ and $d_i$ being their respective numbers at node $i$. A site or node of the network holds at most one agent ($C$ or $D$) but it may also be empty ($E$), hence $0\le c_i+d_i\le 1$ and $c_id_i=0$. Thus the state of the system $S$ is given by
\begin{equation}
\label{eq:1}
S=\{c_i,d_i\}_{i=1}^N\equiv \{x_i\}_{i=1}^N,\quad x_i\in\{c_i,d_i\},
\end{equation}
where $X$ is either a cooperator or a defector, and $x_i$ its number at site $i$. Moreover, the number $e_i=1-c_i-d_i$ equals $1$ if site $i$ is empty and $0$ otherwise. From the condition $c_id_i=0$ we also have $e_ix_i=0$.
A state transition is either the birth or the death of one agent at a site $i$. At the birth of a cooperator we set $c_i=1$ at a previously empty site $i$,
\begin{equation}
e_i=1\xrightarrow{\pi_b(c_i,S)} c_i=1
\end{equation}
Likewise for the birth of a defector, $d_i=1$ is set at an empty site $i$,
\begin{equation}
e_i=1\xrightarrow{\pi_b(d_i,S)} d_i=1
\end{equation}
These transitions occur at a rate proportional to the fraction of neighboring sites occupied by the agent type to be born, as
\begin{align}
\label{eq:2}
&\pi_b(c_i,S) = e_i \sum_{j \in N_i}c_j/k_j \equiv e_i \tilde{c}_i,& \\
\label{eq:3}
&\pi_b(d_i,S) = e_i \sum_{j \in N_i}d_j/k_j \equiv e_i \tilde{d}_i,&
\end{align}
where $k_i = |N_i|$ is the degree (number of those neighbors) of node $i$, and $\tilde{x}_i\equiv \sum_{j \in N_i} x_j/k_j$. The death of an agent is a state transition setting $c_i=0$ or $d_i=0$ at a previously occupied site $i$,
\begin{eqnarray}
&&c_i=1\xrightarrow{\pi_d(c_i,S)} e_i=1, \\
&&d_i=1\xrightarrow{\pi_d(d_i,S)} e_i=1,
\end{eqnarray}
with respective rates
\begin{align}
\nonumber
\pi_d(c_i,S)=&p\left[c_i\bar e_i+(1-\epsilon)c_i\bar c_i+(2-\epsilon)c_i\bar d_i\right]& \\
\label{eq:4}
=& pc_i\left\{1- \left[\bar c_i - (1-\epsilon) (\bar c_i+\bar d_i)\right] \right\},& \\
\nonumber
\pi_d(d_i,S) =&p\left(d_i\bar e_i+d_i\bar d_i\right)& \\
\label{eq:5}
=&p d_i \left(1- \bar c_i\right),&
\end{align}
where now $\bar{x}_i\equiv k_i^{-1} \sum_{j \in N_i} x_j$. Agents die at a baseline rate $p$. This rate is reduced, however, by the fraction of adjacent sites occupied by a cooperator. The death rate of a cooperator, on the other hand, has an additional positive term proportional (with factor $1-\epsilon$) to the fraction of adjacent agents. This way, the parameter $\epsilon$ accounts for the cost of the altruistic act, the limit of $\epsilon=0$ corresponding to maximum cost where the altruist definitely loses its life for saving that of its neighbor. The other limit is costless altruism at $\epsilon=1$.
In the absence of cooperators, or in the absence of defectors with $\epsilon=0$, the model reduces to the contact process \cite{ha74,madi05} equivalent to the SIS (susceptible-infected-susceptible) model of epidemics \cite{he89,albrdrwu08}. The equivalence is obtained by mapping each empty site to a susceptible individual and each site with a defector to an infected individual.
\section{Simulations}
\label{sec:3}
Let us first illustrate and numerically analyze the dynamics on periodic square lattices. As defined above, the model features non-ergodicity. Eventually both types of agents go extinct in a finite size system. In the simulations in this section, a slightly modified version of the model is employed: We set to zero the death rate of an agent currently being the only one of its type (C or D). This allows us to take long-term measurements of concentrations and distributions without having to restart the dynamics. Given the rates, simulations are performed with a standard Gillespie algorithm \cite{gi76,gi07}.
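For reference, a minimal (and deliberately unoptimized) Gillespie sketch of the dynamics defined by the rates \eqref{eq:2}-\eqref{eq:5} on a periodic square lattice is given below; it recomputes all rates after every event, omits the last-agent modification described above, and all names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def neighbors(i, j, L):  # periodic square lattice, k = 4
    return [((i-1) % L, j), ((i+1) % L, j), (i, (j-1) % L), (i, (j+1) % L)]

def simulate(L=20, p=0.4, eps=0.75, t_max=50.0):
    s = rng.integers(0, 3, size=(L, L))  # 0 empty, 1 cooperator, 2 defector
    t = 0.0
    while t < t_max:
        events, rates = [], []
        for i in range(L):
            for j in range(L):
                nb = neighbors(i, j, L)
                cbar = sum(s[x] == 1 for x in nb) / 4.0
                dbar = sum(s[x] == 2 for x in nb) / 4.0
                if s[i, j] == 0:     # birth rates
                    if cbar > 0:
                        events.append((i, j, 1)); rates.append(cbar)
                    if dbar > 0:
                        events.append((i, j, 2)); rates.append(dbar)
                elif s[i, j] == 1:   # cooperator death rate
                    events.append((i, j, 0))
                    rates.append(p * (1 - eps*cbar + (1 - eps)*dbar))
                else:                # defector death rate
                    events.append((i, j, 0))
                    rates.append(p * (1 - cbar))
        R = float(sum(rates))
        if R == 0.0:
            break                    # empty absorbing state reached
        t += rng.exponential(1.0 / R)
        i, j, new = events[rng.choice(len(events), p=np.array(rates) / R)]
        s[i, j] = new
    return s

s = simulate()
print("c =", (s == 1).mean(), " d =", (s == 2).mean())
\end{verbatim}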
\subsection{Square lattice with homogeneous parameters} \label{subsec:sqlhom}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{latr_1_nolegend.pdf}}
\caption{\label{fig:densities}
Average densities of agents on a square lattice of $50 \times 50$ sites as a function of parameters $p$ and $\epsilon$ (large center panel). For each combination of parameters, the agents' concentrations $\mean{c}$ and $\mean{d}$ are encoded by color. Red indicates high concentration of cooperators; blue indicates high concentration of defectors; green is for co-existence of the two types, and white for a low overall concentration of agents. Arrows labeled with letters (a)--(e) indicate parameter combinations further analyzed in Figure\ \ref{fig:distributions}. The panels in the top row are snapshots of typical system states encountered for $p \in \{0.2, 0.4, 0.7\}$ (panels left to right) with $\epsilon= 0.75$.
}
\end{figure}
Figure \ref{fig:densities} shows the parameter dependence of the stationary mean concentrations of agents. At $\epsilon=0$, cooperators are absent in the whole range of $p$, while the concentration of defectors is positive for $p < p_c \approx 0.62$ and vanishes for $p>p_c$. Now fixing $0<\epsilon<1$ and increasing $p$ from $0$ to $1$, the concentration of defectors $\mean{d}$ still decreases with $p$. Before $\mean{d}$ reaches zero, however, the concentration of cooperators $\mean{c}$ becomes
positive. Simulations on square lattices of smaller size ($N=20^2$, $N=30^2$) and checks with $N=100^2$ yield results almost identical to those of Fig.\ \ref{fig:densities}.
\begin{figure*}
\centerline{\includegraphics[width=.8\textwidth]{all_0.pdf}}
\caption{\label{fig:all_concentration}
Total concentration of agents (a) on square lattices with $N=50 \times 50$ sites and (b) from the numerical solution of the pair approximation, Eqs \eqref{eq:37}-\eqref{eq:41}. In both (a) and (b),
the three curves are for parameter values $\epsilon = 0.99, 0.75, 0.10$ (top to bottom). The insets zoom in on the curves for $\epsilon =0.10$. The inset of (a) shows these curves for different system sizes $N=30 \times 30$ (dotted curve), $N=50\times 50$ (solid curve), and $N=100 \times 100$ (dashed curve).
}
\end{figure*}
In the coexistence regime of cooperators and defectors (green area in Figure \ref{fig:densities}), the growth of cooperation outweighs the decline of defection. Here the total concentration of agents grows with $p$,
\begin{equation}
\frac{\partial (\mean{c} + \mean{d})}{\partial p} >0~.
\end{equation}
Figure~\ref{fig:all_concentration}(a) explicitly shows this non-monotonicity by plotting $\mean{c} + \mean{d}$ versus $p$ for different choices of $\epsilon$.
\begin{figure*}
\centerline{\includegraphics[width=.8\textwidth]{latrhist_0.pdf}}
\caption{\label{fig:distributions}
Distributions of the number of agents on a square lattice with $50 \times 50$ sites. In the lower row, panels (c), (d), and (e), $\epsilon = 0.75$.
Each panel describes a transition between presence and absence of a type of agent. The transitions are also marked in Fig.\ \ref{fig:densities} with the panel identifiers (a)--(e).
}
\end{figure*}
Let us now take a closer look at the transitions between the regimes observed in Figure~\ref{fig:densities}. To this end, we record the distributions of the number of agents (each type separately) and consider their changes under parameter variation. Figure~\ref{fig:distributions} shows this analysis for five transitions (a)-(e), also marked in the large panel of Figure~\ref{fig:densities}.
Transitions in Figs.~\ref{fig:distributions}(a) and \ref{fig:distributions}(e) are extinctions of one type of agent in the absence of the other type. However, the transitions are distinguishable by the approximate exponents of the algebraic decay of distributions, giving $1/4$ for the extinction of defectors versus $3/7$ for cooperators. This indicates that, even in the absence of defectors and close to the extinction transition (e), the dynamics of cooperators is essentially different from the contact process.
Differences in the distributions of the order parameter (Figure~\ref{fig:distributions}), however, do not contradict transitions (a)-(e) belonging to the same universality class. Transitions (a), (b) and (e) fulfill the premises of the directed percolation conjecture, cf.\ section 3.3.6 in \cite{hi00}. Transitions (c) and (d) do not fulfill the assumption of a unique absorbing state because only one type of agent goes extinct at the transition. In preliminary numerical explorations (results not shown here), we have found the scaling of the order parameter (concentration of agents) compatible with the value $0.580(4)$ of the exponent $\beta$ of directed percolation in two dimensions \cite{wazhligade13}. We conjecture that all transitions (a)-(e) belong to the universality class of directed percolation.
\subsection{Spatially dependent parameter $p$}
\begin{figure}
\centerline{\includegraphics[width=0.49\textwidth]{latrange_1.pdf}}
\caption{\label{fig:pspatial}
(a) Stationary mean concentrations for dynamics on a square lattice where the parameter $p$ of the model varies with the horizontal location $x\in\{1,2,\dots,L\}$ according to Eq.\ (\ref{eq:p_of_x}). The lattice size is $L \times L$ with $L=100$. The cooperators' parameter $\epsilon = 0.75$ is constant in space. Showing the concentration dependence on $x$, each plotted value is, for a given $x$, a uniform average over the $y$-coordinate of the lattice and over time $t\in [0,10^6]$.
(b) Snapshot of a state in the simulation as described for panel (a).
}
\end{figure}
Let us study a variation of the model with a spatial dependence of the parameter $p$, a way of mimicking ecological conditions \cite{ch00,okle13}. For an agent at lattice site $(x,y)$, $x,y\in\{1,\dots,L\}$, the death rate is based on the parameter value
\begin{equation}\label{eq:p_of_x}
p(x) = \begin{cases}
\frac{2x-1}{L} & \text{if } x\le L/2 \\
\frac{2(L-x)+1}{L} & \text{otherwise.}
\end{cases}
\end{equation}
For $L$ even, the minimum value $1/L$ is assumed by $p(x)$ at $x=1$ and $x=L$; its maximum value $1-1/L$ is obtained at $x=L/2$ and $x=L/2+1$. The parameter $\epsilon$ remains spatially homogeneous, here $\epsilon =0.75$.
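A one-line sketch of this profile (the function name is ours):
\begin{verbatim}
def p_of_x(x, L):
    # tent profile: p rises linearly towards the two central columns
    return (2*x - 1) / L if x <= L / 2 else (2*(L - x) + 1) / L

print(p_of_x(1, 100), p_of_x(50, 100), p_of_x(100, 100))  # 0.01 0.99 0.01
\end{verbatim}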
Figure~\ref{fig:pspatial}(a) shows the concentration of agents as a function of lattice coordinate $x$, i.e. averaged over lattice coordinate $y$ and time. We see that the effect of parameter $p$ is local. The $p$-dependence of $\mean{c}$ and $\mean{d}$ observed under spatially homogeneous $p$ in Section~\ref{subsec:sqlhom} qualitatively matches that of the scenario with spatially dependent $p$.
\section{Analytic approximations}
\label{sec:4}
In this section, we derive three complementary theoretical descriptions of our model, defined in Sec.~\ref{sec:2}. The first two ones are based on a mean-field approximation, while the third one uses the pair approximation. As will be shown, the different approaches have different ranges of applicability and explain the prevalence/extinction and even the coexistence of altruism and defection under different physical and biological conditions. In the case of the pair approximation, a very similar phase diagram to the numerical one shown in Fig.~\ref{fig:densities} is obtained.
Our starting point is the master equation for the probability $P(S,t)$ of finding the system in state $S$ at time $t$. By means of a probabilistic balance in the continuum time limit \cite{ka92}, and using the rates given by Eqs. \eqref{eq:2}-\eqref{eq:5}, the master equation reads as
\begin{equation}
\label{eq:6}
\begin{split}
\partial_tP(S,t)=\sum_{i=1}^N&\sum_{x_i\in\{c_i,d_i\}}\left\{ (E^-_{x_i}-1)\left[\pi_b(x_i,S)P(S,t)\right] \right. \\
& \left. +(E^+_{x_i}-1)\left[\pi_d(x_i,S)P(S,t)\right] \right\},
\end{split}
\end{equation}
where the operators $E^\pm_{x_i}$ act on a generic function $f(x_1,\dots,x_i,\dots,x_N)$ as $E^\pm_{x_i}f(x_1,\dots,x_i,\dots,x_N)=f(x_1,\dots,x_i\pm 1,\dots,x_N)$, with $x_k\in\{c_k,d_k\}$, $k=1,\dots,N$.
By taking moments of the master equation \eqref{eq:6} we can derive equations for the mean numbers of cooperators and defectors in site $i$, $\mean{c_i}$ and $\mean{d_i}$. After using the relation $e_i=1-c_i-d_i$ and some manipulations, we obtain
\begin{eqnarray}
\label{eq:7}
\nonumber
\frac{d}{dt}\mean{c_i}=&& \mean{\pi_b(c_i)-\pi_d(c_i)}\\
\nonumber
=&& \mean{\tilde c_i e_i}-p\left[\mean{c_i\bar e_i}+(1-\epsilon)\mean{c_i\bar c_i}\right.\\
\nonumber
&& \left.+(2-\epsilon)\mean{c_i\bar d_i}\right]\\
\nonumber
=&&-p\mean{c_i}+\mean{\tilde c_i}-\left[\mean{c_i\tilde c_i} -\epsilon p \mean{c_i\bar c_i} \right. \\
&& \qquad \left. +p(1-\epsilon)\mean{c_i\bar d_i}+\mean{\tilde c_i d_i}\right], \\
\label{eq:8}
\nonumber
\frac{d}{dt}\mean{d_i}=&& \mean{\pi_b(d_i)-\pi_d(d_i)}\\
\nonumber =&& \mean{\tilde d_i e_i}-p\left[\mean{d_i\bar e_i}+\mean{d_i\bar d_i}\right]\\
\nonumber =&&-p\mean{d_i}+\mean{\tilde d_i} \\
&&-\left[\mean{c_i\tilde d_i} -p\mean{\bar c_i d_i}+\mean{d_i\bar d_i}\right],
\end{eqnarray}
for $i=1,\dots,N$. Since the first moments are coupled to the second ones through correlations between neighbors, it is also convenient to derive equations for the two node correlations for neighboring sites, i.e. $\mean{x_ix_j}$ with $j\in N_i$:
\begin{eqnarray}
\label{eq:9}
\nonumber
\frac{d}{dt}\mean{c_ic_j}=&& \mean{c_i\pi_b(c_j)+\pi_b(c_i)c_j-c_i\pi_d(c_j)-\pi_d(c_i)c_j} \\
\nonumber = && \mean{c_ie_j\tilde c_j}+\mean{\tilde c_i e_i c_j}-p\mean{c_ic_j(\bar e_i+\bar e_j)}\\
\nonumber && -p(1-\epsilon)\mean{c_ic_j(\bar c_i+\bar c_j)} \\
&& -p(2-\epsilon)\mean{c_ic_j(\bar d_i+\bar d_j)}, \\
\label{eq:10}
\nonumber
\frac{d}{dt}\mean{c_id_j}=&& \mean{c_i\pi_b(d_j)+\pi_b(c_i)d_j-c_i\pi_d(d_j)-\pi_d(c_i)d_j} \\
\nonumber =&& \mean{c_ie_j\tilde d_j}+\mean{\tilde c_i e_i d_j}-p\mean{\bar e_ic_id_j}\\
\nonumber && -p\mean{c_id_j(\bar e_j+\bar d_j)}-p(1-\epsilon)\mean{\bar c_ic_id_j} \\
&& -p(2-\epsilon)\mean{\bar d_ic_id_j}, \\
\label{eq:11}
\nonumber
\frac{d}{dt}\mean{d_id_j}=&& \mean{d_i\pi_b(d_j)+\pi_b(d_i)d_j-d_i\pi_d(d_j)-\pi_d(d_i)d_j} \\
\nonumber = && \mean{d_ie_j\tilde d_j}+\mean{\tilde d_i e_i d_j}-p\mean{d_id_j(\bar e_i+\bar e_j)}\\
&& -p\mean{d_id_j(\bar d_i+\bar d_j)},
\end{eqnarray}
where $\tilde x_i$ and $\bar x_i$ are defined just after Eqs. \eqref{eq:3} and \eqref{eq:5}, respectively. The two remaining moments, $\mean{c_ie_j}$ and $\mean{d_ie_j}$, can be obtained from the previous ones by means of the identity $1=e_i+c_i+d_i$, as $\mean{c_ie_j}=\mean{c_i}-\mean{c_ic_j}-\mean{c_id_j}$ and $\mean{d_ie_j}=\mean{d_i}-\mean{d_id_j}-\mean{d_ic_j}$.
Although the system of Eqs. \eqref{eq:7}-\eqref{eq:11} is exact and valid for any structure of neighbors (network), it is not closed, due to the presence of three-node correlations. Therefore, in order to obtain a closed set of equations, three approximations are explored. The first two make use of the mean-field approximation, where two-node correlations are ignored, and the third one uses the pair approximation. Furthermore, we restrict ourselves to regular networks where $k_i=k$ for all $i$, so as to simplify the description (now $\tilde x_i=\bar x_i=k^{-1} \sum_{j \in N_i} x_j$).
\subsection{Exact relations}
Before proceeding with the approximations, some exact relations will be derived. They apply to homogeneous steady-state configurations.
Consider first the case of only defectors. Since $\mean{c}=0$, we also have $\mean{cc}=\mean{cd}=\mean{ce}=0$. Using Eq.~\eqref{eq:8}, together with $\mean{de}=\mean{d}-\mean{dd}$, we have
\begin{equation}
\label{eq:42}
\mean{dd}=(1-p)\mean{d},
\end{equation}
and, with Eq.~\eqref{eq:11} and the identity $\mean{dde}+\mean{ddd}=\mean{dd}$,
\begin{equation}
\label{eq:43}
p\left[1-k(1-p)\right]\mean{d}+(k-1)\mean{ded}=0,
\end{equation}
which implies, in order to have positive solutions, $1-k(1-p)\le 0$, that is
\begin{equation}
\label{eq:44}
p\le 1-\frac{1}{k}.
\end{equation}
This is an overestimate of the critical value of $p$ for the extinction of defectors, valid for all $\epsilon\in[0,1]$. For $\epsilon=0$, where the model reduces to the SIS model, and for the square lattice ($k=4$), the previous estimate gives $0.75$, while the value from simulations is around $0.62$ \cite{saol02, vofama09}; see also Fig.\ \ref{fig:densities}.
For the only-cooperators case, we have $\mean{d}=0$ and $\mean{dd}=\mean{cd}=\mean{de}=0$. Using Eq.~\eqref{eq:7} together with $\mean{ce}=\mean{c}-\mean{cc}$, we get
\begin{equation}
\label{eq:55}
\mean{cc}=\frac{1-p}{1-\epsilon p}\mean{c},
\end{equation}
and, with Eq.~\eqref{eq:9} and the identity $\mean{cce}+\mean{ccc}=\mean{cc}$,
\begin{equation}
\label{eq:56}
\begin{split}
&\frac{p}{1-\epsilon p}\left[p(1-\epsilon)-(k-1)(1-p)\right]\mean{c}\\
&\qquad +(k-1)\left(\mean{cec}+p\epsilon\mean{ccc}\right)=0,
\end{split}
\end{equation}
which now implies $p(1-\epsilon)-(k-1)(1-p)\le 0$ or
\begin{equation}
\label{eq:57}
p\le 1-\frac{1-\epsilon}{k-\epsilon}, \qquad \text{with}\quad 1-\frac{1-\epsilon}{k-\epsilon}\ge 1-\frac{1}{k}.
\end{equation}
Again, this is an overestimate of the critical value of $p$ for extinction when there are only cooperators in the system. The critical value here is larger than or equal to that of Eq. \eqref{eq:44}, as expected due to the altruistic benefit. Equation \eqref{eq:57} also provides an estimate of the dependence of the critical value on $\epsilon$. In particular, it tends to $1$ for $\epsilon\to 1$, in agreement with the numerical simulations of Fig. \ref{fig:densities}.
Equations \eqref{eq:44} and \eqref{eq:57}, and also the other relations, are the same for $\epsilon=0$ provided we interchange the types of particles, because the only-defectors and only-cooperators models coincide in this limit. This can be seen from the rates defining the dynamics in Eqs. \eqref{eq:2}-\eqref{eq:5}: The rates for defectors in the absence of cooperators are the same as the rates for cooperators in the absence of defectors at $\epsilon=0$.
\subsection{Global mean-field approximation} \label{sec:global_mean}
For the global mean-field case, equivalent to the dynamics on a complete graph in the limit of infinite system size, correlations among nodes are absent. In general, assuming the mean-field approximation implies the following two approximations:
\begin{eqnarray}
\label{eq:12}
&& \mean{x_i x_j}\simeq \mean{x_i}\mean{x_j}, \qquad i\ne j \\
\label{eq:13}
&& \mean{x_i}\simeq \mean{x_j} \equiv \mean{x}, \qquad \text{for all } i.
\end{eqnarray}
This is also a good approximation when no correlations between agents are expected, for instance when there is only one kind of agent and the empty sites are homogeneously distributed. Then, the concentrations $\mean{c}$ and $\mean{d}$ of cooperators and defectors evolve, according to Eqs.~\eqref{eq:7} and \eqref{eq:8}, as
\begin{eqnarray}
\label{eq:14}
\nonumber
\frac{d}{dt}\mean{c}=&&\mean{c} \left\{(1-p)-(1-\epsilon p)\mean{c} \right. \\
&& \qquad \left.-\left[1+p(1-\epsilon)\right]\mean{d}\right\}, \\
\label{eq:15}
\frac{d}{dt}\mean{d}=&&\mean{d} \left[(1-p)-(1-p)\mean{c}-\mean{d}\right].
\end{eqnarray}
The system \eqref{eq:14} and \eqref{eq:15} can now be used to analyze the homogeneous steady-state solutions. Requiring stationarity, $\frac{d}{dt}\mean{c}=\frac{d}{dt}\mean{d}=0$, we find the trivial solution $\mean{c}=\mean{d}=0$ (all sites empty) and two other, nontrivial ones, namely
\begin{eqnarray}
\label{eq:16}
&& \mean{c}=0\; \mathrm{and} \; \mean{d}=1-p, \\
\label{eq:17}
&& \mean{c}=\frac{1-p}{1-\epsilon p}\; \mathrm{and} \; \mean{d}=0.
\end{eqnarray}
The trivial solution is clearly unstable, since the coefficient $1-p$ of the lowest-degree terms in Eqs. \eqref{eq:14} and \eqref{eq:15} is positive for $p< 1$. However, it is an absorbing state, and its presence becomes important for small system sizes, as already mentioned in Sec. \ref{sec:3}.
In order to assess the stability of the solution with only defectors, consider the perturbation of Eq.~\eqref{eq:16}: $\mean{c}=0+\mean{c}_1$ and $\mean{d}=1-p+\mean{d}_1$ with $\mean{c}_1\sim \mean{d}_1$. Then, up to linear order in the perturbations, we have
\begin{eqnarray}
\label{eq:18}
&&\frac{d}{dt}\mean{c}_1\simeq -p(1-p)(1-\epsilon)\mean{c}_1, \\
\label{eq:19}
&&\frac{d}{dt}\mean{d}_1\simeq -(1-p)\left[(1-p)\mean{c}_1+\mean{d}_1\right].
\end{eqnarray}
The first equation, and hence the second one, has $\mean{c}_1=\mean{d}_1=0$ as its steady solution, revealing the stable character of Eq.~\eqref{eq:16}. Proceeding similarly with the only-cooperators solution, Eq.~\eqref{eq:17}, we obtain the system
\begin{eqnarray}
\label{eq:20}
\frac{d}{dt}\mean{c}_1\simeq && -\frac{1-p}{1-\epsilon p}\left\{(1-\epsilon p)\mean{c}_1 \right.\\
&& \nonumber \qquad \left. +\left[1+p(1-\epsilon)\right]\mean{d}_1\right\}, \\
\label{eq:21}
\frac{d}{dt}\mean{d}_1\simeq && p(1-p)\frac{1-\epsilon}{1-\epsilon p}\mean{d}_1,
\end{eqnarray}
which now reveals the unstable character of the solution, since the solution of Eq.~\eqref{eq:21} increases exponentially with time. According to this analysis, in well-mixed populations, cooperators go extinct.
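As a numerical check, the mean-field system \eqref{eq:14}-\eqref{eq:15} can be integrated directly; the following hedged sketch (using \texttt{scipy}) illustrates the flow towards the only-defectors fixed point of Eq.~\eqref{eq:16}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def mean_field(t, y, p, eps):  # Eqs. (14)-(15)
    c, d = y
    dc = c * ((1 - p) - (1 - eps*p)*c - (1 + p*(1 - eps))*d)
    dd = d * ((1 - p) - (1 - p)*c - d)
    return [dc, dd]

p, eps = 0.4, 0.75
sol = solve_ivp(mean_field, (0.0, 500.0), [0.3, 0.3], args=(p, eps))
print(sol.y[:, -1])  # approaches (0, 1 - p) = (0, 0.6)
\end{verbatim}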
\subsection{Local mean-field approximation}
We can go one step beyond the global mean-field approximation by considering situations where the concentrations of cooperators and defectors change from site to site. In particular, we consider situations where the site dependence can be encoded through a vector $\mathbf r$, which is nothing but the spatial position vector in a regular graph. This way, we deduce in the sequel a macroscopic description that removes one of the approximations of the global mean field, namely that of Eq.~\eqref{eq:13}, but still neglects correlations, Eq.~\eqref{eq:12}. The procedure is similar to the one used in Ref. \cite{khlohe17}.
By looking at the dynamics on a length scale $L$ much larger than the typical distance between sites $l$, the relevant quantities become local concentrations:
\begin{eqnarray}
\label{eq:22}
&& \kappa(\mathbf r) \equiv \mean{c_i}, \\
\label{eq:23}
&& \delta(\mathbf r) \equiv \mean{d_i}.
\end{eqnarray}
In a regular graph in $\mathbb R^d$, for example, $\kappa(\mathbf r)$ and $\delta(\mathbf r)$ give the number of cooperators and defectors inside a region of volume $l^d$ centered at position $\mathbf r$. The new quantities are assumed to be smooth functions of $\mathbf r$, a property that allows us to relate the density at a neighboring site $j\in N_i$ at position $\mathbf r+\mathbf l$, namely $\chi(\mathbf r+\mathbf l)$ with $\chi=\kappa$ or $\chi=\delta$, to that at site $i$ at position $\mathbf r$, $\chi(\mathbf r)$, as
\begin{eqnarray}
\label{eq:24}
&&\chi(\mathbf r+\mathbf l)\simeq \chi(\mathbf r)+\nabla \chi(\mathbf r)\cdot \mathbf l+\frac{1}{2}\nabla \nabla \chi(\mathbf r): \mathbf l \mathbf l.
\end{eqnarray}
Hence, we have
\begin{equation}
\label{eq:25}
\mean{\bar x_i}=\frac{1}{k_i}\sum_{k\in N_i}\chi(\mathbf r+\mathbf l_k) \simeq \chi(\mathbf r)+\nabla^2_r\chi(\mathbf r).
\end{equation}
where we have assumed $\sum_{k\in N_i}\mathbf l_k\simeq 0$, which is an exact expression for a regular square lattice and quite a good approximation for \emph{isotropic} configurations. Moreover,
\begin{equation}
\label{eq:26}
\nabla^2_r\chi(\mathbf r)\equiv\frac{1}{2k_i}\sum_{k\in N_i}\nabla \nabla \chi(\mathbf r): \mathbf l_k \mathbf l_k\simeq \frac{l^2}{2d}\nabla^2\chi(\mathbf r),
\end{equation}
which is valid, again, under \emph{isotropic} configurations of sites.
With approximations \eqref{eq:12}, \eqref{eq:25}, and \eqref{eq:26}, the exact system \eqref{eq:7} and \eqref{eq:8} becomes the following reaction-diffusion system
\begin{eqnarray}
\label{eq:27}
\nonumber &&\partial_t\kappa=\kappa\left\{(1-p)-(1-\epsilon p)\kappa-\left[1+p(1-\epsilon)\right]\delta \right\} \\
&& \quad +\left[1-(1-\epsilon p)\kappa-\delta\right]\nabla_r^2\kappa -p(1-\epsilon)\kappa \nabla_r^2\delta, \\
\label{eq:28}
\nonumber &&\partial_t\delta=\delta\left\{(1-p)-(1-p)\kappa-\delta \right\} \\
&& \qquad +\left[1-\kappa-\delta\right]\nabla_r^2\delta+p\delta \nabla_r^2\kappa.
\end{eqnarray}
As expected, we recover the mean-field description for homogeneous solutions, hence we still have the solutions given in Eqs. \eqref{eq:16} and \eqref{eq:17}. However, an important benefit of the present description, compared to that of the global mean-field approximation, is the possibility of studying the latter solutions under local perturbations, in contrast to the homogeneous, global ones considered in the previous subsection.
Consider the homogeneous solution of Eq.~\eqref{eq:16}, $\kappa_0=0$ and $\delta_0=1-p$. Following the standard linear stability analysis, we seek solutions of the form $\kappa=\kappa_0+\kappa_1$ and $\delta=\delta_0+\delta_1$, with $\kappa_1\sim \delta_1\ll \delta_0$. After linearizing and seeking solutions of the form $\chi_1=\tilde \chi_1 e^{i\boldsymbol \xi \cdot \mathbf r}$, system \eqref{eq:27} and \eqref{eq:28} becomes
\begin{eqnarray}
\label{eq:29}
&&\partial_t\tilde\kappa_1=-p\left[(1-p)(1-\epsilon)+\frac{l^2}{2d}\xi^2\right]\tilde\kappa_1, \\
\label{eq:30}
&&\partial_t\tilde\delta_1=-\left[(1-p)+p\frac{l^2}{2d}\xi^2\right]\left[(1-p)\tilde\kappa_1+\tilde\delta_1\right].
\end{eqnarray}
The steady-state solution for any wavelength $\boldsymbol \xi$ is the trivial one, meaning that the only-defectors solution is linearly stable: any small initial spatial perturbation in the number of defectors (and also cooperators) tends to zero as time increases.
Proceeding similarly with the solution of Eq.~\eqref{eq:17}, we get
\begin{eqnarray}
\label{eq:31}
\nonumber &&\partial_t\tilde\kappa_1=-\left[(1-p)(1-\epsilon)+p\frac{l^2}{2d}\xi^2\right]\tilde\kappa_1, \\
&& \qquad -\frac{1-p}{1-\epsilon p}\left[1+p(1-\epsilon)\left(1-\frac{l^2}{2d}\xi^2\right)\right]\tilde\delta_1, \\
\label{eq:32}
&&\partial_t\tilde\delta_1=\frac{p(1-\epsilon)}{1-\epsilon p}\left[(1-p)-\frac{l^2}{2d}\xi^2\right]\tilde\delta_1.
\end{eqnarray}
In this case, the stability of the system depends on the value of $\xi$. Setting $\xi=2\pi/L$, the smallest allowed value for the given boundary conditions and hence the most unstable mode, the solution \eqref{eq:17} is stable for $p>p_c^*$ with
\begin{equation}
\label{eq:33}
p_c^*=1-\frac{2\pi^2l^2}{dL^2}\simeq 1-\frac{2\pi^2}{dN^{\frac{2}{d}}},
\end{equation}
where we have used the approximation $L/l\simeq N^{1/d}$. This means that, under this approximation, the only-cooperators solution is stable for small enough systems, for which $p_c^*$ becomes small or even negative. For $N\to \infty$ we have $p_c^*\to 1$, so the solution is unstable for all $p<1$, and we recover the mean-field result.
Although the local mean-field approximation could in principle be seen as very crude, it shows the importance of taking the system size into account when describing altruism, as already pointed out in Ref. \cite{sa18}. In this case, the inclusion of spatial dependence, while still neglecting correlations, stabilizes the only-cooperators solution for $p>p_c^*$. Moreover, the results suggest the existence of other, spatially non-homogeneous solutions, and the possibility of discontinuous (first-order) transitions among them. This is because the only-defectors solution always remains linearly stable, with no other stable solution close to it.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.49\textwidth]{pair_app_0.pdf}}
\caption{ \label{fig:pair_app_analytic}
Phases in pair approximation. The extinction transition between the regime of only cooperators (red area) to the empty system (white area) is given by the expression in $p$ and $\epsilon$ in Equation~\eqref{eq:67}. The transition between coexistence (green area) and the regime of only cooperators is described by Equation~\eqref{eq:78} using expressions \eqref{eq:79}-\eqref{eq:81}.
The transition between coexistence and the regime of only defectors (blue area) has an approximate description (dashed curve) in Equation~\eqref{eq:82} using expressions \eqref{eq:83}-\eqref{eq:85}. The exact solution (boundary between blue and green area) has been obtained as well, details given elsewhere.}
\end{figure}
\subsection{Pair approximation}
The previous mean-field approaches are expected to fail when the concentrations of defectors and cooperators are locally correlated. Since births occur among neighboring sites, correlations are expected to be important in general. Hence, we reconsider system \eqref{eq:7}-\eqref{eq:11} and try to express the three-node moments as functions of the one- and two-node mean values. Although different approaches are possible (see for instance \cite{khlohe17}), we explore here the so-called pair approximation. The pair approximation has been extensively applied to a variety of stochastic processes defined on networks, describing situations as diverse as spin dynamics \cite{olmesa93,gl13}, opinion dynamics \cite{vaeg08,scbe09,vacasa10,pecasato18,pecasato18a}, epidemics \cite{ledu96,eake02,tasigrhoki12}, and population dynamics \cite{namaiw97,iwnale98,lilihiiwboni13,lefedi03,thel03}. In each case, the pair approximation assumes that the probability of a given node quantity $x_i$, conditioned on the values of a neighboring site $x_j$ and of a next-neighboring site $x_k$, is independent of the latter \cite{maogsasa92}: $\text{Prob}(x_i|x_jx_k)\simeq \text{Prob}(x_i|x_j)$. In other words, the state of a neighbor of a given node is considered to be independent of the state of another neighbor. In our model, where $x_i\in\{c,d,e\}$ takes the values $0$ or $1$, the mean values $\mean{x_i x_j}$ and $\mean{x_i x_j x_k}$ are essentially the respective probabilities of the given quantities; hence, under the pair approximation $\mean{x_ix_jx_k}=\text{Prob}(x_i|x_jx_k)\text{Prob}(x_jx_k)\simeq \text{Prob}(x_i|x_j)\text{Prob}(x_jx_k)$, we have
\begin{equation}
\label{eq:36}
\mean{x_ix_jx_k}\simeq\frac{\mean{x_ix_j}\mean{x_jx_k}}{\mean{x_j}}.
\end{equation}
Note that the order of appearance of the variables inside the brackets is important: $x_i$ refers to a node which is a neighbor of $x_j$, and $x_j$ is a neighbor of $x_k$. Observe that the pair approximation keeps correlations regardless of the occupancy of the middle node, namely $\sum_{x_j\in\{c,d,e\}}\mean{x_ix_jx_k}=\mean{x_ix_k}\simeq\sum_{x_j\in\{c,d,e\}}\frac{\mean{x_ix_j}\mean{x_jx_k}}{\mean{x_j}}\ne \mean{x_i}\mean{x_k}$, in general.
For simplicity, we consider homogeneous situations for which system \eqref{eq:7}-\eqref{eq:11}, within the pair approximation of Eq. \eqref{eq:36}, becomes
\begin{eqnarray}
\label{eq:37}
\nonumber
\frac{d}{dt}\mean{c}&=&(1-p)\mean{ce}-p(1-\epsilon)\mean{cc}\\
&& -p(2-\epsilon)\mean{cd}, \\
\label{eq:38}
\frac{d}{dt}\mean{d}&=&(1-p)\mean{de}-p\mean{dd}, \\
\label{eq:39}
\nonumber
\frac{k}{2}\frac{d}{dt}\mean{cc}&=&\mean{ce}-p(1-\epsilon)\mean{cc}\\
\nonumber &&+(k-1)\left\{\frac{\mean{ce}^2}{\mean{e}}-p\left[\frac{\mean{cc}\mean{ce}}{\mean{c}} \right. \right. \\
&&\left.\left.+(1-\epsilon)\frac{\mean{cc}^2}{\mean{c}}+(2-\epsilon)\frac{\mean{cc}\mean{cd}}{\mean{c}}\right]\right\}, \\
\label{eq:40}
\nonumber k\frac{d}{dt}\mean{cd}&=&-p(2-\epsilon)\mean{cd}+(k-1)\left\{2\frac{\mean{ce}\mean{ed}}{\mean{e}} \right. \\
\nonumber &&-p\left[\frac{\mean{ec}{\mean{cd}}}{\mean{c}}+\frac{\mean{cd}\mean{de}}{\mean{d}}+\frac{\mean{cd}\mean{dd}}{\mean{d}} \right. \\
&&\left.\left.+(1-\epsilon)\frac{\mean{cc}\mean{cd}}{\mean{c}}+(2-\epsilon)\frac{\mean{cd}^2}{\mean{c}}\right]\right\}, \\
\label{eq:41}
\nonumber
\frac{k}{2}\frac{d}{dt}\mean{dd}&=&\mean{de}-p\mean{dd}+(k-1)\left[\frac{\mean{de}^2}{\mean{e}} \right.\\
&&\left. -p\left(\frac{\mean{dd}\mean{de}}{\mean{d}}+\frac{\mean{dd}^2}{\mean{d}}\right)\right],
\end{eqnarray}
where $\mean{xy}$ is for any two adjacent nodes with particles $x$ and $y$. Hence, $\mean{xy}=\mean{yx}$.
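A hedged numerical sketch of this system for the square lattice ($k=4$) is given below; the small regularizer guarding the conditional-probability denominators is our own numerical device, and integrations of this kind underlie curves like those in Fig.~\ref{fig:all_concentration}(b).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def pair_approx(t, y, p, eps, k=4):  # Eqs. (37)-(41)
    c, d, cc, cd, dd = y
    e = 1 - c - d
    ce = c - cc - cd                 # <ce> = <c> - <cc> - <cd>
    de = d - dd - cd                 # <de> = <d> - <dd> - <cd>
    c_, d_, e_ = (max(v, 1e-12) for v in (c, d, e))
    dc  = (1 - p)*ce - p*(1 - eps)*cc - p*(2 - eps)*cd
    dD  = (1 - p)*de - p*dd
    dcc = (2/k)*(ce - p*(1 - eps)*cc + (k - 1)*(ce**2/e_
          - p*(cc*ce/c_ + (1 - eps)*cc**2/c_ + (2 - eps)*cc*cd/c_)))
    dcd = (1/k)*(-p*(2 - eps)*cd + (k - 1)*(2*ce*de/e_
          - p*(ce*cd/c_ + cd*de/d_ + cd*dd/d_
               + (1 - eps)*cc*cd/c_ + (2 - eps)*cd**2/c_)))
    ddd = (2/k)*(de - p*dd + (k - 1)*(de**2/e_
          - p*(dd*de/d_ + dd**2/d_)))
    return [dc, dD, dcc, dcd, ddd]

y0 = [0.3, 0.3, 0.09, 0.09, 0.09]    # uncorrelated initial condition
sol = solve_ivp(pair_approx, (0.0, 2000.0), y0, args=(0.7, 0.75))
print("<c>, <d> =", sol.y[0, -1], sol.y[1, -1])
\end{verbatim}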
\subsubsection{Steady-state solutions}
The system \eqref{eq:37}-\eqref{eq:41} has several steady-state solutions. The most obvious one is the trivial solution without particles, $\mean{c}=\mean{d}=0$. This is the absorbing state we have already mentioned.
By setting all time derivatives of Eqs. \eqref{eq:37}-\eqref{eq:41} to zero and $\mean{c}=0$, we obtain the only-defectors steady-state solution:
\begin{equation}
\label{eq:53}
\mean{d}=\frac{(1-p)k-1}{k-(1+p)},
\end{equation}
valid for
\begin{equation}
\label{eq:54}
p\le 1-\frac{1}{k}.
\end{equation}
Observe that the previous inequality is the same as the one in Eq. \eqref{eq:44}, derived using exact relations. However, in this case, $p=1-1/k$ is the exact critical value for the extinction of defectors in the absence of cooperators, within the pair approximation.
The only-cooperators solution is obtained from Eqs. \eqref{eq:37}-\eqref{eq:41} as a steady-state solution with $d=0$. Now,
\begin{equation}
\label{eq:66}
\mean{c}=\frac{(1-p)k-(1-\epsilon p^2)}{[k-(1+p)](1-\epsilon p)},
\end{equation}
for
\begin{equation}
\label{eq:67}
p\le \frac{k-\sqrt{k^2-4\epsilon(k-1)}}{2\epsilon}\,,\qquad
\frac{k-\sqrt{k^2-4\epsilon(k-1)}}{2\epsilon}\ge 1-\frac{1}{k}.
\end{equation}
The equality in the last relation is attained for $\epsilon \to 0$. For the other limiting value of $\epsilon$, i.e.\ $\epsilon \to 1$, the upper allowed value of $p$ is $1$, as we also obtained exactly.
Other steady-state solutions describing coexistence, but close to the previous ones, can also be found as follows. First, we notice that the system \eqref{eq:37}-\eqref{eq:41}, under the steady-state condition, can be reduced to a nonlinear system of only two equations with $\mean{c}$ and $\mean{d}$ as unknown quantities. Second, we seek solutions close to the one-type solutions, i.e.\ $\mean{c}\simeq \frac{1-k(1-p)-\epsilon p^2}{(1-k+p)(1-\epsilon p)}$ and $\mean{d}\simeq 0$ for the only-cooperators case, and $\mean{c}\simeq 0$ and $\mean{d}\simeq \frac{1-k(1-p)}{1-k+p}$ for the only-defectors case. For the former, the resulting equations are linear and nontrivial solutions appear below the following line:
\begin{equation}
\label{eq:78}
\epsilon_c(p)=\frac{A(p)B(p)}{C(p)+\sqrt{C^2(p)-A^2(p)B(p)}},
\end{equation}
with
\begin{eqnarray}
\label{eq:79}
A(p)&=&2p(1-p)(k-1-kp) \\
\label{eq:80}
B(p)&=&(k+1-(k+2)p+2p^2)/[p(1-p)], \\
\nonumber
C(p)&=&k(k-1)-(2k^2-3k+2)p \\
\label{eq:81}
&&+(k^2-3k+1)p^2+(k+2)p^3-2p^4.
\end{eqnarray}
For the only-defectors case, the resulting set of equations is nonlinear, but one can still find a condition for a nontrivial solution to exist. Now, the nontrivial solutions are above $\epsilon_d(p)$, which has the following approximate expression
\begin{equation}
\label{eq:82}
\epsilon_d(p)\simeq \frac{E(p)}{F(p)}\left[1-\sqrt{1-\frac{2G(p)F(p)}{E^2(p)}}\right],
\end{equation}
with
\begin{eqnarray}
\nonumber
E(p)&=&(k-1)^3+(k-1)(4k^2-7k+8)p\\
\nonumber
&& -\{k[5k(k-5)+28]-3\}p^2\\
\label{eq:83}
&& -(13k-5)kp^3, \\
\nonumber
F(p)&=&2p\{(k-1)(2k^2-3k+2)\\
\nonumber
&&\quad -(2k^3-14k^2+14k-1)p \\
\label{eq:84}
&& \qquad -(9k-4)kp^2\}, \\
\nonumber
G(p)&=&(k-1-kp)[(k-1)^2\\
\label{eq:85}
&&\quad +(3k^2-7k+10)p+2(3k-1)p^2].
\end{eqnarray}
Since $\epsilon_c(p)\ge \epsilon_d(p)$, the coexistence solutions are in the region in between the two lines, as shown in Fig.\ \ref{fig:pair_app_analytic} for the square lattice ($k=4$).
Finally, there may be other solutions describing coexistence, not necessarily close to the only-cooperators or only-defectors ones. This can be shown for $\epsilon=1$, for which explicit expressions can be obtained. After some algebra, we get
\begin{eqnarray}
\label{eq:68}
&& \mean{c}=\frac{2(k-1)(2k-3)(1-3p)}{(1-p)[4(k-1)(k-p-2)+p+1]}, \\
\label{eq:69}
&& \mean{d}=\frac{(2k-3)[(4k-5)p-1]}{4(k-1)(k-p-2)+p+1}, \\
\label{eq:70}
&& \mean{cd}=\frac{4(k-1)(k-p-2)+p+1}{2(k-1)(2k-3)}\mean{c}\mean{d},
\end{eqnarray}
valid for
\begin{equation}
\label{eq:71}
\frac{1}{4k-5}\le p\le \frac{1}{3}.
\end{equation}
Moreover, it can be seen that this solution is linearly unstable, with only one unstable mode. However, the characteristic time of the unstable mode is much longer than the others, meaning that the system can stay close to this solution for a long time.
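As an illustration, for the square lattice ($k=4$) the expressions \eqref{eq:68}-\eqref{eq:70} reduce to
\begin{equation*}
\mean{c}=\frac{30(1-3p)}{(1-p)(25-11p)}\,,\qquad
\mean{d}=\frac{5(11p-1)}{25-11p}\,,
\end{equation*}
valid for $1/11\le p\le 1/3$. At the lower endpoint $p=1/11$ this branch merges with the only-cooperators solution, $\mean{c}=1$ and $\mean{d}=0$, coinciding with Eq.~\eqref{eq:66} at $\epsilon=1$; at the upper endpoint $p=1/3$ it merges with the only-defectors solution, $\mean{c}=0$ and $\mean{d}=5/8$, in agreement with Eq.~\eqref{eq:53}.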
\subsubsection{Stability of the steady-state solutions}
The stability of the only-defectors and only-cooperators solutions has been studied by means of a modified linear stability analysis of system \eqref{eq:37}-\eqref{eq:41}, following several steps. First, using the identities $\mean{ce}=\mean{c}-\mean{cc}-\mean{cd}$ and $\mean{de}=\mean{d}-\mean{cd}-\mean{dd}$, all mean values are expressed in terms of $\mean{c}$, $\mean{d}$, $\mean{cc}$, $\mean{cd}$, and $\mean{dd}$. Second, the homogeneous solution is linearly perturbed as
\begin{equation}
\label{eq:86}
\mathbf u=\mathbf u_0+\gamma \mathbf u_1,
\end{equation}
with $\mathbf u=(\mean{c},\mean{d},\mean{cc},\mean{cd},\mean{dd})$ the vector of the homogeneous solutions, $\mathbf u_0$ is the vector of the unperturbed solutions, $\mathbf u_1$ is the perturbation vector, and $\gamma$ a perturbative parameter. Third, the proposed solution is replaced in \eqref{eq:37}-\eqref{eq:41} and the resulting system is expanded up to linear order in $\gamma$. Contrary to the usual linear perturbation schemes, we obtain a nonlinear closed system of equations for the unknown perturbation quantities $u_{1,i},$ for $i=1,\dots, 5$. For both, the only-cooperators and only-defectors solutions, the equation for the perturbation can be written as
\begin{equation}
\label{eq:87}
\frac{d}{dt}\mathbf u_1=M(\beta)\mathbf u_1,
\end{equation}
with $M$ being a matrix and $\beta$ a linear function of $\mean{cc}/\mean{c}$, $\mean{cd}/\mean{c}$, $\mean{cd}/\mean{d}$, and $\mean{dd}/\mean{d}$, whose explicit form depends on the solution considered. In any case, $\beta$ is a bounded function, since $0\le \mean{xy}/\mean{x}\le 1$ for $x,y\in\{c,d\}$. Finally, the asymptotic behavior of $\mathbf u_1(t)$ for $t\to \infty$, hence the stable or unstable character of $\mathbf u_0$, can be determined from the spectra of $M(\beta)$ for any $\beta$, using the following lemma.
\emph{Lemma}: If all eigenvalues of $M(\beta)$ have negative real parts for all values of $\beta$, then $\mathbf u_0$ is linearly stable. \newline
\emph{Proof}: Given a time $t>0$ and an integer $n>0$, we define $t_i=\frac{t}{n}i$ for $i=0,\dots,n$. Thanks to the Mean Value Theorem, it is $\mathbf u_1(t_i)=\left(I+M_i\frac{t}{n}\right)\mathbf u_1(t_{i-1})$ for $i\ge 1$, where $M_i$ is the value of $M$ at some time in $(t_{i-1},t_i)$ and use has been made of Eq. \eqref{eq:87} to evaluate the time derivative. By iteration, $\mathbf u_1(t_n)=\left[\prod_{k=1}^n\left(I+M_k\frac{t}{n}\right)\right]\mathbf u_1(0)$. Denoting by $\lVert\cdot\rVert$ any vector norm, we have $\lVert \mathbf u_1(t)\rVert = \lVert \left[\prod_{k=1}^n(I+M_k\frac{t}{n})\right]\mathbf u_1(0) \rVert \le \lVert (I+\tilde M\frac{t}{n})^n\mathbf u_1(0) \rVert$, where $\tilde M$ is such that $\lVert (I+\tilde M\frac{t}{n})\mathbf u_1(0) \rVert=\max_{k}\lVert\left(I+M_k\frac{t}{n}\right)\mathbf u_1(0) \rVert$. Taking $n\to \infty$, $\lVert \mathbf u_1(t)\rVert \le \lVert e^{\tilde M t}\mathbf u_1(0)\rVert$, which tends to zero as $t\to \infty$, since all eigenvalues of $\tilde M$ have negative real parts. $\square$
Using the lemma, we see that the only-cooperators solution is stable above the line $\epsilon_c(p)$ given by Eq. \eqref{eq:78}, and the only-defectors solution is stable below the line $\epsilon_d(p)$ given approximately by Eq. \eqref{eq:82}. This implies that the instability of the one-type solutions is due to the presence of coexistence solutions, which become stable. Numerical evaluation of the time evolution of system \eqref{eq:37}-\eqref{eq:41} confirms the theoretical analysis.
\section{Discussion}
\label{sec:5}
As the root of the present work, we introduce a basic stochastic model of a spatially extended population of altruistic and non-altruistic agents, called cooperators and defectors.
The population evolves by a birth-death process. In line with the considerations by Huang and colleagues \cite{Huang:2015}, an agent's interaction with another agent influences the death rates only.
Agents' interactions are altruistic acts. They lower the death rate of the recipient while increasing the death rate of the donor, relative to a baseline death rate $p$ for agents in isolation, $p$
being one of the two parameters of the model. The benefit-cost ratio of the altruistic act is encoded in the second parameter $\epsilon$.
Results are obtained as (1) stochastic simulations of finite systems and (2) stationary solutions and their stability in approximate descriptions by rate equations. The pair approximation, neglecting all spatial correlations except those between nearest neighbors, yields our main result: For any benefit-cost ratio above $1$, the stable stationary solutions as a function of the baseline death rate $p$ display (i) a regime of co-existence of cooperators and defectors and (ii) a regime of a population of cooperators only. In the $(p,\epsilon)$ parameter plane, these regimes and the related transitions appear as a continuation of the known extinction transition for a spatial population without cooperative interaction (also known as the contact process or asynchronous SIS model). The latter case corresponds to a benefit-cost ratio of exactly $1$ ($\epsilon=0$).
The phase diagram from pair approximation is fully qualitatively consistent with that from stochastic simulation with finite square lattices.
Simulations of sufficiently large instances of $k$-regular random graphs yield an equivalent phase diagram (results not shown here); this holds also for preliminary simulation results on other graphs,
including scale-free \cite{baal99} and small-world networks \cite{wast98}. Thus we speculate that the observed type of $(p,\epsilon)$ phase diagram is generic, holding for most types of connected
sparse graphs. For dense graphs, however, we expect the mean-field behavior without stable cooperation, as seen in Sec.~\ref{sec:global_mean}.
Consider a spatially extended population subject to a decline in livability, which in reality may be a reduction of food resources or an increase of predators. In our model, this scenario is represented by increasing $p$ and comes with the following prediction. Initially without cooperators, the concentration of agents decreases until reaching a transition point with the onset of co-existence. In this
regime, the concentration of defectors further decreases; this decrease is overcompensated by the increase in cooperators. Thus in the co-existence regime, there is net population growth under increasing $p$
\cite{Sella:2000}. Further increase of $p$ first leads to a regime with a population containing cooperators only and then an extinction phase where zero population size is the only stable solution.
From earlier studies, both theoretical \cite{Sella:2000} and experimental, an increasing baseline death rate is known to enhance cooperation. Perturbing populations of yeast cells by dilution shocks, S\'{a}nchez and Gore find populations with larger fractions of cooperative cells (providing digestive enzyme to the population) more likely to survive \cite{sago13}. Datta and co-authors observe cooperation being promoted when a population expands the space it occupies, cooperators forming a wave of invaders \cite{dakocvdugo13}.
According to the rule by Ohtsuki and colleagues \cite{ohhalino06}, cooperation supersedes defection when the benefit-cost ratio is larger than the agent's number of neighbors $z$. While their theory assumes a population of constant size and each agent with a constant number $z$ of neighbors, we here see cooperation enhanced when the number of neighbors (occupied adjacent sites) is reduced dynamically due to a shrinking population density.
When the population most ``needs'' it, i.e.\ at low density close to extinction, cooperation appears as a stable stationary solution for any benefit-cost ratio above 1. Future work may check if a rule relating benefit-cost ratio and neighborhood size characterizes the appearance of stable cooperation also in the present model with varying population size. For experimentally testing the present model's predictions, the unperturbed steady state of a population would have to be observed.
Giving up the spatial homogeneity of the baseline death rate $p$, we have investigated the scenario of a gradient between low $p$ (high livability) and high $p$ (low livability). The regimes encountered previously by tuning $p$ for the whole system are now found simultaneously at their corresponding spatial position. In particular, high concentration of cooperators is found next to the region uninhabited due to large death rate $p$. There is a region of co-existence where total population concentration increases with $p$ also spatially. Cooperation arises when and where needed to avoid extinction.
\section*{Acknowledgments}
Partial financial support has been received from the Agencia Estatal de
Investigación (AEI, Spain) and Fondo Europeo de Desarrollo Regional
under Project PACSS RTI2018-093732-B-C21 (AEI/FEDER, UE) and the Spanish
State Research Agency, through the María de Maeztu Program for Units of
Excellence in R\&D (MDM-2017-0711).
KK acknowledges funding from MINECO through the Ram{\'o}n y Cajal program
and through project SPASIMM, FIS2016-80067-P (AEI/FEDER, EU).
\bibliographystyle{unsrt}
\section{Introduction}
The Shannon channel capacity gives the highest bit rate at which information can be transmitted with an arbitrarily small error probability. A standard way to model the input-output relation is the memoryless additive white Gaussian noise (AWGN) channel
\begin{eqnarray}
\begin{array}{l}\label{equ-ch}
y = x + n,
\end{array}
\end{eqnarray}
where $y$ is the received signal, $x$ is the transmitted signal, and $n$ is the AWGN component drawn from a normal distribution of power $\sigma_N^2$, denoted by $n \sim \mathcal{N}(0,\sigma_N^2)$~\cite{Shannon1948}.
In the literature, the channel capacities of finite-alphabet inputs have been calculated in terms of the reliable transmission bit rates (RTBRs) as
\begin{equation}
\begin{array}{l}\label{equ2}
\rm{I}(X;Y) =\rm{H}(Y) - \rm{H}(N)
\end{array}
\end{equation}
where $\rm{I}(X;Y)$ is the mutual information, ${\rm{H}}(Y)$ is the entropy of the received signal and ${\rm{H}}(N) = {\log _2} (\sqrt{2 \pi e \sigma_N^2})$ is that of the AWGN. Some numerical results of \eqref{equ2} are shown in Fig.~1, where the capacity of the Gaussian-type signal input is also plotted as a reference.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig1}
\caption{Reliable transmission bit rates for BPSK, QPSK and Gaussian type.}
\label{fig1}
\end{figure}
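The curves of Fig.~1 can be reproduced by direct numerical evaluation of \eqref{equ2}. The following minimal sketch (in Python) treats the real-valued BPSK channel and models QPSK as two independent quadrature BPSKs, each carrying half of the symbol energy; the normalization of the noise power per dimension is an assumption of this sketch:
\begin{verbatim}
import numpy as np

def mi_bpsk(snr):
    """I(X;Y) = H(Y) - H(N) in bits/symbol for equiprobable BPSK."""
    sigma = 1.0
    a = np.sqrt(snr)*sigma                       # amplitude, E_s = a**2
    y = np.linspace(-a - 8*sigma, a + 8*sigma, 20001)
    gauss = lambda m: (np.exp(-(y - m)**2/(2*sigma**2))
                       / np.sqrt(2*np.pi*sigma**2))
    py  = 0.5*(gauss(a) + gauss(-a))             # density of Y
    h_y = -np.trapz(py*np.log2(py + 1e-300), y)  # entropy H(Y)
    h_n = 0.5*np.log2(2*np.pi*np.e*sigma**2)     # AWGN entropy H(N)
    return h_y - h_n

def mi_qpsk(snr):
    """QPSK as two independent quadrature BPSKs, each with E_s/2."""
    return 2.0*mi_bpsk(snr/2.0)

for snr_db in (0, 5, 10):
    snr = 10**(snr_db/10.0)
    print(snr_db, mi_bpsk(snr), mi_qpsk(snr))
\end{verbatim}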
Though the capacity concept has held for decades, there have been some
considerations of the possibility of exceeding these capacities~\cite{jiao}. A mathematical incentive can be found in the strict concavity of the mutual information curves shown in Fig.~1, from which one can conclude
\begin{equation}
\begin{array}{l}\label{equ3}
\tilde{\rm{I}}_{x}\left[(E_1+E_2)/\sigma_N^2\right] < \tilde{\rm{I}}_{x_1}(E_1/\sigma_N^2)+\tilde{\rm{I}}_{x_2}(E_2/\sigma_N^2)
\end{array}
\end{equation}
when $x=x_1+x_2$ is the signal superposition, $x_1$ and $x_2$ are two independent signals, and $E$, $E_1$ and $E_2$ are the symbol energies of $x$, $x_1$ and $x_2$, respectively. In contrast to conventional signal superposition methods, obtaining a gain from \eqref{equ3} requires no inter-symbol interference, i.e., no interference between $x_1$ and $x_2$.
Nevertheless, a great difficulty is encountered when one tries to organize the signal superposition such that the separation required to extract a contribution from \eqref{equ3} remains possible.
This paper pursues, however, the inequality \eqref{equ3} by creating a new method, referred to as the orthogonal cocktail BPSK, that works in Hamming and Euclidean space to separate the parallel transmissions of the independent signals. The derivations are done under the assumption of ideal channel codes that allow error-free transmission of BPSK and QPSK.
Throughout the present paper, we use capital letters to express vectors and small letters to indicate their components, e.g., $A = \{a_1,a_2, ...., a_M\}$, where $A$ represents the vector and $a_i$ its $i$th component. In addition, we use $\hat{y}$ to express the estimate of $y$ at the receiver and $\tilde{\rm{I}}(\gamma)$ to express the mutual information $\rm{I}(X;Y)$ with the SNR, $\gamma$, as the argument \cite{Verdu2007}. The details are introduced in the following sections.
\section{Signal Superposition- and Separation Scheme}
Let us consider a binary information source bit sequence
which is partitioned into two independent subsequences expressed
in vector form of $C^{(i)} = \{c^{(i)}_1,c^{(i)}_2,.....,c^{(i)}_{K_i}\}$, where $K_i$ is the length of the source subsequence and $i=1,2$ indicates the two source subsequences.
The two source subsequences are separately encoded, in Hamming space, by two difference channel code matrices
\begin{eqnarray}
\begin{array}{l}\label{equ-code}
v^{(i)}_{m} = \sum\limits_{k_i}g^{(i)}_{mk_i}c^{(i)}_{k_i}
\end{array}
\end{eqnarray}
where $v^{(i)}_m$ is the $m$th component of the channel code $V^{(i)}$, and $g^{(i)}_{mk_i}$ is the element of the code matrix $G^{(i)}$ for $i=1,2$ and $m=1,2,....,M$, respectively. We note that $M$ is the length of the channel code word, and $R_1=K_1/M$ and $R_2=K_2/M$ are the two code rates, which need not be equal.
For the signal modulations, we borrow the QPSK constellation to map the two channel codes, $V^{(1)}$ and $V^{(2)}$, into the Euclidean space specified by
$s^{(1)}=\{\sqrt{2}\alpha,j0\}$, $s^{(2)}=\{0, j\sqrt{2}\alpha\}$, $s^{(3)}=\{-\sqrt{2}\alpha,j0\}$ and $s^{(4)}=\{0, -j\sqrt{2}\alpha\}$, where $\alpha>0$ and $j=\sqrt{-1}$ , as shown in Fig.2.
In contrast to the conventional QPSK modulation, the proposed method allows $V^{(1)}$ to be demodulated and decoded separately from $V^{(2)}$. This decoding scheme is defined as partial decoding in this approach, because only one source subsequence, i.e.\ $C^{(1)}$, is decoded.
Consequently, using the decoding results of $C^{(1)}$ allows a reliable separation of $V^{(2)}$ from $V^{(1)}$. Then, $V^{(2)}$ can be demodulated over two perpendicular BPSKs: one is constructed by $s^{(1)}$ and $s^{(3)}$ and the other by $s^{(2)}$ and $s^{(4)}$.
More importantly, the Euclidean distance between the two signal points within each BPSK of the decoupling is $2\sqrt{2}\alpha$, i.e.\ larger than $2\alpha$, which eventually results in an RTBR gain, as found later. Thus, we refer to the proposed method as the orthogonal cocktail BPSK (OCB), as explained in the following paragraphs.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\textwidth]{fig2}
\caption{Constellation of the proposed method.}
\label{fig2}
\end{figure}
The OCB modulation is classified into two cases with respect to the bit value $v^{(1)}_m =0$ or $1$. Case I corresponds to $v^{(1)}_m=0$, whereby we map $v^{(1)}_m=0$ and $v^{(2)}_m=0$ onto $s^{(1)}_m$, and $v^{(1)}_m=0$ and $v^{(2)}_m=1$ onto $s^{(3)}_m$. Effectively, the BPSK in the horizontal direction is used for the signal mapping of case I.
Case II corresponds to $v^{(1)}_m=1$, whereby $v^{(1)}_m=1$ and $v^{(2)}_m=0$ are mapped onto $s^{(2)}_m$, and $v^{(1)}_m=1$ and $v^{(2)}_m=1$ onto $s^{(4)}_m$, where one finds the BPSK in the vertical direction.
To express the OCB modulation more intuitively, the signal mapping of the two cases is listed in Table I, in which $s^{(\kappa)}_m$ for $\kappa=1,2,3,4$ is the QPSK constellation point with the sequential index $m$ added.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.5}
\centering
\small
\caption{Signal modulation results.}
\label{Table1}
\begin{tabular}{c|c|c|c}
\hline
\multirow{2}{*}{Case I}
& \tabincell{c}{$v^{(1)}_m=0$} & \tabincell{c}{$v^{(2)}_m=0$} & \tabincell{c}{$s^{(1)}_m$} \\
\cline{2-4}
& \tabincell{c}{$v^{(1)}_m=0$} & \tabincell{c}{$v^{(2)}_m=1$} & \tabincell{c}{$s^{(3)}_m$} \\
\hline
\multirow{2}{*}{Case II}
& \tabincell{c}{$v^{(1)}_m=1$} & \tabincell{c}{$v^{(2)}_m=0$} & \tabincell{c}{$s^{(2)}_m$} \\
\cline{2-4}
& \tabincell{c}{$v^{(1)}_m=1$} & \tabincell{c}{$v^{(2)}_m=1$} & \tabincell{c}{$s^{(4)}_m$} \\
\hline
\end{tabular}
\end{table}
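The mapping of Table I, together with the decoupled detection rules given below, can be summarized in a short sketch (function names and the hard-decision thresholds are illustrative; the sign of the real or imaginary part of the received sample decides between the two points of the respective BPSK):
\begin{verbatim}
import numpy as np

ALPHA = 1.0
S = {1:  np.sqrt(2)*ALPHA + 0j,    # s(1)
     2:  1j*np.sqrt(2)*ALPHA,      # s(2)
     3: -np.sqrt(2)*ALPHA + 0j,    # s(3)
     4: -1j*np.sqrt(2)*ALPHA}      # s(4)

def ocb_modulate(v1, v2):
    """Map one channel-bit pair (v1_m, v2_m) to a symbol (Table I)."""
    if v1 == 0:                          # Case I: horizontal BPSK
        return S[1] if v2 == 0 else S[3]
    return S[2] if v2 == 0 else S[4]     # Case II: vertical BPSK

def ocb_detect_v2(y, v1_hat):
    """Detect v2_m from sample y, given the reconstructed v1_m."""
    if v1_hat == 0:
        return 0 if y.real > 0 else 1    # decide s(1) vs. s(3)
    return 0 if y.imag > 0 else 1        # decide s(2) vs. s(4)

rng = np.random.default_rng(0)
y = ocb_modulate(1, 0) + rng.normal(0, 0.3) + 1j*rng.normal(0, 0.3)
print(ocb_detect_v2(y, v1_hat=1))        # 0 with high probability
\end{verbatim}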
Then, the transmitter inputs one symbol after another into the AWGN channel via
\begin{eqnarray}
\begin{array}{l}\label{equ11}
y_m = s^{(\kappa)}_m + n_m, \ \ \ for \ \ m\ =\ 1, \ 2,\, \dots, M
\end{array}
\end{eqnarray}
where $y_m$ is the received signal, $s^{(\kappa)}_m$ is the transmitted symbol and $n_m$ is the Gaussian noise, statistically equivalent to that in \eqref{equ-ch}.
At the receiver, all received signals in Euclidean space are recorded sequentially. The demodulation starts from $V^{(1)}$ by
\begin{eqnarray}
\begin{array}{l}\label{equ-v1}
\hat{y}_m = s^{(1)}_m \ \ \ or \ \ s^{(3)}_m\ \ \ \ for \ \ \ v^{(1)}_m=0
\end{array}
\end{eqnarray}
and
\begin{eqnarray}
\begin{array}{l}\label{equ-c1}
\hat{y}_m = s^{(2)}_m \ \ or \ \ s^{(4)}_m\ \ \ \ for \ \ \ v^{(1)}_m=1
\end{array}
\end{eqnarray}
where $ \hat{y}_m $ is the estimate of $ y_m $. Then, we work on the partial decoding scheme defined above by using the estimates of \eqref{equ-v1} and \eqref{equ-c1} to obtain $\hat{C}^{(1)}$.
Once $\hat{C}^{(1)}$ has been obtained, the receiver reconstructs the channel code $V^{(1)}$ by
\begin{eqnarray}
\begin{array}{l}\label{equ-re1}
\hat{v}^{(1)}_m = \sum\limits_{k_1}g^{(1)}_{mk_1} \hat{c}^{(1)}_{k_1},
\end{array}
\end{eqnarray}
which can be used to decouple the QPSK into the two perpendicular BPSKs in Euclidean space.
\begin{figure}[ht]
\centering
\subfigure[]{
\includegraphics[width=0.3\textwidth]{fig3a}
\label{fig3a}}
\subfigure[]{
\includegraphics[width=0.3\textwidth]{fig3b}
\label{fig3b}}
\caption{The mapping symbols for (a) case I and (b) case II.}
\label{fig3}
\end{figure}
The results of \eqref{equ-re1} can be regarded as a reliable reconstruction, where $\hat{v}^{(1)}_m = 0 $ indicates that the recorded signal belongs to case I, while $\hat{v}^{(1)}_m = 1$ indicates case II. Thus, the two perpendicular BPSKs can be decoupled as shown in Fig.3(a)(b), respectively. This allows the detection of $v^{(2)}_m$ as follows.
If $\hat{v}^{(1)}_m=0$, the receiver detects the recorded signal $s^{(\kappa)}_m$ by
\begin{eqnarray}
\begin{array}{l}\label{equ-v21}
\hat{y}_m = s^{(1)}_m \ \ for\ \ v^{(2)}_m=0
\end{array}
\end{eqnarray}
and
\begin{eqnarray}
\begin{array}{l}\label{equ-v2}
\hat{y}_m = s^{(3)}_m\ \ for\ \ v^{(2)}_m=1
\end{array}
\end{eqnarray}
If $\hat{v}^{(1)}_m=1$, the receiver detects $v^{(2)}_m$ by
\begin{eqnarray}
\begin{array}{l}\label{equ-v31}
j\hat{y}_m = js^{(2)}_m \ \ for\ \ v^{(2)}_m=0
\end{array}
\end{eqnarray}
and
\begin{eqnarray}
\begin{array}{l}\label{equ-v3}
j\hat{y}_m = js^{(4)}_m\ \ for\ \ v^{(2)}_m=1
\end{array}
\end{eqnarray}
Then, by taking the estimates of \eqref{equ-v21}, \eqref{equ-v2}, \eqref{equ-v31} and \eqref{equ-v3} to the decoding of $C^{(2)}$, we obtain $\hat{C}^{(2)}$.
In a practical situation, when an error occurs in the reconstruction of $\hat{v}^{(1)}_m$, the detection of $v^{(2)}_m$ is wrong with $50\%$ probability. Then, the decoding can suffer from error propagation.
However, when working with ideal low-density block codes, the vanishing error probability of $\hat{C}^{(1)}$ leads to a vanishingly small error probability in the reconstruction of $\hat{V}^{(1)}$. Thus, the error rate problem in the signal separation can be neglected when studying the capacity issue.
\section{Upper-Bound Issue}
Assuming that we work with ideal channel codes that allow error-free transmission of QPSK and BPSK, the RTBR of the OCB method is found to be higher than that of the QPSK input, as proved in the following paragraphs.
First, we prove that the RTBR of $C^{(1)}$ is half that of the QPSK input:
\begin{eqnarray}
\begin{array}{l}\label{equ12}
\mathbb{R}_{c1} = \frac{1}{2}\tilde{\rm{I}}_q(2\alpha^2/\sigma_N^2)
\end{array}
\end{eqnarray}
where $\mathbb{R}_{c1}$ is the RTBR of $C^{(1)}$ and $\tilde{\rm{I}}_q$ is the mutual information of the QPSK input.
Proof:
In order to prove this issue, we first recall the following theorem: when the Euclidean distance $\tilde{d}(\xi,\xi')$ is the same, a larger Hamming distance of the channel codes leads to a smaller BER. This applies when comparing the OCB with conventional BPSK, since the source code $C^{(1)}$ can be viewed as a QPSK coded modulation in which half of the source bits have been deleted. Hence, the OCB has a smaller BER in comparison with that of the QPSK input. Thus, for infinitely long channel codes, whenever the transmission of the QPSK input achieves an arbitrarily small error probability, the partial decoding of $C^{(1)}$ does so as well.
The RTBR of $C^{(1)}$ is half that of the QPSK input because the former transmits one channel bit per symbol, while the latter transmits two.
Once $C^{(1)}$ has been transmitted to the receiver without error, the demodulation of $V^{(2)}$ can be done by using the two BPSKs, in each of which the Euclidean distance between the two signal points is $2\sqrt{2}\alpha$. Thus, the symbol energy is found at $2\alpha^2$, which should be used to calculate the mutual information
\begin{eqnarray}
\begin{array}{l}\label{equ15}
\mathbb{R}_{c2}=\tilde{\rm{I}}_b(2\alpha^2/\sigma_N^2)
\end{array}
\end{eqnarray}
where $\mathbb{R}_{c2}$ is the RTBR of $C^{(2)}$, and $\tilde{\rm{I}}_b$ is the mutual information of BPSK.
Finally, the summation of RTBRs in \eqref{equ12} and \eqref{equ15} yields
\begin{eqnarray}
\begin{array}{l}\label{result2}
\mathbb{R}_J = \frac{1}{2}\tilde{\rm{I}}_q(2\alpha^2/\sigma_N^2)+\tilde{\rm{I}}_b(2\alpha^2/\sigma_N^2)
\end{array}
\end{eqnarray}
where $\mathbb{R}_J$ is the RTBR of this approach.
The numerical results of \eqref{result2} are plotted in Fig.~4, where one can see that the curve of the OCB lies to the left of that of QPSK. This indicates an RTBR exceeding that of the QPSK input.
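Reusing the mutual-information routines sketched in the Introduction, the RTBR of \eqref{result2} can be evaluated directly, with both terms taken at the common argument $2\alpha^2/\sigma_N^2$:
\begin{verbatim}
# RTBR of the OCB: half the QPSK rate plus one full BPSK rate,
# both evaluated at the same linear SNR 2*alpha**2 / sigma_N**2
def rtbr_ocb(snr):
    return 0.5*mi_qpsk(snr) + mi_bpsk(snr)
\end{verbatim}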
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig4}
\caption{RTBRs of OCB compared with QPSK and BPSK versus the linear ratio $E_s/\sigma_N^2$.}
\label{fig4}
\end{figure}
\section{Conclusion}
In this paper, we proposed the OCB method for increasing the RTBR beyond that of the QPSK input and, even, the Shannon capacity of Gaussian-type signals. The proposed method works in Hamming and Euclidean space to separate the two independent signals transmitted in parallel over an AWGN channel. Theoretical derivations prove this approach based on the assumption of ideal channel codes.
\section{Introduction}
In closed systems, Quantum Phase Transitions (QPTs) are defined as non-analytic changes of the ground state energy when a control parameter other than temperature is varied across a critical point~\cite{Sachdev-QPT}.
They are accompanied by non-analytic changes in observables or correlation functions~\cite{LMG-thermodynamical_limit-Mosseri,Hirsch-Dicke_TC_-quantum-and-semi-analysis-chaos,Dicke_Entanglement_and_QPT-Brandes} and form a fascinating research area on their own.
Nowadays, it is possible to study such QPTs in experimental setups with cold atoms~\cite{Baumann-Dicke_qpt,Baumann-symm-break-in-Dicke-QPT,brennecke2013real,LMG-Exp_Bifurcation_rabi_to_jesophson-Oberthaler,Ritsch-Domokos-Cold-atoms-in-opt-potential}, which provide high degree of control and allow to test theoretical predictions.
However, each experimental set-up is an open system, such that the impact of the reservoir on the QPT should not be neglected.
To the contrary, the presence of a reservoir can fundamentally change the nature of the QPT.
For example, in the famous Dicke phase transition, it is the presence of the reservoir that actually creates a QPT via the environmental coupling of a collective spin~\cite{Dicke-Dicke_Modell}.
With the renewed interest in quantum thermodynamics, it has become a relevant question whether QPTs can be put to use e.g.\ as working fluids of quantum heat engines~\cite{fusco2016a,cakmak2016a,ma2017a,kloc2019a}.
This opens another broad research area of dissipative QPTs in non-equilibrium setups.
Here, the non-equilibrium configuration can be implemented in different ways, e.g.\ by periodic driving~\cite{Dicke-nonequilibrium_qpt-bastidas,LMG-ac_driven_QPT-Georg,Bastidas-Critical_quasienergy_in_driven_systems}, by quenching~\cite{Dicke-Robust_quantum_correlation_with_linear_increased_coupling-Acevedo,LMG-Nonadiabatic_dynamics_of_ESQPT-kopylov,LMG-Criticality_revealed_and_quench_dynamics-Campbell}, by coupling to reservoirs~\cite{Scully-quantum_optics,LMG-collective_and_independent-decay-Lee,Kopylov_Counting-statistics-Dicke} or by a combination of these approaches~\cite{mostame2007a,mostame2010a}.
One has even considered feedback control of such quantum-critical systems~\cite{Dicke_Dynamical_phase_transition_open_Dicke-Klinder,Thermalization-Pseudo_NonMarkovian_dissipative_system-Chiocchetta,Feedback-mirror_propiertes_as_time_delay_fb-carmele,Kopylov-LMG_ESQPT_control,Pyragas-Uravaling_coherent_FB-Kabuss}.
All these extensions should however be applied in combination with a reliable microscopic description of the system-reservoir interaction.
For example, in the usual derivation of Lindblad master equations one assumes that the system-reservoir interaction is weak compared to the splitting of the system energy levels~\cite{Scully-quantum_optics,Breuer-open_quantum_systems}.
In particular in the vicinity of a QPT -- where the energy gap above the ground state vanishes -- this condition cannot be maintained.
Therefore, while in particular the application of the secular approximation leads to a Lindblad-type master equation preserving the density matrix properties, it has the disadvantage that its range of validity is typically limited to non-critical points or to finite-size scaling investigations~\cite{vogl2012b,schaller2014a}.
In principle, the weak-coupling restriction can be overcome with different methods such as e.g.\ reaction-coordinate mappings~\cite{Schaller-QS_far_from_equilibrium,nazir2018a,Thermodynamics-Nonequilibr_react_coordinate-Strasberg}.
These however come at the price of increasing the dimension of the system, which renders analytic treatments of already complex systems difficult.
In this paper, we are going to study at the example of the Lipkin-Meshkov-Glick (LMG) model how a QPT is turned dissipative by coupling the LMG system~\cite{LMG-validity_many_body_approx-Lipkin} to a large environment.
To avoid the aforementioned problems, we use a polaron~\cite{mahan2013many,Polaron-inelastic_resonant_tunneling_electrons_barrier-Glazman,Polaron-electron_phonon_resonant_tunneling-wingreen,Polaron-coherent_collective_effects-_mesoscopic-Brandes,Polaron-electron_transistor_strong_coupling_counting-Schaller} method, which allows to address the strong coupling regime~\cite{Polaro-spin_boson_dynamics-Thorwart,Polaron-spin_boson_comparisson-Wilhelm,Polaron-dissipation_two_level_comparisson-brandes,Dicke_ultra-strong-coupling-limit-qpt_Bucher,Schaller-QS_far_from_equilibrium,Krause2015,Keeling_PRL-nonequilibrium_model_photon-cond,PhotCond-interplay_coh_disisp_dynamics-Milan} without increasing the number of degrees of freedom that need explicit treatment.
In particular, we show that for our model the position of the QPT is robust in the presence of dissipation.
We emphasize that the absence of a reservoir-induced shift -- in contrast to mean-field-predictions~\cite{Bhaseen_dynamics_of_nonequilibrium_dicke_models,Dicke_open-critical_exponent_of_noise-Nagy,Kopylov_Counting-statistics-Dicke,Rabi_dissipative-QPT_Plenio,Dicke-dissipative_bistability_noise_nonthermal-Buchhold,Dicke-dynamical_symmetry_breaking-Li,Morrison-Dissipative_LMG-and_QPT} --
is connected with starting from a Hamiltonian with a lower spectral bound and holds without additional approximation.
Our work is structured as follows.
In Sec.~\ref{sec:1} we introduce the dissipative LMG model, in Sec.~\ref{sec:2} we show how to diagonalize it globally using the Holstein-Primakoff transformation.
There, we also derive a master equation in both the original and the polaron frame, and show that the QPT cannot be modeled within the former, whereas within the latter the QPT position is not shifted.
Finally, we discuss the effects near the QPT by investigating the excitations in the LMG system and the waiting time distribution of emitted bosons in Sec.~\ref{sec:3}.
\section{Model}\label{sec:1}
\subsection{Starting Hamiltonian}
The isolated LMG model describes the collective interaction of $N$ two-level systems with an external field and among themselves.
In terms of the collective spin operators
\begin{align}
J_\nu = \frac{1}{2}\sum_{m=1}^N \sigma_\nu^{(m)}\,,\qquad \nu \in \{x,y,z\}
\end{align}
and $J_\pm = J_x \pm {\rm i} \cdot J_y$
with $\sigma_\nu^{(m)}$ denoting the Pauli matrix of the $m$th spin,
the anisotropic LMG Hamiltonian reads~\cite{LMG-Critical_scaling_law_entaglement-Vidal}
\begin{equation}
\label{eq:LMG}
H_{\rm LMG}(h,\gamma_x) = -h J_z - \frac{\gamma_x}{N} J_x^2\,,
\end{equation}
where $h$ is the strength of a magnetic field in $z$ direction and $\gamma_x$ is the coupling strength between each pair of two-level systems.
As such, it can be considered a quantum generalization of the Curie-Weiss model~\cite{kochmanski2013a}.
Throughout this paper, we consider only the subspace with the maximum angular momentum $j=\frac{N}{2}$, where the eigenvalues of the angular momentum
operator $J^2 = J_x^2 + J_y^2 + J_z^2$ are given by $j(j+1)$.
Studies of the LMG model are interesting not only due to its origin in the nuclear context~\cite{LMG-lipkin1965validity,LMG-validity_many_body_approx-Lipkin,LMG-validity_many_body_approx-Lipkin-3}, but also due to its experimental realization with cold atoms and high possibility of control~\cite{LMG-Exp_Bifurcation_rabi_to_jesophson-Oberthaler}.
In particular the existence of a QPT at $\gamma_x^{\rm cr} = h$ with a non-analytic ground-state energy density has raised the interest in the community~\cite{LMG-phase_transition-Gilmore,LMG-large_N_scaling_behaviour-Heiss,LMG-networks_qpt-sorokin,LMG_Entanglement_dynamics_Vidal}:
For $\gamma_x < \gamma_x^{\rm cr}$, the system has a unique ground state, which we denote as the {\it normal phase} further-on.
In contrast, for $\gamma_x > \gamma_x^{\rm cr}$ it exhibits a {\it symmetry-broken phase}~\cite{LMG-thermodynamical_limit-Mosseri,LMG-symmetry_breaking_dynamics_finite_size-Huang}, where e.g.\ the eigenvalues become pairwise degenerate and the $J_z$-expectation exhibits a bifurcation~\cite{LMG-spectrum_thermodynamic_limit_and_finite_size-corr-Mosseri,LMG-Nonadiabatic_dynamics_of_ESQPT-kopylov}.
Strictly speaking, the QPT is found only in the thermodynamic limit (for $N \to \infty$), for finite sizes $N$ smoothing effects in the QPT signatures will appear~\cite{LMG-Finite_size_scalling_Dusuel,LMG-Finite_size_scaling-Vidal,LMG-Wiseman_control-Kopylov}.
Here, we want to investigate the LMG model embedded in an environment of bosonic oscillators $c_k$ with frequencies $\nu_k$.
The simplest nontrivial embedding preserves the conservation of the total angular momentum and allows for energy exchange between system and reservoir.
Here, we constrain ourselves for simplicity to the case of a $J^x$ coupling.
Furthermore, to
ensure that the Hamiltonian has a lower spectral bound for all values of the system-reservoir coupling strength, we write the interaction in terms of a positive operator
\begin{align}
\label{eq:H_LMG_And_Bath_ini}
H_{\rm tot} &= H_{\rm LMG}(h,\gamma_x)\notag \\
&\;+\sum_k \nu_k \rb{c_k^\dag + \frac{g_k}{\sqrt{N} \nu_k}J_x}\rb{c_k + \frac{g_k}{\sqrt{N} \nu_k}J_x}\,.
\end{align}
Here, $g_k > 0$ represent emission/absorption amplitudes (a possible phase can be absorbed in the bosonic operators), and the factor $N^{-1/2}$ needs to be
included to obtain a meaningful thermodynamic limit $N \to \infty$, but can also be motivated from the scaling of the quantization volume $V \propto N$.
Since the LMG Hamiltonian has a lower bound, the spectrum of this Hamiltonian $H_{\rm tot}$ is (for finite $N$) then bounded from below for all values of the
coupling strength $g_k$.
Upon expansion and sorting spin and bosonic operators, this form implicates an effective rescaling of the system Hamiltonian $H_{\rm LMG}(h,\tilde\gamma_x)$ with a renormalized spin-spin interaction
\begin{equation}\label{EQ:interaction_rescaled}
\tilde{\gamma}_x = \gamma_x - \sum_k \frac{g_k^2}{\nu_k}\,,
\end{equation}
which indeed leads to a shift of the critical point within a naive treatment.
\subsection{Local LMG diagonalization}
In the thermodynamic limit Eq.~\eqref{eq:LMG} can be diagonalized using the Holstein-Primakoff transform which maps collective spins to bosonic operators $b$~\cite{HP-Trafo_field_dependency_of_ferromagnet_Primakoff,Clive-Brandes_Chaos_and_qpt_Dicke,Kopylov_Counting-statistics-Dicke}
\begin{align}
\label{eq:HP_trafo}
J_+ &= \sqrt{N - b^\dag b} b\,, \qquad J_- = b^\dag \sqrt{N - b^\dag b}\,,\\
J_z &= \frac{N}{2} - b^\dag b\,.\notag
\end{align}
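For completeness, one may check on bosonic Fock states $|m\rangle$ that the transformation \eqref{eq:HP_trafo} preserves the su(2) algebra exactly for any finite $N$:
\begin{equation*}
J_+J_-|m\rangle = (N-m)(m+1)\,|m\rangle\,,\qquad
J_-J_+|m\rangle = m(N-m+1)\,|m\rangle\,,
\end{equation*}
such that $[J_+,J_-]\,|m\rangle = (N-2m)\,|m\rangle = 2J_z\,|m\rangle$, as required.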
However, to capture both phases of the LMG Hamiltonian, one has to account for the macroscopically populated ground state in the symmetry-broken phase.
This can be included with the displacement $b = \sqrt{N}\alpha + a$ with complex $\alpha$ in Eq.~\eqref{eq:HP_trafo}, where $N\abs{\alpha}^2$ is the classical mean-field population of the mode~\cite{Clive-Brandes_Chaos_and_qpt_Dicke,Kopylov_Counting-statistics-Dicke,LMG-networks_qpt-sorokin}
and $a$ is another bosonic annihilation operator.
The next step is then to expand for either phase Eq.~\eqref{eq:LMG} with the inserted transformation~\eqref{eq:HP_trafo} in terms
of $1/\sqrt{N}$ for $N\gg 1$ -- see App.~\ref{APP:tdlimit} -- which yields a decomposition of the Hamiltonian
\begin{align}
\label{eq:H_LMG_HP}
H_{\rm LMG}^{\rm HP}(h,\gamma_x) &= N \cdot H_0^{\rm HP} + \sqrt{N} H_1^{\rm HP} + H_2^{\rm HP}\\\notag
&\qquad + {\mathcal O}\left(\frac{1}{\sqrt{N}}\right)\,,
\end{align}
with individual terms depending on the phase
\begin{align}
H_0^{\rm HP} &= \begin{cases}
-\frac{h}{2} &: \gamma_x < \gamma_x^{\rm cr}\\
-\frac{h^2 + \gamma_x^2}{4\gamma_x} &: \gamma_x > \gamma_x^{\rm cr}
\end{cases}\,,\\
H_1^{\rm HP} &\stackrel{!}{=} \begin{cases}
0 &: \gamma_x < \gamma_x^{\rm cr}\\
0 &: \gamma_x > \gamma_x^{\rm cr}
\end{cases}\,,\notag\\
H_2^{\rm HP} &=\begin{cases}
(h - \frac{\gamma_x}{2})a^\dag a - \frac{\gamma_x}{4} (a^2 + {a^\dag}^2)-\frac{\gamma_x}{4} &: \gamma_x < \gamma_x^{\rm cr}\\
+\frac{5 \gamma_x - 3 h}{4} a^\dagger a
+\frac{3 \gamma_x - 5 h}{8} \left(a^2 + {a^\dag}^2\right) &: \gamma_x > \gamma_x^{\rm cr}\\
\qquad{+\frac{\gamma_x - 3 h}{8}}
\end{cases}\,.\notag
\end{align}
We demand in both phases that $H_1^{\rm HP}$ is always zero.
Technically, this enforces that only terms quadratic in the creation and annihilation operators occur in the Hamiltonian.
Physically, this enforces that we expand around the correct ground state, i.e.,
in the final basis, the ground state is the state with a vanishing quasiparticle number.
This requirement is
trivially fulfilled in the normal phase with $\alpha=0$ but requires a finite
real value of the mean-field $\alpha$ in the symmetry-broken phase~\cite{Clive-Brandes_Chaos_and_qpt_Dicke,Kopylov_Counting-statistics-Dicke,LMG-networks_qpt-sorokin}, altogether leading to a phase-dependent displacement
\begin{equation}\label{eq:mean_field}
\alpha(h,\gamma_x) = \sqrt{\frac{1}{2} \left(1 - \frac{h}{\gamma_x}\right)} \Theta(\gamma_x - h)\,,
\end{equation}
which approximates $H_{\rm LMG}^{\rm HP}$ by a harmonic oscillator near its ground state.
Here we note that $-\alpha(h,\gamma_x)$ is also a solution.
The mean-field expectation value already allows to see the signature of the phase transition in the closed LMG model at $\gamma_x = h$, since $\alpha$ is only finite for $\gamma_x > h$ and is zero elsewhere.
Since up to corrections that vanish in the thermodynamic limit, the Hamiltonian defined by Eq.~\eqref{eq:H_LMG_HP} is quadratic in $a$, it can in either phase be diagonalized by a rotation of the old operators $a=\cosh(\varphi) d+\sinh(\varphi) d^\dagger$ with $\varphi\in\mathbb{R}$ to new bosonic operators $d$.
The system Hamiltonian $H_{\rm LMG}^{\rm HP}$ then transforms into a single harmonic oscillator, where the frequency $\omega$ and ground state energy are functions of $h$ and $\gamma_x$
\begin{align}\label{EQ:lmg_oscillator}
H_{\rm LMG}^{\rm HP}(h,\gamma_x) &= \omega(h,\gamma_x) d^\dag d + C_2(h,\gamma_x)\\\notag
&\qquad- N \cdot C_1(h,\gamma_x)
+ {\mathcal O}\left(\frac{1}{\sqrt{N}}\right)\,.\notag
\end{align}
The actual values of the excitation energies $\omega(h,\gamma_x)$ and the constants $C_i(h,\gamma_x)$ are summarized in table~\ref{tab:0}.
\begin{table}
$
\begin{array}{|l||l|l|}
\hline
& \text{normal:}\; \gamma_x<h & \text{symmetry-broken:}\; \gamma_x>h \\ \hline\hline
b & \multicolumn{2}{|c|}{\sqrt{N} \alpha(h,\gamma_x) + \cosh(\varphi(h,\gamma_x)) d + \sinh(\varphi(h,\gamma_x)) d^\dag} \\ \hline
\varphi(h,\gamma_x) & \frac{1}{4} \ln\left(\frac{h}{h-\gamma_x}\right) &
\frac{1}{4} \ln\left(\frac{\gamma_x+h}{4(\gamma_x - h)}\right)\\ \hline
\alpha(h,\gamma_x) & 0 & \sqrt{\frac{1}{2}\left(1 - \frac{h}{\gamma_x}\right)} \\\hline
\omega(h,\gamma_x) & \sqrt{h(h-\gamma_x)} &
\sqrt{\gamma_x^2 - h^2} \\ \hline
C_1(h,\gamma_x) & \frac{h}{2} & \frac{h^2 + \gamma_x^2}{4 \gamma_x} \\ \hline
C_2(h,\gamma_x) & \frac{1}{2} \left(\sqrt{h(h - \gamma_x)} - h\right) &
\frac{1}{2} \left(\sqrt{\gamma_x^2-h^2}-\gamma_x\right) \\ \hline
\end{array}
$
\caption{Parameters of the diagonalization procedure of the LMG model $H_{\rm LMG}(h,\gamma_x)$ for the normal phase ($\gamma_x<h$, second column) and for the symmetry-broken phase ($\gamma_x>h$, last column).
In both phases, the $d$ operators correspond to fluctuations around the mean-field value $\alpha$, which is zero only in the normal phase.
}
\label{tab:0}
\end{table}
Fig.~\ref{FIG:bogoliubov_lmg} confirms that the thus obtained spectra from the bosonic representation agree well with finite-size numerical diagonalization when $N$ is large enough.
\begin{figure}
\includegraphics[width=0.95\columnwidth,clip=true]{bogoliubov_lmg.pdf}
\caption{\label{FIG:bogoliubov_lmg}
Lower part of the isolated LMG model spectrum for finite-size numerical diagonalization of Eq.~(\ref{eq:LMG}) (thin curves) and using the bosonic representation (bold curves) based on Eq.~(\ref{EQ:lmg_oscillator}) for the three lowest energies.
For large $N$, the spectra are nearly indistinguishable.
In the symmetry-broken phase (right), two numerical eigenvalues approach the same oscillator solution.
These correspond to the two different parity sectors, formally represented by two possible displacement solutions $\pm\alpha(h,\gamma_x)$ in Eq.~(\ref{eq:mean_field}).
}
\end{figure}
First, one observes for consistency that the trivial spectra deeply in the normal phase ($\gamma_x \approx 0$) or deeply in the symmetry-broken phase ($h \approx 0$) are reproduced.
In addition, we see that at the QPT $\gamma_x = \gamma_x^{\rm cr}=h$, the excitation frequency $\omega$ vanishes as expected, which is also reflected
e.g.\ in the dashed curve in Fig.~\ref{fig:fluctuations}(a).
For consistency, we also mention that all oscillator energies $E_n$ are continuous at the critical point $\gamma_x=h$.
Furthermore, the second derivative with respect to $\gamma_x$ of the continuum ground state energy per spin $\lim_{N\to\infty} E_0/N$ is discontinuous at the critical point, classifying the phase transition as second order.
Finally, we note that this treatment does not capture the excited state quantum phase transitions present in the LMG model as we are only concerned with the lower part of the spectrum.
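The comparison shown in Fig.~\ref{FIG:bogoliubov_lmg} can be reproduced with a few lines of exact numerics in the maximal-spin sector. The following minimal sketch (in Python; parameter values are illustrative) compares the finite-$N$ gap of the LMG Hamiltonian \eqref{eq:LMG} with the Bogoliubov frequency of table~\ref{tab:0}:
\begin{verbatim}
import numpy as np

def lmg_gap(h, gx, N):
    """Lowest excitation gap of H_LMG in the sector j = N/2."""
    j = N/2.0
    m = np.arange(-j, j + 1)
    Jz = np.diag(m)
    # matrix elements <m+1|J_+|m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j*(j + 1) - m[:-1]*(m[:-1] + 1)), k=-1)
    Jx = (Jp + Jp.T)/2.0
    H = -h*Jz - (gx/N)*(Jx @ Jx)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

def omega_bogoliubov(h, gx):
    """Excitation frequency omega(h, gamma_x), both phases."""
    return np.sqrt(h*(h - gx)) if gx < h else np.sqrt(gx**2 - h**2)

h, gx, N = 1.0, 0.5, 200
print(lmg_gap(h, gx, N), omega_bogoliubov(h, gx))  # both close to 0.707
\end{verbatim}
Note that in the symmetry-broken phase the two lowest numerical eigenvalues form a near-degenerate parity doublet, so the gap to compare with $\omega(h,\gamma_x)$ is $E_2-E_0$ there.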
\section{Master Equation}
\label{sec:2}
We first perform the derivation of the conventional Born-Markov-secular (BMS) master equation in the usual way, starting directly with Eq.~\eqref{eq:H_LMG_And_Bath_ini} \cite{Kopylov_Counting-statistics-Dicke,LMG-collective_and_independent-decay-Lee,LMG_thermalization_kastner}.
Afterwards, we show that a polaron transform also allows to treat regions near the critical point.
\subsection{Conventional BMS master equation}
The conventional BMS master equation is derived in the energy eigenbasis of the system, i.e., the LMG model with renormalized spin-spin interaction $\tilde\gamma_x$, in order to facilitate the secular approximation.
In this eigenbasis the master equation has a particularly simple form.
Applying the very same transformations (that diagonalize the closed LMG model) to its open version~\eqref{eq:H_LMG_And_Bath_ini}, we arrive at
the generic form
\begin{align}
\label{eq:H_LMG_And_Bath_HP}
H_{\rm tot}^{\rm HP} &= H_{\rm LMG}^{\rm HP}(h,\tilde{\gamma}_x) + \sum_k \nu_k c_k^\dag c_k\notag \\
&\qquad+ \left[A(h,\tilde\gamma_x) (d + d^\dag) + \sqrt{N} Q(h,\tilde\gamma_x)\right] \times\notag\\
&\qquad\qquad \times \sum_k g_k (c_k + c_k^\dagger)\,,
\end{align}
where we note that the LMG Hamiltonian is now evaluated at the shifted interaction~(\ref{EQ:interaction_rescaled}).
The phase-dependent numbers $A$ and $Q$ are defined in Table~\ref{tab:1}.
\begin{table}
$
\begin{array}{|l||l|l|}
\hline
& \text{normal:}\; \tilde\gamma_x<h & \text{symmetry-broken:}\; \tilde\gamma_x>h \\ \hline\hline
C_3(h,\tilde\gamma_x) & 1 & \frac{\sqrt{2} h}{\sqrt{\tilde\gamma_x(\tilde\gamma_x + h)}} \\ \hline
A(h,\tilde\gamma_x) & \multicolumn{2}{|c|}{\frac{C_3(h,\tilde\gamma_x)}{2} \exp[\varphi(h,\tilde\gamma_x)]} \\ \hline
Q(h,\tilde\gamma_x) & \multicolumn{2}{|c|}{\alpha(h,\tilde\gamma_x) \sqrt{1 - \alpha^2(h,\tilde\gamma_x)}} \\ \hline
\end{array}
$
\caption{Additional parameters of the diagonalization procedure for the derivation of the master equation
in the original frame for the normal phase ($\tilde\gamma_x < h$, second column) and for the symmetry-broken phase ($\tilde\gamma_x > h$, last column).
Note that as compared to the closed model in Tab.~\ref{tab:0}, functions are evaluated at the shifted interaction~(\ref{EQ:interaction_rescaled}).
}
\label{tab:1}
\end{table}
In particular, in the normal phase we have $Q=0$, and we recover the standard problem of a harmonic oscillator weakly coupled to a thermal reservoir.
In the symmetry-broken phase we have $Q \neq 0$, such that the shift term in the interaction Hamiltonian formally diverges as $N\to\infty$, and a naive perturbative treatment does not apply.
Some thought however shows, that this term can be transformed away by applying yet another displacement for both system and reservoir modes
$d \to d + \sigma$ and $c_k \to c_k + \sigma_k$ with $\sigma,\sigma_k\in\mathbb{C}$ chosen such that all terms linear in creation and annihilation operators vanish in the total Hamiltonian.
This procedure does not change the energies of neither system nor bath operators, such that eventually, the master equation in the symmetry-broken phase is formally equivalent to the one in the normal phase, and the interaction proportional to $Q$ is not problematic.
Still, when one approaches the critical point from either side, the system spacing $\omega$ closes in the thermodynamic limit, which makes the interaction Hamiltonian at some point comparable to or even stronger than the system Hamiltonian.
Even worse, one can see that simultaneously, the factor $A \sim e^{+\varphi}$ in the interaction Hamiltonian diverges at the critical point,
such that a perturbative treatment is not applicable there.
Therefore, one should consider the results of the naive master equation in the thermodynamic limit $N\to\infty$ with caution.
The absence of a microscopically derived master equation near the critical point is a major obstacle in understanding the fate of quantum criticality in
open systems.
Ignoring these problems, one obtains a master equation having the standard form for a harmonic oscillator coupled to a thermal reservoir
\begin{align}\label{EQ:density_matrix}
\dot\rho(t) &= - {\rm i} \left[H_{\rm LMG}^{\rm HP}(h,\tilde{\gamma}_x),\rho \right]
+ F_e \mathcal{D}(d)\rho + F_a \mathcal{D}(d^\dag)\rho\,,\notag\\
F_e &= A^2(h,\tilde\gamma_x) \Gamma(\omega(h,\tilde\gamma_x)) [1 + n_B(\omega(h,\tilde\gamma_x))]\,,\notag\\
F_a &= A^2(h,\tilde\gamma_x) \Gamma(\omega(h,\tilde\gamma_x)) n_B(\omega(h,\tilde\gamma_x))\,.
\end{align}
Here, we have used the superoperator notation
$\mathcal{D}(O)\rho \hat{=} O \rho O^\dag - \frac{1}{2} \rho O^\dag O -\frac{1}{2} O^\dag O \rho$ for any operator $O$ and
\begin{equation}
\label{eq:spectral_density_general}
\Gamma(\omega) = 2 \pi \sum_k g_k^2 \delta(\omega - \nu_k)
\end{equation}
is the original spectral density of the reservoir, and
$n_B(\omega)=[e^{\beta \omega}-1]^{-1}$ is the Bose distribution
with inverse reservoir temperature $\beta$.
These functions are evaluated at the system transition frequency $\omega(h,\tilde\gamma_x)$.
The master equation has the spontaneous and stimulated emission terms in $F_e$ and the absorption term in $F_a$, and due to the balanced Bose-Einstein function these will at steady state just thermalize the system at the reservoir temperature, as is generically found for such BMS master equations.
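Explicitly, Eq.~\eqref{EQ:density_matrix} implies the closed evolution equation
\begin{equation*}
\frac{d}{dt}\avg{d^\dagger d} = -(F_e-F_a)\avg{d^\dagger d} + F_a\,,
\end{equation*}
such that the stationary occupation $\avg{d^\dagger d}_{\rm ss} = F_a/(F_e-F_a) = n_B(\omega(h,\tilde\gamma_x))$ is thermal, independent of the prefactor $A^2(h,\tilde\gamma_x)\Gamma(\omega(h,\tilde\gamma_x))$, which only sets the relaxation rate.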
Note that $H_{\rm LMG}^{\rm HP}$ from Eq.~\eqref{EQ:density_matrix} is evaluated at the rescaled coupling
$\tilde\gamma_x$.
Therefore, the position of the QPT is at $\tilde\gamma_x^{\rm cr} = h$ and shifted to higher $\gamma_x$ couplings, see~\eqref{EQ:interaction_rescaled}.
Similar shifts of the QPT position in dissipative quantum optical models are known e.g.\ from mean-field treatments~\cite{Bhaseen_dynamics_of_nonequilibrium_dicke_models,Dicke-realization_Dicke_in_cavity_system-Dimer}.
However, here we emphasize that we observe them as a direct consequence of ignoring the divergence of interaction around the phase transition in combination with positive-definite form of the initial total Hamiltonian Eq.~\eqref{eq:H_LMG_And_Bath_ini}.
\subsection{Polaron master equation}
In this section, we apply a unitary polaron transform to the complete model, which has for other (non-critical) models been used to investigate the full regime of system-reservoir coupling strengths~\cite{Polaron-spin_boson-Wang,Polaron_collective_system_with_interaction-Wang}.
We will see that for a critical model, it can -- while still bounded in the total coupling strength -- be used to
explore the systems behaviour at the QPT position.
\subsubsection{Polaron transform}
We choose the following polaron transform $U_p$
\begin{equation}
\label{eq:Polaron_trafo}
U_p = e^{-J_x \hat{B}}\,,\qquad \hat{B} = \frac{1}{\sqrt{N}} \sum_{k} \frac{g_k}{\nu_k} \left(c_k^\dag - c_k\right)\,.
\end{equation}
The total Hamiltonian~\eqref{eq:H_LMG_And_Bath_ini} in the polaron frame then becomes
\begin{align}
\label{eq:H_LMG_And_Bath_Polaron}
\bar{H}_{\rm tot} &= U_p^\dag H_{\rm tot} U_p\\
&= - h D \cdot J_z - \frac{\gamma_x}{N} J_x^2
+ \sum_k \nu_k c_k^\dag c_k\notag\\
&\qquad- h \cdot \left[J_z \cdot \left(\cosh(\hat{B}) - D\right) - {\rm i} J_y \sinh(\hat{B})\right]\,. \notag
\end{align}
Here, $\gamma_x$ is the original interaction of the local LMG model,
and the renormalization of the external field $D$ is defined via
\begin{align}
\label{eq:bath_shift_polaron}
D &= \avg{\cosh (\hat{B})} = \trace{\cosh(\hat{B}) \frac{e^{-\beta \sum_k \nu_k c_k^\dagger c_k}}{\trace{e^{-\beta \sum_k \nu_k c_k^\dagger c_k}}}}\notag\\
&= \exp\left[-\frac{1}{N}\sum_k \left(\frac{g_k}{\nu_k}\right)^2 \left(n_k + \frac{1}{2}\right) \right] > 0\,,\notag\\
n_k &= \frac{1}{e^{\beta \nu_k} - 1}\,.
\end{align}
It has been introduced to enforce that the expectation value of the system-bath coupling vanishes for the thermal reservoir state.
More details on the derivation of Eq.~\eqref{eq:H_LMG_And_Bath_Polaron} are presented in App~\ref{app:polaron_derivation}.
The operator $\hat{B}\propto\frac{1}{\sqrt{N}}$ decays in the thermodynamic limit, such that for these studies, only the first few terms in the expansions of $\sinh(\hat{B})$ and $\cosh(\hat{B})$ need to be considered.
Accordingly, the position of the QPT in the polaron frame is now found at the QPT of the closed model
\begin{equation}
\label{eq:qpt_position_polaron}
\gamma_x^{\rm cr} = h D
\stackrel{N\to\infty}{\to} h\,.
\end{equation}
Here, we have with $D\to 1$ implicitly assumed that the thermodynamic limit is performed in the system first.
If a spectral density is chosen that vanishes faster than quadratically for small frequencies, the above replacement holds unconditionally (see below).
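Explicitly, taking the continuum limit of the exponent in Eq.~\eqref{eq:bath_shift_polaron} with the spectral density \eqref{eq:spectral_density_general} yields
\begin{equation*}
\delta = \sum_k \frac{g_k^2}{\nu_k^2}\left(n_k+\frac{1}{2}\right)
= \int_0^\infty \frac{d\omega}{2\pi}\,\frac{\Gamma(\omega)}{\omega^2}\,\frac{1}{2}\coth\!\left(\frac{\beta\omega}{2}\right)\,.
\end{equation*}
For small frequencies the integrand scales as $\Gamma(\omega)/(2\pi\beta\omega^3)$, such that $\delta$ remains finite whenever $\Gamma(\omega)$ vanishes faster than quadratically, and then indeed $D=e^{-\delta/N}\to 1$ in the thermodynamic limit at any reservoir temperature.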
We emphasize again we observe the absence of a QPT shift as a result of a proper system-reservoir interaction with a lower spectral bound.
Without such an initial Hamiltonian, the reservoir back-action would shift the dissipative QPT~\cite{Bhaseen_dynamics_of_nonequilibrium_dicke_models,Dicke-realization_Dicke_in_cavity_system-Dimer}.
For the study of strong coupling regimes, polaron transforms have also been applied e.g.\ to single spin
systems~\cite{Polaron-spin_boson-Wang} and collective non-critical spin systems~\cite{Polaron_collective_system_with_interaction-Wang}.
Treatments without a polaron transformation should be possible in our case too, by rewriting Eq.~\eqref{eq:H_LMG_And_Bath_ini} in terms of reaction coordinates~\cite{Reaction_Coordinate-Effect_friction_electron_transfer_biomol-Garg,Thermodynamics-Nonequilibr_react_coordinate-Strasberg,nazir2018a}, leading to an open Dicke-type model.
In the thermodynamic limit, we can use that the spin operators $J_\nu$ scale at worst linearly in $N$ to expand the interaction and $D$, yielding
\begin{align}
\bar{H}_{\rm tot} &\approx - h \left[1-\frac{1}{N} \delta\right] \cdot J_z - \frac{\gamma_x}{N} J_x^2
+ \sum_k \nu_k c_k^\dag c_k\notag\\
&\qquad- h \cdot \left[\frac{J_z}{N} \left(\frac{1}{2} \bar{B}^2 + \delta\right) - {\rm i} \frac{J_y}{\sqrt{N}} \bar{B}\right]\notag\\
&= -h J_z - \frac{\gamma_x}{N} J_x^2 + \sum_k \nu_k c_k^\dag c_k\notag\\
&\qquad- h \cdot \left[\frac{J_z}{N} \frac{1}{2} \bar{B}^2 - {\rm i} \frac{J_y}{\sqrt{N}} \bar{B}\right]\,,
\end{align}
where $\bar{B} = \sqrt{N} \hat{B}$ and
$D \equiv e^{-\frac{\delta}{N}}$ has been used.
Since in the thermodynamic limit $J_z/N$ just yields a constant, the first term in the last row can be seen as an all-to-all interaction between the environmental oscillators, which depends only in a bounded fashion on the LMG parameters $h$ and $\gamma_x$.
Since it is quadratic, it can be formally transformed away by a suitable global Bogoliubov transform
$c_k = \sum_q (u_{kq} b_q + v_{kq} b_q^\dagger)$
of all reservoir oscillators, which results in
\begin{align}
\bar{H}_{\rm tot} &\approx -h J_z - \frac{\gamma_x}{N} J_x^2 + \sum_k \tilde\nu_k b_k^\dag b_k\notag\\
&\qquad+ h \frac{{\rm i} J_y}{\sqrt{N}} \sum_k \left(h_k b_k - h_k^* b_k^\dagger\right)\,,
\end{align}
and where $h_k \in \mathbb{C}$ are the transformed reservoir couplings and the $\tilde\nu_k$ the transformed reservoir energies.
In the case of weak coupling to the reservoir, which is assumed here, we will however simply neglect the $\bar{B}^2$ term, since it is then much smaller than the linear $\bar{B}$ term.
\subsubsection{System Hamiltonian diagonalization}
To proceed, we first consider the normal phase $\gamma_x < h$ and apply the Holstein-Primakoff transformation to the total Hamiltonian, compare appendix~\ref{APP:tdlimit}.
Since in the normal phase the vanishing displacement implies $a=b$, this yields
\begin{align}
\label{eq:H_tot_Polaron_HP_Normal}
\bar{H}_{\rm tot, N}^{\rm (HP)} &= - \frac{h}{2} N + \left(h-\frac{\gamma_x}{2}\right) a^\dag a -\frac{\gamma_x}{4} ({a^\dag}^2 + a^2+1)
\notag\\
&\quad + \sum_k \tilde{\nu}_k b_k^\dag b_k + \frac{h}{2}(a - a^\dag) \sum_k \left(h_k b_k - h_k^* b_k^\dagger\right)\,.
\end{align}
Here, the main difference is that the system-reservoir interaction now couples to the momentum of the LMG oscillator mode and not the position.
Applying yet another Bogoliubov transform $a = \cosh(\varphi(h,\gamma_x)) d + \sinh(\varphi(h,\gamma_x)) d^\dagger$ with the same parameters as in table~\ref{tab:0} eventually yields a Hamiltonian of a single diagonalized oscillator coupled via its momentum to a reservoir.
Analogously, the symmetry-broken phase $\gamma_x > h$ is treated with a finite displacement as outlined in App.~\ref{APP:tdlimit}.
The requirement, that in the system Hamiltonian all terms proportional to $\sqrt{N}$ should vanish, yields
the same known displacement~(\ref{eq:mean_field}).
One arrives at a Hamiltonian of the form
\begin{align}
\label{eq:H_tot_Polaron_HP_SR}
\bar{H}_{\rm tot, S}^{\rm (HP)} &= -\frac{h^2+\gamma_x^2}{4\gamma_x} N +
\frac{5 \gamma_x - 3 h}{4} a^\dagger a\\
&\qquad+\frac{3 \gamma_x - 5 h}{8} \left(a^2 + {a^\dag}^2\right)
+\frac{\gamma_x - 3 h}{8}
+ \sum_k \tilde\nu_k b_k^\dag b_k\notag\\
&\qquad+ \frac{h}{2} \sqrt{1-\abs{\alpha(h,\gamma_x)}^2} (a-a^\dagger) \sum_k (h_k b_k - h_k^* b_k^\dagger)\,.
\notag
\end{align}
Using a Bogoliubov transformation to new bosonic operators $d$ the system part in the above equation can be diagonalized again.
Thus, in both phases the Hamiltonian acquires the generic form
\begin{align}
\bar{H}_{\rm tot}^{\rm (HP)} &= \omega(h,\gamma_x) d^\dagger d - N C_1(h,\gamma_x) + C_2(h,\gamma_x)\notag\\
&\qquad+ \bar{A}(h,\gamma_x) (d-d^\dagger) \sum_k \left(h_k b_k - h_k^* b_k^\dagger\right)
\notag\\
&\qquad+ \sum_k \tilde{\nu}_k b_k^\dag b_k\,,
\end{align}
where the system-reservoir coupling modification $\bar{A}(h,\gamma_x)$ is found in Tab.~\ref{tab:2}.
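For orientation, the gaps entering this generic form can be stated explicitly. A quadratic bosonic Hamiltonian $A\, a^\dag a + B\, (a^2 + {a^\dag}^2)$ is diagonalized by a Bogoliubov transform with $\tanh(2\varphi) = -2B/A$ and has the gap $\omega = \sqrt{A^2 - 4B^2}$; inserting the coefficients of Eqs.~(\ref{eq:H_tot_Polaron_HP_Normal}) and~(\ref{eq:H_tot_Polaron_HP_SR}) yields
\[
\omega(h,\gamma_x) = \sqrt{h\,(h-\gamma_x)} \;\; (\gamma_x < h)\,, \qquad
\omega(h,\gamma_x) = \sqrt{\gamma_x^2 - h^2} \;\; (\gamma_x > h)\,,
\]
both of which close at $\gamma_x = h$. We assume here that these expressions agree with the corresponding entries of Tab.~\ref{tab:0}.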
\begin{table}
$
\begin{array}{|l||l|l|}
\hline
& \text{normal:}\; \gamma_x<h & \text{symmetry-broken:}\; \gamma_x >h \\ \hline\hline
\bar{C}_3(h,\gamma_x) & h & h \sqrt{\frac{1}{2}\left(1 + \frac{h}{\gamma_x}\right)} \\ \hline
\bar{A}(h,\gamma_x) & \multicolumn{2}{|c|}{\frac{\bar{C}_3(h,\gamma_x)}{2} \exp[-\varphi(h,\gamma_x)]} \\ \hline
\end{array}
$
\caption{Additional parameters of the diagonalization procedure of $H_{\rm LMG}$ in the polaron frame for the normal phase ($\gamma_x < h$, second column) and the symmetry-broken phase ($\gamma_x > h$, last column).
Note that $\varphi(h,\gamma_x)$ -- see Tab.~\ref{tab:0} -- is evaluated at the original spin-spin coupling $\gamma_x$.}
\label{tab:2}
\end{table}
To this form, we can directly apply the derivation of the standard quantum-optical master equation.
\subsubsection{Master Equation}
In the polaron-transformed interaction Hamiltonian, we now observe the factor $\bar{A}(h,\gamma_x)$, which depends on $h$ and $\gamma_x$, see tables~\ref{tab:2} and~\ref{tab:0}.
This factor is suppressed as one approaches the critical point of the polaron-frame Hamiltonian, where it vanishes identically.
Near this QPT, its square $\bar{A}^2(h,\gamma_x)$ shows the same scaling behaviour as the system gap $\omega(h,\gamma_x)$, such that in the polaron frame the system-reservoir interaction strength is adaptively scaled down with the system Hamiltonian, and a naive master equation approach can be applied in this frame.
In either phase, we then arrive at the following generic form of the master equation for the system density matrix
\begin{align}
\label{eq:density_matrix_polaron}
\dot\rho(t) &= - {\rm i} \left[H_{\rm LMG}^{\rm HP}(h,\gamma_x),\rho \right]
+ \bar{F}_e \mathcal{D}(d)\rho + \bar{F}_a \mathcal{D}(d^\dag)\rho\,,\notag\\
\bar{F}_e &= \bar{A}^2(h,\gamma_x) \bar\Gamma(\omega(h,\gamma_x)) [1 + n_B(\omega(h,\gamma_x))]\,,\notag\\
\bar{F}_a &= \bar{A}^2(h,\gamma_x) \bar\Gamma(\omega(h,\gamma_x)) n_B(\omega(h,\gamma_x))\,.
\end{align}
Here, $\bar\Gamma(\omega) = 2 \pi \sum_k \abs{h_k}^2 \delta(\omega - \tilde\nu_k)$ denotes the transformed spectral density, which is related to the original spectral density via the Bogoliubov transform expressing the $c_k$ operators in terms of the $b_k$ operators, and $n_B(\omega)$ again denotes the
Bose distribution.
The mapping from the reservoir modes $c_k$ to the new reservoir modes $b_k$ has only been represented in an implicit form; in general, it is a multi-mode Bogoliubov transformation~\cite{Hamilt_diagonalization_quadratik_Tsallis1978,Hamilt_diagonalization_quadratik_Tikochinsky}
whose explicit solution is involved.
However, if $h g_k/\nu_k$ is small in comparison to the reservoir frequencies $\nu_k$, the Bogoliubov transform will hardly change the reservoir oscillators and is thus close to the identity.
Then, one approximately recovers
$\bar\Gamma(\omega) \approx \Gamma(\omega)$.
Even if this assumption is not fulfilled, we note from the general form of the master equation that the steady state will just be the thermalized system -- with renormalized parameters depending on $\Gamma(\omega)$, $h$, and $\gamma_x$.
Therefore, it will not depend on the structure of $\bar\Gamma(\omega)$ -- although transient observables may depend on this transformed spectral density as well.
In our results, we will therefore concentrate on a particular form of $\Gamma(\omega)$ only and neglect the implications for $\bar\Gamma(\omega)$.
\section{Results}
\label{sec:3}
To apply the polaron transform method, we require that all involved limits converge.
All reasonable choices for a spectral density~(\ref{eq:spectral_density_general}) will lead to convergence of the renormalized spin-spin interaction~(\ref{EQ:interaction_rescaled}).
However, convergence of the external field renormalization~(\ref{eq:bath_shift_polaron}) may require a subtle discussion of the order of the thermodynamic limits in the system ($N\to\infty$)
and the reservoir ($\sum_k g_k^2 [\ldots] \to \frac{1}{2\pi}\int \Gamma(\omega)[\ldots]d\omega$), respectively.
This discussion can be avoided if the spectral density grows faster than quadratically for small energies, e.g.
\begin{equation}
\label{eq:specral_density}
\Gamma(\omega) =\eta \frac{\omega^3}{\omega_c^2} \cdot \exp(-\omega/\omega_c)\,,
\end{equation}
where $\omega_c$ is a cutoff frequency and $\eta$ is a dimensionless coupling strength.
With this choice, the renormalized all-to-all interaction~(\ref{EQ:interaction_rescaled}) becomes
\begin{equation}
\tilde{\gamma}_x = \gamma_x - \frac{\eta \cdot \omega_c}{\pi}\,,
\end{equation}
such that the QPT position implied by Eq.~\eqref{EQ:interaction_rescaled} is shifted to
$\gamma_x^{\rm cr} \to h + \frac{\eta \cdot \omega_c}{\pi}$.
For the parameters used in our figures ($\eta = 2\pi \cdot 0.1$ and $\omega_c = 0.5 h$), this shift evaluates to $\eta \omega_c / \pi = 0.1\, h$, i.e., the naive treatment locates the transition at $\gamma_x^{\rm cr} = 1.1\, h$.
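A minimal numerical sketch (ours, in Python) of these relations is given below. It assumes the standard reorganization-energy form $\sum_k g_k^2/\nu_k \to \frac{1}{2\pi}\int \Gamma(\omega)/\omega\, {\rm d}\omega$ for the coupling renormalization and the polaron-frame gap expressions stated above; all function names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

h = 1.0                  # external field sets the energy unit
eta = 2*np.pi*0.1        # dimensionless coupling, as in the figures
omega_c = 0.5*h          # cutoff frequency, as in the figures

def Gamma(w):
    # super-Ohmic spectral density with exponential cutoff
    return eta * w**3 / omega_c**2 * np.exp(-w/omega_c)

# assumed reorganization-energy form of the coupling renormalization
shift, _ = quad(lambda w: Gamma(w)/(2*np.pi*w), 0, np.inf)
print(shift, eta*omega_c/np.pi)    # both yield 0.1*h -> gamma_x^cr = 1.1*h

def gap(gx):
    # assumed polaron-frame gap; closes at the original point gamma_x = h
    return np.sqrt(h*(h - gx)) if gx < h else np.sqrt(gx**2 - h**2)

print(gap(0.5*h), gap(1.1*h))
\end{verbatim}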
We emphasize again that -- independent of the spectral density -- both derived master equations Eq.~\eqref{EQ:density_matrix} and~\eqref{eq:density_matrix_polaron} let the system evolve towards the respective thermal state
\begin{equation}
\label{eq:density_matrix_steady_state}
\rho = \frac{\exp(-\beta H_{\rm LMG}^{\rm HP}(h,\tilde\gamma_x))}{Z}\,,\;\;
\bar\rho = \frac{\exp(-\beta H_{\rm LMG}^{\rm HP}(h,\gamma_x))}{\bar{Z}}\,,
\end{equation}
in the original and polaron frame, respectively,
where $\beta$ is the inverse temperature of the bath and $Z/\bar{Z}$ are the respective normalization constants.
The difference between the treatments is therefore that within the BMS treatment~\eqref{EQ:density_matrix} the rates may diverge and that the system parameters are renormalized.
The divergence of rates within the BMS treatment would also occur for a standard initial Hamiltonian.
To illustrate this main result, we discuss below a number of conclusions that can be derived from it.
\subsection{Magnetization}
In general, the role of temperature in connection with the thermal phase transition in models like LMG or Dicke has been widely studied using partition sums or naive BMS master equations
\cite{LMG-thermal_phase_partition_sum_Tzeng,hayn2017thermodynamics,wilms2012finite,dalla2016dicke}.
Since in our case the stationary system state is just the thermalized one, standard methods (compare Appendix~\ref{APP:magnetization}) analyzing the canonical Gibbs state of the isolated LMG model can be used to obtain stationary expectation values such as the magnetization.
For the polaron approach we obtain
\begin{align}\label{EQ:mag_ss}
\avg{J^z} = -\frac{\partial E_0(h,\gamma_x)}{\partial h} - \frac{1}{e^{\beta \omega(h,\gamma_x)}-1} \frac{\partial\omega(h,\gamma_x)}{\partial h}\,,
\end{align}
where $E_0(h,\gamma_x)=C_2(h,\gamma_x)-N C_1(h,\gamma_x)$ is the ground state energy and $\omega(h,\gamma_x)$ the energy splitting, compare Tab.~\ref{tab:0}.
The quantum-critical nature is demonstrated by the first (ground state) contribution, where the nonanalytic dependence of the ground state energy on the external field strength maps onto the magnetization.
The second contribution is temperature-dependent.
In particular, in the thermodynamic limit $N\to\infty$, only a part of the ground state contribution remains and we obtain
\begin{align}\label{EQ:mag_gs}
\lim_{N\to\infty}\frac{\avg{J^z}}{N} \to \frac{1}{2}\left\{\begin{array}{ccc}
1 &:& h > \gamma_x\\
\frac{h}{\gamma_x} &:& \gamma_x>h
\end{array}\right.\,.
\end{align}
For finite system sizes, however, finite-temperature corrections exist.
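The per-spin version of Eq.~(\ref{EQ:mag_ss}) is easy to evaluate numerically; the following minimal sketch (ours) does so, assuming the leading-order ground-state energy densities and gaps read off from the bosonized Hamiltonians above and using finite differences instead of the Tab.~\ref{tab:0} expressions.
\begin{verbatim}
import numpy as np

def e0(h, gx):   # assumed leading-order ground-state energy density E0/N
    return -h/2 if gx < h else -(h**2 + gx**2)/(4*gx)

def gap(h, gx):  # assumed excitation gap omega(h, gamma_x)
    return np.sqrt(h*(h - gx)) if gx < h else np.sqrt(gx**2 - h**2)

def jz_density(h, gx, beta, N, dh=1e-6):
    # per-spin magnetization: Eq. (EQ:mag_ss) divided by N
    nB = 1.0/np.expm1(beta*gap(h, gx))             # Bose distribution
    dE0 = (e0(h+dh, gx) - e0(h-dh, gx))/(2*dh)     # dE0/dh per spin
    dom = (gap(h+dh, gx) - gap(h-dh, gx))/(2*dh)   # d omega / dh
    return -dE0 - nB*dom/N

# approaches the thermodynamic-limit values of Eq. (EQ:mag_gs)
print(jz_density(1.0, 0.5, beta=1.79, N=1000))   # ~0.5   (normal)
print(jz_density(1.0, 2.0, beta=1.79, N=1000))   # ~0.25  (symmetry-broken)
\end{verbatim}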
In Fig.~\ref{FIG:magnetization}, we show a contour plot of the magnetization density $\avg{J^z}/N$ from the exact numerical calculation of the partition function (dashed contours) and compare with the results from the bosonic representation (solid green contours).
\begin{figure}
\includegraphics[width=0.95\columnwidth]{magnetization_1000.pdf}
\caption{\label{FIG:magnetization}
Contour plot of the magnetization density $\avg{J^z}/N$ versus spin-spin interaction $\gamma_x$ and temperature $k_B T$ for $N=1000$.
At the critical point $\gamma_x = h$, the magnetization density at low temperatures (bottom) suddenly starts to drop from a constant value in the normal phase (left) to a decaying curve in the symmetry-broken phase (right) as predicted by~(\ref{EQ:mag_gs}).
At higher temperatures, the transition is smoother and the predictions from the bosonic representation (solid green contours, based on Eq.~(\ref{EQ:mag_ss}))
and the finite-size numerical calculation of the partition function (dashed contours, based on the Gibbs state with Eq.~(\ref{eq:LMG})) disagree for $\gamma_x \approx h$.
For the finite-size calculation, weak coupling, $k_B T \ll N \omega_c/\eta$, has been assumed, such that $U_p^\dag J_z U_p \approx J_z$ holds instead of the full relation~\eqref{eq:jz_polaron_non_polaron_connection}.
}
\end{figure}
In the contour lines of the magnetization we see convincing agreement between the curves of the bosonic representation (solid green) and the finite-size calculation (dashed black) only for very low temperatures or away from the critical point.
The disagreement for $\gamma_x \approx h$ and $T>0$ can be attributed to the fact that the bosonization for finite sizes only captures the lowest energy eigenstates well, whereas in this region also the higher eigenstates become occupied.
However, it is clearly visible that in the low temperature regime the magnetization density drops suddenly once $\gamma_x \ge h$, such that the QPT can be detected at correspondingly low temperatures.
At high temperatures, the magnetization density falls off smoothly with increasing spin-spin interaction.
\subsection{Mode Occupation}
The master equations appear simple only in a displaced and rotated frame.
When transformed back, the steady-state populations
$\avg{d^\dagger d} = \trace{d^\dagger d \rho}$ and
$\overline{\avg{d^\dagger d}} = \trace{d^\dagger d \bar\rho}$ actually measure fluctuations around the mean-field.
Fig.~\ref{fig:fluctuations} compares the occupation number and system frequency with (solid) and without (dashed) polaron transform.
Panel (a) demonstrates that in the BMS treatment the LMG energy gap is strongly modified by dissipation, such that in the vicinity of the QPT of the closed system the non-polaron and polaron treatments lead to very different results.
Panel (b) shows the fluctuations in the diagonal basis $\overline{\avg{d^\dag d}}$ ($\avg{d^\dag d}$) around the mean-field $\alpha(h,\gamma_x)$ (or $\alpha(h,\tilde\gamma_x)$) in the polaron (or non-polaron) frame.
Finally, panel (c) shows the mode occupation
$\avg{a^\dagger a} = \sinh^2(\varphi(h,\gamma_x)) + \left[\cosh^2(\varphi(h,\gamma_x)) + \sinh^2(\varphi(h,\gamma_x))\right] \avg{d^\dagger d}$ (and analogously in the symmetry-broken phase) in the non-diagonal basis.
These are directly related to the deviations of the $J_z$-expectation value from its mean-field solution, compare App.~\ref{APP:tdlimit}.
Since the frequency $\omega(h,\tilde\gamma_x)$ (Tab.~\ref{tab:0}) vanishes at
$\gamma_x = h + \frac{\eta \cdot \omega_c}{\pi}$ in the non-polaron frame,
the BMS approximations break down around the original QPT position, see dashed line in Fig.~\ref{fig:fluctuations}(a).
Mode occupations in both the diagonal and non-diagonal bases diverge at the QPT point, see the dashed lines in Fig.~\ref{fig:fluctuations}(b-c).
In particular, in the polaron frame the fluctuation divergence occurs around the original quantum critical point at $\gamma_x = h$, see the solid lines in Fig.~\ref{fig:fluctuations}.
\begin{figure}
\includegraphics[width=0.51 \columnwidth]{frequency}
\includegraphics[width=0.49 \columnwidth]{occupation}
\includegraphics[width=0.49 \columnwidth]{occupation_orig}
\caption{(a) LMG oscillator frequency $\omega(h,\gamma_x)$ or $\omega(h,\tilde\gamma_x)$, (b) diagonal frame steady-state mode occupations $\overline{\avg{d^\dag d}}$ ($\avg{d^\dag d}$), (c) non-diagonal frame steady-state mode occupations $\overline{\avg{a^\dag a}} (\avg{a^\dag a})$ for the polaron (solid) and non-polaron (dashed) master equations.
Divergent mode occupations indicate the position of the QPT where the excitation frequency vanishes.
For the polaron treatment, the QPT position stays at $\gamma_x/h = 1$ just as in the isolated LMG model
in contrast to the shift predicted by the BMS master equation.
Parameters: $\eta=2\pi \cdot 0.1, \omega_c = 0.5 h, \beta = 1.79/h$.
}
\label{fig:fluctuations}
\end{figure}
\subsection{Waiting times}
The coupling to the reservoir not only modifies the system properties but may also lead to the emission or absorption of reservoir excitations (i.e., photons or phonons, depending on the model implementation), which can in principle be measured independently.
Classifying these events into types $\nu$ describing e.g.\ emissions or absorptions, the waiting-time distribution for a system-bath exchange process of type $\mu$ following one of type $\nu$ is characterized by~\cite{brandes2008waiting}
\begin{equation}
\label{eq:wt_definition}
\mathcal{w}_{\mu\nu}(\tau) = \frac{{\rm Tr}\left(\mathcal{J}_\mu \exp(\mathcal{L}_0 \tau)\mathcal{J}_\nu \rho\right)}{{\rm Tr} \left(\mathcal{J}_\nu \rho \right)}\,.
\end{equation}
Here, $\mathcal{J}_\mu$ and $\mathcal{L}_0$ are super-operators describing the jump of type $\mu$ and the no-jump evolution, respectively.
For example, in master equation~\eqref{EQ:density_matrix}, there are only two distinct types of jumps, emission `e' and absorption `a'.
Their corresponding super-operators are then acting as
\begin{align}
\mathcal{J}_{e} \rho &= F_e d \rho d^\dag\,,\qquad
\mathcal{J}_{a} \rho = F_a d^\dag \rho d\,, \notag\\
\mathcal{L}_0 \rho &= -{\rm i} \left[\omega d^\dagger d, \rho\right] - \frac{F_e}{2} \left\{d^\dagger d, \rho\right\}
- \frac{F_a}{2} \left\{d d^\dagger, \rho\right\}\,,
\end{align}
such that the total Liouvillian is decomposable as $\mathcal{L} = \mathcal{L}_0 + \mathcal{J}_e + \mathcal{J}_a$.
The same equations are valid in the polaron frame~\eqref{eq:density_matrix_polaron}, just with the corresponding overbar on the variables.
Going to a frame where the Hamiltonian dynamics is absorbed, $\tilde{\rho} = e^{+{\rm i} \omega t d^\dagger d} \rho e^{-{\rm i} \omega t d^\dagger d}$, we see that the whole Liouvillian in this frame, $\tilde{\mathcal L}$, is just proportional to the spectral density evaluated at the
system transition frequency $\omega$.
Thereby, it enters as a single parameter: a different spectral density could be interpreted as a rescaling
$\Gamma(\omega) \to \alpha \Gamma(\omega)$, which would imply ${\cal L}_0 \to \alpha {\cal L}_0$ and ${\cal J}_\mu \to \alpha {\cal J}_\mu$.
These transformations would only lead to a trivial stretching of the waiting time distribution
$\mathcal{w}_{\mu\nu}(\tau) \to \alpha \mathcal{w}_{\mu\nu}(\alpha \tau)$, compare also Eq.~(\ref{EQ:waitingtime_explicit}).
Since the LMG Hamiltonian and the steady state~\eqref{eq:density_matrix_steady_state} are diagonal, analytic expressions for the waiting time distributions can be derived, see App.~\ref{APP:waiting_time}.
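Equation~(\ref{eq:wt_definition}) is also straightforward to evaluate numerically. The sketch below (ours) builds the jump and no-jump super-operators of a single damped mode in a truncated Fock space; the parameter values and the truncation are illustrative assumptions and do not correspond to the figure parameters.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

M = 40                                    # Fock-space truncation
d = np.diag(np.sqrt(np.arange(1, M)), 1)  # annihilation operator
I = np.eye(M)

omega, Gam, nB = 1.0, 0.1, 0.5            # illustrative gap, rate, n_B
Fe, Fa = Gam*(1 + nB), Gam*nB             # emission / absorption rates

def lmul(A): return np.kron(A, I)         # A rho  (row-major vec)
def rmul(A): return np.kron(I, A.T)       # rho A  (row-major vec)

H = omega * d.conj().T @ d
Je = Fe * lmul(d) @ rmul(d.conj().T)      # emission jump super-operator
Ja = Fa * lmul(d.conj().T) @ rmul(d)      # absorption jump super-operator
L0 = (-1j*(lmul(H) - rmul(H))
      - Fe/2*(lmul(d.conj().T @ d) + rmul(d.conj().T @ d))
      - Fa/2*(lmul(d @ d.conj().T) + rmul(d @ d.conj().T)))

p = (nB/(1 + nB))**np.arange(M)           # thermal steady state
rho = np.diag(p/p.sum()).reshape(-1)

def tr(v): return np.sum(v.reshape(M, M).diagonal())

def wtd(tau, J1, J2):                     # Eq. (eq:wt_definition)
    return (tr(J1 @ expm(L0*tau) @ (J2 @ rho)) / tr(J2 @ rho)).real

print(wtd(0.0, Je, Je))                   # bunching: maximal at tau -> 0
\end{verbatim}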
\begin{figure}
\includegraphics[width=0.49 \columnwidth]{wt_tau}
\includegraphics[width=0.49 \columnwidth]{wt_gx}
\caption{Waiting time distributions (WTD) between two emission (absorption and emission) events $\bar{\mathcal{w}}_{ee(ae)}$ (solid, dot-dashed) calculated in the polaron frame, (a) as a function of $\tau$ for a fixed
$\gamma_x$ value and (b) the distribution $\bar{\mathcal{w}}_{ee}$ as a function of $\gamma_x$ for two different fixed $\tau$ values.
Additionally, the WTD in the non-polaron frame is shown in (b) for the $\tau = 0$ case (dashed), which wrongly diverges around the shifted critical point.
At the true critical point, a non-analytic dependence of the distribution on the intra-spin coupling strength $\gamma_x$ is clearly visible; within the polaron treatment, however, all WTDs remain finite.
Parameters: $\eta=2\pi \cdot 0.1, \omega_c = 0.5 h, \beta = 1.79/h$, (a) $\gamma_x = 0.5 h$.
}
\label{fig:waiting_times}
\end{figure}
In Fig.~\ref{fig:waiting_times} we show two waiting-time distributions $\bar{\mathcal{w}}_{ee(ae)}$ as a function of time $\tau$ for fixed coupling strength
$\gamma_x$ (a) and the repeated-emission waiting-time distribution $\bar{\mathcal{w}}_{ee}(\tau)$ as a function of $\gamma_x$ for two fixed waiting times $\tau$ (b).
A typical feature of a thermal state is bunching of emitted photons, which we see in Fig.~\ref{fig:waiting_times}(a):
after an emission event, the most probable delay until the next emission is $\tau \to 0$, i.e., it occurs immediately.
When comparing waiting time distributions of the two phases, as in panel (a), no significant difference is visible.
However, fixing the waiting time $\tau$ and varying $\gamma_x$, we find that the distributions attain their maximum at the position of the QPT, see Fig.~\ref{fig:waiting_times}(b).
Essentially, this is related to the divergence of $n_B(\omega)$ when the energy gap vanishes.
Whereas the non-polaron treatment predicts a divergence of waiting times around the critical point $\tilde{\gamma}_x^{\rm cr}$, see the dashed curve in Fig.~\ref{fig:waiting_times}(b), the waiting times within the polaron approach remain finite but depend non-analytically on the Hamiltonian parameters.
Therefore, the quantum-critical behaviour is not only reflected in system-intrinsic observables like mode occupations but also in reservoir observables like the statistics of photoemission events.
\section{Summary}
We have investigated the open LMG model by using a polaron transform technique
that also allows us to address the vicinity of the critical point.
First, within the polaron treatment, we have found that the position of the QPT is robust when starting
from an initial Hamiltonian with a lower spectral bound.
This shows that the choice of the starting Hamiltonian should be discussed with care for critical models, even when
treated as weakly coupled.
Second, whereas far from the QPT, the approach presented here reproduces naive master equation treatments,
it remains also valid in the vicinity of the QPT.
In the transformed frame, the effective interaction
scales with the energy gap of the system Hamiltonian, which admits a perturbative treatment at the critical point.
We therefore expect that the polaron-master equation approach is also applicable
to other models that bilinearly couple to bosonic reservoirs via position operators.
Interestingly, we found that for a single reservoir the stationary properties are determined by those of the isolated system alone,
such that a standard analysis applies.
The critical behaviour (and its possible renormalization) can be detected with system observables like magnetization or mode occupations but is also visible in reservoir observables like waiting-time distributions, which remain finite in the polaron frame.
We hope that our study of the LMG model paves the way for further quantitative investigations of dissipative quantum-critical systems,
e.g. by capturing higher eigenstates by augmented variational polaron treatments~\cite{mccutcheon2011a} or by investigating the non-equilibrium dynamics
of critical setups.
\section*{Acknowledgements}\vspace{-2mm}
The authors gratefully acknowledge financial support from the DFG (grants BR 1528/9-1, BR 1528/8-2, and SFB 910) as well as fruitful discussions with M. Kloc, A. Knorr, and C. W\"achtler.
\section{Algorithm} \label{sec:algorithm}
In this section we present a data stream algorithm for $k$-power instances of {\mwm}. This algorithm assumes access to a positive upper bound $\wmax$ and a lower bound $\wmin$ on the weights of all the elements, and it has semi-streaming space complexity when the ratio between $\wmax$ and $\wmin$ is upper bounded by a polynomial in $n$. Proposition~\ref{prop:k_approx_for_k_power} states the properties that we prove for this algorithm more formally.
\begin{proposition} \label{prop:k_approx_for_k_power}
There exists a $2k$-approximation data stream algorithm for \emph{$k$-power} instances of {\mwm}. This algorithm assumes access to positive upper bound $\wmax$ and lower bound $\wmin$ on the weights of all the elements, and its space complexity is $O(\rho (\log(\nicefrac{\wmax}{\wmin}) / \log k + 1))$ under the assumption that constant space suffices to store a single element and a single weight.
\end{proposition}
Before getting to the proof of Proposition~\ref{prop:k_approx_for_k_power}, we note that together with Reduction~\ref{reduction:power_k_weights} this proposition immediately implies the following corollary.
\begin{corollary} \label{cor:theorem_for_bounded_weights}
There exists an $O(k \log k)$-approximation data streaming algorithm for {\mwm}. This algorithm assumes access to positive upper bound $\wmax$ and lower bound $\wmin$ on the weights of all the elements. The space complexity of this algorithm is $O(\rho (\log(\nicefrac{\wmax}{\wmin}) + \log k))$ under the assumption that constant space suffices to store a single element and a single weight.
\end{corollary}
Note that when the ratio between $\wmax$ and $\wmin$ is polynomial in $n$, the space complexity of the algorithm from Corollary~\ref{cor:theorem_for_bounded_weights} becomes $O(\rho \log n)$, and thus, the algorithm is semi-streaming. In Appendix~\ref{app:general_weights} we explain how the algorithm can be modified so that it keeps the ``effective'' ratio $\nicefrac{\wmax}{\wmin}$ on the order of $O(k^2\rho^2)$ even when no values $\wmax$ and $\wmin$ are supplied to the algorithm and the weights of the elements come from an arbitrary range. This leads to the space complexity of $O(\rho (\log k + \log \rho))$ stated in Theorem~\ref{thm:streaming_k_extendible}.
The rest of this section is devoted to the proof of Proposition~\ref{prop:k_approx_for_k_power}. As a first step towards this goal, let us recall that the unweighted greedy algorithm is an algorithm that considers the elements of the ground set $\cN$ in an arbitrary order, and adds every considered element to the solution it constructs if that does not violate independence. As mentioned above, it follows immediately from the definition of $k$-set systems that the unweighted greedy algorithm achieves an approximation ratio of $k$ for the problem of finding a maximum cardinality independent set subject to a $k$-set system constraint. Since $k$-set systems generalize $k$-extendible systems, the same is true also for $k$-extendible constraints. The following lemma improves over this by showing a tighter guarantee for $k$-extendible constraints.
\begin{lemma}
\label{lem:greedy_improved_result}
Given a $k$-extendible set system $(\cN, \cI)$, the unweighted greedy algorithm is guaranteed to produce an independent set $B$ such that $k \cdot |B \setminus A| \geq |A \setminus B|$ for any independent set $A \in \cI$.
\end{lemma}
\begin{proof}
Let us denote the elements of $B \setminus A$ by $x_1,x_2,\dotsc,x_m$ in an arbitrary order. Using these elements, we recursively define a series of independent sets $A_0, A_1,\dotsc,A_m$. The set $A_0$ is simply the set $A$. For $1 \leq i \leq m$, we define $A_i$ using $A_{i - 1}$ as follows. Since $(\cN, \cI)$ is a $k$-extendible system and the subsets $A_{i-1}$ and $A_{i-1} \cap B + x_i \subseteq B$ are both independent, there must exist a subset $Y_i \subseteq A_{i-1} \setminus (A_{i-1} \cap B) = A_{i-1} \setminus B$ such that $|Y_i| \leq k$ and $A_{i-1} \setminus Y_i+x_i\in \mathcal{I}$. Using the subset $Y_i$, we now define $A_i = A_{i-1} \setminus Y_i+x_i$.
Note that by the definition of $Y_i$, $A_i \in \mathcal{I}$ as promised.
Furthermore, since $Y_i \cap B = \varnothing$ for each $1 \leq i \leq m$, we know that $(A \cup \{x_1, x_2, \dots, x_m\}) \cap B \subseteq A_m$, which implies $B \subseteq A_m$ because $\{x_1,x_2,\dotsc,x_m\} = B \setminus A$.
However, $B$, as the output of the unweighted greedy algorithm, must be an inclusion-wise maximal independent set (\ie, a base), and thus, it must in fact be equal to the independent set $A_m$ containing it.
Let us now denote $Y = \bigcup_{i=1}^{m} Y_i$, and consider two different ways to bound the number of elements in $Y$. On the one hand, since every set $Y_i$ includes up to $k$ elements, we get $|Y| \leq km = k \cdot |B \setminus A|$. On the other hand, the fact that $B = A_m$ implies that every element of $A \setminus B$ belongs to $Y_i$ for some value of $i$, and therefore, $|Y| \geq |A \setminus B|$. The lemma now follows by combining these two bounds.
\end{proof}
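Before presenting the main algorithm, we note that the unweighted greedy algorithm itself admits a very short implementation. The following minimal Python sketch (ours) assumes the set system is accessed through an independence oracle \texttt{is\_independent}, an interface we introduce here purely for illustration.
\begin{verbatim}
def unweighted_greedy(stream, is_independent):
    """Keep every arriving element whose addition preserves independence."""
    solution = set()
    for u in stream:
        if is_independent(solution | {u}):
            solution.add(u)
    return solution
\end{verbatim}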
We are now ready to present the algorithm we use to prove Proposition~\ref{prop:k_approx_for_k_power}, which is given as \refalg{cs_alg}.
This algorithm has two main stages. In the first stage, the algorithm runs an independent copy of the unweighted greedy algorithm for every possible weight of elements. The copy corresponding to the weight $k^i$ is denoted by $\Greedy_i$ in the pseudocode of the algorithm, and Algorithm~\ref{cs_alg} feeds to it only the input elements whose weight is at least $k^i$. The output of $\Greedy_i$ is denoted by $C_i$ in the algorithm. We also denote in the analysis by $E_i$ the set of elements fed to $\Greedy_i$. By definition, $C_i$ is obtained by running the unweighted greedy algorithm on the elements of $E_i$, which is a property we use below.
In the second stage of \refalg{cs_alg} (which is done as a post-processing after the stream has ended), the algorithm constructs an output set $T$ based on the outputs of the copies of the unweighted greedy algorithm. Specifically, this is done by running the unweighted greedy algorithm on the elements of $\bigcup_{i = \imin}^{\imax} C_i$, considering the elements of the sets $C_i$ in a decreasing value of $i$ order. While doing so, the given pseudocode also keeps in $T_i$ the temporary solution obtained by the unweighted greedy algorithm after considering only the elements of $C_j$ for $j \geq i$. This temporary solution is used by the analysis below, but need not be kept by a real implementation of \refalg{cs_alg}.
\begin{algorithm}
\DontPrintSemicolon
\caption{\textbf{Greedy of Greedies}}
\label{cs_alg}
Let $\imin \gets \lceil \log_k \wmin \rceil$ and $\imax \gets \lfloor \log_k \wmax \rfloor$.\\
Create
$\imax - \imin + 1$ instances of the unweighted greedy algorithm named $\Greedy_{\imin}, \Greedy_{\imin + 1}, \dotsc, \Greedy_{\imax}$.\\
\For{each element $u$ that arrives from the stream}
{
Let $i_u \gets \log_k w(u)$.\\
Feed $u$ to $\Greedy_{\imin},\Greedy_{\imin + 1},\dotsc,\Greedy_{i_u}$.
}
\BlankLine
Let $C_i$ denote the output of $\Greedy_i$ for every $\imin \leq i \leq \imax$. \\
Let $T \gets \varnothing$. \\
\For{every $\imin \leq i \leq \imax$ in descending order}
{
Greedily add elements from $C_i$ to $T$ as long as this is possible. \\
Let $T_i$ denote the current value of $T$.
}
\Return{$T$}.
\end{algorithm}
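The following Python sketch (ours) mirrors the two stages of \refalg{cs_alg}. It assumes the independence-oracle interface sketched above, a stream of (element, weight) pairs whose weights are exact powers of $k$, and known bounds \texttt{w\_min} and \texttt{w\_max}.
\begin{verbatim}
import math

def greedy_of_greedies(stream, is_independent, k, w_min, w_max):
    i_min = math.ceil(math.log(w_min, k))
    i_max = math.floor(math.log(w_max, k))
    C = {i: set() for i in range(i_min, i_max + 1)}  # one greedy per level
    for u, w in stream:                              # streaming stage
        i_u = round(math.log(w, k))                  # exact for k-powers
        for i in range(i_min, i_u + 1):              # feed Greedy_{i_min..i_u}
            if is_independent(C[i] | {u}):
                C[i].add(u)
    T = set()                                        # post-processing stage
    for i in range(i_max, i_min - 1, -1):            # descending order
        for u in C[i]:
            if is_independent(T | {u}):
                T.add(u)
    return T
\end{verbatim}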
We begin the analysis of \refalg{cs_alg} by analyzing its space complexity.
\begin{lemma} \label{lem:space_complexity}
\refalg{cs_alg} can be implemented using a space complexity of $O(\rho (\log(\nicefrac{\wmax}{\wmin})/\log k \allowbreak + 1))$.
\end{lemma}
\begin{proof}
Note that each copy of the unweighted greedy algorithm only has to store its solution, which contains up to $\rho$ elements since it is independent. \refalg{cs_alg} uses $\imax - \imin + 1$ such copies, and thus, the space it needs for these copies is only
\[
\rho(\imax - \imin + 1)
\leq
\rho \left(\log_k\left(\frac{\wmax}{\wmin}\right) + 1\right)
=
\rho \cdot O\left(\frac{\log(\nicefrac{\wmax}{\wmin})}{\log k} + 1\right)
\enspace.
\]
In addition to the space used by the copies of the unweighted greedy algorithm, Algorithm~\ref{cs_alg} only needs to store the set $T$. This set contains a subset of the elements from the outputs of the above copies, and thus, can increases the space required only by a constant factor.
\end{proof}
To complete the proof of Proposition~\ref{prop:k_approx_for_k_power}, it remains to analyze the approximation ratio of Algorithm~\ref{cs_alg}. We begin with the following lemma, which is the technical heart of our analysis. Like in Section~\ref{sec:reduction}, let us denote by $OPT$ an arbitrary (fixed) optimal solution to the problem we want to solve. We also assume for consistency that $T_{\imax + 1} = \varnothing$ (note that $T_{\imax + 1}$ is not defined by Algorithm~\ref{cs_alg}).
\begin{lemma}
\label{lem:size_of_Opt_intersection_Ei}
For each integer $\imin \leq i \leq \imax$, $k^2\cdot|T_{i+1}| + k\cdot|T_i \setminus T_{i+1}| \geq |OPT \cap E_i|$.
\end{lemma}
\begin{proof}
The set $T_i$ can be viewed as the output of the unweighted greedy algorithm running on $\bigcup_{i \leq j \leq \imax} C_j$.
Since we also know that $C_i$ is independent, \reflem{lem:greedy_improved_result} guarantees
\[
k \cdot |T_i \setminus C_i|
\geq
|C_i \setminus T_i|
\enspace.
\]
Adding $k \cdot |C_i \cap T_i|$ to both its sides, we get
\begin{align*}
k \cdot |T_i|
\geq{} &
k \cdot |C_i \cap T_i| + |C_i \setminus T_i|
=
k \cdot |C_i \cap T_i| + \left(|C_i| - |C_i \cap T_i|\right) \\
={} &
(k - 1) \cdot |C_i \cap T_i| + |C_i|
\geq
(k - 1) \cdot |C_i \cap T_i| + k^{-1} \cdot |OPT \cap E_i|
\enspace ,
\end{align*}
where the last inequality holds since the unweighted greedy algorithm achieves $k$-approximation and $OPT \cap E_i$ is an independent set within $E_i$ (recall that $E_i$ is the set of elements that were fed to $\Greedy_i$). Using the last inequality we can now get
\begin{align*}
k \cdot |T_i \setminus T_{i+1}| + k \cdot |T_{i+1}|
={} &
k \cdot |T_i|
\geq
(k - 1) \cdot |C_i \cap T_i| + k^{-1} \cdot |OPT \cap E_i| \\
\geq{} &
(k - 1) \cdot |T_i \setminus T_{i+1}| + k^{-1} \cdot |OPT \cap E_i|
\enspace ,
\end{align*}
where the first equality holds because $T_{i+1} \subseteq T_i$, and the second inequality holds because $T_i \setminus T_{i+1} \subseteq C_i \cap T_i$ (recall that the algorithm constructs $T_i$ by adding elements of $C_i$ to $T_{i + 1}$).
The lemma now follows by rearranging the above inequality and multiplying it by $k$.
\end{proof}
Using the last lemma, we can prove the existence of a useful mapping from the elements of $OPT$ to the elements of $T$.
\begin{lemma} \label{lem:mapping}
There exists a mapping $f\colon OPT \to T$ such that
\begin{compactenum}
\item for each $t \in T$, $|f^{-1}(t)| \leq k^2$. \label{prop:budget_large}
\item for each $t \in T$, $|\{u \in f^{-1}(t) \mid w(u) = w(t)\}| \leq k$. \label{prop:budget_small}
\item for each $u \in OPT$, $w(u)\leq w(f(u))$.\label{prop:weights}
\end{compactenum}
\end{lemma}
\begin{proof}
We construct $f$ by scanning the elements of $OPT$ and defining the mapping $f(e)$ for every element $e$ scanned.
To describe the order in which we scan the elements of $OPT$, let us define $P_i = OPT \cap (E_i \setminus E_{i + 1})$, where we use the convention $E_{i_{\max} + 1} = \varnothing$.
Note that $P_{i_{\min}}, P_{i_{\min} + 1}, \dotsc, P_{i_{\max}}$ is a disjoint partition of $OPT$, and thus, any scan of the elements of $P_{i_{\min}}, P_{i_{\min} + 1}, \dotsc, P_{i_{\max}}$ is a scan of the elements of $OPT$.
Specifically, we scan the elements of $OPT$ by first scanning the elements of $P_{i_{\max}}$ in an arbitrary order, then scanning the elements of $P_{i_{\max} - 1}$ in an arbitrary order, and so on.
Consider now the situation when our scan gets to an arbitrary element $u$ of set $P_i$.
One can note that prior to scanning $u$, we scanned (and mapped) only elements of $P_i \cup P_{i + 1} \cup \dotsb \cup P_{i_{\max}} = OPT \cap E_i$, and thus, we mapped at most $|OPT \cap E_i| - 1$ elements (the $-1$ is due to the fact that $u \in OPT \cap E_i$, and $u$ was not mapped yet).
Combining this with \reflem{lem:size_of_Opt_intersection_Ei}, we get that at the point at which we scan $u$ there must still be either an element $t \in T_{i+1}$ with less than $k^2$ elements mapped to it or an element $t \in T_i \setminus T_{i + 1}$ with less than $k$ elements mapped to it.
We choose the mapping $f(u)$ of $u$ to be an arbitrary such element $t$.
Property~\ref{prop:budget_large} of the lemma is clearly satisfied by the above construction because we never map an element $u$ to an element $t$ that already has $k^2$ elements mapped to it.
To see why Property~\ref{prop:weights} of the lemma also holds, note that every element $u \in P_i$ must have a weight of $k^i$ by the definition of $P_i$.
This element is mapped by $f$ to some element $t \in T_{i + 1} \cup (T_i \setminus T_{i + 1}) = T_i \subseteq E_i$, and the weight of $t$ is at least $k^i = w(u)$ by the definition of $E_i$.
It remains to prove Property~\ref{prop:budget_small} of the lemma. Consider an arbitrary element $t \in T$ of weight $k^i$. The elements of $OPT$ whose weight is $k^i$ are exactly the elements of $P_i$, and thus, we need to show that $|f^{-1}(t) \cap P_i| \leq k$.
Since all the elements of $T_{i + 1} \subseteq C_{i + 1} \cup C_{i + 2} \cup \dotsb \cup C_{i_{\max}} \subseteq E_{i + 1}$ have weights of at least $k^{i + 1}$, $t$ cannot belong to $T_{i + 1}$.
Thus, an element of $P_i$ can be mapped to $t$ when scanned only if $t$ has less than $k$ elements already mapped to it (if $t \in T_i$) or not at all (if $t \not \in T_i$), which implies that no more than $k$ elements of $P_i$ can get mapped to $t$, which is exactly what we wanted to prove.
\end{proof}
We are now ready to prove the approximation ratio of Algorithm~\ref{cs_alg} (and complete the proof of Proposition~\ref{prop:k_approx_for_k_power}).
\begin{lemma} \label{lem:approx_ratio_cs_alg}
\refalg{cs_alg} is a $2k$-approximation algorithm for $k$-power instances of {\mwm}.
\end{lemma}
\begin{proof}
Let $f$ be the function whose existence is guaranteed by \reflem{lem:mapping}.
The properties of this function imply that, for each element $t\in T$,
\[
\sum_{u \in f^{-1}(t)} \mspace{-9mu} w(u)
=
\sum_{\substack{u \in f^{-1}(t) \\ w(u) = w(t)}} \mspace{-9mu} w(u) + \sum_{\substack{u \in f^{-1}(t) \\ w(u) < w(t)}} \mspace{-9mu} w(u)
\leq
k \cdot w(t) + (k^2-k) \cdot \frac{w(t)}{k}
\leq
2 k \cdot w(t)
\enspace .
\]
Thus,
\[
w(OPT)
=
\sum_{u \in OPT} \mspace{-9mu}w(u)
=
\sum_{t \in T} \sum_{u \in f^{-1}(t)} \mspace{-9mu} w(u)
\leq
\sum_{t \in T} \mspace{9mu} [2 k \cdot w(t)]
=
2 k \cdot w(T)
\enspace ,
\]
which completes the proof of the lemma.
\end{proof}
\section{Conclusion}
In this work we have presented the first semi-streaming $\tilde{O}(k)$-approximation algorithm for the problem of finding a maximum weight set subject to a $k$-extendible constraint. This result is intrinsically interesting because the generality of $k$-extendible constraints makes our algorithm applicable to many problems of interest. Additionally, we believe (as discussed in Section~\ref{sec:introduction}) that our result is likely to be the final intermediate step towards the goal of designing an algorithm with similar properties for general $k$-set system constraints or proving that this cannot be done.
Given our work, the immediate open question is to settle the approximation ratio that can be obtained for $k$-set system constraints in the data stream model. Another interesting research direction is to find out whether one can improve over the approximation ratio of our algorithm. Specifically, we leave open the question of whether there is a semi-streaming algorithm for finding a maximum weight set subject to a $k$-extendible constraint whose approximation ratio is a clean $O(k)$.
\section{Algorithm for General Weights} \label{app:general_weights}
In this section we present a semi-streaming algorithm for $k$-power instances of {\mwm}. Unlike Algorithm~\ref{cs_alg}, this algorithm does not assume access to the bounds $\wmax$ and $\wmin$, and its space complexity remains nearly linear regardless of the ratio between these bounds. A more formal statement of the properties of this algorithm is given in Proposition~\ref{prop:k_approx_for_k_power_general}. Note that, together with Reduction~\ref{reduction:power_k_weights}, this proposition immediately implies Theorem~\ref{thm:streaming_k_extendible}.
\begin{proposition} \label{prop:k_approx_for_k_power_general}
There exists a $4k$-approximation semi-streaming algorithm for \emph{$k$-power} instances of {\mwm}. The space complexity of this algorithm is $O(\rho (\log k + \log \rho) / \log k)$ under the assumption that constant space suffices to store a single element and a single weight.
\end{proposition}
Throughout this section we assume for simplicity that the $k$-extendible system does not include any self-loops (a \emph{self-loop} is an element $u \in \cN$ such that $\{u\}$ is a dependent set---\ie, $\{u\} \not \in \cI$). This assumption is without loss of generality since a self-loop cannot belong to any independent set, and thus, an algorithm can safely ignore self-loops if they happen to exist. One consequence of this assumption is that $\max_{u \in \cN} w(u) \leq w(OPT)$, where $OPT$ is an arbitrary fixed optimal solution like in the previous sections. This inequality holds since $\{u\}$ is a feasible solution for every element $u \in \cN$, and therefore, its weight cannot exceed the weight of $OPT$.
As mentioned in Section~\ref{sec:algorithm}, the algorithm we use to prove Proposition~\ref{prop:k_approx_for_k_power_general} is a variant of \refalg{cs_alg} that includes additional logic designed to force the ratio $\nicefrac{\wmax}{\wmin}$ to be effectively polynomial---specifically, $O(k^2\rho^2)$. Given access to $\rho$ and $\max_{u \in \cN} w(u)$, this could be done simply by setting $\wmax = \max_{u \in \cN} w(u)$ and $\wmin = \max_{u \in \cN} w(u) / (2\rho)$ and discarding any element whose weight is lower than $\wmin$.\footnote{Starting from this point, $\wmax$ and $\wmin$ are no longer necessarily upper and lower bounds on the weights of all the elements. However, they remain upper and lower bounds on the weights of the non-discarded elements.} This guarantees that the ratio $\nicefrac{\wmax}{\wmin}$ is small, and affects the weight of the optimal solution $OPT$ by at most a constant factor since the total weight of the elements of this solution that get discarded is upper bounded by
\[
|OPT| \cdot \wmin
\leq
\rho \cdot \frac{\max_{u \in \cN} w(u)}{2\rho}
=
\frac{\max_{u \in \cN}w(u)}{2}
\leq
\frac{w(OPT)}{2}
\enspace.
\]
Unfortunately, our algorithm does not have access (from the beginning) to $\rho$ and $\max_{u \in \cN} w(u)$. As an alternative, this algorithm, which is given as Algorithm~\ref{algorithm:general_weights}, does two things. First, it keeps $\wmax$ equal to the maximum weight of the elements seen so far, which guarantees that eventually $\wmax$ becomes $\max_{u \in \cN} w(u)$. Second, it runs the unweighted greedy algorithm on the input it receives. The size of the solution maintained by the unweighted greedy algorithm, which we denote by $g$, provides an estimate for the maximum size of an independent set consisting only of elements that have already arrived. In particular, after all the elements arrive, $\rho/k \leq g \leq \rho$ because the unweighted greedy algorithm is a $k$-approximation algorithm.
Given the above discussion and the fact that the final value of $kg$ is an upper bound on $\rho$, it is natural to define $\wmin$ as $\wmax / (2kg)$ and discard every element whose weight is lower than $\wmin$. Unfortunately, this does not work since $\wmax$ and $g$ change during the execution of Algorithm~\ref{algorithm:general_weights}, and reach their final values only when it terminates. Thus, we need to set $\wmin$ to a more conservative (lower) value. In particular, Algorithm~\ref{algorithm:general_weights} uses $\wmin = \wmax / (2gk)^2$.
Like Algorithm~\ref{cs_alg}, Algorithm~\ref{algorithm:general_weights} maintains an instance of the unweighted greedy algorithm for every possible weight between $\wmin$ and $\wmax$. However, doing so is somewhat more involved for Algorithm~\ref{algorithm:general_weights} because $\wmin$ and $\wmax$ change during the algorithm's execution, which requires the algorithm to occasionally create and remove instances of unweighted greedy. The creation of such instances involves one subtle issue that needs to be kept in mind. In Algorithm~\ref{cs_alg} every instance of unweighted greedy associated with a weight $w$ receives all elements whose weight is at least $w$. To mimic this behavior, when Algorithm~\ref{algorithm:general_weights} creates new instances of unweighted greedy following a decrease in $\wmin$ (which can happen when $g$ increases), the newly created instances are not fresh new instances but copies of the instance of unweighted greedy that was previously associated with the lowest weight.
The rest of the details of Algorithm~\ref{algorithm:general_weights} are identical to the details of Algorithm~\ref{cs_alg}. Specifically, every arriving element $u$ is fed to every instance of unweighted greedy associated with a weight of $w(u)$ or less, and at termination the outputs of all the unweighted greedy instances are combined in the same way in which this is done in Algorithm~\ref{cs_alg}.
\begin{algorithm}[ht]
\DontPrintSemicolon
\caption{\textbf{Greedy of Greedies for Unbounded Weights}}
\label{algorithm:general_weights}
Create an instance of the unweighted greedy algorithm named $\Greedy$, and let $g$ denote the size of the solution maintained by it. \\
\For{each element $u$ that arrives from the stream}
{
Feed $u$ to $\Greedy$. \\
\If{$u$ is the first element to arrive}
{
Let $\wmax \gets w(u)$ and $\wmin \gets \wmax / (2gk)^2$.\\
Let $\imin \gets \lceil \log_k \wmin \rceil$ and $\imax \gets \log_k \wmax$.\\
Create new instances of the unweighted greedy algorithm named $\Greedy_{\imin},\allowbreak \Greedy_{\imin + 1},\allowbreak \dotsc, \Greedy_{\imax}$.
}
\Else
{
Update $\wmax \gets \max\{\wmax, w(u)\}$ and $\imax \gets \log_k \wmax$. If the value of $\wmax$ increased following this update, create new instances of unweighted greedy named $\Greedy_{\pimax+1}, \Greedy_{\pimax + 2}, \dotsc, \Greedy_{\imax}$, where $\pimax$ is the old value of $\imax$.\footnotemark\label{line:create}\\
Update $\wmin \gets \wmax / (2gk)^2$ and $\imin \gets \lceil \log_k \wmin \rceil$. If the value of $\wmin$ increased following this update, delete the instances of unweighted greedy named $\Greedy_{\pimin},\allowbreak \Greedy_{\pimin + 1}, \dotsc, \Greedy_{\imin - 1}$, where $\pimin$ is the old value of $\imin$. In contrast, if the value of $\wmin$ decreased following the update, copy $\Greedy_{\pimin}$ into new instances of unweighted greedy named $\Greedy_{\imin},\allowbreak \Greedy_{\imin + 1},\allowbreak \dotsc, \Greedy_{\pimin - 1}$.\label{line:delete}
}
\If{$w(u) \geq \wmin$}
{
Let $i_u \gets \log_k w(u)$.\\
Feed $u$ to $\Greedy_{\imin},\Greedy_{\imin + 1},\dotsc,\Greedy_{i_u}$.
}
}
\BlankLine
Let $C_i$ denote the output of $\Greedy_i$ for every $\imin \leq i \leq \imax$. \\
Let $T \gets \varnothing$. \\
\For{every $\imin \leq i \leq \imax$ in descending order}
{
Greedily add elements from $C_i$ to $T$ as long as this is possible. \\
Let $T_i$ denote the current value of $T$.
}
\Return{$T$}.
\end{algorithm}
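For illustration, the window bookkeeping of \refalg{algorithm:general_weights} can be sketched in isolation as follows (our Python; \texttt{C} maps a level $i$ to the solution set of $\Greedy_i$, and, unlike a smart implementation, instances destined for immediate deletion are not optimized away here).
\begin{verbatim}
import math

def update_window(C, w_max, g, k):
    """Adjust the instances Greedy_{i_min..i_max} after w_max or g changed."""
    i_max = round(math.log(w_max, k))
    i_min = math.ceil(math.log(w_max/(2*g*k)**2, k))
    if C:
        lo, hi = min(C), max(C)
        for i in range(hi + 1, i_max + 1):
            C[i] = set()                  # fresh instances above the old top
        for i in range(i_min, lo):
            C[i] = set(C[lo])             # copies of the lowest old instance
        for i in [j for j in C if j < i_min]:
            del C[i]                      # instances that fell below w_min
    else:
        for i in range(i_min, i_max + 1):
            C[i] = set()
    return i_min, i_max
\end{verbatim}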
We now get to the analysis of \refalg{algorithm:general_weights}, and let us begin by bounding its space complexity. Let $g(h)$, $\imin(h)$, $\imax(h)$, $\wmin(h)$ and $\wmax(h)$ denote the values of $g$, $\imin$, $\imax$, $\wmin$ and $\wmax$, respectively, at the end of iteration number $h$ of Algorithm~\ref{algorithm:general_weights}.
\begin{lemma} \label{lem:app_space}
\refalg{algorithm:general_weights} can be implemented using a space complexity of $O(\rho (\log k + \log \rho) /\log k)$.
\end{lemma}
\begin{proof}
\footnotetext{As written, Line~\ref{line:create} might create a large number of instances of unweighted greedy when there is a large increase in $\wmax$. However, when this happens most of the newly created instances are immediately deleted by Line~\ref{line:delete}. A smart implementation of Algorithm~\ref{algorithm:general_weights} can avoid the creation of unweighted greedy instances that are destined for such immediate deletion, and this is crucial for the analysis of the space complexity of Algorithm~\ref{algorithm:general_weights} in the proof of Lemma~\ref{lem:app_space}.}
Using the same argument used in the proof of Lemma~\ref{lem:space_complexity}, it can be shown that the space complexity of Algorithm~\ref{algorithm:general_weights} is upper bounded by $O(\rho)$ times the maximum number of unweighted greedy instances maintained by the algorithm at the same time. By making the deletions of unweighted greedy instances precede the creation of new instances within every given iteration of the main loop of Algorithm~\ref{algorithm:general_weights} (and avoiding the creation of instances that need to be immediately deleted), it can be guaranteed that the maximum number of instances of unweighted greedy maintained by Algorithm~\ref{algorithm:general_weights} at any given time is exactly $\max_{1 \leq h \leq n} \{\imax(h) - \imin(h) + 2\}$. Thus, the algorithm's space complexity is at most
\begin{align*}
&
O(\rho) \cdot \max_{1 \leq h \leq n} \{\imax(h) - \imin(h) + 2\}
=
O(\rho) \cdot \max_{1 \leq h \leq n} \{\log_k \wmax(h) - \lceil \log_k \wmin(h) \rceil + 2\}\\
\leq{} &
O(\rho) \cdot \max_{1 \leq h \leq n} \left\{\log_k \left(\frac{\wmax(h)}{\wmin(h)} \right) + 2\right\}
=
O(\rho) \cdot \max_{1 \leq h \leq n} \left\{\log_k (2k \cdot g(h))^2 + 2\right\}\\
\leq {} &
O(\rho) \cdot [\log_k(2 \rho k)^2 + 2]
\leq
O(\rho) \cdot \frac{2 \ln \rho + 4\ln k + 2}{\ln k}
\enspace ,
\end{align*}
where the second inequality is due to the fact that $g$ is always the size of an independent set, and thus, cannot exceed $\rho$.
\end{proof}
Our next objective is to analyze the approximation ratio of Algorithm~\ref{algorithm:general_weights}. Like in the toy analysis presented above for the case in which the algorithm has access to $\rho$ and $\max_{u \in \cN} w(u)$, the analysis we present starts by upper bounding the total weight of the discarded elements. However, to do that we need the following technical observation, which can be proved by induction.
\begin{observation} \label{obs:nice}
Algorithm~\ref{algorithm:general_weights} maintains the invariant that, at the end of every one of its loop iterations, if an element $u \in \cN$ was fed to some instance of unweighted greedy currently kept by the algorithm, then it was fed to exactly those currently kept instances $\Greedy_i$ with $i \leq \log_k w(u)$.
\end{observation}
We say that an element $u \in \cN$ is \emph{discarded} by Algorithm~\ref{algorithm:general_weights} if $u$ was never fed to the final instance $\Greedy_{\imin(n)}$ (during the execution of Algorithm~\ref{algorithm:general_weights} there might be multiple instances of unweighted greedy named $\Greedy_i$ for $i = \imin(n)$---by \emph{final instance} we mean the last of these instances). Let $F$ be the set of discarded elements.
\begin{lemma}
$w(OPT \cap F) \leq \frac{1}{2} \cdot w(OPT)$.
\end{lemma}
\begin{proof}
For every $1 \leq i \leq |OPT \cap F|$, let $u_i$ be the $i$-th element of $OPT \cap F$ to arrive, and let $h_i$ be its location in the input stream. Given Observation~\ref{obs:nice}, the fact that $u_i \in F$ implies that $u_i$ was not fed to the final instance $\Greedy_{\log_k w(u_i)}$, which can only happen if an instance named $\Greedy_{\log_k w(u_i)}$ either did not exist when $u_i$ arrived or was deleted at some point after $u_i$'s arrival. Thus, $\imin(h'_i) > \log_k w(u_i)$ for some $h_i \leq h'_i \leq n$.
The crucial observation now is that $g(h'_i) \geq g(h_i) \geq i/k$ because by the time $u_i$ arrives there are already $i$ elements of $OPT$ that have arrived, and these elements together form an independent set of size $i$ (recall that $g$ is a $k$-approximation for the maximum size of an independent set consisting only of elements that already arrived). Thus, we get
\[
w(u_i)
=
k^{\log_k w(u_i)}
\leq
k^{\imin(h'_i) - 1}
\leq
\wmin(h'_i)
=
\frac{\wmax(h'_i)}{(2k \cdot g(h'_i))^2}
\leq
\frac{\max_{u \in \cN} w(u)}{(2k \cdot (i/k))^2}
\leq
\frac{w(OPT)}{4i^2}
\enspace,
\]
where the first inequality holds since $\imin(h'_i) > \log_k w(u_i)$ and both $\imin(h'_i)$ and $\log_k w(u_i)$ are integers. Adding up the last inequality over $1 \leq i \leq |OPT \cap F|$ yields
\[
w(OPT \cap F)
=
\sum_{i=1}^{|OPT \cap F|} \mspace{-9mu} w(u_i)
\leq
\sum_{i=1}^{|OPT \cap F|}{\frac{w(OPT)}{4 i^2}}
\leq
\frac{w(OPT)}{4} \cdot \left[1 + \int_1^{\infty}{i^{-2}\,{\rm d}i}\right]
=
\frac{w(OPT)}{2}
\enspace .
\qedhere
\]
\end{proof}
The next lemma shows that Algorithm~\ref{algorithm:general_weights} has a good approximation ratio with respect to the non-discarded elements of $OPT$.
\begin{lemma} \label{lemma}
$w(OPT \setminus F) \leq 2k \cdot w(T)$.
\end{lemma}
\begin{proof}
Observe that $(\cN \setminus F, \cI \cap 2^{\cN \setminus F})$ is a $k$-extendible system, derived from $(\cN, \cI)$ by removing all elements of $F$.
In addition, all the weights of the elements of this set system are powers of $k$, and thus, by Proposition~\ref{prop:k_approx_for_k_power}, Algorithm~\ref{cs_alg} achieves $2k$-approximation for the problem of finding a maximum weight independent set of $(\cN \setminus F, \cI \cap 2^{\cN \setminus F})$. In other words, when Algorithm~\ref{cs_alg} is fed only the elements of $\cN \setminus F$, its output set $T'$ obeys $w(OPT') \leq 2k \cdot w(T')$, where $OPT'$ is an arbitrary maximum weight independent set of $(\cN \setminus F, \cI \cap 2^{\cN \setminus F})$.
We now note that one consequence of Observation~\ref{obs:nice} is that, by the time Algorithm~\ref{algorithm:general_weights} terminates, the instances $\Greedy_{\imin(n)}, \Greedy_{\imin(n) + 1}, \dotsc, \Greedy_{\imax(n)}$ it maintains have received exactly the input received by the corresponding instances in Algorithm~\ref{cs_alg} when the latter gets only the elements of $\cN \setminus F$ as input. Since Algorithms~\ref{cs_alg} and~\ref{algorithm:general_weights} compute their outputs based on the outputs of $\Greedy_{\imin(n)}, \Greedy_{\imin(n) + 1}, \dotsc, \Greedy_{\imax(n)}$ in the same way, this implies that the output set $T$ of Algorithm~\ref{algorithm:general_weights} is identical to the output set $T'$ produced by Algorithm~\ref{cs_alg} when this algorithm is given only the elements of $\cN \setminus F$ as input.
Combining the above observations, we get
\[
w(T)
=
w(T')
\geq
\frac{w(OPT')}{2k}
\geq
\frac{w(OPT \setminus F)}{2k}
\enspace,
\]
where the last inequality holds since $OPT'$ is a maximum weight independent set in $(\cN \setminus F, \cI \cap 2^{\cN \setminus F})$ and $OPT \setminus F$ is independent in this set system. The lemma now follows by rearranging the last inequality.
\end{proof}
\begin{corollary} \label{cor:app_approximation}
$w(OPT) \leq 4k \cdot w(T)$, and thus, the approximation ratio of Algorithm~\ref{algorithm:general_weights} is at most $4k$.
\end{corollary}
\begin{proof}
Combining the last two lemmata, one gets
\[
\frac{w(OPT)}{2}
\leq
w(OPT) - w(OPT \cap F)
=
w(OPT \setminus F)
\leq
2k \cdot w(T)
\enspace.
\]
The corollary now follows by rearranging the above inequality.
\end{proof}
We conclude the section by noticing that Proposition~\ref{prop:k_approx_for_k_power_general} is an immediate consequence of Lemma~\ref{lem:app_space} and Corollary~\ref{cor:app_approximation}.
\section{Introduction} \label{sec:introduction}
Many problems in combinatorial optimization can be cast as special cases of the following general task. Given a ground set $\cN$ of weighted elements, find a maximum weight subset of $\cN$ obeying some constraint $\cC$. In general, one cannot get any reasonable approximation ratio for this general task since it captures many hard problems such as maximum independent set in graphs. However, the existing literature includes many interesting classes of constraints for which the above task becomes more tractable. In particular, in the 1970's Jenkyns~\cite{J76} and Korte and Hausmann~\cite{KH78} suggested, independently, a class of constraints named \emph{$k$-set system} constraints which represents a sweet spot between generality and tractability. On the one hand, finding a maximum weight set subject to a $k$-set system constraint captures many well known problems such as matching in hypergraphs, matroid intersection and asymmetric travelling salesperson. On the other hand, $k$-set system constraints have enough structure to allow a simple greedy algorithm to find a maximum weight set subject to such a constraint up to an approximation ratio of $k$.\footnote{$k$ is a parameter of the constraint which intuitively captures its complexity. The exact definition of $k$ is given in Section~\ref{sec:preliminaries}, but we note here that in many cases of interest $k$ is quite small. For example, matroid intersection is a $2$-set system.}
The $k$-approximation obtained by the greedy algorithm for finding a maximum weight set subject to a $k$-set system constraint was recently shown to be the best possible~\cite{BV14}. Nevertheless, over the years many works improved over it either by achieving a better guarantee for more restricted classes of constraints~\cite{FNSW11,LSV10,LSV13}, or by extending the guarantee to more general objectives (such as maximizing a submodular function)~\cite{FHK17,FNSW11,FNW78,GRST10,LMNS10,LSV10,MBK16,W12}. Unfortunately, many of the above mentioned improvements are based on quite slow algorithms. Moreover, as modern applications require the processing of increasingly large amounts of data, even the simple greedy algorithm is often viewed these days as too slow for practical use. This state of affairs has motivated recent works aiming to study the problem of finding a maximum weight set subject to a $k$-set system constraint in a Big Data oriented setting such as Map-Reduce and the data stream model. For the Map-Reduce setting, Ponte Barbosa et al.~\cite{BENW16} essentially solved this problem by presenting a $(k + O(\eps))$-approximation Map-Reduce algorithm for it using $O(1/\eps)$ rounds, which almost matches the optimal approximation ratio in the sequential setting. In contrast, the situation for the data stream model is currently much more involved.
The only non-trivial data stream algorithm known to date (as far as we know) for finding a maximum weight set subject to a general $k$-set system constraint is a $k^2(1 + \eps)$-approximation semi-streaming algorithm by Crouch and Stubbs~\cite{CS14}. As one can observe, there is a large gap between the last approximation ratio and the $k$-approximation that can be achieved in the offline setting. Several works partially addressed this gap by providing $O(k)$-approximation semi-streaming algorithms for more restricted classes of constraints, the most general of which is known as \emph{$k$-matchoid} constraints~\cite{CK15,CGQ15,FKK18,MJK18}. However, these results cannot be considered a satisfactory solution for the gap because $k$-matchoid constraints are much less general than $k$-set system constraints.\footnote{We do not formally define $k$-matchoid constraints in this paper, but it should be noted that they usually fail to capture knapsack-like constraints. For example, a single knapsack constraint in which the ratio between the largest and smallest item sizes is at most $k$ is a $k$-set system constraint, but usually not a $k$-matchoid constraint.}
In this paper we make a large step towards resolving the above gap. Specifically, we present an $\tilde{O}(k)$-approximation semi-streaming algorithm for finding a maximum weight set subject to a class of constraints, known as \emph{$k$-extendible} constraints, that was introduced by~\cite{M06} and captures (to the best of our knowledge) all the special cases of $k$-set system constraints studied in the literature to date (including, in particular, $k$-matchoid constraints). Formally, we prove the following theorem.
\begin{theorem} \label{thm:streaming_k_extendible}
There is a polynomial time semi-streaming algorithm achieving $O(k \log k)$-approx\-imation for the problem of finding a maximum weight set subject to a $k$-extendible constraint. Assuming it takes constant space to store a single element and a single weight, the space complexity of the algorithm is $O(\rho (\log k + \log \rho))$, where $\rho$ is the maximum size of a feasible set according to the constraint.
\end{theorem}
As the class of $k$-extendible constraints captures every other restricted class of $k$-set system constraints from the literature, we believe Theorem~\ref{thm:streaming_k_extendible} represents the final intermediate step before closing the above mentioned gap completely (\ie, either finding an $\tilde{O}(k)$ semi-streaming algorithm for $k$-set system constraints, or proving that this cannot be done). It should also be mentioned that the approximation ratio guaranteed by Theorem~\ref{thm:streaming_k_extendible} is optimal up to an $O(\log k)$ factor since it is known that one cannot achieve better than $k$-approximation for finding a maximum weight set subject to a $k$-extendible constraint even in the offline setting~\cite{FHK17}.
\subsection{Additional Related Work}
In the $k$-dimensional matching problem, one is given a weighted hypergraph in which the vertices are partitioned into $k$ subsets, and every edge contains exactly one vertex from each one of these subsets. The objective in this problem is to find a maximum weight matching in the hypergraph. Hazan et al.~\cite{HSS06} showed that no polynomial time algorithm can approximate $k$-dimensional matching to within a factor of $\Omega(k / \log k)$ unless $\mathtt{P} = \mathtt{NP}$. Interestingly, it turns out that $k$-dimensional matching is captured by all the standard restricted cases of the problem of finding a maximum weight set subject to a $k$-set system constraint, and thus, the inapproximability of Hazan et al.~\cite{HSS06} extends to them as well. For most of these restricted cases this is the strongest inapproximability known, although a tight inapproximability of $k$ was proved for $k$-set system and $k$-extendible constraints by~\cite{BV14} and~\cite{FHK17}, respectively.
Complementing the hardness result of~\cite{HSS06}, some works presented algorithmic results for either $k$-dimensional matching or natural generalizations of it such as $k$-set packing~\cite{B00,HS89,SW13}.
\section{Preliminaries and Notation} \label{sec:preliminaries}
In this section we formally define some of the terms used in Section~\ref{sec:introduction} and the notation that we use in the rest of this paper. Given a ground set $\cN$, an \emph{independence system} over this ground set is a pair $(\cN, \cI)$ in which $\cI$ is a non-empty collection of subsets of $\cN$ (formally, $\varnothing \neq \cI \subseteq 2^\cN$) which is \emph{down-closed} (\ie, if $T$ is a set in $\cI$ and $S$ is a subset of $T$, then $S$ also belongs to $\cI$). One easy way to get an example of an independence system is to take an arbitrary vector space $W$, designate the set of vectors in this space as the ground set $\cN$, and make $\cI$ the collection of all independent sets of vectors in $W$. Since removing a vector from an independent set of vectors cannot make the set dependent, the pair $(\cN, \cI)$ obtained from $W$ in this way is indeed an independence system.
The above example for getting an independence system from a vector space was one of the original motivations for the study of independence systems, and thus, a lot of the terminology used for independence systems is borrowed from the world of vector spaces. In particular, a set is called \emph{independent} in a given independence system $(\cN, \cI)$ if and only if it belongs to $\cI$, and it is called a \emph{base} of the independence system if it is an inclusion-wise maximal independent set. Using this terminology, we can now define $k$-set systems.
\begin{definition}
An independence system $(\cN, \cI)$ is a $k$-set system for an integer $k \geq 1$ if for every set $S \subseteq \cN$, all the bases of $(S, 2^S \cap \cI)$ have the same size up to a factor of $k$ (in other words, the ratio between the sizes of the largest and smallest bases of $(S, 2^S \cap \cI)$ is at most $k$).
\end{definition}
An immediate consequence of the definition of $k$-set systems is that any base of such a system is a maximum size independent set up to an approximation ratio of $k$. Thus, one can get a $k$-approximation for the problem of finding a maximum size independent set in a given $k$-set system $(\cN, \cI)$ by outputting an arbitrary base of the $k$-set system, which can be done using the following simple strategy that we call the \emph{unweighted greedy algorithm}. Start with the empty solution, and consider the elements of the ground set $\cN$ in an arbitrary order. When considering an element, add it to the current solution, unless this will make the solution dependent (\ie, not independent).
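For concreteness, a minimal Python sketch of the unweighted greedy algorithm is given below; the independence oracle \texttt{is\_independent}, mapping a set of elements to a Boolean, is an assumed interface.
\begin{verbatim}
def unweighted_greedy(elements, is_independent):
    # Scan elements in the given (arbitrary) order, keeping every
    # element whose addition preserves independence; the result is
    # a base of the independence system.
    solution = set()
    for u in elements:
        if is_independent(solution | {u}):
            solution.add(u)
    return solution
\end{verbatim}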
A \emph{$k$-set system constraint} is a constraint defined by a $k$-set system, and a set $S$ obeys this constraint if and only if it is independent in that $k$-set system. Note that using this notion we can refer to the problem studied in the previous paragraph as finding a maximum cardinality set subject to a $k$-set system constraint. More generally, given a weight function $w \colon \cN \to \nnR$ and a $k$-set system $(\cN, \cI)$ over the same ground set, it is often useful to consider the problem of finding a maximum weight set $S \subseteq \cN$ subject to the constraint corresponding to this $k$-set system (the weight of a set $S$ is defined as $\sum_{u \in S} w(u)$). Jenkyns~\cite{J76} and Korte and Hausmann~\cite{KH78} showed that one can get a $k$-approximation for this problem using an algorithm, known simply as the \emph{greedy algorithm}, which is a variant of the unweighted greedy algorithm that considers the elements of $\cN$ in non-increasing weight order (\ie, from heaviest to lightest).
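The weighted variant only changes the processing order; a sketch (assuming \texttt{w} is a dictionary of element weights):
\begin{verbatim}
def greedy(elements, w, is_independent):
    # Jenkyns / Korte-Hausmann greedy: process elements from
    # heaviest to lightest, then proceed exactly as the
    # unweighted greedy algorithm.
    order = sorted(elements, key=lambda u: w[u], reverse=True)
    return unweighted_greedy(order, is_independent)
\end{verbatim}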
The definition of $k$-set systems is very general, which occasionally does not allow them to capture all the necessary structure of a given application. Thus, various stronger kinds of independence systems have been considered over the years, the most well known of which is the intersection of $k$ matroids (which is equivalent to a $k$-set system for $k = 1$, and represents a strictly smaller class of independence systems for larger values of $k$). In this work we consider another kind of independence systems, which was originally defined by~\cite{M06}. In this definition we use the expression $S + u$ to denote the union $S \cup \{u\}$. We use the plus sign in a similar way throughout the rest of this paper.
\begin{definition}
An independence system $(\cN, \cI)$ is a $k$-extendible system for an integer $k \geq 1$ if for any two sets $S \subseteq T \subseteq \cN$ and an element $u \not \in T$ such that $S + u \in \cI$, there is a subset $Y \subseteq T \setminus S$ of size at most $k$ such that $T \setminus Y + u \in \cI$.
\end{definition}
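Since the definition quantifies over all pairs $S \subseteq T$, it can be checked by brute force on tiny ground sets; the following sketch (with $T$ ranging over independent sets, as is standard) does exactly that.
\begin{verbatim}
from itertools import combinations

def is_k_extendible(ground, is_independent, k):
    # Brute-force check of the definition: for every independent T,
    # every S <= T and every u outside T with S + u independent,
    # some Y <= T \ S with |Y| <= k must make (T \ Y) + u independent.
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for T in (t for t in subsets if is_independent(t)):
        for S in (s for s in subsets if s <= T):
            for u in ground - T:
                if not is_independent(S | {u}):
                    continue
                rest = T - S
                if not any(is_independent((T - frozenset(Y)) | {u})
                           for r in range(min(k, len(rest)) + 1)
                           for Y in combinations(rest, r)):
                    return False
    return True

# Example: a cardinality constraint |S| <= 2 is 1-extendible (a matroid).
assert is_k_extendible(frozenset(range(4)), lambda S: len(S) <= 2, 1)
\end{verbatim}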
The class of $k$-extendible systems is general enough to capture the intersection of $k$ matroids and every other restricted class of $k$-set systems from the literature that we are aware of. At the same time, it is not difficult to verify that any $k$-extendible system is a $k$-set system. Thus, the greedy algorithm provides $k$-approximation for the problem of finding a maximum weight set subject to a $k$-extendible constraint---\ie, a constraint defined by a $k$-extendible system and allowing only sets that are independent in this system.
In the data stream model version of the above problem, the elements of the ground set of a $k$-extendible system $(\cN, \cI)$ arrive one after the other in an adversarially chosen order. An algorithm for this model views the elements of $\cN$ as they arrive, and it gets to know the weight $w(u)$ of every element $u$ upon its arrival. Additionally, as is standard in the field, we assume the algorithm has access to an \emph{independence oracle} that given a set $S \subseteq \cN$ answers whether $S$ is independent. The objective of the algorithm is to output a maximum weight independent set of the $k$-extendible system. If the algorithm is allowed enough memory to store the entire input, then the data stream model version becomes equivalent to the offline version of the problem. Thus, an algorithm for this model is interesting only if it has a low space complexity. Since any algorithm for this model must use at least the space necessary for storing its output, most works on this model look for \emph{semi-streaming} algorithms, which are data stream algorithms whose space complexity is upper bounded by $O(\rho \cdot \polylog n)$---where $\rho$ is the maximum size of an independent set and $n$ is the size of the ground set. In particular, we note that the space complexity guaranteed by Theorem~\ref{thm:streaming_k_extendible} falls within this regime because $\rho \leq n$ by definition, and one can assume that $k \leq n$ because any independence system is $n$-extendible.
One can observe that the unweighted greedy algorithm (unlike the greedy algorithm itself) can be implemented as a semi-streaming algorithm because it considers the elements in an arbitrary order. This observation is crucial for our result since the algorithm we develop is heavily based on using the unweighted greedy algorithm as a subroutine (a similar use of the unweighted greedy algorithm is done by the current state-of-the-art algorithm for the problem due to Crouch and Stubbs~\cite{CS14}).
\paragraph{Paper Organization:} In Section~\ref{sec:reduction} we present a reduction that allows us to assume that the weights of the elements are powers of $k$, at the cost of an $O(\log k)$ factor in both the approximation ratio and the space complexity of the algorithm. Using this reduction, we present a basic version of our algorithm in Section~\ref{sec:algorithm}. This basic version presents our main new ideas, but achieves semi-streaming space complexity only under the simplifying assumption that the ratio between the maximum and minimum element weights is polynomially bounded. This simplifying assumption can be dropped using standard techniques, and we defer the details to Appendix~\ref{app:general_weights}.
\section{Reduction to \texorpdfstring{$k$}{k}-Power Weights} \label{sec:reduction}
In this section we present a reduction that allows us to assume that the weights of all the elements in the ground set $\cN$ are powers of $k$. This reduction simplifies the algorithms we present later in this paper. However, before presenting the reduction itself, let us note that we assume from this point on that $k = 2^i$ for some integer $i \geq 1$. This assumption is without loss of generality because if $k$ does not obey it, then we can increase its value to the nearest integer that does obey it. Since the new value of $k$ is larger than the old value by at most a factor of $2$, the approximation ratio guaranteed for both values of $k$ by Theorem~\ref{thm:streaming_k_extendible} is asymptotically equal.
We say that an instance of {\mwm} is a \emph{$k$-power} instance if the weights of all the elements in it are powers of $k$.
\begin{reduction} \label{reduction:power_k_weights}
Assume that we are given a polynomial time data stream algorithm $ALG$ for \mwm{}. If $ALG$ provides $\alpha$-approximation for $k$-power instances of the problem using $S_{ALG}$ space, then there exists a polynomial time data stream algorithm for the same problem which achieves $O(\alpha \log{k})$-approximation for arbitrary instances using $O(S_{ALG} \cdot \log k)$ space. Moreover, if the weights of all the elements fall within some range $[\wmin, \wmax]$, then it suffices for $ALG$ to provide $\alpha$-approximation for $k$-power instances in which all the weights fall within the range $[\wmin/k, \wmax]$.
\end{reduction}
Before presenting the algorithm we use to prove the above reduction, we need to define some additional notation.
Let $\ell \triangleq \log_2 k$, and note that $\ell$ is a positive integer because we assume that $k$ is at least $2$ and a power of $2$.
For every element $u \in \cN$ of weight $w(u)$, we define an auxiliary weight $w_2(u) \triangleq k^{\lfloor \log_k w(u) \rfloor}$.
Intuitively, $w_2(u)$ is the highest power of $k$ which is not larger than $w(u)$. The following observation formally states the properties of $w_2$ that we need.
In this observation we use the notation $i(u) \triangleq \lfloor \log_2 w(u) \rfloor$.
\begin{observation} \label{obs:weights_relationships}
For every element $u \in \cN$, $w_2(u)$ is a power of $k$ obeying $w(u)/2 \leq w_2(u) \cdot 2^{i(u) \bmod \ell} \leq w(u)$ and $w(u) / k \leq w_2(u) \leq w(u)$.
\end{observation}
\begin{proof}
The first part of the observation, namely that $w_2(u)$ is a power of $k$, follows immediately from the definition of $w_2$.
Thus, we concentrate here on proving the other parts of the observation.
Note that
\[
w_2(u)
=
k^{\lfloor \log_k w(u) \rfloor}
=
k^{\lfloor \ell^{-1} \cdot \log_2 w(u)\rfloor}
=
k^{\ell^{-1} \cdot \{\lfloor \log_2 w(u)\rfloor - \lfloor \log_2 w(u) \rfloor \bmod \ell\}}
=
k^{\ell^{-1} \cdot \lfloor \log_2 w(u)\rfloor} / 2^{i(u) \bmod \ell}
\enspace.
\]
Rearranging the last equality, we get
\[
\frac{w(u)}{2}
=
k^{\log_k w(u) - \log_k 2}
=
k^{\ell^{-1} \log_2 w(u) - \ell^{-1}}
\leq
k^{\ell^{-1} \cdot \lfloor \log_2 w(u) \rfloor}
=
w_2(u) \cdot 2^{i(u) \bmod \ell}
\enspace,
\]
and
\[
w_2(u) \cdot 2^{i(u) \bmod \ell}
=
k^{\ell^{-1} \cdot \lfloor \log_2 w(u) \rfloor}
\leq
k^{\ell^{-1} \cdot \log_2 w(u)}
=
k^{\log_k w(u)}
=
w(u)
\enspace.
\]
To complete the proof of the observation, we note that it also holds that
\[
w_2(u)
=
k^{\lfloor \log_k w(u) \rfloor}
\leq
k^{\log_k w(u)}
=
w(u)
\quad
\text{and}
\quad
w_2(u)
=
k^{\lfloor \log_k w(u) \rfloor}
\geq
k^{\log_k w(u) - 1}
=
\frac{w(u)}{k}
\enspace.
\qedhere
\]
\end{proof}
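As a quick numerical sanity check of Observation~\ref{obs:weights_relationships}, the following Python sketch (assuming weights $w \geq 1$, so that the integer logarithms below are computed exactly) verifies both inequalities.
\begin{verbatim}
def floor_log(w, base):
    # Largest integer e with base**e <= w, computed exactly for w >= 1.
    e = 0
    while base ** (e + 1) <= w:
        e += 1
    return e

def check_observation(w, k):
    ell = floor_log(k, 2)          # ell = log_2 k (k is a power of 2)
    w2 = k ** floor_log(w, k)      # highest power of k not exceeding w
    i = floor_log(w, 2)            # i(u) = floor(log_2 w(u))
    assert w / 2 <= w2 * 2 ** (i % ell) <= w
    assert w / k <= w2 <= w

for w in [1, 3.7, 12, 100, 512, 10 ** 6]:
    check_observation(w, k=8)
\end{verbatim}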
We are now ready to present the algorithm that we use to prove \refreduc{reduction:power_k_weights}, which appears as \refalg{alg}.
To intuitively understand this algorithm, it is useful to think of $i(u)$ as the ``class'' element $u$ belongs to. All the elements within class $i$ have weights between $2^i$ and $2^{i + 1}$, and thus, treating them all as having the weight $2^i$ does not affect the approximation ratio by more than a factor of $2$. Let us call $2^i$ the \emph{characteristic} weight of class $i$. Note now that the ratio between the characteristic weight of class $i_1$ and the characteristic weight of class $i_2$ is $2^{i_1 - i_2}$, which is a power of $k$ whenever $i_1 - i_2$ is an integer multiple of $\ell = \log_2 k$. Thus, one can group the classes into $\ell$ groups such that the ratio between the characteristic weights of any pair of classes within a group is a power of $k$ (see Figure~\ref{fig} for a graphical illustration of these groups). Moreover, by multiplying all the characteristic weights in the group by an appropriate scaling factor, one can make them all powers of $k$. This means that for every group there exists a transformation that converts all the weights of the elements in it to powers of $k$ and preserves the ratio between any two weights in the group up to a factor of $2$. In particular, we get that the elements of the group after the transformation form a $k$-power instance.
\begin{figure}[ht]
\centering
\includestandalone[]{weights_reduction_diagram}
\caption{Each circle in this drawing represents a class, and the value $i(u)$ of the elements in this class appears in the center of the circle. The classes are grouped according to the columns in the drawing. We note that an element $u$ belonging to group $j$ has weight within the range $[2^{m \ell +j}, 2^{m \ell +j + 1})$, where $m$ is an integer and $i(u)= m \ell +j$.} \label{fig}
\end{figure}
Adding up all the above, we have described a way to transform any instance of finding a maximum weight independent set subject to a $k$-extendible constraint into $\ell$ new instances of this problem that are guaranteed to be $k$-power. Algorithm~\ref{alg} essentially creates these $\ell$ new instances on the fly, and feeds them to $\ell$ copies of the algorithm $ALG$ whose existence is assumed in Reduction~\ref{reduction:power_k_weights}. Given this point of view, $i(u) \bmod \ell$ should be understood as the group to which element $u$ belongs, and $w_2(u)$ is the transformed weight of $u$. Observation~\ref{obs:weights_relationships} can now be interpreted as stating that the ratio between the weights of elements belonging to the same group (and thus, having the same $i(u) \bmod \ell$ value) is indeed changed by the transformation by at most a factor of $2$.
\begin{algorithm} \label{alg}
\DontPrintSemicolon
\caption{\textbf{Modulo $\ell$ Split}}
Create $\ell$ instances of $ALG$ named $ALG_0, ALG_1, \dotsc, ALG_{\ell - 1}$.\\
\For{each element $u$ that arrives from the stream}
{
Calculate $i(u)$ and $w_2(u)$ as defined above. \\
Feed $u$ to $ALG_{(i(u) \bmod \ell)}$ with the weight $w_2(u)$. \\
}
Let $C_i$ denote the output of $ALG_i$ for every $0 \leq i \leq \ell - 1$. \\
\Return{the best solution among $C_0, C_1, \dotsc, C_{\ell - 1}$}.
\end{algorithm}
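A compact streaming sketch of Algorithm~\ref{alg} is given below; the black-box $ALG$ is assumed to expose a hypothetical interface with \texttt{process(element, weight)} and \texttt{solution()} methods, and weights are assumed to be at least $1$ so the integer logarithms are exact.
\begin{verbatim}
import math

class ModuloEllSplit:
    def __init__(self, k, make_alg):
        self.k = k
        self.ell = round(math.log2(k))       # k is a power of 2, k >= 2
        self.copies = [make_alg() for _ in range(self.ell)]

    def _floor_log(self, w, base):
        e = 0
        while base ** (e + 1) <= w:
            e += 1
        return e

    def process(self, u, w):
        i = self._floor_log(w, 2)                   # class i(u)
        w2 = self.k ** self._floor_log(w, self.k)   # k-power weight w_2(u)
        self.copies[i % self.ell].process(u, w2)

    def best_solution(self, w):
        # Return the candidate C_i of largest *original* weight.
        return max((c.solution() for c in self.copies),
                   key=lambda S: sum(w[u] for u in S))
\end{verbatim}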
In the rest of this section, we use $B_i$ to denote the set of elements fed to instance $ALG_i$ by \refalg{alg}, and $T$ to denote the output of \refalg{alg}.
Additionally, we denote by $OPT$ an arbitrary (fixed) optimal solution for the original instance received by Algorithm~\ref{alg}.
The following lemma proves that \refalg{alg} has the approximation ratio guaranteed by \refreduc{reduction:power_k_weights}.
\begin{lemma} \label{lemma:logk_approx}
$w(OPT) \leq O(\alpha \log k) \cdot w(T)$.
\end{lemma}
\begin{proof}
Since \refalg{alg} feeds every arriving element into exactly one of the instances $ALG_0,\allowbreak ALG_1, \dotsc, ALG_{\ell-1}$, the sets $B_0, B_1, \dotsc, B_{\ell-1}$ form a disjoint partition of $\cN$. Thus,
\[
w(OPT) = \sum \limits_{i=0}^{\ell-1} w(B_i \cap OPT)
\enspace .
\]
Hence, by an averaging argument, there must exist an index $i$ such that $w(OPT) \leq \ell \cdot w(OPT \cap B_i) $.
We now note that it follows from the pseudocode of Algorithm~\ref{alg} and Observation~\ref{obs:weights_relationships} that the copies of $ALG$ get only weights that are powers of $k$, and moreover, these weights belong to the range $[\wmin/k, \wmax]$ whenever the original weights received by Algorithm~\ref{alg} belong to the range $[\wmin, \wmax]$. Thus, by the assumption of Reduction~\ref{reduction:power_k_weights}, $ALG_i$ achieves $\alpha$-approximation for the instance it faces. Since $B_i \cap OPT$ is a feasible solution within this instance and $C_i$ is the output of $ALG_i$, we get $w_2(OPT \cap B_i) \leq \alpha \cdot w_2(C_i)$.
Therefore,
\[
w(OPT)
\leq
\ell \cdot w(OPT \cap B_i)
\leq
2\ell \cdot w_2(OPT \cap B_i) \cdot 2^i
\leq
2\ell\alpha \cdot w_2(C_i) \cdot 2^i
\leq
2\ell\alpha \cdot w(C_i)
\leq
2\ell\alpha \cdot w(T)
\enspace,
\]
where the second and penultimate inequalities hold by Observation~\ref{obs:weights_relationships}, and the last inequality is due to the fact that $T$ is the best solution among $C_0, C_1, \dotsc, C_{\ell-1}$.
\end{proof}
The next lemma analyzes the space complexity of Algorithm~\ref{alg} and completes the proof of \refreduc{reduction:power_k_weights}.
\begin{lemma}
\refalg{alg}'s space complexity is $O(S_{ALG} \cdot \log{k})$.
\end{lemma}
\begin{proof}
\refalg{alg} runs $\log{k}$ parallel copies of $ALG$, each of which is assumed (by \refreduc{reduction:power_k_weights}) to use $S_{ALG}$ space. Thus, the space required by these $\log k$ copies is $O(S_{ALG} \cdot \log k)$. In addition to this space, Algorithm~\ref{alg} only requires enough space to do two things.
\begin{itemize}
\item The algorithm has to store the outputs of the copies of $ALG$. However, these outputs are originally stored by the copies themselves, and thus, storing them requires no more space than what is used by the copies.
\item The algorithm has to calculate the sum of the weights of the elements in the solutions produced by the copies of $ALG$.
Since we assume that the weight of an element can be stored in constant space, this requires again (up to constant factors) no more space than the space used by the copies of $ALG$ to store their solutions. \qedhere
\end{itemize}
\end{proof}
\section{Introduction}
Graphene is a two-dimensional (2D) material \cite{nov2005,kim2005} that has attracted
a lot of interest in view of its unique physical properties
and applications potential in diverse fields such as electronics, spintronics and
quantum computing \cite{casrev,stephanspintronics,quantumc}. Due to its weak
spin orbit coupling~\cite{Tombros-vanWees2007,macdonald2011-prb,Popinciuc-vanWees2009,dlubak-fert2010,han-kawakami2011,yan-ozyilmaz2011,maassen-vanWees2012,dlubak-fert2012,cummings-roche2016,Tuan-roche2016}, graphene possesses long spin relaxation times and lengths even at room temperature \cite{drogeler}.
While these characteristics offer an optimal platform for spin manipulation, it remains however a challenge to achieve robust spin polarization efficiently at room temperature.
Several methods have been proposed in
order to introduce ferromagnetic order in graphene, among which are functionalization with adatoms \cite{func2013}, addition of defects \cite{yazyev2007-defts,Yang2011}, and proximity effect via an adjacent ferromagnet~\cite{yang2013,Zollnerprb2016,barlas2015-prox,moodera2016,bart2016,brataas2008-prox}.
The latter approach attracted a lot of interest using magnetic
insulators (MI) as a substrate to induce exchange splitting in graphene.
When a material is placed on top of a magnetic insulator, it can acquire
proximity induced spin polarization and exchange splitting \cite{yang2013} resulting from the
hybridization of its $p_z$ orbitals with those of the neighboring magnetic insulator.
For practical purposes, the implementation of this kind of material in spintronic devices could lead to lower power consumption, since no current injection across an adjacent ferromagnet (FM) is required, as in the case of traditional spin injection techniques.
Experimentally, the existence of proximity exchange splitting in graphene via a magnetic insulator has been demonstrated with exchange fields up to 100~T using the coupling between graphene and EuS~\cite{moodera2016}.
For a yttrium iron garnet/graphene (YIG/Gr) based system, using non-local spin transport measurements, Leutenantsmeyer et al.~\cite{bart2016} demonstrated an exchange field strength of about 0.2~T. Another possibility of inducing exchange splitting in graphene using a FM metal, separated from it by an alternative 2D material such as hexagonal boron nitride (hBN), was also proposed theoretically~\cite{Zollnerprb2016}.
Recent studies have suggested the creation of graphene-based devices where an EuO-graphene junction can act as a spin filter and spin valve simultaneously by gating the system~\cite{song-spinf-spinv2015}. It was also demonstrated~\cite{song-diode2018} that a double EuO barrier on top of a graphene strip can exhibit negative differential resistance, making this system a spin selective diode. However, the drawbacks of using EuO are its low Curie temperature and the predicted strong electron doping~\cite{yang2013}. It was therefore proposed to use high Curie temperature materials such as YIG or cobalt ferrite (CFO)~\cite{ali2017}. Indeed, a large change in the resistance of a graphene-based spintronic device has been reported recently, where the heavy doping induced by YIG could be compensated by gating~\cite{Song-gating}.
In this Letter we demonstrate the existence of the Proximity Magnetoresistance (PMR) effect in graphene for four different magnetic insulators (MI): YIG, CFO, europium oxide (EuO) and europium sulfide (EuS). Using ab initio parameters reported in Ref.~[\citenum{ali2017}], we show that for YIG and CFO based lateral graphene devices with armchair edges, PMR values could reach 77\% and 22\% at room temperature (RT), respectively. With the chalcogenides EuS and EuO, PMR values can reach 100\% at 16 K and 70 K, respectively. In addition, we demonstrate the robustness of this effect with respect to system dimensions and edge termination type. Furthermore, our calculations show that including spin-orbit coupling (SOC) does not significantly affect the PMR. These findings will stimulate experimental investigations of the proposed PMR phenomenon and the development of other proximity effect based spintronic devices.
\section{Methodology}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig_1.png}
\caption{(color online) Lateral spintronic device comprising two magnetic insulators on top of a graphene sheet. The magnetic graphene regions have a length $L$, width $W$ and are separated by a distance $d$.}
\label{fig_1}
\end{figure}
In order to calculate conductances and PMR, we employed the tight-binding approach with scattering matrix formalism conveniently implemented within the KWANT package~\cite{xavier}. The system modeled is shown in Fig.~\ref{fig_1} and comprises two identical proximity induced magnetic regions of width $W$ and length $L$ resulting from insulators with magnetizations {\bf M}$_1$ and {\bf M}$_2$, separated by a distance $d$ of nonmagnetic region of graphene sheet with armchair edges. Both magnetic graphene regions are separated from the leads $L_1$ and $L_2$ by a small pure graphene region. In order to take into account the magnetism arising in graphene from the proximity effects induced by the MI's, the Hamiltonian employs the parameters obtained for different MI's in Ref.~[\citenum{ali2017}]. It is important to note that the magnetic regions do not affect the linear dispersion of graphene bands, except breaking the valley and electron-hole symmetry resulting in spin-dependent band splitting and doping. The discretized Hamiltonian for the magnetic graphene regions can be expressed as:
\begin{widetext}
\begin{equation}\label{eq:1}
H = \sum_{i\sigma} \sum_l t_{l\sigma} c^\dagger_{(i+l)1\sigma} c_{i0\sigma} +h.c.
+ \sum_{i\sigma\sigma'}\sum_{\mu=0}^1
\left[\delta+(-1)^\mu\Delta_\delta\right] c^\dagger_{i\mu\sigma} [\vec m \cdot \vec\sigma]_{\sigma\sigma'} c_{i\mu\sigma'}
+ \sum_{i\sigma}\sum_{\mu=0}^1
\left[E_D+(-1)^\mu\Delta_s\right] c^\dagger_{i\mu\sigma} c_{i\mu\sigma}
\end{equation}
\end{widetext}
where $c^\dagger_{i\mu\sigma}$ ($c_{i\mu\sigma}$) creates (annihilates) an electron of type $\mu=0$ for A sites and $\mu=1$ for B sites on the unit cell $i$ with spin $\sigma=\uparrow(\downarrow)$ for up (down) electrons.
$\vec m$ and $\vec\sigma$ respectively represent a unit vector that points in the direction of the magnetization and the vector of Pauli matrices, so that $\vec m.\vec\sigma = m_x \sigma_x+m_y\sigma_y+m_z\sigma_z$.
The anisotropic hopping $t_{l\sigma}$ connects unit cells $i$ to their nearest neighbor cells $i+l$.
Parameters $\delta$, $\Delta_\delta$, $\Delta_s$ are defined via the exchange spin-splittings ${{\delta}_{e}}$ (${{\delta}_{h}}$) of the electrons (holes) and the spin-dependent band gaps $\Delta_{\sigma}$ introduced in Ref.~[\citenum{ali2017}]. $E_D$ indicates the Dirac cone position with respect to the Fermi level. The Hamiltonian for the whole device is obtained by making the aforementioned parameters spatially dependent.
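To illustrate how the bulk bands behind Fig.~\ref{fig_2} follow from Eq.~(\ref{eq:1}), a minimal Python (numpy) sketch of the corresponding $2\times 2$ spin-resolved Bloch Hamiltonian on the honeycomb lattice is given below. The magnetization is taken along $z$ so that $\vec m \cdot \vec\sigma \to s = \pm 1$; the hoppings are the YIG values of Table~\ref{tab1}, while the on-site parameters are illustrative placeholders rather than the fitted values of Ref.~[\citenum{ali2017}].
\begin{verbatim}
import numpy as np

t = {+1: 3.6, -1: 3.8}        # spin-dependent hoppings (YIG, Table 1)
E_D, Delta_s = -0.78, 0.01    # Dirac point position, staggered term (eV)
delta, Delta_d = 0.02, 0.005  # exchange splitting terms (placeholders)

a = 1.42                      # carbon-carbon distance (angstrom)
d = a * np.array([[0.0, 1.0],
                  [np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])  # nearest-neighbour vectors

def bands(kvec, s):
    # Eigenvalues of the 2x2 Bloch Hamiltonian for spin s = +1 or -1.
    f = np.sum(np.exp(1j * (d @ kvec)))          # nn structure factor
    eA = E_D + Delta_s + s * (delta + Delta_d)   # sublattice A (mu = 0)
    eB = E_D - Delta_s + s * (delta - Delta_d)   # sublattice B (mu = 1)
    H = np.array([[eA, t[s] * f],
                  [t[s] * np.conj(f), eB]])
    return np.linalg.eigvalsh(H)

# Spin-split levels at the K point of the Brillouin zone:
K = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])
print(bands(K, +1), bands(K, -1))
\end{verbatim}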
\begin{figure*}[ht]
\centering
\includegraphics[width=1\linewidth]{fig_2.png}
\caption{(color online) Band structure obtained using tight-binding Hamiltonian defined by Eq.~(\ref{eq:1}) (solid lines) fitted to the band structure from DFT spin majority (green open circles) and spin minority (black filled circles) data for the cases with (a) YIG, (b) CFO, (c) EuS and (d) EuO from Ref.~[\citenum{ali2017}]. The inset in (b) shows the anisotropic hoppings reported in Table~\ref{tab1}.}
\label{fig_2}
\end{figure*}
To obtain hopping parameters of Hamiltonian~(\ref{eq:1}), we fitted tight-binding bands to those obtained from first principles calculations in Ref.~[\citenum{ali2017}]. The results of the fitting procedure in case of graphene magnetized by YIG, CFO, EuS and EuO are shown in Fig.~\ref{fig_2}(a), (b), (c) and (d), respectively. The corresponding hopping parameters are given in Table~\ref{tab1}. As one can see, the graphene bands obtained with the tight-binding Hamiltonian given by Eq.~\ref{eq:1} are in good agreement with those obtained using Density Functional Theory (DFT), confirming the suitability of our model for transport calculations. Of note, due to the strain present at the interface between CFO and graphene, the hopping parameters in this case are anisotropic, as they depend on the direction to the nearest neighbor, as specified in the inset of Fig.~\ref{fig_2}(b).
\begin{table}[ht]
\caption{Hopping parameters used in Eq.~(\ref{eq:1}) for each magnetic insulator considered.}
\label{tab1}
{\footnotesize
\begin{center}
\begin{tabular}{ p{1.6cm}|p{1.7cm}|p{2cm}|p{2.2cm} }
\hline
\hline
Material & Hopping direction & spin up (eV) & spin down (eV)\\
\hline
\hline
YIG & t & 3.6 & 3.8 \\
\hline
\multirow{3}{4em}{CFO} & $t_1$ & 1.38 & 1.44 \\
& $t_2$ & $1.41e^{-i0.01}$ & $1.48e^{-i0.01}$ \\
& $t_3$ & $1.36e^{-i0.02}$ & $1.44e^{-i0.02}$ \\
\hline
EuS & t & 4.5 & 4.8 \\
\hline
EuO & t & 4.9 & 4.3 \\
\hline
\end{tabular}
\end{center}
}
\end{table}
The conductance for parallel and antiparallel configurations of magnetizations {\bf M}$_1$ and {\bf M}$_2$ in the linear response regime is then obtained according to:
\begin{equation}
G_{P(AP)}=\frac{e^2}{h}\sum_{\sigma}\int T_{P(AP)}^\sigma \left(\frac{-\partial f}{\partial E} \right)dE ,
\label{eq:1b}
\end{equation}
where $T_{P(AP)}^\sigma$ indicates the spin-dependent transmission probability for the parallel (antiparallel) magnetization configuration and $f=1/(e^{(E-\mu)/k_BT}+ 1)$ represents the Fermi-Dirac distribution, with $\mu$ and $T$ being the electrochemical potential (Fermi level) and temperature, respectively. It is important to note that temperature smearing has been taken into account using the Curie temperature of each MI.
The PMR amplitude has been defined according to the following expression:
\begin{equation}
\textrm{PMR}=\left(\frac{G_P -G_{AP}}{G_P +G_{AP}}\right)\times 100 \% .
\label{eq:2}
\end{equation}
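For reference, a minimal numerical sketch of Eqs.~(\ref{eq:1b}) and (\ref{eq:2}) is given below (Python, with conductances in units of $e^2/h$; the energy grid, the spin-resolved transmissions and the temperature are assumed inputs).
\begin{verbatim}
import numpy as np

kB = 8.617333e-5  # Boltzmann constant in eV/K

def conductance(E, T_sigma, mu, temp):
    # Thermally smeared Landauer conductance, Eq. (2): E is an energy
    # grid (eV), T_sigma a dict of spin-resolved transmissions on it.
    x = np.clip((E - mu) / (kB * temp), -40.0, 40.0)
    minus_df = 1.0 / (4.0 * kB * temp * np.cosh(x / 2.0) ** 2)  # -df/dE
    return sum(np.trapz(T * minus_df, E) for T in T_sigma.values())

def pmr(G_P, G_AP):
    # 'Pessimistic' proximity magnetoresistance of Eq. (3), in percent.
    return 100.0 * (G_P - G_AP) / (G_P + G_AP)
\end{verbatim}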
In order to determine the impact of the system dimensions on the PMR, several calculations were carried out for different lengths, widths and separations of the magnetic regions. Furthermore, we checked the robustness of PMR with respect to edge termination by calculating the PMR for systems with zigzag, armchair and rough edges. The latter were created by removing atoms and bonds randomly and deleting the dangling atoms at the new edges.
\section{Results}
In Fig.~\ref{fig:fig_3} we present the PMR curves for lateral device structures based on YIG, CFO, EuS and EuO on top of a graphene sheet with armchair edges. Taking into account Curie temperatures for these materials, the curves were smeared out using 16 K (70 K) for EuS (EuO), and 300 K for YIG and CFO cases.
For the system with YIG we found a maximum PMR value of 77\%, while for CFO the value obtained was 22\%. In the case of the chalcogenides EuS and EuO, the maximum PMR values reach 100\%. Among the materials studied, YIG represents the most suitable candidate for lateral spintronic applications due to both its high Curie temperature and its considerably large PMR value.
\begin{figure}[b]
\centering
\includegraphics[width=1\columnwidth]{fig_3.png}
\caption{(color online) Proximity magnetoresistance defined by Eq.~(\ref{eq:2}) as a function of energy with respect to the Fermi level for YIG (blue circles), CFO (red squares), EuS (black diamonds) and EuO (green triangles), using temperature smeared conductances at T=300 K, 300 K, 16 K and 70 K, respectively. System dimensions are $L=49.2$~nm, $W=39.6$~nm and $d=1.5$~nm.}
\label{fig:fig_3}
\end{figure}
In order to elucidate the underlying physics behind these PMR results, let us analyze the details of the conductance behaviour. In Fig.~\ref{fig:fig_4}(a)-(b) we reproduce the graphene bands in proximity of YIG and the corresponding spin-resolved transmission probabilities for P and AP configurations at $T=0$ K for a system with dimensions $L=49.2$ nm, $W=39.6$ nm and $d=1.5$ nm. One can see that for energies between -0.88 eV and -0.78 eV there are no majority spin states present and the only contribution to transmission $T_{P}^\downarrow$ is from the minority spin channel (Fig.~\ref{fig:fig_4}(b), red solid line). In other words, the situation within this energy range is half-metallic, giving rise to maximum PMR values of 100\% using the ``pessimistic" definition given by Eq.~(\ref{eq:2}). A similar situation occurs for energies between -0.75 eV and -0.72 eV, but this time the only contribution $T_{P}^\uparrow$ is from the majority spin channel (Fig.~\ref{fig:fig_4}(b), red dashed line). One should point out that the conductance profile here results from combining both magnetic and nonmagnetic regions into one scattering region. The conductance of a pure graphene nanoribbon exhibits quantized steps due to transverse confinement, with vanishing conductance at zero energy depending on its edges. Inducing magnetism within the graphene sheet leads to symmetry breaking, with exchange-split gaps shifted below the Fermi level in the vicinity of the Dirac cone. This leads to a characteristic conductance profile with two minima at around -0.8~eV and 0~eV (not shown here) due to the Dirac cone regions of the magnetized and the pure graphene.
The corresponding conductances for the parallel ($G_P$) and antiparallel ($G_{AP}$) magnetic configurations at $T=300$ K are shown in Fig~\ref{fig:fig_4}(c). Interestingly, even at room temperature the PMR for the YIG based structure preserves a very high value of about 77\%, as already pointed out above, a behavior that is very encouraging for future experiments on PMR. As a guide to the eye, we highlight with dashed lines the energy value where the PMR has a maximum in Fig.~\ref{fig:fig_4}.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{fig_4.png}
\caption{(color online) (a) Band structure reproduced using the DFT parameters from Ref.~[\citenum{ali2017}] for graphene in proximity of YIG. (b) Transmission probabilities for majority (dash lines) and minority (solid) spin channel for parallel (red) and antiparallel (blue) magnetization configurations at $T=0$ K for a system with dimensions $L=49.2$~nm, $W=39.6$~nm and $d=1.5$~nm. (c) Resulting conductance for parallel (red circles) and antiparallel (blue squares) magnetization configurations at 300~K. (d) PMR for device with armchair (blue circles), rough (red squares) and zigzag (black triangles) edge termination of graphene. PMR profiles as a function of (e) $L$, (f) $W$ and (g) $d$. (h) Dependence of PMR for the energy outlined by dashed line in (e), (f) and (g) as a function of $L$ (black circles), $W$ (red squares) and $d$ (blue triangles). The green square highlights the region where PMR becomes independent of system dimensions.}
\label{fig:fig_4}
\end{figure*}
Since the edges may strongly influence the aforementioned properties of the system, we next explore the robustness of PMR against different edge types of the graphene channel of the proposed device. It is well known that an electric field can trigger half-metallicity in zigzag nanoribbons due to the antiferromagnetic interaction of the edges~\cite{zz-loui}. On the other hand, graphene nanoribbons with armchair edges can display insulating or metallic behaviour depending on the graphene nanoribbon (GNR) width~\cite{dressel-arm,exp-armc-peter}. Armchair and zigzag edges are particular cases and the most symmetric edge directions in graphene, but one can also cut a GNR along an intermediate direction between these two limiting cases, characterized by a chirality angle~$\theta$~\cite{oleg2013-review}.
The graphene band structure is highly dependent on $\theta$: as the angle increases, the extent of the edge states localized at the Fermi level decreases, and these states eventually disappear in the limiting case $\theta = 30^{\circ}$, i.e. when the ribbon acquires armchair edges. Under laboratory conditions, graphene sheets are finite and have imperfections that influence their transport properties. Regarding defects at the edges, it has been demonstrated that rough edges can diminish the conductance of a graphene nanoribbon~\cite{libisch2012-rough} or may give rise to a nonzero spin conductance~\cite{wimmer2008-rough}.
In order to demonstrate the robustness of PMR with respect to the edge type, we thus performed calculations with the same system setup (Fig.~\ref{fig_1}) but this time for various edge terminations. The resulting PMR behavior for the cases with armchair, rough and zigzag edges is shown in Fig.~\ref{fig:fig_4}(d). The rough edges have been modeled by creating extended vacancies distributed randomly. It is clear that the maximum PMR value does not present a significant variation, maintaining for all cases PMR values around 75\%. With these results in hand we can claim that the PMR is indeed robust with respect to the edge termination type.
As a next step, we checked the dependence of the PMR on different system dimensions, i.e. the length of the magnetic region $L$, the system width $W$ and the separation between the magnetic regions $d$. The corresponding dependences are presented respectively in Fig.~\ref{fig:fig_4}(e),(f) and (g). One can see that for all energy ranges the PMR ratio has a tendency to increase as a function of $L$, approaching the limiting value of 77\% at energies around -0.81 eV indicated by a dashed line in Fig.~\ref{fig:fig_4}(e). As for the dependence of the PMR on the GNR width $W$, clear oscillations due to the formation of quantum well states are present, with a tendency to vanish as the system widens (Fig.~\ref{fig:fig_4}(f)). On the contrary, the PMR shows almost constant behavior as a function of the separation between the magnets $d$ (Fig.~\ref{fig:fig_4}(g)) due to the fact that transport is in the ballistic regime. For convenience, we summarize all these dependencies in Fig.~\ref{fig:fig_4}(h) at energy -0.81 eV as a function of $L$, $W$ and $d$. One can clearly see that the PMR saturates as the system dimensions are increased. At the same time, it shows oscillations in the PMR for small $W$ as well as the invariance of the PMR with respect to $d$. For large dimensions, highlighted by the green box in Fig.~\ref{fig:fig_4}(h), we can claim that the PMR is indeed robust, and the maximum PMR value would eventually be limited only by the magnitude of the spin diffusion length in the system.
Finally, we consider the impact of spin-orbit coupling on the PMR. Despite the weak intrinsic SOC of graphene, the proximity of adjacent materials can induce an interfacial Rashba SOC~\cite{macdonald2011-prb}. Rashba-type SOC is included in our tight-binding approach by adding the following term:
\begin{equation}\label{soc}
H_{SO} = i \lambda_R \sum_{i\sigma\sigma'} \sum_l
c^\dagger_{(i+l)1\sigma} [\sigma^x_{\sigma\sigma'} d_l^x - \sigma^y_{\sigma\sigma'} d_l^y] c_{i0\sigma'} +h.c.
\end{equation}
where the vector $\vec d_l=(d_l^x,d_l^y)$ connects the two nearest neighbours and $\lambda_R$ indicates the SOC strength. The values of $\lambda_R$ generally lie in the range of 1--10~meV (see, for instance, Ref.~[\citenum{macdonald2014-prl}]). With this in mind, we present in Fig.~\ref{fig:fig_5} the PMR dependences for three values of the spin-orbit interaction. One can see that increasing the SOC strength $\lambda_R$ lowers the PMR. This behavior is expected and could be attributed to the fact that the spin-orbit interaction mixes the spin channels. These dependencies allow us to conclude that the PMR is quite robust also against SOC and even in the worst scenario remains of the order of 50~\% (cf. black triangles and blue circles in Fig.~\ref{fig:fig_5}).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{fig_5.png}
\caption{(color online) PMR dependencies for three values of Rashba spin-orbit interaction parameter $\lambda_R$ defined by Eq.~(\ref{soc}) for YIG-based system with armchair edges and of dimensions $L=49.2$~nm, $W=39.6$~nm and $d=1.5$~nm. The dashed line is a guide to the eye that shows the maximum value when $\lambda_R=0$~eV.}
\label{fig:fig_5}
\end{figure}
\section{Conclusions}
In this paper we introduced the proximity induced magnetoresistance phenomenon in a graphene based lateral system comprising regions with magnetism induced by proximity to four different magnetic insulators. For YIG and CFO based devices we found PMR ratios of 77\% and 22\% at room temperature, respectively. For chalcogenide based systems, i.e. with EuS and EuO, we found PMR values of 100\% at 16 K and 70 K, respectively. Very importantly, it is demonstrated that the PMR is robust with respect to system dimensions and edge termination type. Furthermore, the PMR survives in the presence of SOC, decreasing only by about a half even for considerably large SOC strengths. We hope this work will encourage further experimental research and will be useful for the development of a novel generation of spintronic devices based on generating and exploiting spin currents without passing charge currents across ferromagnets.
\section{acknowledgments}
We thank J. Fabian and S. Roche for fruitful discussions. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 696656 and 785219 (Graphene Flagship). X.W. acknowledge support by ANR Gransport.
\section{INTRODUCTION}
Large and deep neural networks, despite their great successes in a wide variety of applications, call for compact and efficient model representations to reduce the vast amount of network parameters and computational operations, which are resource-hungry in terms of memory, energy and communication bandwidth consumption. This need is imperative especially for resource constrained devices such as mobile phones, wearable and Internet of Things (IoT) devices. Neural network compression is a set of techniques that address these challenges raised in real life industrial applications.
Minimizing network sizes without compromising original network performances has been pursued by a wealth of methods, which often adopt a three-phase learning process, i.e. training-pruning-tuning. In essence, network features are first learned, followed by the pruning stage to reduce network sizes. The subsequent fine-tuning phase aims to restore deteriorated performances incurred by undue pruning. This ad hoc three-phase approach, although empirically justified e.g. in \cite{NetSlim_Liu2017d,IntreprePrune_Qin2018,Lib,Wen2016,Zhou2016a}, was recently questioned with regard to its efficiency and effectiveness. Specifically \cite{RethinkPrune_2018arXiv,PruningNip_2018arXiv} argued that the network \textit{architecture} should be optimized first, and then \textit{features} should be learned from scratch in subsequent steps.
In contrast to the two aforementioned opposing approaches, the present paper illustrates a novel method which simultaneously learns
both the \textit{number of filters} and \textit{network features} over multiple optimization epochs. This integrated optimization process brings about immediate benefits and challenges --- on the one hand, separated processing steps such as training, pruning, fine-tuning etc, are no longer needed and the integrated optimization step guarantees consistent performances for the given neural network compression scenarios. On the other hand, the dynamic change of network architectures has significant influences on the optimization of features, which in turn might affect the optimal network architectures. It turns out the interplay between architecture and feature optimizations plays a crucial role in improving the final compressed models.
\iffalse
For experiments of CIFAR-10/100 image classification tasks, the proposed method is compared favorably with existing methods in terms of reduced training cost, and better trade-off between models sizes and accuracies.
\fi
\section{RELATED WORK}
Network pruning was pioneered \cite{lecun1990optimal,hassibi1993second,han2015learning} in the early development of neural network, since when a broad range of methods have been developed.
We focus on neural network compression methods that prune filters or channels. For thorough review of other approaches we refer to a recent survey paper \cite{Cheng2017}.
Li \emph{et al}. \cite{Lib} proposed to prune filters with small effects on the output accuracy and managed to reduce about one third of inference cost without compromising original accuracy on the CIFAR-10 dataset. Wen \emph{et al}. \cite{Wen2016} proposed a structured sparsity regularization framework, in which the group lasso constraint term was incorporated to penalize and remove unimportant filters and channels. Zhou \emph{et al}. \cite{Zhou2016a} also adopted a similar regularization framework, with tensor trace norm and group sparsity incorporated to penalize the number of neurons. Up to 70\% of model parameters were reduced without sacrificing classification accuracies on the CIFAR-10 dataset. Recently Liu \emph{et al}. \cite{NetSlim_Liu2017d} proposed an interesting network slimming method, which imposes L1 regularization on channel-wise \textit{scaling factors} in batch-normalization layers and demonstrated remarkable compression ratio and speedup using a surprisingly simple implementation. Nevertheless, network slimming based on scaling factors is not guaranteed to achieve desired accuracies and separate fine-tunings are needed to restore reduced accuracies.
Qin \emph{et al}. \cite{IntreprePrune_Qin2018} proposed a functionality-oriented filter pruning method to remove less important filters, in terms of their contributions to classification accuracies. It was shown that the efforts for model retraining is moderate but still necessary, as in the most of state-of-the-art compression methods.
\iffalse
Lin \emph{et al}. \cite{Lin2017a} proposed to prune the deep neural network dynamically, by incorporating a decision network to select pruned filters with reinforcement learning.
Also Alvarez \cite{Alvarez2017} proposed to explicitly account compression in the training process, by introducing a regularizer to encourage low rank weight parameters.
Zhu and Gupta \cite{GradulPrune_2017arXiv} proposed a gradual pruning which can be incorporated within the training process.
\fi
DIVNET adopted Determinantal Point Process (DPP) to enforce diversities between individual neural activations \cite{DivNet_2015arXiv}.
Diversity of filter weights defined in (\ref{eq:diversity_ncc}) is related to the orthogonality of the weight matrix, which has been extensively studied. An example is \cite{Harandi2016}, which proposed to learn Stiefel layers, which have orthogonal weights, and demonstrated their applicability in compressing network parameters. Interestingly, the notion of diversity regularized machine (DRM) has been proposed to generate an ensemble of SVMs in the PAC learning framework \cite{Yu2011}, yet its definition of diversity is critically different from our definition in (\ref{eq:diversity_ncc}), and its applicability to deep neural networks is unclear.
\section{SIMULTANEOUS LEARNING OF ARCHITECTURE AND FEATURE}
The proposed compression method belongs to the general category of filter-pruning approaches.
In contrast to existing methods \cite{NetSlim_Liu2017d,IntreprePrune_Qin2018,Lib,Wen2016,Zhou2016a,RethinkPrune_2018arXiv,PruningNip_2018arXiv}, we adopt the following techniques to ensure that simultaneous optimization of network architectures and features is a technically sound approach. First, we introduce an explicit \textit{pruning loss} estimation as an additional regularization term in the optimization objective function. As demonstrated by experimental results in Section \ref{sect:exper}, the introduced pruning loss forces the optimizer to focus on promising candidate filters while suppressing contributions of less relevant ones. Second, based on the importance of filters, we explicitly \textit{turn off} unimportant filters below a given percentile threshold. We found the explicit shutting down of less relevant filters is indispensable to prevent biased estimation of the pruning loss. Third, we also propose to enforce diversity between filters; this diversity-based regularization term improves the trade-off between model sizes and accuracies, as demonstrated in various applications.
Our proposed method is inspired by network slimming \cite{NetSlim_Liu2017d} and main differences from this prior art are two-folds: a) we introduce the pruning loss and incorporate explicit pruning into the learning process, without resorting to the multi-pass pruning-retraining cycles; b) we also introduce filter-diversity based regularization term which improves the trade-off between model sizes and accuracies.
\subsection{Loss Function}\label{sect:review-sparsity}
Liu \emph{et al}. \cite{NetSlim_Liu2017d} proposed to push towards zero the scaling factor in the batch normalization (BN) step during learning, so that, subsequently, the insignificant channels with small scaling factors can be pruned. This sparsity-induced penalty is introduced by regularizing the L1-norm of the learnable parameter $\gamma$ in the BN step, i.e.,
\begin{equation}\label{eq:filter_spartiy}
g(\gamma) = \left| \gamma \right|; \textnormal{ where } \hat{z}= \frac{z_{in} - \mu_B}{\sqrt{\sigma^2 + \epsilon}}; z_{out} = \gamma \hat{z} + \beta,
\end{equation}
in which $z_{in}$ denotes filter inputs, $\mu_B, \sigma$ the filter-wise mean and variance of inputs, $\gamma, \beta$ the scaling and offset parameters of batch normalization (BN), and $\epsilon$ a small constant to prevent numerical instability for small variance. It is assumed that there is always a BN layer appended after each convolutional and fully connected layer, so that the scaling factor $\gamma$ is directly leveraged to prune unimportant filters with small $\gamma$ values. Alternatively, we propose to directly introduce a scaling factor for each filter, since this is more universal than reusing BN parameters, especially considering networks which have no BN layers.
By incorporating a filter-wise sparsity term, the objective function to be minimized is given by:
\begin{equation}\label{eq:regularized_sparisty_func}
L = \sum_{(\textbf{x},y)} loss( f(\textbf{x},\textbf{W}), y) + \lambda \sum_{\gamma \in \Gamma } g(\gamma),
\end{equation}
where the first term is the task-based loss, $g(\gamma)=||\gamma||_1$ and $\Gamma$ denotes the set of scaling factors for all filters. This pruning scheme, however, suffers from two main drawbacks: 1) since scaling factors are equally minimized for all filters, it is likely that some pruned filters have non-negligible contributions that should not be unduly removed;
2) the pruning process, i.e., architecture selection, is performed independently of the feature learning; the performance of the pruned network is inevitably compromised and has to be recovered by single-pass or multi-pass fine-tuning, which imposes additional computational burdens.
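For concreteness, the sparsity term of Eq.~(\ref{eq:regularized_sparisty_func}) can be realized in a few lines; the following PyTorch sketch (assuming, for this illustration, that the BN scaling factors play the role of the per-filter $\gamma$) accumulates the L1 penalty over all BN layers:
\begin{verbatim}
import torch.nn as nn

def bn_l1_penalty(model):
    # Sum of |gamma| over all BN layers; multiplied by a weight
    # lambda and added to the task loss, it realizes g(gamma).
    return sum(m.weight.abs().sum()
               for m in model.modules()
               if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)))

# inside a training step (model, criterion, lam assumed defined):
#   loss = criterion(model(x), y) + lam * bn_l1_penalty(model)
\end{verbatim}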
\subsubsection{An integrated optimization}\hfill
Let $\textbf{W}, \check{\textbf{W}}, \hat{\textbf{W}}$ denote the sets of neural network weights for, respectively, all filters, those pruned and remained ones i.e. $\textbf{W} = \{ \check{\textbf{W}} \bigcup \hat{\textbf{W}} \}$. In the same vein, $\Gamma= \{ P(\Gamma) \bigcup R(\Gamma)\}$ denote the sets of scaling factors for all filters, those removed and remained ones respectively.
To mitigate the aforementioned drawbacks, we propose to introduce two additional regularization terms to Eq. \ref{eq:regularized_sparisty_func},
\begin{align}\label{eq:regularized_all}
L( \hat{\textbf{W}}, R(\Gamma)) = & \sum_{(\textbf{x},y)} loss( f(\textbf{x},\hat{\textbf{W}} ), y) + \lambda_1 \sum_{\gamma \in R(\Gamma) } g(\gamma) \nonumber \\
& - \lambda_2 \frac{\sum_{\gamma \in R({\Gamma}) } \gamma }{\sum_{\gamma \in \Gamma } \gamma} - \lambda_3 \sum_{l \in L} Div(\hat{\textbf{W}}^l),
\end{align}
where $loss( \cdot, \cdot )$ and $\sum_{\gamma \in R(\Gamma) } g(\gamma)$ are defined as in Eq. \ref{eq:regularized_sparisty_func}, the third term is the pruning loss and the fourth is the diversity loss, both of which are elaborated below. $\lambda_1, \lambda_2, \lambda_3$ are the weights of the corresponding regularization terms.
\begin{figure*}[t!]
\centering
\includegraphics[width=.45\textwidth]{./cifar10}
\includegraphics[width=.45\textwidth]{./cifar100}
\caption{Comparison of scaling factors for three methods, i.e., the baseline with no regularization, network-slimming \cite{NetSlim_Liu2017d}, and the proposed method with diversified filters, trained on CIFAR-10 and CIFAR-100.
Note that the pruning losses defined in (\ref{eq:regularized_all}) are 0.2994, 0.0288 and \num{1.3628e-6}, respectively, for the three methods. The accuracy deterioration is 60.76\% and 0\% for network-slimming \cite{NetSlim_Liu2017d} and the proposed method, respectively, while the baseline network completely failed after pruning due to insufficient preserved filters at certain layers.
} \label{fig:nsf_compare}
\end{figure*}
\subsubsection{Estimation of pruning loss}\hfill
The second regularization term in (\ref{eq:regularized_all}) i.e. $\gamma^R := \frac{\sum_{\gamma \in R({\Gamma}) } \gamma }{\sum_{\gamma \in \Gamma } \gamma}$ (and its compliment $\gamma^P :=\frac{\sum_{\gamma \in P({\Gamma}) } \gamma }{\sum_{\gamma \in \Gamma } \gamma} = 1 - \gamma^R$) is closely related to performance deterioration incurred by undue pruning\footnote{In the rest of the paper we refer to it as the estimated pruning loss.}.
The scaling factors of pruned filters $ P(\Gamma)$, as in \cite{NetSlim_Liu2017d}, are determined by first ranking all $\gamma$ and taking those below the given percentile threshold. Incorporating this pruning loss enforces the optimizer to increase scaling factors of promising filters while suppressing contributions of less relevant ones.
The rationale of this pruning strategy can also be empirically justified in Figure \ref{fig:nsf_compare},
in which the scaling factors of three different methods are illustrated. When the proposed regularization terms are added, clearly, we observe a tendency for scaling factors to be dominated by a small number of filters --- when 70\% of filters are pruned from a VGG network trained on the CIFAR-10 dataset, the estimated pruning loss $ \frac{\sum_{\gamma \in P({\Gamma}) } \gamma }{\sum_{\gamma \in \Gamma } \gamma} $ equals 0.2994, 0.0288 and \num{1.3628e-6}, respectively, for the three compared methods. The corresponding accuracy deterioration is 60.76\% and 0\% for network-slimming \cite{NetSlim_Liu2017d} and the proposed method, respectively. Therefore, retraining of the pruned network is no longer needed for the proposed method, while \cite{NetSlim_Liu2017d} has to restore the original accuracy through single-pass or multi-pass pruning-retraining cycles.
\subsubsection{Turning off candidate filters}\hfill
It must be noted that the original loss $\sum_{(\textbf{x},y)} loss( f(\textbf{x},{\textbf{W}} ), y)$ is independent of the pruning operation. If we adopt this loss in (\ref{eq:regularized_all}), the estimated pruning loss might be seriously biased because undue assignments of $\gamma$ are not penalized. It is likely that some candidate filters are assigned rather small scaling factors while still retaining decisive contributions to the final classification. Pruning these filters blindly leads to serious performance deterioration; in our empirical study we observe over 50$\%$ accuracy loss at high pruning ratios.
In order to prevent such biased pruning loss estimation, we therefore explicitly shutdown the outputs of selected filters by setting corresponding scaling factors to absolute zero. The adopted loss function becomes $\sum_{(\textbf{x},y)} loss( f(\textbf{x},\hat{\textbf{W}} ), y)$.
This way, the undue loss due to the biased estimation is reflected in $loss( f(\textbf{x},\hat{\textbf{W}}), y)$, which is minimized during the learning process. We found the turning-off of candidate filters is indispensable.
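A minimal PyTorch sketch of the pruning loss estimation and the explicit turn-off is given below (again reusing BN scaling factors as $\gamma$, an assumption of this illustration; \texttt{torch.quantile} implements the global percentile threshold):
\begin{verbatim}
import torch
import torch.nn as nn

def _gammas(model):
    return [m.weight for m in model.modules()
            if isinstance(m, nn.BatchNorm2d)]

def estimated_pruning_loss(model, prune_ratio):
    # Share of the total scaling-factor mass below the global
    # percentile threshold (the complement of gamma^R in Eq. (3)).
    g = torch.cat([w.detach().abs().flatten() for w in _gammas(model)])
    thr = torch.quantile(g, prune_ratio)
    return (g[g <= thr].sum() / g.sum()).item()

def turn_off_filters(model, prune_ratio):
    # Explicitly zero gamma (and beta) of the pruning candidates, so
    # their removal is reflected in the task loss the optimizer sees.
    g = torch.cat([w.detach().abs().flatten() for w in _gammas(model)])
    thr = torch.quantile(g, prune_ratio)
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                mask = (m.weight.abs() > thr).float()
                m.weight.mul_(mask)
                m.bias.mul_(mask)
\end{verbatim}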
\begin{algorithm}[t!]
\caption{Proposed algorithm}\label{algo}
\begin{algorithmic}[1]
\Procedure{Online Pruning}{}
\State $\textit{Training data}~ \gets \{x_i, y_i\}_{i=1}^{N} $
\State $\textit{Target pruning ratio}~\mathbf{Pr}_N \gets \mathbf{p}\% $
\State $\textit{Initial network weights}~W \gets \textit{method by \cite{he2015delving}}$
\State $\Gamma \gets \{0.5\}$
\State $\hat{W} \gets W$
\State $P(\Gamma) \gets \emptyset$
\State $R(\Gamma) \gets \Gamma$
\For{\text{each} \textit{epoch $n \in$\{$1,\dots,N$\}}}
\State $\textit{Current pruning ratio}~\mathbf{Pr}_n \in [0, \mathbf{Pr}_N]$
\State $\textit{Sort}~\Gamma$
\State $P(\Gamma) \gets \textit{prune filters w.r.t. } \mathbf{Pr}_n$
\State $R(\Gamma) \gets \Gamma \setminus \textit{P} (\Gamma)$
\State $\textit{Compute}~L( \hat{\textbf{W}}, R(\Gamma))~\textit{in Eq.}~(\ref{eq:regularized_sparisty_func})$
\State $\hat{\textbf{W}} \gets \textit{SGD}$
\State $\check{\textbf{W}} \gets \textbf{W} \setminus \hat{\textbf{W}}$
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsubsection{Online pruning}
We take a global threshold for pruning which is determined by a percentile among all channel scaling factors. The pruning process is performed over the whole training process, i.e., simultaneous pruning and learning. To this end, we compute a linearly increasing pruning ratio from the first epoch (e.g., 0\%) to the last epoch, where the ultimate target pruning ratio is applied. Such an approach allows neurons to evolve sufficiently, driven by the diversity term and the pruning loss, which avoids prematurely mis-pruning neurons that produce crucial features. Consequently our architecture learning is seamlessly integrated with feature learning. After each pruning operation, a narrower and more compact network is obtained and its corresponding weights are copied from the previous network.
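The schedule itself is a one-liner; a sketch:
\begin{verbatim}
def pruning_ratio(epoch, num_epochs, target_ratio):
    # Linearly increasing schedule: 0 at the first epoch, the
    # target pruning ratio Pr_N at the last epoch.
    return target_ratio * epoch / max(num_epochs - 1, 1)
\end{verbatim}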
\subsubsection{Filter-wise diversity}\hfill
The third regularization term in (\ref{eq:regularized_all})
encourages high diversities between filter weights as shown below. Empirically, we found that this term improves the trade-off between model sizes and accuracies (see experiment results in Section \ref{sect:exper}).
We treat each filter weight at layer $l$ as a weight (feature) vector $\textbf{W}^l_i$ of length $w \times h \times c$, where $w,h$ are the filter width and height and $c$ is the number of channels in the filter. The \textit{diversity} between two weight vectors of the same length is based on the \textit{normalized cross-correlation} of the two vectors:
\begin{align}\label{eq:diversity_ncc}
div(\textbf{W}_i, \textbf{W}_j) := 1 - | \langle \mathbf{\bar{W}}_i, \mathbf{\bar{W}}_j \rangle | ,
\end{align}
in which $ \mathbf{\bar{W}} = \frac{ \mathbf{{W}}}{ | \mathbf{{W}} | } $ are normalized weight vectors and $\langle \cdot , \cdot \rangle$ is the dot product of two vectors. Clearly, the diversity is bounded, $0 \leq div(\textbf{W}_i, \textbf{W}_j) \leq 1$, with values close to 0 indicating low diversity between highly correlated vectors and values near 1 indicating high diversity between uncorrelated vectors. In particular, a diversity equal to 1 means that the two vectors are orthogonal to each other.
The \textit{diversities} between the $N$ filters at the same layer $l$ are thus characterized by an \textit{N-by-N} matrix
whose elements $d_{ij} =div(\textbf{W}^l_i, \textbf{W}^l_j)$, $i,j \in \{1,\cdots,N\}$, are the pairwise diversities between weight vectors $\textbf{W}^l_i, \textbf{W}^l_j$. Note that the diagonal elements $d_{ii}$ are identically 0. The \textit{total diversity} between all filters is then defined as the sum of all elements
\begin{align}
Div(\textbf{W}^l) := \sum_{i,j=1}^{N} d_{ij}.
\end{align}
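For concreteness, the total diversity of one layer can be computed as follows (a NumPy sketch of the quantities defined above; variable names are our own):
\begin{verbatim}
import numpy as np

def total_diversity(W):
    # W has shape (N, w*h*c): the N flattened filter weight
    # vectors of one layer. Each off-diagonal entry of D is
    # 1 - |normalized cross-correlation|; the diagonal is
    # identically zero, so summing all entries gives Div(W).
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    D = 1.0 - np.abs(Wn @ Wn.T)
    return D.sum()
\end{verbatim}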
\begin{table*}[t!]
\caption{Results on CIFAR-10 dataset}
\centering
\begin{tabular}{lccccc}
\toprule
Models / Pruning Ratio & 0.0 & 0.5 & 0.6 & 0.7 & 0.8 \\
\midrule
VGG-19 (Base-line) & 0.9366 & - & - & - & - \\
VGG-19 (Network-slimming) & - & - & - & 0.9380 & NA \\
VGG-19 (Ours) & - & 0.9353 & 0.9394 & 0.9393 & 0.9302 \\
\midrule
ResNet-164 (Base-line) & 0.9458 & -& - & - & - \\
ResNet-164 (Network-slimming) & - & - & 0.9473 & NA & NA \\
ResNet-164 (Ours) & - & 0.9478 & 0.9483 & 0.9401 & NA \\
\bottomrule
\end{tabular} \label{tbl:c10}
\end{table*}
\begin{table*}[t!]
\caption{Results on CIFAR-100 dataset}
\centering
\begin{tabular}{lccccc}
\toprule
Models / Pruning Ratio & 0.0 & 0.3 & 0.4 & 0.5 & 0.6 \\
\midrule
VGG-19 (Base-line) & 0.7326 & - & - & - & - \\
VGG-19 (Network-slimming) & - & - & 0.7348 & - & - \\
VGG-19 (Ours) & - & 0.7332 & 0.7435 & 0.7340 & 0.7374 \\
\midrule
ResNet-164 (Base-line) & 0.7663 & - & - & - & - \\
ResNet-164 (Network-slimming) & - & - & 0.7713 & - & 0.7609 \\
ResNet-164 (Ours) & - & 0.7716 & 0.7749 & 0.7727 & 0.7745 \\
\bottomrule
\end{tabular} \label{tbl:c100}
\end{table*}
\section{EXPERIMENT RESULTS} \label{sect:exper}
In this section, we evaluate the effectiveness of our method on various applications with both visual and audio data.
\subsection{Datasets}
For visual tasks, we adopt the ImageNet and CIFAR datasets. The ImageNet dataset contains 1.2 million training images and 50,000 validation images of 1000 classes. CIFAR-10 \cite{krizhevsky2009learning} consists of 50K training and 10K testing RGB images with 10 classes. CIFAR-100 is similar to CIFAR-10, except that it has 100 classes. The input image is 32$\times$32, randomly cropped from a zero-padded 40$\times$40 image or its flipping. For the audio task, we adopt the ISMIR Genre dataset \cite{cano2006ismir}, which was assembled for training and development in the ISMIR 2004 Genre Classification contest. It contains 1458 full-length audio recordings from Magnatune.com distributed across 6 genre classes: Classical, Electronic, JazzBlues, MetalPunk, RockPop, World.
\subsection{Image Classification}
We evaluate the performance of our proposed method for image classification on CIFAR-10/100 and ImageNet. We investigate two popular network architectures: the classical plain network VGG-Net \cite{simonyan2014very} and the deep residual network ResNet \cite{he2016deep}. We take a variation of the original VGG-Net, i.e., the VGG-19 used in \cite{NetSlim_Liu2017d}, for comparison purposes. ResNet-164, a 164-layer pre-activation ResNet with bottleneck structure, is adopted. As baseline networks, we compare with the original networks without regularization terms and with their counterparts in network-slimming \cite{NetSlim_Liu2017d}. For ImageNet, we adopt VGG-16 and ResNet-50 in order to compare with the original networks.
To make a fair comparison with \cite{NetSlim_Liu2017d}, we adopt BN-based scaling factors for optimization and pruning. On CIFAR, we train all the networks from scratch using SGD with mini-batch size 64 for 160 epochs. The learning rate is initially set to 0.1 and is divided by 10 at 50\% and 75\% of the training epochs. Nesterov momentum \cite{sutskever2013importance} of 0.9 without dampening and a weight decay of $10^{-4}$ are used. The robust weight initialization method proposed by \cite{he2015delving} is adopted. We use the same channel sparsity regularization term and its hyperparameter $\lambda = 10^{-4}$ as defined in \cite{NetSlim_Liu2017d}.
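For reference, this optimization setup could be written as follows in PyTorch (a sketch of the stated hyperparameters, not the authors' code):
\begin{verbatim}
import torch
import torchvision

# Hypothetical setup mirroring the reported hyperparameters.
model = torchvision.models.vgg19(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, nesterov=True,
                            weight_decay=1e-4)
# Divide the learning rate by 10 at 50% and 75% of 160 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 120], gamma=0.1)
\end{verbatim}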
\begin{table*}[t!]
\centering
\caption{Accuracies of different methods before (orig.) and after pruning (pruned). For CIFAR-10 and CIFAR-100, 70\% and 50\% of the filters are pruned, respectively. Note that ``NA'' indicates that the baseline networks completely failed after pruning, due to insufficient preserved filters at certain layers.}
\begin{tabular}{|c|c|c|c|}
\hline
CIFAR10 & \multicolumn{3}{c|}{Methods} \\
& BASE & SLIM\cite{NetSlim_Liu2017d} & OURS \\ \hline
\hline
ACC orig. & 0.9377 & 0.9330 & 0.9388 \\
\hline
ACC pruned & NA & 0.3254 & 0.9389 \\
\hline
$\gamma^P$ & 0.2994 & 0.0288 & 1.36e-6 \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
CIFAR100 & \multicolumn{3}{c|}{Methods} \\
& BASE & SLIM\cite{NetSlim_Liu2017d} & OURS \\ \hline
\hline
ACC orig. & 0.7212 & 0.7205 & 0.75 \\
\hline
ACC pruned & NA & 0.0531 & 0.7436 \\
\hline
$\gamma^P$ & 0.2224 & 0.0569 & {4.75e-4} \\
\hline
\end{tabular}
\label{tab_acc_comp}
\end{table*}
\subsubsection{Overall performance}
The results on CIFAR-10 and CIFAR-100 are shown in Table \ref{tbl:c10} and Table \ref{tbl:c100}, respectively. On both datasets, we observe that when typically 50--70\% of the filters of the evaluated networks are pruned, the new networks can still achieve higher accuracy than the original network. For instance, with 70\% of its filters pruned, VGG-19 achieves an accuracy of 0.9393, compared to 0.9366 for the original model on CIFAR-10. We attribute this improvement to the introduced diversities between filter weights, which naturally provide discriminative feature representations in the intermediate layers of the network.
As a comparison, our method consistently outperforms network-slimming without resorting to fine-tuning or multi-pass pruning-retraining cycles. It is also worth noting that our method is capable of pruning networks at prohibitively high ratios which are not possible in network-slimming. Taking the VGG-19 network on the CIFAR-10 dataset as an example, network-slimming prunes at most 70\%, beyond which the network cannot be reconstructed as some layers are totally destructed. On the contrary, our method is able to construct a much narrower network by pruning 80\% of the filters while incurring only a marginally degraded accuracy of 0.9302. We conjecture this improvement is enabled by our simultaneous feature and architecture learning, which avoids pruning filters prematurely, as happens in network-slimming, where the pruning operation (architecture selection) is isolated from the feature learning process and the performance of the pruned network can only be restored via fine-tuning.
The results on ImageNet are shown in Table \ref{tbl:imagenet}, where we also compare with \cite{DataDrivenSS_Huang2018}, which reported top-1 and top-5 errors on ImageNet. On VGG-16, our method incurs 1.2\% less accuracy loss while additionally saving 20.5M parameters and 0.8B FLOPs compared with \cite{DataDrivenSS_Huang2018}. On ResNet-50, our method saves 5M more parameters and 1.4B more FLOPs than \cite{DataDrivenSS_Huang2018} while providing 0.21\% higher accuracy.
\begin{table*}[t!]
\caption{Results on ImageNet dataset}
\centering
\begin{tabular}{lccccccc}
\toprule
Models & Top-1 & Top-5 & Params & FLOPs \\
\midrule
VGG-16 \cite{DataDrivenSS_Huang2018}& 31.47 & 11.8 & 130.5M & 7.66B \\
VGG-16 (Ours) & 30.29 & 10.62& 44M & 6.86B \\
VGG-16 (Ours) & 31.51 & 11.92& 23.5M & 5.07B \\
\midrule
ResNet-50 \cite{DataDrivenSS_Huang2018} & 25.82 & 8.09 & 18.6M & 2.8B \\
ResNet-50 (Ours) & 25.61 & 7.91 & 13.6M & 1.4B \\
ResNet-50 (Ours) & 26.32 & 8.35 & 11.2M & 1.1B \\
\bottomrule
\end{tabular} \label{tbl:imagenet}
\end{table*}
\subsubsection{Ablation study}
In this section we investigate the contribution of each proposed component through ablation study.
\paragraph{Filter Diversity}
\iffalse
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./epoch}
\caption{Scaling factors of the VGG-19 network at various epochs during training trained with diversified filters.
} \label{fig:epoch}
\end{figure}
\fi
\begin{figure*}[t!]
\centering
\includegraphics[width=.49\textwidth]{./epoch}
\includegraphics[width=.49\textwidth]{./plc10_2}
\caption{(a) Scaling factors of the VGG-19 network at various epochs when trained with diversified filters. (b) Sorted scaling factors of the VGG-19 network trained with various pruning ratios on CIFAR-10.
}\label{fig:scalings}
\end{figure*}
\iffalse
\begin{figure*}
\centering
\includegraphics[width=.45\textwidth]{./plc10}
\includegraphics[width=.45\textwidth]{./plc100}
\caption{Sorted scaling factors of VGG-19 network trained with various pruning ratios on CIFAR-10 and CIFAR-100.
} \label{fig:pl-pr}
\end{figure*}
\fi
Fig. \ref{fig:scalings} (a) shows the sorted scaling factors of the VGG-19 network trained with the proposed filter diversity loss at various training epochs. As training progresses, the scaling factors become increasingly sparse and the number of large scaling factors, i.e., the area under the curve, decreases. Fig. \ref{fig:nsf_compare} shows the sorted scaling factors of the VGG-19 network for the baseline model with no regularization, network-slimming \cite{NetSlim_Liu2017d}, and the proposed method with diversified filters, trained on CIFAR-10 and CIFAR-100. We observe significantly improved sparsity, indicated by \textit{nsf}, when introducing filter diversity to the network compared with network-slimming. Recall that the scaling factors essentially determine the importance of the filters; thus, maximizing \textit{nsf} ensures that the deterioration due to filter pruning is minimized. Furthermore,
the number of filters associated with large scaling factors is largely reduced, allowing more irrelevant filters to be pruned harmlessly. This observation is quantitatively confirmed in Table \ref{tab_acc_comp}, which lists the accuracies of the three schemes before and after pruning on both the CIFAR-10 and CIFAR-100 datasets. We observe that retraining of the pruned network is no longer needed for the proposed method, while network-slimming has to restore the original accuracy through single-pass or multi-pass pruning-retraining cycles. The accuracy deteriorations are 60.76\% and 0\% for network-slimming and the proposed method respectively, whilst the baseline networks completely fail after pruning, due to insufficient preserved filters at certain layers.
\paragraph{Online Pruning}
We first empirically investigate the effectiveness of the proposed pruning loss. After setting $\lambda_3=0$, we train the VGG-19 network on CIFAR-10 with the pruning loss switched off and on, respectively (i.e., $\lambda_2=0$ and $\lambda_2=10^{-4}$). By adding the proposed pruning loss, we observe an improved accuracy of 0.9325, compared to 0.3254, at a pruning ratio of 70\%. When pruning at 80\%, the network without the pruning loss cannot be constructed due to insufficient preserved filters at certain layers, whereas the network trained with the pruning loss attains an accuracy of 0.9298. This experiment demonstrates that the proposed pruning loss enables online pruning, which dynamically selects architectures while evolving filters to achieve extremely compact structures.
Fig. \ref{fig:scalings} (b) shows the sorted scaling factors of the VGG-19 network trained with the pruning loss subject to various target pruning ratios on CIFAR-10. We observe that, given a target pruning ratio, our algorithm adaptively adjusts the distribution of scaling factors to accommodate the pruning operation. Such a dynamic evolution warrants little accuracy loss at a considerably high pruning ratio, as opposed to static offline pruning approaches, e.g., network-slimming, where the pruning operation is isolated from the training process, causing considerable accuracy loss or even network destruction.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{./comp}
\caption{Network architecture for image compression.
} \label{fig:comparch}
\end{figure}
\begin{table*}[t!]
\caption{Results of image compression on CIFAR-100 dataset}
\centering
\begin{tabular}{lccccc}
\toprule
Models & PSNR & Params & Pruned (\%) & FLOPs & Pruned (\%)\\
\midrule
Base-line & 30.13 & 75888 & - & 46M & - \\
Ours & 29.12 (-3\%) & 43023 & 43\% & 23M & 50\% \\
Ours & 28.89 (-4\%) & 31663 & 58\% & 17M & 63\% \\
\bottomrule
\end{tabular} \label{tbl:compression}
\end{table*}
\subsection{Image Compression}
The proposed approach is applied to an end-to-end image compression task which follows a general autoencoder architecture, as illustrated in Fig. \ref{fig:comparch}. We utilize a general scaling layer, sketched below, which is added after each convolutional layer, with each scaling factor initialized to 1. The evaluation is performed on the CIFAR-100 dataset. We train all networks from scratch using Adam with mini-batch size 128 for 600 epochs. The learning rate is set to 0.001 and the MSE loss is used. The results are listed in Table~\ref{tbl:compression}, where both parameters and floating-point operations (FLOPs) are reported. Our method saves about 40--60\% of parameters and 50--60\% of the computational cost with a minor loss of performance (PSNR).
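Such a scaling layer can be sketched as follows in PyTorch (a minimal illustration with one learnable factor per channel, initialized to 1; not the authors' implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class ScalingLayer(nn.Module):
    # Per-channel learnable scaling factors, inserted after
    # each convolutional layer and initialized to 1.
    def __init__(self, n_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(n_channels))

    def forward(self, x):
        # x has shape (batch, channels, height, width)
        return x * self.gamma.view(1, -1, 1, 1)
\end{verbatim}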
\subsection{Audio Classification}
We further apply our method to an audio classification task, in particular \emph{music genre classification}. The preprocessing of the audio data is similar to \cite{lidy2016parallel} and produces a Mel spectrogram matrix of size 80$\times$80. The network architecture is illustrated in Fig. \ref{fig:genrearch}, where the scaling layer is added after both the convolutional layers and the fully connected layers. The evaluation is performed on the ISMIR Genre dataset. We train all networks from scratch using Adam with mini-batch size 64 for 50 epochs. The learning rate is set to 0.003. The results are listed in Table~\ref{tbl:genere}, where both parameters and FLOPs are reported. Our approach saves about 92\% of the parameters while achieving 1\% higher accuracy and saving 80\% of the computational cost. With a minor loss of about 1\%, 99.5\% of the parameters are pruned, resulting in an extremely narrow network with a 50$\times$ speedup.
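A preprocessing of this kind could be sketched with librosa as follows (a hypothetical pipeline, not the exact one of \cite{lidy2016parallel}):
\begin{verbatim}
import librosa

def mel_80x80(path):
    # Hypothetical sketch: an 80-band Mel spectrogram in dB,
    # truncated to 80 frames to form the 80x80 network input.
    y, sr = librosa.load(path, sr=22050)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
    return librosa.power_to_db(S)[:, :80]
\end{verbatim}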
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{./genre}
\caption{Network architecture for music genre classification.
} \label{fig:genrearch}
\end{figure}
\begin{table*}[t!]
\caption{Results of music genre classification on ISMIR Genre dataset}
\centering
\begin{tabular}{lccccc}
\toprule
Models & Accuracy & Params & Pruned (\%) & FLOPs & Pruned (\%)\\
\midrule
Base-line & 0.808 & 106506 & - & 20.3M & - \\
Ours & 0.818 (+1\%) & 8056 & 92.5 & 4M & 80.3 \\
Ours & 0.798 (-1.3\%) & 590 & 99.5 & 0.44M & 98.4 \\
\bottomrule
\end{tabular} \label{tbl:genere}
\end{table*}
\section{CONCLUSIONS}
In this paper, we have proposed a novel approach to simultaneously learning architectures and features in deep neural networks. This is mainly underpinned by a novel pruning loss and an online pruning strategy which explicitly guide the optimization toward an optimal architecture driven by a target pruning ratio or model size. The proposed pruning loss enables online pruning, which dynamically selects architectures while evolving filters to achieve extremely compact structures. In order to improve the feature representation power of the remaining filters, we further proposed to enforce diversity between filters, yielding more effective feature representations, which in turn improves the trade-off between architecture and accuracy. We conducted comprehensive experiments showing that the interplay between architecture and feature optimization improves the final compressed models in terms of both model size and accuracy for various tasks on both visual and audio data.
\bibliographystyle{splncs03}
\section{Introduction}
\input{tex/intro.tex}
\section{Background}
\input{tex/background.tex}
\section{Algorithms}
\input{tex/algo.tex}
\section{Discussions}
\input{tex/discussion.tex}
\section{Extension to Trust Region}
\input{tex/penfac.tex}
\section{Experiments}
\input{tex/exp.tex}
\section{Conclusion}
In the context of learning deterministic policies, we studied the properties of two not very well-known but efficient updates, Continuous Actor Critic Learning Automaton (CACLA) and Continuous Actor Critic (CAC).
We first showed how closely they both are related to the stochastic policy gradient (SPG).
We explained why they are well designed to learn continuous deterministic policies when the value function is only approximated.
We also highlighted the limitations of those methods: potentially poor sample efficiency when the dimension of the action space increases, and no guarantee that the underlying deterministic policy will converge toward a local optimum of $J(\mu_\theta)$, even with a linear approximation.
In the second part, we extended Neural Fitted Actor Critic (NFAC), itself an extension of CACLA, with a trust region constraint designed for deterministic policies and proposed a new algorithm, Penalized NFAC (PeNFAC).
Finally, we tried our implementation on various high-dimensional continuous environments and showed that PeNFAC performs better than DDPG and PPO to learn continuous deterministic policies.
As future work, we plan to consider off-policy learning and the combination of the updates of CAC and DPG together to ensure the convergence toward a local optimum while benefiting from the good updates of CAC.
\section*{Acknowledgments}
This work has been supported in part by the program of National Natural Science Foundation of China (No. 61872238).
Experiments presented in this paper were carried out using the Grid’5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
\bibliographystyle{named}
\subsection{Continuous Actor Critic Learning Automaton}
Continuous Actor Critic Learning Automaton (CACLA) \cite{VanHasselt2007} is an actor-critic method that learns a stochastic policy $\pi$ and its estimated value function $\hat V^\pi$.
We assume in this paper that CACLA uses isotropic Gaussian exploration, which implies that
$\pi$ can be written as follows:
\begin{equation}
\label{eq:hypo_polstoch}
\pi_{\theta,\sigma}(\cdot|s) = \mathcal{N}\big(\mu_\theta(s), \sigma^2 I)
\end{equation}
where $I$ is the identity matrix and $\sigma>0$ possibly annealed during learning.
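For concreteness, drawing an exploratory action from this policy can be sketched as follows (names are illustrative):
\begin{verbatim}
import numpy as np

def explore(mu, s, sigma):
    # Isotropic Gaussian exploration around the deterministic
    # policy: a ~ N(mu(s), sigma^2 I).
    a = mu(s)
    return a + sigma * np.random.randn(*np.shape(a))
\end{verbatim}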
CACLA alternates between two phases:
\noindent 1) a hill climbing step in the action space using a random optimization (RO) algorithm \cite{matyas1965random},
\noindent 2) a gradient-like update in the policy parameter space.
RO consists in repeating the following two steps:
i) sample a new action $a'$, which is executed in the environment in current state $s$, by adding a normally distributed noise to the current action $a=\mu(s)$,
ii) if $R(s, a') + \gamma \hat V^\pi(s') > \hat V^\pi(s)$ then $a \leftarrow a'$ else $a$ does not change.
\noindent Phase 2) is based on following update:
\begin{equation} \label{eq:base_cacla}
\text{If } \delta(s,a) > 0: \tilde{\theta} \leftarrow \theta - \alpha \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s),
\end{equation}
where $\delta(s,a) = R(s, a) + \gamma \hat V^\pi(s') - \hat V^\pi(s)$ is the temporal difference (TD) error.
As the expectation of the TD error is equal to the advantage function, this update can be interpreted as follows: if an exploratory action $a$ has a positive advantage then policy $\mu$ should be updated towards $a$.
Note that although CACLA executes a stochastic policy $\pi$, it can be seen as learning a deterministic policy $\mu$.
\citeauthor{VanHasselt2007} \shortcite{VanHasselt2007} state that when learning in continuous action space, moving away from a bad action could be meaningless.
Indeed, while for stochastic policies, the probability of a bad action can be decreased,
for deterministic policies, moving in the action space in the opposite direction of an action with a negative advantage may not necessarily lead to better actions.
Thus, CACLA's update is particularly appropriate for learning continuous deterministic policies.
\subsection{Continuous Actor Critic}
In our discussion, we also refer to a slightly different version of CACLA, Continuous Actor Critic (CAC) \cite{VanHasselt2007}.
The only difference between CAC and CACLA is that
the update in CAC is scaled by the TD error:
\begin{equation}
\text{If } \delta(s,a) > 0: \tilde{\theta} \leftarrow \theta - \alpha \delta(s,a) \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s),
\end{equation}
Thus, an action with a larger positive advantage (here, estimated by the TD error) has a bigger impact on the global objective.
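The two updates can be summarized in the following sketch (Python pseudocode for a one-dimensional action; \texttt{grad\_mu} stands for $\nabla_\theta \mu_\theta(s)$ and all names are illustrative):
\begin{verbatim}
def actor_update(theta, mu_s, a, grad_mu, delta, alpha,
                 scaled=False):
    # CACLA update if scaled=False, CAC update if scaled=True.
    # delta = R(s,a) + gamma*V(s') - V(s) is the TD error, an
    # estimate of the advantage of the explored action a.
    if delta > 0:  # only move toward seemingly better actions
        step = alpha * (delta if scaled else 1.0)
        theta = theta - step * (mu_s - a) * grad_mu
    return theta
\end{verbatim}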
\subsection{Neural Fitted Actor Critic}
The Neural Fitted Actor Critic (NFAC) \cite{zimmer2016,zimmer2018developmental} algorithm is an efficient instantiation of the CACLA update, which integrates the following techniques: batch normalization, $\lambda$-returns for both the critic and the actor, and batch learning with Adam \cite{Kingma2015}.
In this algorithm, the parameters are no longer updated at each time step, but at the end of a given number of episodes.
\subsection{Trust Region for Deterministic Policies}
We now introduce a trust region method dedicated to continuous deterministic policies.
Given current deterministic policy $\mu$, and an exploratory policy $\pi$ defined from $\mu$, the question is to find a new deterministic policy $\tilde{\mu}$ that improves upon $\mu$.
Because a deterministic policy is usually never played in the environment outside of testing phases, a direct measure between two deterministic policies (i.e., a deterministic equivalent of Equation $\ref{eq:stochperfmeasure}$) is not directly exploitable.
Instead we introduce the following measure:
\begin{lemma}
The performance $J(\tilde{\mu})$ of a deterministic policy $\tilde{\mu}$ can be expressed by the advantage function of another stochastic policy $\pi$ built upon a deterministic policy $\mu$ as:
\begin{flalign} \label{eq:j mu bar}
J(\tilde{\mu}) = J(\mu) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) A^\mu(s, a) da ds + \notag \\
\int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) A^\pi \big(s, \tilde{\mu}(s)\big) ds.
\end{flalign}
\end{lemma}
See Appendix~\ref{appendix:fosl} for the proof.
The first two quantities in the RHS of (\ref{eq:j mu bar}) are independent of ${\tilde{\mu}}$.
The second one represents the difference of performance from moving from the deterministic policy $\mu$ to its stochastic version $\pi$.
Because $d^{\tilde{\mu}}_\gamma$ would be too costly to estimate, we approximate it with the simpler quantity $d_\gamma^\pi$, as done by \citeauthor{Schulman2015} \shortcite{Schulman2015} for TRPO, a predecessor to PPO.
\begin{theorem} \label{theo:trustdeter} Given two deterministic policies $\mu$ and $\tilde{\mu}$, a stochastic Gaussian policy $\pi$ with mean $\mu(s)$ in state $s$ and independent variance $\sigma$, if the transition function $T$ is L-Lipschitz continuous with respect to the action from any state then:
\begin{flalign*}
&\Big| \int_{\mathcal{S}} d^{\tilde{\mu}}(s) A^\pi \big(s, \tilde{\mu}(s)\big) - \int_{\mathcal{S}} d^{\pi}(s) A^\pi \big(s, \tilde{\mu}(s)\big) \Big| \leq \\ & \frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} \Big( \big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t,
\end{flalign*}
where $\epsilon = \text{max}_{s,a} |A^\pi(s,a)| $.
\end{theorem}
\noindent The proof is available in Appendix~\ref{appendix:prooftrust}.
Thus, to ensure a stable improvement at each update, we need to keep both $|| \mu - \tilde{\mu} ||_{2,\infty}$ and $\sigma$ small.
Note that the Lipschitz continuity condition is natural in continuous action spaces.
It simply states that for a given state, actions that are close will produce similar transitions.
\subsection{Practical Algorithm}
To obtain a concrete and efficient algorithm, the trust region method can be combined with the previous algorithms.
Its integration to NFAC with a CAC update for the actor is called Penalized Neural Fitted Actor Critic (PeNFAC).
\citeauthor{VanHasselt2007} \shortcite{VanHasselt2007} observed that the CAC update performs worse than the CACLA update in their algorithms.
In their setting, where the policy and the critic are updated at each timestep, we believe this observation is explained by the use of the TD error (computed from a single sample) to estimate the advantage function.
However, when using variance reduction techniques such as $\lambda$-returns and learning from a batch of interactions, or when
mitigating the update with a trust region constraint, we observe that this estimation becomes better (see Figure~\ref{fig:penfaccomp}).
This explains why we choose the CAC update in PeNFAC.
In order to ensure that $|| \mu - \tilde{\mu} ||_{2,\infty}$ stays small over the whole state space, we approximate it with a Euclidean norm over the states visited by $\pi$.
To implement this constraint, we add a regularization term to the update and automatically adapt its coefficient; for a trajectory $(s_0, s_1, \ldots, s_h)$, the update direction is
\begin{equation*}
\sum_{t=0}^{h-1} \Delta_{\text{CAC}}(s_t, \mu_\theta) + \beta \nabla_\theta \big|\big| \mu_{\text{old}}(s_t) - \mu_\theta(s_t) \big|\big|^2_2,
\end{equation*}
where $\beta$ is a regularization coefficient.
Similarly to the adaptive version of Proximal Policy Optimization (PPO) \cite{PPO}, $\beta$ is updated in the following way (starting from $\beta \leftarrow 1$):
\begin{itemize}
\item if $\hat{d}(\mu,\mu_{\text{old}}) < d_{\text{target}} / 1.5$: $\beta \leftarrow \beta / 2 $,
\item if $\hat{d}(\mu,\mu_{\text{old}}) > d_{\text{target}} \times 1.5$: $\beta \leftarrow \beta \times 2 $,
\end{itemize}
where $\hat{d}(\mu,\mu_{\text{old}}) = \frac{1}{\sqrt{m L}} \sum_{s \sim \pi} || \mu_{\text{old}}(s) - \mu_\theta(s) ||_2$ with $L$ being the number of gathered states.
Those hyper-parameters are usually not optimized because the learning is not too sensitive to them.
The essential value to adapt for the designer is $d_\text{target}$.
Note that the introduction of this hyperparameter mitigates the need to optimize the learning rate for the update of the policy, which is generally a much harder task.
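This adaptation rule can be sketched as follows (an illustrative Python fragment, with $\hat{d}$ and $d_{\text{target}}$ passed as plain numbers):
\begin{verbatim}
def update_beta(beta, d_hat, d_target):
    # Adaptive penalty coefficient, as in adaptive PPO: relax
    # the penalty when the two policies stay close, tighten
    # it otherwise.
    if d_hat < d_target / 1.5:
        beta /= 2.0
    elif d_hat > d_target * 1.5:
        beta *= 2.0
    return beta
\end{verbatim}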
\section{Proofs}
\onecolumn
\section{Proofs}
\subsection{Relation between DPG and CAC update for a given state}
\label{appendix:prooflimCAC}
For simplicity, the proof is given for a single dimension $k$ of the parameter space. To denote the $k$\textsuperscript{th} dimension of a vector $x$, we write $x_k$. If $x$ is a matrix, $x_{:,k}$ represents the $k$\textsuperscript{th} column vector.
We will use the following result from \citeauthor{Silver2014} \shortcite{Silver2014}:
\begin{flalign*}
\lim_{\sigma \rightarrow 0} \nabla_\theta J(\pi_{\theta,\sigma}) = \nabla_\theta J(\mu_\theta).
\end{flalign*}
Thus, the following standard regularity conditions are required: $T, T_0, R, \mu, \pi, \nabla_a T, \nabla_a R, \nabla_\theta \mu$ are continuous in all variables and bounded.
From this result, we derive the following equation for a fixed state $s$:
\begin{flalign*}
\lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) \nabla_\theta \pi_{\theta,\sigma}(a|s) da = \nabla_a A^\mu(s,a) \big|_{a=\mu_\theta(s)} \nabla_\theta \mu_\theta(s).
\end{flalign*}
\noindent We first study the special case of $\Delta_{\text{DPG}}(s, \mu_\theta)_k = 0$ and want to show that ${\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta)}_k$ is also zero:
\begin{flalign*}
{\Delta_{\text{DPG}}(s, \mu_\theta)}_k = 0 \implies & \nabla_a A^\mu(s,a) \big|_{a=\mu_\theta(s)} {\nabla_\theta \mu_\theta(s)}_{:,k} = 0,\\
\implies & \lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) {\nabla_\theta \pi_{\theta,\sigma}(a|s)}_{:,k} da = 0, \\
\implies & \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^2} \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) A^\pi(s,a) \big(a - \mu_\theta(s) \big) {\nabla_\theta \mu_\theta(s)}_{:,k} da = 0,\\
\implies & \lim_{\sigma \rightarrow 0} \frac{1}{\sigma^2} \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) H\big(A^\pi(s,a)\big) A^\pi(s,a) \big(a - \mu_\theta(s) \big) {\nabla_\theta \mu_\theta(s)}_{:,k} da = 0,\\
\implies & \lim_{\sigma \rightarrow 0} {\Delta_{\text{CAC}} (s, \mu_\theta)}_k = 0.
\end{flalign*}
\noindent Now, we study the more general case ${\Delta_{\text{DPG}}(s, \mu_\theta)}_k \neq 0$:
\begin{flalign*}
g_k^+(s, \mu_\theta) =& \frac{\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta)_k}{\Delta_{\text{DPG}}(s, \mu_\theta)_k}, \\
=& \frac{\lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) H(A^\pi(s,a)) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da}{ \lim_{\sigma \rightarrow 0} \int_{\mathcal{A}} A^\pi(s,a) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da }, \\
= & \lim_{\sigma \rightarrow 0} \frac{\int_{\mathcal{A}} A^\pi(s,a) H(A^\pi(s,a)) \nabla_\theta {\pi_{\theta,\sigma}(a|s)}_{:,k} da}{ \int_{\mathcal{A}} A^\pi(s,a) {\nabla_\theta \pi_{\theta,\sigma}(a|s)}_{:,k} da },\\
& \implies 0 \leq g_k^+(s, \mu_\theta) \leq 1.
\end{flalign*}
\subsection{Performance of a deterministic policy expressed from a Gaussian stochastic policy}
\label{appendix:fosl}
The proof is very similar to \cite{kakade2002approximately,Schulman2015} and easily extends to mixtures of stochastic and deterministic policies:
\begin{flalign*}
& \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) A^\mu(s, a) da ds + \int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) A^\pi(s, \tilde{\mu}(s)) ds = \\
& \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) \Big( R(s,a) + \gamma \mathbb{E}\big[V^\mu(s') | a\big] - V^\mu(s) \Big) da +
\int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) \Big( R(s,\tilde\mu(s)) + \gamma \mathbb{E}\big[V^\pi(s') | \tilde{\mu}(s) \big] - V^\pi(s) \Big) ds = \\
& J(\pi) + J(\tilde{\mu}) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \int_\mathcal{A} \pi(a|s) \Big( \gamma \mathbb{E}\big[V^\mu(s') | a\big] - V^\mu(s) \Big) da + \int_{\mathcal{S}} d_\gamma^{\tilde{\mu}}(s) \Big( \gamma \mathbb{E}\big[V^\pi(s') | \tilde{\mu}(s) \big] - V^\pi(s) \Big) ds = \\
& J(\pi) + J(\tilde{\mu}) + \int_{\mathcal{S}} d_\gamma^{\pi}(s) \Big( - V^\mu(s) + \gamma \int_\mathcal{A} \pi(a|s) \mathbb{E}\big[V^\mu(s') | a\big]da \Big) - J(\pi) = \\
& J(\tilde{\mu}) - J(\mu).
\end{flalign*}
\subsection{Trust region for continuous deterministic policies}
\label{appendix:prooftrust}
For this theorem we also use the following standard regularity conditions:
$I(\mathcal{S}) = \int_\mathcal{S} ds < \infty$ and $\big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} < \infty$; $m$ denotes the number of dimensions of the action space.
We start from the two terms we want to bound:
\begin{flalign}
&\Big| \int_{\mathcal{S}} d^{\tilde{\mu}}_\gamma(s) A^\pi(s, \tilde{\mu}(s)) - \int_{\mathcal{S}} d^{\pi}_\gamma(s) A^\pi(s, \tilde{\mu}(s)) \Big| = \notag \\
& \Big| \int_{\mathcal{S}} \big( d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \big) A^\pi(s, \tilde{\mu}(s)) \Big| \leq \notag \\
& \int_{\mathcal{S}} \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big| . \Big| A^\pi(s, \tilde{\mu}(s)) \Big| \leq \notag \\
& \epsilon \int_{\mathcal{S}} \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big|, \label{eq:proofstep3}
\end{flalign}
where $\epsilon = \text{max}_{s,a} |A^\pi(s,a)| $.
So, we need to bound the difference between $d^{\tilde{\mu}}$ and $d^{\pi}$ for a given state $s$:
\begin{flalign}
& \Big| d^{\tilde{\mu}}_\gamma(s) - d^{\pi}_\gamma(s) \Big| = \notag \\
& \Big| \int_{\mathcal{S}} T_0(s_0) \Big( \sum^\infty_{t=0} \gamma^{t} p(s|s_0,t,\tilde{\mu}) - \sum^\infty_{t=0} \gamma^{t} p(s|s_0,t,\pi) \Big) ds_0 \Big| = \notag \\
& \Big| \int_{\mathcal{S}} T_0(s_0) \sum^\infty_{t=0} \gamma^{t} \Big( p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big) ds_0 \Big| \leq \notag \\
& \int_{\mathcal{S}} \Big| T_0(s_0) \Big| \sum^\infty_{t=0} \gamma^{t} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0 \leq \notag \\
& \int_{\mathcal{S}} \sum^\infty_{t=0} \gamma^{t} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0 \leq \notag \\
& \int_{\mathcal{S}} \sum^\infty_{t=0} \gamma^{t} \underset{t'>0}{\operatorname{max}} \Big| p(s|s_0,t',\tilde{\mu}) - p(s|s_0,t',\pi) \Big| ds_0 = \notag \\
& \frac{1}{1-\gamma} \int_{\mathcal{S}} \underset{t>0}{\operatorname{max}} \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| ds_0. \label{eq:proofstep4}
\end{flalign}
Finally, we have to bound the difference between $ p(s|s_0,t,\tilde{\mu})$ and $ p(s|s_0,t,\pi) $.
To do so, we define $\tau = \{s_1, ..., s_t=s\}$ and let $\mathcal{D}_\tau$ denote the set of all possible paths from state $s_1$ to state $s_t=s$.
\begin{flalign}
& \Big| p(s|s_0,t,\tilde{\mu}) - p(s|s_0,t,\pi) \Big| = \notag \\
& \Big| \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - \int_\mathcal{A} \pi(a|s_{k-1}) T( s_k | s_{k-1}, a ) da \Big) d\tau \Big| \leq \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big| T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - \int_\mathcal{A} \pi(a|s_{k-1}) T( s_k | s_{k-1}, a ) da \Big| d\tau = \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big| \int_\mathcal{A} \pi(a|s_{k-1}) \big( T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - T( s_k | s_{k-1}, a ) \big) da \Big| d\tau \leq \notag \\
& \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int_\mathcal{A} \pi(a|s_{k-1}) \Big| T(s_k | s_{k-1}, \tilde{\mu}(s_{k-1})) \notag - T( s_k | s_{k-1}, a ) \Big| da d\tau \leq \notag
\end{flalign}
\begin{flalign}
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int_\mathcal{A} \pi(a|s_{k-1}) \Big|\Big| \tilde{\mu}(s_{k-1}) - a\Big|\Big|_2 da d\tau = \label{eq:proofstep1} \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1}) + b\Big|\Big|_2 db d\tau \leq \label{eq:proofstep2} \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \Big|\Big| b\Big|\Big|_2 \Big) db d\tau \leq \notag \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \int \frac{1}{(\sigma \sqrt{2 \pi})^m} e^{-\frac{1}{2\sigma^2} ||b||_2^2} \Big|\Big| b\Big|\Big|_1 \Big) db d\tau = \notag \\
& L \int_{\mathcal{D}_\tau} \prod_{k=1}^t \Big( \Big|\Big| \tilde{\mu}(s_{k-1}) - \mu(s_{k-1})\Big|\Big|_2 + \frac{2m\sigma}{\sqrt{2 \pi}} \Big) d\tau \leq \notag \\
& L \int_{\mathcal{D}_\tau} \Big( \underset{s_k \in \tau}{\operatorname{max\ }} \Big|\Big| \tilde{\mu}(s_{k}) - \mu(s_{k})\Big|\Big|_2 \notag + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t d\tau \leq \\
& I(\mathcal{S})^t L \Big( \underset{s_k \in \mathcal{S}}{\operatorname{max\ }} \Big|\Big| \tilde{\mu}(s_{k}) - \mu(s_{k})\Big|\Big|_2 + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t. \label{eq:proofstep5}
\end{flalign}
To obtain (\ref{eq:proofstep1}), we use the assumption that the transition function is L-Lipschitz continuous with respect to the action and the L2 norm.
To obtain (\ref{eq:proofstep2}), we use (\ref{eq:hypo_polstoch}).
Equation (\ref{eq:proofstep5}) no longer depends on $s$ and $s_0$; thus, combined with (\ref{eq:proofstep4}) and (\ref{eq:proofstep3}), it gives:
\begin{flalign}
& \frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} I(\mathcal{S})^{t+2} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t \leq \notag \\
&\frac{\epsilon L}{1-\gamma} \underset{t>0}{\operatorname{max\ }} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big)^t. \label{eq:proofstep6}
\end{flalign}
To obtain (\ref{eq:proofstep6}), we assume that $I(\mathcal{S})$ is smaller than 1. We can make this assumption without loss of generality: it only affects the magnitude of the Lipschitz constant.
Thus, if $ \big|\big| \tilde{\mu}(s) - \mu(s)\big|\big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} $ stays smaller than $1$, the optimal $t$ is $1$, and (\ref{eq:proofstep6}) reduces to: $$ \frac{\epsilon L}{1-\gamma} \Big( \Big|\Big| \tilde{\mu}(s) - \mu(s)\Big|\Big|_{2,\infty} + \frac{2m\sigma}{\sqrt{2 \pi}} \Big). $$
\section{Additional experiments on CACLA's update}
In these two experiments, we want to highlight the good performance of CACLA compared to SPG and DPG without neural networks. The main argument for using DPG instead of SPG is its efficiency when the action dimensionality becomes large. In the first experiment, we study whether CACLA suffers from the same variance problem as SPG. The second experiment supports our claim that CACLA is more robust than SPG and DPG when the approximation made by the critic is less accurate.
\subsection{Sensitivity to action space dimensionality}
\label{appendix:sensitivitydima}
We used a setup similar to that of \citeauthor{Silver2014} \shortcite{Silver2014}: those environments contain only one state and the horizon is fixed to one. They are designed such that the dimensionality of the action space can easily be controlled but there is only little bias in the critic approximation. The policy parameters are directly representing the action: $\mu_\theta(\cdot) = \theta$.
\noindent Compatible features are used to learn the Q value function for both SPG and DPG. For CACLA, the value function V is approximated through a single parameter.
The Gaussian exploration noise and the learning rate of both the critic and actor have been optimized for each algorithm on each environment.
In Figure~\ref{fig:1s}, similarly to \citeauthor{Silver2014} \shortcite{Silver2014}, we observe that SPG is indeed more sensitive to larger action dimensions.
CACLA is also sensitive to this increase in dimensionality, but not as much as SPG.
Finally, we note that even if the solutions of CACLA and DPG are not exactly the same theoretically, they are very similar in practice.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.7\textwidth]{plots/1S-QUA-crop.pdf}
\includegraphics[width=0.7\textwidth]{plots/1S-RAS-crop.pdf}
\includegraphics[width=0.7\textwidth]{plots/1S-ROS-crop.pdf}
\end{center}
\caption{Comparison of DPG, SPG and CACLA over three domains with 100 seeds for each algorithm. The action dimension is 5 on the left and 50 on the right.}
\label{fig:1s}
\end{figure}
\subsection{Robustness to the critic approximation errors}
Compared to the previous experiment, we introduce a bigger bias in the approximation of the critic by changing the application domains: the horizon is longer and there is an infinite number of states.
The policy is represented as $\mu_\theta(s)=\phi(s) \cdot \theta$ where $\phi(s)$ are tiles coding features.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.33\linewidth]{plots/COCLA-MCC-crop.pdf}
\includegraphics[width=0.33\linewidth]{plots/COCLA-P-crop.pdf}
\includegraphics[width=0.33\linewidth]{plots/COCLA-RR-crop.pdf}
\end{center}
\caption{Comparison of CACLA, DPG and SPG over two environments of OpenAI Gym and one environment of Roboschool (60 seeds are used for each algorithm). }
\label{fig:cocla}
\end{figure}
In Figure~\ref{fig:cocla}, we observe that as soon as value functions become harder to learn, CACLA performs better than both SPG and DPG.
\section{Broader comparison between PeNFAC and NFAC}
\label{appendix:penfacvsnfac}
To avoid overloading previous curves, we did not report the performance of NFAC (except in the ablation study on the HalfCheetah environment). In Figure~\ref{fig:nfac}, we extend this study to two other domains of Roboschool: Hopper and Humanoid.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/CO6-crop.pdf}
\includegraphics[width=0.48\linewidth]{plots/CO7-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC and NFAC over RoboschoolHopper and RoboschoolHumanoid with 60 seeds for each algorithm.}
\label{fig:nfac}
\end{figure}
We observe that PeNFAC is significantly better than NFAC which demonstrates the efficiency of the trust region update combined with CAC.
\section{Impact of evaluating PPO with a deterministic policy}
\label{appendix:deterppo}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/CO9-crop.pdf}
\includegraphics[width=0.48\linewidth]{plots/CO8-crop.pdf}
\end{center}
\caption{Comparison of evaluating PPO with a deterministic policy instead of the stochastic policy produced by PPO.}
\label{fig:deter_ppo}
\end{figure}
In Figure~\ref{fig:deter_ppo}, we observe that using a deterministic policy to evaluate the performance of PPO is not penalizing.
This is the only experiment of the paper where deterministic policies and stochastic policies are compared.
\section{Hyperparameters}
\label{appendix:hyperparam}
For the sake of reproducibility \cite{Henderson2017}, the hyperparameters used during the grid search are reported here. In Tables~\ref{tab:hyper1}-\ref{tab:hyper4}, ``ho'', ``ha'' and ``hu'' stand for the Hopper, HalfCheetah and Humanoid Roboschool environments, respectively.
\begin{table}[H]
\centering
\begin{tabular}{c|c}
$\gamma$ & $0.99$ \\
Actor network & $64 \times 64$ \\
Critic network & $64 \times 64$ \\
Actor output activation & TanH \\
\end{tabular}
\caption{Set of hyperparameters used during the training with every algorithm.}
\label{tab:hyperX}
\end{table}
\begin{table}[H]
\centering
\def\arraystretch{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{Leaky ReLU} (0.01)}$, TanH \\
Actor learning rate & $\underset{\text{ho,ha,hu}}{10^{-4}}$, $10^{-3}$, $10^{-2}$ \\
Critic learning rate & $10^{-4}$, $\underset{\text{ho,ha,hu}}{10^{-3}}$, $10^{-2}$ \\
Batch norm & first layer of the actor \\
$d_{\text{target}}$ & $\underset{\text{ho,ha,hu}}{0.03}$, $0.01$, $0.005$ \\
ADAM & $\underset{\text{ho,ha,hu}}{(0, 0.999, 10^{-8})}$, $(0.9, 0.999, 10^{-8})$ \\
Number of ADAM iteration (actor) & 10, $\underset{\text{ho,ha,hu}}{30}$, $50$\\
Number of ADAM iteration (critic) & 1\\
$\lambda$ & $0$, $0.5$, $0.6$, $\underset{\text{ho,ha,hu}}{0.9}$, $0.95$, $0.97$ \\
$\sigma^2$ (Truncated Gaussian law) & $0.01$, $0.05$, $0.1$, $\underset{\text{ho,ha,hu}}{0.2}$, $0.5$ \\
Number fitted iteration & $1$, $\underset{\text{ho,ha,hu}}{10}$, $20$, $50$ \\
Update each $x$ episodes & $1$, $2$, $3$, $\underset{\text{ha}}{5}$, $10$, $\underset{\text{ho}}{15}$, $20$, $30$, $\underset{\text{hu}}{50}$, $100$
\end{tabular}
\caption{Set of hyperparameters used during the training with PeNFAC.}
\label{tab:hyper1}
\end{table}
\begin{table}[H]
\centering
\def\arraystretch{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{TanH}}$, ReLu, Leaky ReLU (0.01) \\
Layer norm & no \\
ADAM & $(0.9, 0.999, 10^{-5})$ \\
Entropy coefficient & 0 \\
Clip range & 0.2 \\
$\lambda$ & $\underset{\text{ho,ha}}{0.97}, \underset{\text{hu}}{0.95}$ \\
Learning rate & $\underset{\text{hu}}{10^{-4}}, \underset{\text{ho,ha}}{3e^{-4}}$ \\
nminibatches & $\underset{\text{hu}}{4}$, $\underset{\text{ho,ha}}{32}$ \\
noptepochs & $4$, $\underset{\text{ho,ha}}{10}$, $15$, $\underset{\text{hu}}{50}$ \\
nsteps & $2^{11}$, $\underset{\text{ha}}{2^{12}}$, $\underset{\text{ho}}{2^{13}}$, $\underset{\text{hu}}{2^{14}}$, $2^{15}$, $2^{16}$ \\
samples used to make the policy more deterministic & 15 \\
\end{tabular}
\caption{Set of hyperparameters used during the training with PPO.}
\label{tab:hyper2}
\end{table}
\begin{table}[H]
\centering
\def\arraystretch{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\text{Leaky ReLU} (0.01)$ \\
Actor learning rate & $10^{-4}$ \\
Critic learning rate & $10^{-3}$ \\
Batch norm & first layer of the actor \\
ADAM & $\underset{\text{ho,ha,hu}}{(0, 0.999, 10^{-8})}$, $(0.9, 0.999, 10^{-8})$ \\
L2 regularization of the critic & $\underset{\text{ha,ho}}{0.01}$, $\underset{\text{hu}}{\text{without}}$ \\
Exploration & Gaussian ($0.2$), $\underset{\text{ha,ho,hu}}{\text{Ornstein Uhlenbeck} (0.001, 0.15, 0.01)}$\\
Mini batch size & $32$, $64$, $\underset{\text{hu,ha,ho}}{128}$ \\
Reward scale & $0.1$, $1$, $\underset{\text{hu,ha,ho}}{10}$ \\
Soft update of target networks & $\underset{\text{hu}}{0.001}$, $\underset{\text{ha,ho}}{0.01}$ \\
Replay memory & $10^6$ \\
N-step returns & $\underset{\text{ha}}{1}$, $\underset{\text{hu,ho}}{5}$
\end{tabular}
\caption{Set of hyperparameters used during the training with DDPG (DDRL implementation).}
\label{tab:hyper3}
\end{table}
\begin{table}[H]
\centering
\def\arraystretch{1.5}
\begin{tabular}{c|c}
Network hidden activation & $\underset{\text{ho,ha,hu}}{\text{ReLu}}$, TanH \\
Actor learning rate & $10^{-4}$ \\
Critic learning rate & $10^{-3}$ \\
Layer norm & no \\
ADAM & $(0.9, 0.999, 10^{-5})$ \\
L2 regularization of the critic & $0.01$ \\
Exploration & Ornstein Uhlenbeck ($0.2$), $\underset{\text{ho,ha,hu}}{\text{Parameter Space (0.2)}}$\\
Mini batch size & $128$ \\
Reward scale & $1$, $\underset{\text{ho,ha,hu}}{10}$ \\
Soft update of target networks & $\underset{\text{ho,hu}}{0.001}$, $\underset{\text{ha}}{0.01}$ \\
Replay memory & $10^6$ \\
nb\_rollout\_steps & $10$,$\underset{\text{ho,ha,hu}}{100}$ \\
nb\_train\_steps & $1$,$10$,$\underset{\text{ho,ha,hu}}{50}$ \\
\end{tabular}
\caption{Set of hyperparameters used during the training with DDPG (OpenAI baselines implementation).}
\label{tab:hyper4}
\end{table}
\subsection{CACLA}
We first explain the relationship between an algorithm based on stochastic policy gradient (SPG) and CACLA.
For this discussion, we assume that SPG is applied to parametrized Gaussian policies $\pi_{\theta, \sigma}$ (i.e., Gaussian around $\mu_\theta$).
The first common feature between the two algorithms is that the distributions over states they induce during learning are the same (i.e., $d^{\pi}_\gamma(s)$), because they both use the same exploratory policy to interact with the environment.
Moreover, SPG can be written as follows:
\begin{align*}
\nabla_\theta J(\pi_{\theta,\sigma}) &= \int_{\mathcal{S}} d^\pi_\gamma(s) \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s) A^\pi(s,a) \nabla_\theta \log \pi_{\theta,\sigma}(a|s) \, da \, ds, \\
&= \frac{1}{\sigma^2} \int_{\mathcal{S}} d^\pi_\gamma(s) \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s) A^\pi(s,a) \big(a - \mu_\theta(s)\big) \nabla_\theta \mu_\theta(s) \, da \, ds.
\end{align*}
For CACLA, we interpret update~(\ref{eq:base_cacla}) as a stochastic update in the following direction:
\begin{flalign}
\label{eq:caclaeq}
& \int_{\mathcal{S}} d^{\pi}_\gamma(s) \Delta_{\text{CACLA}}(s, \mu_\theta) ds,\\ \text{with } & \Delta_{\text{CACLA}}(s,\mu_\theta) =
\int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s)
H\big(A^\pi(s,a)\big) \times \notag \\
& \hspace{12em}
\big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s) da \notag,
\end{flalign}
where $H$ is the Heaviside function.
Indeed, the inner integral is estimated using a single Monte Carlo sample during the run of CACLA.
Under this form, it is easy to see the similarity between SPG and CACLA.
The constant factor $\frac{1}{\sigma^2}$ can be neglected because it may be integrated into the learning rate.
The sign difference of the term $(a-\mu_\theta(s))$ is because SPG performs gradient ascent and CACLA gradient descent.
So the main difference between SPG and CACLA is the replacement of $A^\pi(s,a)$ by $H(A^\pi(s,a))$.
Therefore, CACLA optimizes its exploratory stochastic policy through an approximation of SPG, hoping to improve the underlying deterministic policy (for a fixed state, the directions of CACLA and SPG are the same up to a scalar).
Moreover, relating CACLA's update to (\ref{eq:caclaeq}) also brings to light two main limitations.
The first one concerns the inner integral over the action space, which has a high variance.
We therefore expect CACLA to become less and less data efficient as the dimension of the action space increases (which is the main theoretical justification of DPG over SPG --- see Appendix~\ref{appendix:sensitivitydima}).
The second limitation is that, over one update, CACLA does not share exactly the same optimal solutions as DPG or SPG.
Indeed, if we define $\theta^*$ such that $\nabla_\theta J(\mu_{\theta})\big|_{\theta =\theta^*} = 0$, it is not possible to prove that (\ref{eq:caclaeq}) is also 0 (because of the integral over the state space).
This means that CACLA could decrease the performance of this locally optimal solution.
\subsection{CAC}
Similarly, the update in CAC can be seen as a stochastic update in the following direction:
\begin{flalign}
\label{eq:cac}
& \int_{\mathcal{S}} d^{\pi}_\gamma(s) \Delta_{\text{CAC}}(s, \mu_\theta) ds, \notag \\
\text{with } & \Delta_{\text{CAC}}(s,\mu_\theta) =
\int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) A^\pi(s,a) H\big(A^\pi(s,a)\big) \times \notag \\ & \hspace{10em} \big(\mu_\theta(s) - a\big) \nabla_\theta \mu_\theta(s) da \notag.
\end{flalign}
This shows that CAC is even closer to SPG than CACLA and provides a good theoretical justification of this update at a local level (it avoids moving toward potentially worse actions).
However, there is also a justification at a more global level.
\begin{lemma} For a fixed state, when the exploration tends to zero,
CAC maintains the sign of the DPG update with a scaled magnitude:
\begin{equation}
\lim_{\sigma \rightarrow 0} \Delta_{\text{CAC}} (s, \mu_\theta) = g^+(s,\pi) \circ \Delta_{\text{DPG}} (s, \mu_\theta),
\end{equation}
where $g^+(s,\pi)$ is a positive function between $[0; 1]^{n}$ with $n$ as the number of parameters of the deterministic policy and $\circ$ is the Hadamard product (element-wise product).
\end{lemma}
The proof is provided in Appendix~\ref{appendix:prooflimCAC}.
The consequence of this lemma is that, for a given state and low exploration, a locally optimal solution for DPG will also be one for CAC.
However, this is still not the case for the overall update, because of the integral over the different states:
the weights given to each direction over different states are not the same in CAC and DPG.
One might think that in such a case it would be better to use DPG.
However, in practice, the CAC update may in fact be more accurate when using an approximate advantage function.
Indeed, there exist cases where DPG with an approximate critic might update towards a direction that decreases the performance.
For instance, when the estimated advantage $\hat{A}\big(s,\mu(s) \big)$ is negative,
the advantage around $\mu(s)$ is known to be poorly estimated.
In such a case, thanks to the Heaviside function,
CAC will not perform any update for actions $a$ in the neighborhood of $\mu(s)$ such that $\hat A(s, a) \le 0$.
DPG, however, will still perform an update according to this poorly estimated gradient.
\subsection{Performance of PeNFAC}
We compared the performance of PeNFAC to learn continuous deterministic policies with two state-of-the-art algorithms: PPO and DDPG.
A comparison with NFAC is available in the ablation study (Section \ref{sec:ablationstudy}) and in Appendix \ref{appendix:penfacvsnfac}.
Because PPO learns a stochastic policy, for the testing phases we built a deterministic policy as follows: $\mu(s) = \mathbb{E}[a \mid a \sim \pi_\theta(\cdot|s)]$.
We denote this algorithm as ``deterministic PPO''.
In Appendix \ref{appendix:deterppo}, we experimentally show that
this does not penalize the comparison with PPO, as deterministic PPO provides better results than standard PPO.
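In practice, this expectation can be estimated by averaging sampled actions (a sketch; \texttt{sample\_action} is an assumed interface to the stochastic PPO policy):
\begin{verbatim}
import numpy as np

def deterministic_action(sample_action, s, n_samples=15):
    # Monte Carlo estimate of mu(s) = E[a | a ~ pi(.|s)]; for
    # a Gaussian policy this converges to the mean action.
    return np.mean([sample_action(s) for _ in range(n_samples)],
                   axis=0)
\end{verbatim}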
For PPO, we used the OpenAI Baselines implementation. To implement PeNFAC and compare it with NFAC, we use the DDRL library \cite{zimmer2018developmental}. Given that DDPG is present in both libraries, we report the performance of both implementations.
The OpenAI Baselines version uses exploration in the parameter space, while the DDRL version uses $n$-step returns.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.92\linewidth]{plots/HP-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 different seeds for each algorithm in Hopper.}
\label{fig:penfacperf1}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.92\linewidth]{plots/HC-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 different seeds for each algorithm in HalfCheetah.}
\label{fig:penfacperf2}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.92\linewidth]{plots/HU-crop.pdf}
\end{center}
\caption{Comparison of PeNFAC, DDPG and deterministic PPO over 60 seeds for each algorithm in Humanoid.}
\label{fig:penfacperf3}
\end{figure}
We performed learning experiments over three high-dimensional domains:
Hopper, HalfCheetah and Humanoid.
Dimensions of $\mathcal{S} \times \mathcal{A}$ are $15 \times 3$ (Hopper), $26 \times 6$ (HalfCheetah) and $44 \times 17$ (Humanoid).
The neural network architecture is composed of two hidden layers of 64 units for either the policy or the value function.
The choice of the activation function in the hidden units was optimized for each algorithm: we found that ReLU was better for all of them except for PPO (where tanh was better).
The output activation of the critic is linear and the output activation of the actor is tanh.
In Figures \ref{fig:penfacperf1}-\ref{fig:penfaccomp}, the lighter shade depicts one standard deviation around the average, while the darker shade is the standard deviation divided by the square root of the number of seeds.
In Figures \ref{fig:penfacperf1}-\ref{fig:penfacperf3}, PeNFAC outperforms DDPG and deterministic PPO during the testing phase.
On Humanoid, even after optimizing the hyperparameters, we could not obtain the same results as those of \citeauthor{PPO} \shortcite{PPO}.
We conjecture that this may be explained as follows:
1) the RoboschoolHumanoid moved from version 0 to 1,
2) deterministic PPO
\begin{figure}[H]
\begin{center}
\includegraphics[width=.89\linewidth]{plots/CO5-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO2-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO3-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO4-crop.pdf}
\includegraphics[width=.89\linewidth]{plots/CO1-crop.pdf}
\end{center}
\caption{Comparison of the different components ($\lambda$-returns, fitted value-iteration, CAC vs CACLA update, batch normalization) of the PeNFAC algorithm during the testing phase over the HalfCheetah environment and 60 seeds for each version. }
\label{fig:penfaccomp}
\end{figure}
\noindent might be less efficient than PPO,
3) neither LinearAnneal for the exploration nor an adaptive Adam step size is present in the OpenAI Baselines implementation.
However, we argue that the comparison is still fair, since PeNFAC does not use those two components either.
On Humanoid, we did not find a set of hyperparameters where DDPG could work correctly with both implementations.
\subsection{Components of PeNFAC}
\label{sec:ablationstudy}
In Figure~\ref{fig:penfaccomp}, we present an ablation analysis in the HalfCheetah domain to understand which components of the PeNFAC algorithm are the most essential to its good performance.
From top to bottom in Figure~\ref{fig:penfaccomp}, we ran PeNFAC with and without trust region, with and without $\lambda$-returns, with and without fitted value iteration, with the CACLA update or the CAC update, and finally with and without batch normalization.
It appears that $\lambda$-returns and fitted value iteration are the most needed, while the effect of batch normalization is small and mostly helps at the beginning of learning.
We also tried updating the actor at every timestep without taking into account the sign of the advantage function (i.e., using SPG instead of CAC), but the algorithm was not able to learn at all.
This also demonstrates that the CAC update is an essential component of PeNFAC.
\section{Linear Problems} \label{sec:linear}
In this section we consider the homogeneous Dirichlet problem for the fractional Laplacian \eqref{eq:def_Laps}. Given $f : \Omega \to \mathbb{R}$, one seeks a function $u$ such that
\begin{equation}\label{eq:Dirichlet}
\left\lbrace\begin{array}{rl}
(-\Delta)^s u = f &\mbox{ in }\Omega, \\
u = 0 &\mbox{ in }\Omega^c :={\mathbb{R}^d}\setminus\Omega.
\end{array} \right.
\end{equation}
\subsection{Variational Formulation}\label{S:variational_form}
The natural variational framework for \eqref{eq:Dirichlet} is within the fractional Sobolev space ${\widetilde H}^s(\Omega)$, that is defined by
\[
{\widetilde H}^s(\Omega) := \{ v \in H^s({\mathbb{R}^d}) \colon \supp v \subset \overline{\Omega} \} .
\]
We refer to \cite{BoCi19} for definitions and elementary properties of fractional Sobolev spaces. Here we just state that on the space ${\widetilde H}^s(\Omega)$, because of the fractional Poincar\'e inequality, the natural inner product is equivalent to
\begin{equation} \label{eq:defofinnerprod}
(v,w)_s := \frac{C_{d,s}}2 \iint_{Q_\Omega} \frac{(v(x)-v(y))(w(x) - w(y))}{|x-y|^{d+2s}} \; dx \; dy,
\end{equation}
where $Q_\Omega = ({\mathbb{R}^d}\times{\mathbb{R}^d}) \setminus(\Omega^c\times\Omega^c)$.
The corresponding norm is $\|v\|_{{\widetilde H}^s(\Omega)} := (v,v)_s^{1/2}$.
The duality pairing between ${\widetilde H}^s(\Omega)$ and its dual $H^{-s}(\Omega)$ shall be denoted by $\langle \cdot , \cdot \rangle$. In view of \eqref{eq:defofinnerprod}, we see that if $v \in {\widetilde H}^s(\Omega)$, then $(-\Delta)^s v \in H^{-s}(\Omega)$ and
\begin{equation}
\label{eq:innerprodisduality}
(v,w)_s = \langle (-\Delta)^s v, w \rangle, \quad \forall w \in {\widetilde H}^s(\Omega).
\end{equation}
Therefore, given $f \in H^{-s}(\Omega)$, the weak formulation of \eqref{eq:Dirichlet} reads: find $u \in {\widetilde H}^s(\Omega)$ such that
\begin{equation} \label{eq:weak_linear}
(u, v)_s = \langle f, v \rangle \quad \forall v \in {\widetilde H}^s(\Omega).
\end{equation}
Existence and uniqueness of weak solutions, and stability of the solution map $f \mapsto u$, follow straightforwardly from the Lax-Milgram Theorem.
\subsection{Regularity} \label{sec:regularity_linear}
A priori, it is not clear how smooth weak solutions are. If $f$ is more regular than $H^{-s}(\Omega)$, then $u$ could be expected to be more regular than ${\widetilde H}^s(\Omega)$.
We now review some results regarding regularity of solutions to problem \eqref{eq:weak_linear}. Since our main interest is to derive convergence rates for finite element discretizations, we shall focus on Sobolev regularity estimates.
Recently, using Fourier analytical tools, Grubb \cite{Grubb} obtained estimates of solutions in terms of the so-called H\"ormander $\mu$-spaces \cite{Hormander}, but such estimates can be reinterpreted in terms of standard Sobolev spaces. A drawback of the following result from \cite{Grubb} is that it assumes the domain $\Omega$ to have smooth boundary, which is too restrictive a condition for finite element applications.
\begin{theorem}[regularity on smooth domains] \label{T:reg_grubb}
Let $\Omega$ be a domain with $\partial\Omega \in C^\infty$, $s \in (0,1)$, $f \in H^r(\Omega)$ for some $r\ge -s$, $u$ be the solution of \eqref{eq:weak_linear} and $\gamma = \min \{ s + r, 1/2 -\varepsilon \}$, with $\varepsilon > 0$ arbitrarily small. Then, $u \in \widetilde{H}^{s + \gamma}(\Omega)$ and the following regularity estimate holds:
\[
\| u \|_{\widetilde{H}^{s + \gamma}(\Omega)} \le C(\Omega,d,s,\gamma) \|f \|_{H^r(\Omega)}.
\]
\end{theorem}
A rather surprising feature of the previous result is that no matter how smooth the right hand side $f$ is, we cannot guarantee that solutions are any smoother than $\widetilde{H}^{s + 1/2 - \varepsilon}(\Omega)$. Indeed, because the fractional Laplacian is an operator of order $2s$, one could expect it to lift regularity by $2s$. As the following example \cite{Getoor} shows, such a reasoning is incorrect, and \Cref{T:reg_grubb} is sharp.
\begin{example}[limited regularity]\label{ex:nonsmooth}
Consider $\Omega = B(0,1) \subset {\mathbb{R}^d}$ and $f \equiv 1$. Then, the solution to \eqref{eq:Dirichlet} is given by
\begin{equation} \label{eq:getoor}
u(x) = \frac{\Gamma(\frac{d}{2})}{2^{2s} \Gamma(\frac{d+2s}{2})\Gamma(1+s)} ( 1- |x|^2)^s_+,
\end{equation}
where $t_+ =\max\{t,0\}$.
\end{example}
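The closed form \eqref{eq:getoor} is straightforward to evaluate numerically; the following Python sketch (ours) implements it and can be used, for instance, to compute errors of discrete solutions. For $d=2$, $s=1/2$ the coefficient equals $2/\pi$.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def getoor_solution(x, d, s):
    """Exact solution of (-Delta)^s u = 1 in the unit ball, u = 0 outside.
    x is an array of shape (n, d) of evaluation points."""
    coef = gamma(d / 2) / (2**(2 * s) * gamma((d + 2 * s) / 2) * gamma(1 + s))
    r2 = np.sum(np.atleast_2d(x)**2, axis=1)
    return coef * np.maximum(1.0 - r2, 0.0)**s

print(getoor_solution(np.array([[0.0, 0.0]]), d=2, s=0.5))  # ~2/pi = 0.6366
\end{verbatim}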
As \Cref{ex:nonsmooth} illustrates, rough boundary behavior causes the reduced Sobolev regularity of solutions. Ros-Oton and Serra \cite{RosOtonSerra} studied problem \eqref{eq:Dirichlet} using potential theory tools, and were able to obtain a fine characterization of boundary behavior of solutions, that led them to deduce H\"older regularity estimates. In particular, it turns out that the asymptotic expansion
\begin{equation} \label{eq:boundary_behavior}
u (x) \approx d(x, \partial\Omega)^s \varphi(x),
\end{equation}
where $\varphi$ is a smooth function, is generic.
The H\"older estimates from \cite{RosOtonSerra} give rise to Sobolev estimates for solutions in terms of H\"older norms of the data. To capture the boundary behavior, reference \cite{AcosBort2017fractional} introduced fractional weighted norms, where the weight is a power of the distance to the boundary, and developeds estimates in such norms.
We denote
\[
\delta(x,y) := \min \big\{ \textrm{dist}(x, \partial \Omega), \textrm{dist}(y, \partial \Omega) \big\}
\]
and, for $\ell = k + s$, with $k \in \mathbb{N}$ and $s \in (0,1)$, and $\kappa \ge 0$, we define the norm
\[
\| v \|_{H^{\ell}_\kappa (\Omega)}^2 := \| v \|_{H^k (\Omega)}^2 + \sum_{|\beta| = k }
\iint_{\Omega\times\Omega} \frac{|D^\beta v(x)-D^\beta v(y)|^2}{|x-y|^{d+2s}} \, \delta(x,y)^{2\kappa} \, dy \, dx
\]
and the associated space
\begin{equation} \label{eq:weighted_sobolev}
H^\ell_\kappa (\Omega) := \left\{ v \in H^\ell(\Omega) \colon \| v \|_{H^\ell_\kappa (\Omega)} < \infty \right\} .
\end{equation}
The regularity estimate in the weighted Sobolev scale \eqref{eq:weighted_sobolev} reads as follows.
\begin{theorem}[weighted Sobolev estimate] \label{T:weighted_regularity}
Let $\Omega$ be a bounded, Lipschitz domain satisfying the exterior ball condition, $s\in(0,1)$, $f \in C^{1-s}(\overline\Omega)$ and $u$ be the solution of \eqref{eq:weak_linear}. Then, for every $\varepsilon>0$ we have $u \in \widetilde H^{s+1-2\varepsilon}_{1/2-\varepsilon}(\Omega)$ and
\[
\|u\|_{\widetilde H^{s+1-2\varepsilon}_{1/2-\varepsilon}(\Omega)} \le \frac{C(\Omega,d,s)}{\varepsilon} \| f \|_{C^{1-s}(\overline\Omega)}.
\]
\end{theorem}
For simplicity, the theorem above was stated using the weight $\kappa = 1/2 -\varepsilon$; a more general form of the result can be found also in \cite{BBNOS18}. We point out that, for its application in finite element analysis, such a choice is optimal. In principle, increasing the exponent $\kappa$ of the weight allows for higher differentiability of the solution, with no upper restriction on $\kappa$ (as long as $f$ is sufficiently smooth). However, when exploiting this weighted regularity by introducing approximations on a family of shape-regular and graded meshes, the order of convergence (with respect to the number of degrees of freedom) is only incremented as long as $\kappa < 1/2$.
It is worth pointing out that \Cref{T:weighted_regularity} is valid for Lipschitz domains satisfying an exterior ball condition. Although such a condition on the domain is much less restrictive than the $C^\infty$ requirement in \Cref{T:reg_grubb}, for polytopal domains it implies convexity. For that reason, we present here a result of an ongoing work \cite{BoNo19}, that characterizes regularity of solutions in terms of Besov norms.
\begin{theorem}[regularity on Lipschitz domains] \label{T:Besov_regularity}
Let $\Omega$ be a bounded Lipschitz domain, $s \in (0,1)$ and $f \in
H^r(\Omega)$ for some $r \in (-s,0]$. Then, the
solution $u$ to \eqref{eq:Dirichlet} belongs to the Besov space $B^{s+t}_{2,\infty}(\Omega)$, where $t = \min \{ s + r -\varepsilon , 1/2 \}$, with $\varepsilon > 0$ arbitrarily small, with
\begin{equation} \label{eq:Besov_regularity}
\| u \|_{B^{s+t}_{2,\infty}(\Omega)} \le C(\Omega,d,s,t) \|f\|_{H^r(\Omega)}.
\end{equation}
Consequently, by an elementary embedding, we deduce
\begin{equation} \label{eq:regularity}
\| u \|_{H^{s+\gamma}(\Omega)} \le\frac{C(\Omega,d,s,\gamma)}{\varepsilon} \|f\|_{H^r(\Omega)},
\end{equation}
where $\gamma = \min \{s+r, 1/2\} - \varepsilon$ is `almost' as in \Cref{T:reg_grubb}.
\end{theorem}
We briefly outline the main ideas in the proof of \Cref{T:Besov_regularity}, which follows a technique proposed by Savar\'e \cite{Savare98} for local problems. The point is to use the classical Nirenberg difference quotient method, and thus bound a certain Besov seminorm of the solution $u$.
Let $t \in (0,1)$ and $D$ be a set generating ${\mathbb{R}^d}$ and star-shaped with respect to the origin (for example, a cone). Then
the functional
\begin{equation} \label{eq:Besov-norm}
[v]_{s+t,2,\Omega} := \sup_{h \in D \setminus \{ 0 \}}
\frac{1}{|h|^t} |v-v(\cdot+h)|_{H^s(\Omega)}
\end{equation}
induces the standard seminorm in the Besov space $B^{s+t}_{2,\infty}(\Omega)$.
Because $\Omega$ is a Lipschitz domain, it satisfies a uniform cone property; upon a partition of unity argument, this gives (finitely many) suitable sets $D$ where translations can be taken.
For a localized translation operator $T_h$, it is possible to prove a bound of the
form
\begin{equation} \label{eq:translation_bound}
|T_h u - u|_s^2 \le C \, |h|^s \, |u|_s^2,
\end{equation}
which, in view of \eqref{eq:Besov-norm}, yields $u \in B^{3s/2}_{2,\infty}(\Omega)$. Once this estimate has been obtained, a bootstrap argument leads to \eqref{eq:Besov_regularity}. Moreover, a refined estimate in $B^{3s/2}_{2,\infty}(\Omega)$ reads
%
\[
|u|_{B^{3s/2}_{2,\infty}(\Omega)} \lesssim \|f\|_{B^{-s/2}_{2,1}(\Omega)},
\]
and interpolation with
$|u|_{\widetilde{H}^s(\Omega)} \lesssim \|f\|_{H^{-s}(\Omega)}$ yields the following result
\cite{BoNo19}.
\begin{theorem}[lift theorem on Lipschitz domains]\label{T:lift}
Let $\Omega$ be a bounded Lipschitz domain, $s \in (0,1)$ and $f \in
H^r(\Omega)$ for some $r \in (-s,-s/2]$. Then, the
solution $u$ to \eqref{eq:Dirichlet} belongs to the Sobolev space $\widetilde{H}^{r+2s}(\Omega)$, with
\[
\|u\|_{\widetilde{H}^{r+2s}(\Omega)} \lesssim \|f\|_{H^{r}(\Omega)}.
\]
\end{theorem}
\subsection{Finite element discretization} \label{sec:FE_linear}
In this section, we discuss a direct finite element method to approximate \eqref{eq:weak_linear} using piecewise linear continuous functions. We
consider a family $\{\mathcal{T}_h \}_{h>0}$ of conforming and simplicial meshes of $\Omega$, that we assume to be shape-regular, namely,
\[
\sigma := \sup_{h>0} \max_{T \in \mathcal{T}_h} \frac{h_T}{\rho_T} <\infty,
\]
where $h_T = \mbox{diam}(T)$ and $\rho_T $ is the diameter of the largest ball contained in $T$. As usual, the subscript $h$ denotes the mesh size, $h = \max_{T \in \mathcal{T}_h} h_T$, and we take elements to be closed sets.
We denote by $\mathcal{N}_h$ the set of interior vertices of $\mathcal{T}_h$, by $N$ the cardinality of $\mathcal{N}_h$, and by $\{ \varphi_i \}_{i=1}^N$ the standard piecewise linear Lagrangian basis, with $\varphi_i$ associated to the node $\texttt{x}_i \in \mathcal{N}_h$. Thus, the set of discrete functions is
\begin{equation} \label{eq:FE_space}
\mathbb{V}_h := \left\{ v \in C_0(\Omega) \colon v = \sum_{i=1}^N v_i \varphi_i \right\},
\end{equation}
and is clearly conforming: $\mathbb{V}_h \subset {\widetilde H}^s(\Omega)$ for all $s \in (0,1)$.
With the notation described above, the discrete counterpart to \eqref{eq:weak_linear} reads: find $u_h \in \mathbb{V}_h$ such that
\begin{equation} \label{eq:weak_linear_discrete}
(u_h, v_h)_s = \langle f, v_h \rangle \quad \forall v_h \in \mathbb{V}_h.
\end{equation}
Because $u_h$ is the projection of $u$ onto $\mathbb{V}_h$ with respect to the ${\widetilde H}^s(\Omega)$-norm, we have the best approximation property
\begin{equation} \label{eq:best_approximation_linear}
\|u - u_h \|_{{\widetilde H}^s(\Omega)} = \min_{v_h \in \mathbb{V}_h} \|u - v_h \|_{{\widetilde H}^s(\Omega)}.
\end{equation}
Therefore, in order to obtain a priori rates of convergence in the energy norm, it suffices to bound the distance between the discrete spaces and the solution. Although the bilinear form $(\cdot, \cdot)_s$ involves integration on $\Omega\times{\mathbb{R}^d}$, one can apply an argument based on a fractional Hardy inequality to prove that the energy norm may be bounded in terms of fractional-order norms on $\Omega$ (see \cite{AcosBort2017fractional}). It follows that bounding errors within $\Omega$ leads to error estimates in the energy norm.
A technical aspect of fractional-order seminorms is that they are not additive with respect to domain decompositions. With the goal of deriving interpolation estimates, we define the star or first ring of an element $T \in \mathcal{T}_h$ by
\[
S^1_T := \bigcup \left\{ T' \in \mathcal{T}_h \colon T \cap T' \neq \emptyset \right\}.
\]
We also introduce the star of $S^1_T$ (or second ring of $T$),
\[
S^2_T := \bigcup \left\{ T' \in \mathcal{T}_h \colon S^1_T \cap T' \neq \emptyset \right\},
\]
and the star of the node $\texttt{x}_i \in \mathcal{N}_h$, $S_i := \mbox{supp}(\varphi_i)$.
Faermann \cite{Faermann} proved the localization estimate
\[
|v|_{H^s(\Omega)}^2 \leq \sum_{T \in \mathcal{T}_h} \left[ \int_T \int_{S^1_T} \frac{|v (x) - v (y)|^2}{|x-y |^{d+2s}} \; dy \; dx + \frac{C(d,\sigma)}{s h_T^{2s}} \| v \|^2_{L^2(T)} \right] \quad \forall v \in H^s(\Omega).
\]
This inequality shows that to estimate fractional seminorms over $\Omega$, it suffices to compute integrals over the set of patches $\{T \times S^1_T \}_{T \in \mathcal{T}_h}$ plus local zero-order contributions. Bearing this in mind, one can prove the following type of estimates for suitable quasi-interpolation operators (see, for example, \cite{BoNoSa18,CiarletJr}).
\begin{proposition}[interpolation estimates on quasi-uniform meshes] \label{prop:app_SZ}
Let $T \in {\mathcal{T}_h}$, $s \in (0,1)$, $\ell \in (s, 2]$, and $\Pi_h$ be a suitable quasi-interpolation operator.
If $v \in H^\ell (S^2_T)$, then
\begin{equation} \label{eq:interpolation}
\int_T \int_{S^1_T} \frac{|(v-\Pi_h v) (x) - (v-\Pi_h v) (y)|^2}{|x-y|^{d+2s}} \, d y \, d x \le C \, h_T^{2(\ell-s)} |v|_{H^\ell(S^2_T)}^2,
\end{equation}
where $C = C(\Omega,d,s,\sigma, \ell)$.
Therefore, for all $v \in H^\ell (\Omega)$, it holds
\begin{equation} \label{eq:global_interpolation}
| v - \Pi_h v|_{H^s(\Omega)} \le C(\Omega,d,s,\sigma, \ell) \, h^{\ell-s} |v|_{H^\ell(\Omega)}.
\end{equation}
\end{proposition}
The statement \eqref{eq:global_interpolation} in \Cref{prop:app_SZ} could have also been obtained by interpolation of standard integer-order interpolation estimates. However, the technique of summing localized fractional-order estimates also works for graded meshes (cf. \eqref{eq:weighted_interpolation} and \eqref{eq:global_weighted_interpolation} below).
Combining estimate \eqref{eq:global_interpolation} with the best approximation property \eqref{eq:best_approximation_linear} and the regularity estimates described in \Cref{sec:regularity_linear}, we can derive convergence rates. Concretely, the estimates from \Cref{T:reg_grubb} and \Cref{T:Besov_regularity} translate into a priori rates for quasi-uniform meshes. However, optimal application of \Cref{T:weighted_regularity} requires a certain type of mesh grading.
In two-dimensional problems ($d=2$), this can be attained by constructing \emph{graded} meshes in the spirit of \cite[Section 8.4]{Grisvard}. In addition to shape regularity, we assume that the family $\{\mathcal{T}_h \}$ satisfies the following property: there is a number $\mu\ge1$ such that, given a parameter $h$ representing the mesh size at unit distance from the boundary $\partial\Omega$ and $T\in\mathcal{T}_h$, we have
\begin{equation} \label{eq:H}
h_T \leq C(\sigma)
\begin{dcases}
h^\mu, & T \cap \partial \Omega \neq \emptyset, \\
h \textrm{dist}(T,\partial \Omega)^{(\mu-1)/\mu}, & T \cap \partial \Omega = \emptyset.
\end{dcases}
\end{equation}
The number of degrees of freedom is related to $h$ by means of the parameter $\mu$ because (recall that $d=2$)
\[
N = \dim \mathbb{V}_h \approx
\begin{dcases}
h^{-2}, & \mu < 2, \\
h^{-2} | \log h |, & \mu = 2, \\
h^{-\mu}, & \mu > 2.
\end{dcases}
\]
Also, $\mu$ needs to be related to the exponent $\kappa$ used in the Sobolev regularity estimate (cf. \Cref{T:weighted_regularity}). It can be shown that the choice $\mu = 2$, that corresponds to $\kappa = 1/2$, yields optimal convergence rates in terms of the dimension of $\mathbb{V}_h$. We also remark that, as discussed in \cite{BoCi19}, for three-dimensional problems ($d=3$), the grading strategy \eqref{eq:H} becomes less flexible, and yields lower convergence rates. For optimal mesh grading beyond $\mu=2$ for both $d=2,3$ we need to break the shape regularity assumption and resort to anisotropic finite elements. They in turn are less flexible in dealing with the isotropic fractional norm of $H^s(\Omega)$ and its localization \cite{Faermann}. This important topic remains open.
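As an illustration, in one dimension a mesh graded toward a boundary point according to \eqref{eq:H} can be produced by the mapping $x_i = (i/M)^\mu$; a short Python sketch (ours):
\begin{verbatim}
import numpy as np

def graded_nodes(M, mu):
    """Nodes on [0, 1] graded toward x = 0 with parameter mu >= 1.
    Element sizes satisfy h_i ~ M^(-1) * dist(T_i, 0)^((mu-1)/mu), and
    the element touching the boundary has size M^(-mu) = h^mu, where
    h = 1/M is the mesh size at unit distance from the boundary."""
    i = np.arange(M + 1)
    return (i / M)**mu

x = graded_nodes(16, mu=2.0)
h = np.diff(x)
print(h[0], h[-1])  # 1/256 at the boundary vs ~1/8 in the interior
\end{verbatim}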
Quasi-interpolation estimates in weighted Sobolev spaces \eqref{eq:weighted_sobolev} can be derived in the same way as in \Cref{prop:app_SZ}. More precisely, the weighted counterparts to \eqref{eq:interpolation} and \eqref{eq:global_interpolation} read
\begin{equation} \label{eq:weighted_interpolation}
\int_T \int_{S^1_T} \frac{|(v-\Pi_h v) (x) - (v-\Pi_h v) (y)|^2}{|x-y|^{d+2s}} \, d y \, d x \le C
h_T^{2(\ell-s-\kappa)} |v|_{H^\ell_\kappa(S^2_T)}^2,
\end{equation}
for all $v \in H^\ell_\kappa (S^2_T)$ and
\begin{equation} \label{eq:global_weighted_interpolation}
| v - \Pi_h v|_{H^s(\Omega)} \le C h^{\ell-s-\kappa} |v|_{H^\ell_\kappa(\Omega)} \quad \forall v \in H^\ell_\kappa (\Omega),
\end{equation}
respectively. The constants above are $C = C(\Omega,d,s,\sigma,\ell,\kappa)$.
We collect all the convergence estimates in the energy norm --involving quasi-uniform and graded meshes-- in a single statement \cite{AcosBort2017fractional}.
\begin{theorem}[energy error estimates for linear problem] \label{T:conv_linear}
Let $u$ denote the solution to \eqref{eq:weak_linear} and denote by $u_h \in \mathbb{V}_h$ the solution of the discrete problem \eqref{eq:weak_linear_discrete}, computed over a mesh ${\mathcal{T}_h}$ consisting of elements with maximum diameter $h$. If $f \in L^2(\Omega)$, we have
\[
\|u - u_h \|_{{\widetilde H}^s(\Omega)} \le C(\Omega,d,s,\sigma) \, h^\alpha |\log h| \, \|f\|_{L^2(\Omega)},
\]
where $\alpha = \min \{s, 1/2 \}$.
Additionally, if $\Omega \subset \mathbb{R}^2$, $f \in C^{1-s}(\overline{\Omega})$ and the family $\{\mathcal{T}_h\}$ satisfies \eqref{eq:H} with $\mu = 2$, we have
\[
\|u - u_h \|_{{\widetilde H}^s(\Omega)} \le C(\Omega,s,\sigma) \, h |\log h| \|f\|_{C^{1-s}(\overline{\Omega})}.
\]
\end{theorem}
To illustrate that \Cref{T:conv_linear} is sharp, we solve the problem from \Cref{ex:nonsmooth} on the discrete spaces \eqref{eq:FE_space} using a family of quasi-uniform meshes and a family of meshes graded according to \eqref{eq:H}. In \Cref{tab:ejemplo}, we report computational convergence rates in the energy norm for several values of $s$.
We observe good agreement with the rates predicted by \Cref{T:conv_linear}.
\begin{table}[htbp]\small\centering
\begin{tabular}{|c| c| c| c| c| c| c| c| c| c|} \hline
Value of $s$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline
Uniform meshes & 0.497 & 0.496 & 0.498 & 0.500 & 0.501 & 0.505 & 0.504 & 0.503 & 0.532 \\ \hline
Graded meshes & 1.066 & 1.040 & 1.019 & 1.002 & 1.066 & 1.051 & 0.990 & 0.985 & 0.977 \\ \hline
\end{tabular}
\bigskip
\caption{Computational rates of convergence (with respect to $h$) for the problem from \Cref{ex:nonsmooth} in $d=2$ dimensions. Rates using quasi-uniform meshes are listed in the second row, while rates using graded meshes, with $\mu = 2$ in \eqref{eq:H}, are reported in the third row.}
\label{tab:ejemplo}
\end{table}
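For reference, rates such as those in \Cref{tab:ejemplo} are experimental orders of convergence, computed by comparing errors on successive refinements; a minimal Python helper (ours, with hypothetical argument names) reads:
\begin{verbatim}
import numpy as np

def eoc(errors, hs):
    """Experimental orders of convergence: slope of log(error) versus
    log(h) between consecutive meshes."""
    errors, hs = np.asarray(errors), np.asarray(hs)
    return np.log(errors[1:] / errors[:-1]) / np.log(hs[1:] / hs[:-1])
\end{verbatim}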
\subsection{Computational challenges} \label{sec:computational_linear}
Having at hand theoretical estimates for finite element discretizations of \eqref{eq:weak_linear}, we still need to address how to compute discrete solutions and, in particular, how to accelerate the assembly and solution of the discrete system that arises.
\medskip\noindent
{\bf Matrix assembly.}
We first comment on key aspects of the finite element implementation for problems in dimension $d = 2$. If $\mathbf{U} = (u_i)_{i=1}^N$ and $u_h = \sum_{i=1}^N u_i \varphi_i$, it follows from \eqref{eq:FE_space} and \eqref{eq:weak_linear_discrete} that the linear finite element system reads $\mathbf{A} \mathbf{U} = \mathbf{F}$ with stiffness matrix $\mathbf{A}$ and right-hand side vector $\mathbf{F}$ given by
\[
\mathbf{A}_{ij} = (\varphi_i, \varphi_j)_s, \quad \mathbf{F}_i = \langle f, \varphi_i \rangle.
\]
Computation of the stiffness matrix is not an easy task. There are two numerical difficulties in taking a direct approach. In the first place, the bilinear form $(\cdot, \cdot)_s$ requires integration on unbounded domains; we point out that --at least for homogeneous problems such as the ones considered here-- integration over $\Omega\times\Omega^c$ can be reduced to a suitable integration over $\Omega\times\partial\Omega$ by using the Divergence Theorem \cite{AiGl17}. Secondly, suitable quadrature rules to compute the stiffness matrix entries are required. To handle the singular (non-integrable) kernel $|x|^{-d-2s}$, one could use techniques from the boundary element method \cite{ChernovPetersdorffSchwab:11,SauterSchwab}; we refer to \cite{AcosBersBort2017short} for details.
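To fix ideas, the following Python sketch (ours) computes a single entry $\mathbf{A}_{ij}$ in dimension $d=1$ by tensorized Gauss--Legendre quadrature, using the far-field formula for basis functions with disjoint supports (cf.\ the next paragraph): the kernel is then smooth on $S_i \times S_j$ and no singular quadrature is needed. Near-field entries, in contrast, require the special treatment mentioned above and are not handled here.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def far_field_entry(i, j, nodes, s, nq=8):
    """A_ij = -C_{1,s} * integral of phi_i(x) phi_j(y) / |x-y|^(1+2s)
    over S_i x S_j, valid only when the hat functions phi_i, phi_j
    have disjoint supports."""
    C = 2**(2 * s) * s * gamma(s + 0.5) / (np.pi**0.5 * gamma(1 - s))
    q, w = np.polynomial.legendre.leggauss(nq)

    def hat(k, x):  # piecewise linear basis function at interior node k
        up = (x - nodes[k - 1]) / (nodes[k] - nodes[k - 1])
        down = (nodes[k + 1] - x) / (nodes[k + 1] - nodes[k])
        return np.clip(np.minimum(up, down), 0.0, None)

    total = 0.0
    for a in (i - 1, i):          # elements in supp(phi_i)
        xa = 0.5 * (nodes[a + 1] - nodes[a]) * q + 0.5 * (nodes[a + 1] + nodes[a])
        wa = 0.5 * (nodes[a + 1] - nodes[a]) * w
        for b in (j - 1, j):      # elements in supp(phi_j)
            yb = 0.5 * (nodes[b + 1] - nodes[b]) * q + 0.5 * (nodes[b + 1] + nodes[b])
            wb = 0.5 * (nodes[b + 1] - nodes[b]) * w
            X, Y = np.meshgrid(xa, yb, indexing="ij")
            K = hat(i, X) * hat(j, Y) / np.abs(X - Y)**(1 + 2 * s)
            total += wa @ K @ wb
    return -C * total

nodes = np.linspace(-1.0, 1.0, 11)          # uniform mesh of (-1, 1)
print(far_field_entry(2, 7, nodes, s=0.5))  # nodes 2 and 7: disjoint supports
\end{verbatim}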
\medskip\noindent
{\bf Compression.}
Note that, independently of $s$, finite element spaces \eqref{eq:FE_space} give rise to {\em full} stiffness matrices. Indeed, for any pair of nodes $\texttt{x}_i, \texttt{x}_j$ such that $S_i \cap S_j = \emptyset$,
\[
\mathbf{A}_{ij} = - C_{d,s} \iint_{S_i \times S_j} \frac{\varphi_i(x) \; \varphi_j(y)}{|x-y|^{d+2s}} \; dy \; dx < 0.
\]
Thus, computation of the stiffness matrix $\mathbf{A}$ involves a large number of far-field interactions, that is, elements $\mathbf{A}_{ij}$ for $\texttt{x}_i$ and $\texttt{x}_j$ sufficiently far. However, these elements should be significantly smaller than the ones that involve nodes close to one another. In \cite{AiGl17, zhao2017adaptive} the cluster method from the boundary element literature was applied and, instead of computing and storing all individual elements from $\mathbf{A}$, far field contributions are replaced by suitable low-rank blocks. The resulting data-sparse representation has $\mathcal{O}(N \log^\alpha N)$ complexity for some $\alpha \ge 0$. Reference \cite{karkulik2018mathcal} shows that the inverse of $\mathbf{A}$ can be represented using the same block structure as employed to compress the stiffness matrix.
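The following Python sketch (ours, purely illustrative and unrelated to the specific implementations cited above) shows why far-field blocks are compressible: the singular values of a kernel block for two well-separated clusters decay rapidly, so a truncated factorization stores the block with far fewer numbers.
\begin{verbatim}
import numpy as np

# Far-field block of the kernel |x - y|^(-(d+2s)) for two well-separated
# 1D clusters; its fast singular value decay is what cluster/H-matrix
# methods exploit through low-rank blocks.
s, d = 0.5, 1
x = np.linspace(0.0, 1.0, 200)   # source cluster
y = np.linspace(3.0, 4.0, 200)   # well-separated target cluster
K = np.abs(x[:, None] - y[None, :])**(-(d + 2 * s))

sv = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(sv > 1e-8 * sv[0]))
print(rank)  # small: a 200x200 block stored with ~2*200*rank numbers
\end{verbatim}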
\medskip\noindent
{\bf Preconditioning.}
There are also issues to be addressed regarding the solution of the dense matrix equation $\mathbf{A} \mathbf{U} = \mathbf{F}$. The use of matrix factorization to solve such a system has complexity $\mathcal{O}(N^3)$. As an alternative, one can use the conjugate gradient method, for which the number of iterations needed to reach a fixed tolerance scales like $\sqrt{\kappa(\mathbf{A})}$, where $\kappa(\mathbf{A})$ is the condition number of $\mathbf{A}$ and satisfies \cite{AiMcTr99}
\[
\kappa(\mathbf{A}) = \mathcal{O}\left( N^{2s/d} \left( \frac{h_{max}}{h_{min}} \right)^{d-2s} \right).
\]
Therefore, for two-dimensional problems, we deduce $\kappa(\mathbf{A}) = \mathcal{O}(h^{-2s})$ for quasi-uniform meshes, while $\kappa(\mathbf{A}) = \mathcal{O}(h^{-2} |\log h|^s)$ for meshes graded according to \eqref{eq:H} with $\mu = 2$. In the latter case, diagonal preconditioning allows us to recover the same condition number as for uniform meshes \cite{AiMcTr99}.
Recently, there have been some advances in the development of preconditioners for fractional diffusion. For instance, multigrid preconditioners were mentioned in \cite{AiGl17}, while operator preconditioners were studied in \cite{gimperlein2019optimal}. We now briefly comment on some features of an additive Schwarz preconditioner of BPX-type \cite{fractionalbpx} (see also \cite{faustmann2019optimal}). Assume we have a hierarchy of discrete spaces $\mathbb{V}_0 \subset \ldots \subset \mathbb{V}_J = \mathbb{V}$, with mesh size $h_j = \gamma^{2j}$, and let $\iota_j : \mathbb{V}_j \to \mathbb{V}$ be the inclusion operator. The basic ingredients needed to apply the general theory for additive Schwarz preconditioners are:
\begin{itemize}[leftmargin=*]
\item {\it Stable decomposition}: for every $v\in \mathbb{V}$, there exists a decomposition $v = \sum_{j=0}^J v_j$ with $v_j \in \mathbb{V}_j$ such that
\begin{equation} \label{eq:stable-decomposition}
\sum_{j=0}^J h_j^{-2s} \|v_j\|_{L^2(\Omega)}^2 \leq c_0 \|v\|_{{\widetilde H}^s(\Omega)}^2.
\end{equation}
A fundamental ingredient to prove this estimate for polyhedral domains is the optimal regularity pickup estimate for Lipschitz domains of \Cref{T:lift}, that allows us to perform an Aubin-Nitsche duality argument.
\item {\it Boundedness}: for every $v = \sum_{j=0}^J v_j$ with $v_j \in \mathbb{V}_j$,
\begin{equation} \label{eq:boundedness}
\| \sum_{j=0}^J v_j \|_{{\widetilde H}^s(\Omega)}^2 \leq c_1 \sum_{j=0}^J h_j^{-2s} \|v_j\|_{L^2(\Omega)}^2.
\end{equation}
As usual, boundedness of multilevel decompositions can be proved by estimating how much scales interact (i.e., using a strengthened Cauchy-Schwarz inequality). Nonlocality adds some difficulties to the derivation of such an estimate, because one cannot integrate by parts elementwise. The argument in \cite{fractionalbpx} is based on the Fourier representation of the fractional Laplacian.
\end{itemize}
The conditions \eqref{eq:stable-decomposition} and \eqref{eq:boundedness} imply that the preconditioner $\mathbf{B} := \sum_{j=0}^J h_j^{2s-d} \iota_j \iota'_j$ satisfies $\kappa(\mathbf{B} \mathbf{A}) \le c_0 c_1$ for graded bisection grids \cite{fractionalbpx}. We
illustrate this statement in Table \ref{tab:FL-BPX-bisect} for $\Omega=(-1,1)^2$,
$f=1$ and $s=0.9, 0.5, 0.1$. We observe a mild increase of iteration counts but
rather robust performance with respect to $s$.
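In matrix terms, applying $\mathbf{B}$ only requires the prolongation matrices realizing the inclusions $\iota_j$; a Python sketch of ours follows, assuming those matrices are given and taking $d=2$ as in the experiment above.
\begin{verbatim}
import numpy as np

def bpx_apply(r, prolongations, hs, s, d=2):
    """Apply B = sum_j h_j^(2s-d) I_j I_j' to a residual vector r.
    prolongations[j] is the matrix mapping level-j coefficients to the
    finest level (realizing I_j); its transpose realizes I_j'."""
    out = np.zeros_like(r)
    for Ij, hj in zip(prolongations, hs):
        out += hj**(2 * s - d) * (Ij @ (Ij.T @ r))
    return out
\end{verbatim}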
\begin{table}[!ht]
\centering
\begin{tabular}{|m{0.4cm}|m{1.cm}|m{0.75cm}|m{0.75cm}|m{0.75cm}
|m{0.75cm}|m{0.75cm}|m{0.75cm}|m{0.75cm}|m{0.75cm}|m{0.75cm}|}
\hline
\multirow{2}{*}{$\bar{J}$} & \multirow{2}{*}{$N$}
& \multicolumn{3}{c|}{$s=0.9$}
& \multicolumn{3}{c|}{$s=0.5$}
& \multicolumn{3}{c|}{$s=0.1$}\\ \cline{3-11}
& & GS & CG & PCG & GS & CG & PCG & GS & CG & PCG \\ \hline
8 & 209 & 101 & 18 & 17 & 22 & 14 & 16 & 9 & 21 & 18 \\ \hline
9 & 413 & 172 & 24 & 20 & 30 & 18 & 18 & 9 & 25 & 21 \\ \hline
10 & 821 & 314 & 32 & 22 & 42 & 20 & 19 & 9 & 29 & 23 \\ \hline
11 & 1357 & 494 & 41 & 23 & 55 & 24 & 20 & 9 & 31 & 24 \\ \hline
12 & 2753 & 792 & 55 & 25 & 72 & 30 & 21 & 9 & 36 & 26 \\ \hline
13 & 4977 & 1391 & 73 & 26 & 94 & 34 & 22 & 10 & 37 & 27 \\ \hline
14 & 9417 & 2357 & 95 & 27 & 133 & 39 & 23 & 10 & 40 & 28 \\ \hline
\end{tabular}
\bigskip
\caption{Number of iterations for Gauss-Seidel (GS), conjugate gradient (CG)
and preconditioned CG with BPX preconditioner (PCG). The stopping criterion is
$\frac{\|\mathbf{A}\mathbf{U} - \mathbf{F}\|_2}{\|\mathbf{F}\|_2} < 10^{-6}$.}
\label{tab:FL-BPX-bisect}
\end{table}
\subsection{Applications and related problems} \label{sec:applications_linear}
Numerical methods for fractional diffusion models have been extensively studied recently. Let us mention some applications on linear elliptic problems of the approach treated in this section:
\begin{itemize}[leftmargin=*]
\item {\em A posteriori error analysis and adaptivity:} The reduced regularity of solutions of \eqref{eq:weak_linear} and the high computational cost of assembling the stiffness matrix $\mathbf{A}$ motivate the pursuit of suitable adaptive finite element methods. A posteriori error estimates of residual type have been proposed and analyzed in \cite{AiGl17,faustmann2019quasi,gimperlein2019space,NvPZ}, and gradient-recovery based estimates in \cite{zhao2017adaptive}.
Adaptivity is however a topic of current research \cite{faustmann2019optimal,faustmann2019quasi,gimperlein2019space}.
\item {\em Eigenvalue problems:} The fractional eigenvalue problem arises, for example, in quantum mechanics problems in which the Brownian-like quantum paths are replaced by L\'evy-like ones in the Feynman path integral \cite{Laskin}. Reference \cite{BdPM} studies conforming finite element approximations and applies the Babu\v{s}ka-Osborn theory \cite{BO91}, thereby obtaining convergence rates for eigenfunctions (in the energy and in the $L^2$-norms) and eigenvalues. Other methods, implemented on one-dimensional problems, include finite differences \cite{duo2015computing} and matrix methods \cite{ghelardoni2017matrix, zoia2007fractional}.
\item {\em Control problems:} Finite element methods for linear-quadratic optimal control problems involving the fractional Laplacian \eqref{eq:def_Laps} have been studied recently. In these problems, the control may be located inside \cite{d2018priori} or outside the domain \cite{AntilKhatriWarma18}.
\item {\em Non-homogeneous Dirichlet conditions:}
A mixed method for the non-homogeneous Dirichlet problem for the integral fractional Laplacian was proposed in \cite{AcBoHe18}. Such a method is based on weak enforcement of the Dirichlet condition and using a suitable non-local derivative \cite{DROV17} as a Lagrange multiplier. To circumvent approximating the nonlocal derivative, \cite{AntilKhatriWarma18} proposed approximations of the non-homogeneous Dirichlet problem by a suitable Robin exterior value problem.
\end{itemize}
\subsection{Nonconforming FEM: Dunford-Taylor approach} \label{sec:dunford-taylor}
We finally report on a finite element approach for \eqref{eq:Dirichlet} proposed in \cite{BoLePa17} and based on the Fourier representation of the ${\widetilde H}^s(\Omega)$-inner product:
\[
(v,w)_s = \int_{{\mathbb{R}^d}} |\xi|^s \mathscr{F}(v)(\xi) \, |\xi|^s \overline{\mathscr{F}(w)(\xi)} \, d\xi
= \int_{{\mathbb{R}^d}} \mathscr{F}((-\Delta)^s v)(\xi) \, \overline{\mathscr{F}(w)(\xi)} \, d\xi.
\]
This expression can be equivalently written as
\begin{equation} \label{eq:DT}
(v,w)_s = \frac{2\sin(s\pi)}{\pi}
\int_0^\infty t^{1-2s} \int_{{\mathbb{R}}^d} \big( -\Delta
(I-t^2\Delta)^{-1} v \big) w \, dx dt.
\end{equation}
To see this, use Parseval's formula to obtain
\[
\int_{{\mathbb{R}}^d} \big( -\Delta
(I-t^2\Delta)^{-1} v \big) w \, dx = \int_{{\mathbb{R}}^d}
\frac{|\xi|^2}{1+t^2|\xi|^2} \mathscr{F}(v)(\xi) \overline{\mathscr{F}(w)(\xi)} d\xi,
\]
followed by the change of variables $z=t|\xi|$, which converts the repeated integrals
in the expression for $(v,w)_s$ into separate integrals, one of them being
\[
\int_0^\infty \frac{z^{1-2s}}{1+z^2} dz = \frac{\pi}{2\sin(s \pi)}.
\]
Although identity \eqref{eq:DT} is not an integral representation of the operator
$(-\Delta)^s$, but rather of the bilinear form $(\cdot, \cdot)_s$, we regard it as a Dunford-Taylor representation.
To set up this formal calculation in the correct functional framework, given
$u\in {\widetilde H}^s(\Omega) \subset L^2({\mathbb{R}}^d)$ and $t>0$, let $v(u,t) \in H^{2+s}({\mathbb{R}}^d)$
be the solution to
$v - t^2 \Delta v = -u$ in ${\mathbb{R}^d}$, or equivalently $v = -(I-t^2\Delta)^{-1}u$.
Therefore, $\Delta v = t^{-2} (v+u)$ and
\[
(u,w)_s = \frac{2\sin(s \pi)}{\pi} \int_0^\infty t^{-1-2s} \langle u + v(u,t) , w \rangle \; dt \quad \forall u,w \in {\widetilde H}^s(\Omega).
\]
This representation is the starting point of a three-step numerical method
\cite{BoLePa17}.
\begin{itemize}[leftmargin=*]
\item {\em Sinc quadrature:} the change of variables $t = e^{-y/2}$ yields
\[ (u,w)_s = \frac{\sin(s\pi)}{\pi} \int_{-\infty}^\infty e^{sy} \langle u + v(u,t(y)) , w \rangle \; dy.
\]
Thus, given an integer $N>0$ and quadrature points $y_j = jk$ with uniform spacing $k>0$ (balancing truncation and discretization errors leads to $k \approx N^{-1/2}$),
the sinc quadrature approximation $Q_s(u,w)$ of $(\cdot,\cdot)_s$ is given by
\[
Q_s(u,w) = \frac{k \sin(s\pi)}{\pi} \sum_{j = -N}^{N} e^{sy_j}
\langle u + v(u,t(y_j)), w \rangle.
\]
\item {\em Domain truncation:} We stress that, in spite of $u$ being supported in $\Omega$, $v(u,t)$ is supported in all of ${\mathbb{R}^d}$ for all $t$; hence, some truncation is required. The method from \cite{BoLePa17} considers, for given $M>0$,
a family of balls $B^M(t)$ that contain $\Omega$ and whose radius depends on $M$ and $t$ and can be computed a priori.
\item {\em Finite element approximation:} Finally, a standard finite element discretization on $B^M(t)$ is performed. This requires meshes that fit $\Omega$ and $B^M(t) \setminus \Omega$ exactly -- a non-trivial task; let us denote the discrete spaces on $\Omega$ and $B^M(t)$ by $\mathbb{V}_h$ and $\mathbb{V}_h^M$, respectively. Given $\psi \in L^2({\mathbb{R}^d})$, $t>0$ and $M>0$, we define $v_h^M = v_h^M(\psi, t)\in\mathbb{V}_h^M$ to be the unique solution of
\[
\int_{B^M(t)} v_h^M w_h + t^2 \nabla v_h^M \cdot \nabla w_h \; dx = - \int_{B^M(t)} \psi w_h \; dx \quad \forall \, w_h \in \mathbb{V}_h^M.
\]
\end{itemize}
The fully discrete bilinear form reads:
\[
a_{\mathcal{T}_h}^{N,M}(u_h, w_h) := \frac{\sin(s\pi)}{N \pi} \sum_{j =-
N}^{N} e^{sy_j} \langle
u_h + v_h^M(u_h,t(y_j)), w_h
\rangle \quad\forall \, u_h, w_h \in \mathbb{V}_h.
\]
Using a Strang's type argument to quantify the consistency errors generated
by the three steps above, one obtains the a priori estimate
\cite[Theorem 7.7]{BoLePa17}
\[
\| u - u_h \|_{{\widetilde H}^s(\Omega)} \le C \left( e^{-c\sqrt{N}} + e^{-cM} + h^{\beta-s} |\log h| \right) \| u \|_{{\widetilde H}^\beta(\Omega)},
\]
where $\beta \in (s,3/2)$. Choosing $\beta = s + 1/2 -\varepsilon$, which is consistent with \Cref{T:reg_grubb} and \Cref{T:Besov_regularity}, $M = \mathcal{O}(|\log h|)$ and $N = \mathcal{O}(|\log h|^2)$ gives the convergence rate
\[
\| u - u_h \|_{{\widetilde H}^s(\Omega)} \le C h^{\min\{s,\frac12\}} |\log h| \, \|f\|_{L^2(\Omega)}.
\]
This is similar to the rate obtained in \Cref{T:conv_linear} for quasi-uniform meshes. To the best of the authors' knowledge, implementation of this
approach over graded meshes, while feasible in theory, has not yet been pursued in practice.
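To illustrate the sinc quadrature step in isolation, note that on the Fourier side the representation \eqref{eq:DT} reduces, mode by mode, to the scalar identity
\[
|\xi|^{2s} = \frac{\sin(s\pi)}{\pi} \int_{-\infty}^\infty e^{sy} \, \frac{e^{-y}|\xi|^2}{1+e^{-y}|\xi|^2} \, dy.
\]
The following Python sketch (ours) applies the sinc quadrature with the spacing $k = N^{-1/2}$ mentioned above to this identity; the printed errors decay as $N$ grows:
\begin{verbatim}
import numpy as np

def sinc_symbol(xi, s, N):
    """Sinc quadrature of the Dunford-Taylor identity acting on the
    Fourier symbol; should reproduce |xi|^(2s)."""
    k = 1.0 / np.sqrt(N)            # spacing balancing both error sources
    y = k * np.arange(-N, N + 1)
    t2 = np.exp(-y)                 # t(y)^2 with t = e^(-y/2)
    integrand = np.exp(s * y) * t2 * xi**2 / (1.0 + t2 * xi**2)
    return np.sin(s * np.pi) / np.pi * k * np.sum(integrand)

for N in (16, 64, 256):
    print(N, abs(sinc_symbol(2.0, 0.3, N) - 2.0**0.6))
\end{verbatim}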
\section{Fractional minimal graphs} \label{sec:NMS}
In this section we discuss the fractional minimal graph problem. The line of study of this nonlinear fractional problem, that can be regarded as a nonlocal version of the classical Plateau problem, and other related problems, began with the seminal works by Imbert \cite{Imbert09} and Caffarelli, Roquejoffre and Savin \cite{CaRoSa10}.
As a motivation for the notion of fractional minimal sets, we show how the fractional perimeter arises in the study of a nonlocal version of the Ginzburg-Landau energy, extending a well-known result for classical minimal sets \cite{ModicaMortola77}. Let $\Omega \subset {\mathbb{R}^d}$ be a bounded set with Lipschitz boundary, $\varepsilon > 0$ and define the energy
\[
\mathcal{J}_{\varepsilon}[u;\Omega] := \frac{\varepsilon}{2} \int_\Omega |\nabla u(x)|^2 \; dx + \frac{1}{\varepsilon}\int_{\Omega} W(u(x)) \; dx,
\]
where $W(t) = \frac14(1-t^2)^2$ is a double-well potential. Then, for every sequence $\{ u_\varepsilon \}$ of minimizers of $\mathcal{J}_{\varepsilon}[u;\Omega]$ with uniformly bounded energies there exists a subsequence $\{ u_{\varepsilon_k} \}$ such that
\[ u_{\varepsilon_k} \to \chi_E - \chi_{E^c} \quad \mbox{in } L^1(\Omega),
\]
where $E$ is a set with minimal perimeter in $\Omega$.
We now consider a different regularization term: given $s \in (0,1/2)$, we set
\[
\mathcal{J}^s_{\varepsilon}[u;\Omega] := \frac{1}{2}\iint_{Q_{\Omega}} \frac{|u(x) - u(y)|^2}{|x-y|^{d+2s}} \; dx dy + \frac{1}{\varepsilon^{2s}} \int_{\Omega} W(u(x)) \; dx,
\]
where ${Q_{\Omega} = \left( {\mathbb{R}^d} \times {\mathbb{R}^d} \right) \setminus \left( {\Omega}^c \times {\Omega}^c \right)}$ as in \eqref{eq:defofinnerprod}.
The first term in the definition of $\mathcal{J}^s_\varepsilon$ involves the $H^s({\mathbb{R}^d})$-norm of $u$, except that the interactions over $\Omega^c \times \Omega^c$ are removed; for a minimization problem in $\Omega$, these are indeed fixed. As proved in \cite{SaVa12Gamma}, for every sequence $\{ u_\varepsilon \}$ of minimizers of $\mathcal{J}^s_{\varepsilon}$ with uniformly bounded energies there exists a subsequence $\{ u_{\varepsilon_k} \}$ such that
\[ u_{\varepsilon_k} \to \chi_E - \chi_{E^c} \quad \mbox{in } L^1(\Omega) \quad \mbox{as } \varepsilon_k \to 0^+.
\]
However, instead of minimizing the perimeter in $\Omega$, here the set $E$ is an $s$-minimal set in $\Omega$, because it minimizes the so-called fractional perimeter $P_s(E,\Omega)$ among all measurable sets $F \subset {\mathbb{R}^d}$ such that $F \setminus \Omega = E \setminus \Omega$. In \cite{CaRoSa10}
this notion of fractional perimeter (also known as nonlocal perimeter) was proposed, and nonlocal minimal set problems were studied.
We refer to \cite[Chapter 6]{BucurValdinoci16} and \cite{CoFi17} for nice introductory expositions to the topic and applications.
\subsection{Formulation of the problem and regularity}
Our goal is to compute fractional minimal graphs, that is, to study the nonlocal minimal surface problem under the restriction of the domain being a cylinder. Concretely, from now on we consider $\Omega' = \Omega \times \mathbb{R}$ with $\Omega \subset {\mathbb{R}^d}$ being a bounded Lipschitz domain. We assume that the exterior datum is the subgraph of some uniformly bounded function $g: {\mathbb{R}^d} \setminus \Omega \to \mathbb{R}$,
\begin{equation} \label{E:Def-E0}
E_0 := \left\{ (x', x_{d+1}) \colon x_{d+1} < g(x'), \; x' \in {\mathbb{R}^d} \setminus \Omega \right\}.
\end{equation}
The fractional minimal graph problem consists in finding a {\em locally} $s$-minimal set $E$ in $\Omega'$ such that $E \setminus \Omega' = E_0$. We refer to \cite{Lomb16Approx} for details on why the notion of local $s$-minimality is the `correct' one. Under the conditions described above, it can be shown that minimal sets must be subgraphs, that is,
\begin{equation}\label{E:Def-E}
E \cap \Omega' = \left\{ (x', x_{d+1}) \colon x_{d+1} < u(x'), \; x' \in \Omega \right\}
\end{equation}
for some function $u$ (cf. \cite[Theorem 4.1.10]{Lombardini-thesis}).
We shall refer to such a set $E$ as a {\em nonlocal minimal graph} in $\Omega$.
In order to find nonlocal minimal graphs, we introduce the space
\[
\mathbb{V}^g := \{ v \colon {\mathbb{R}^d} \to \mathbb{R} \; \colon \; v\big|_\Omega \in W^{2s}_1(\Omega), \ v = g \text{ in } {\Omega}^c\}
\]
(we write $\mathbb{V}^0$ whenever $g \equiv 0$) and, considering the weight function $F_s \colon \mathbb{R} \to \mathbb{R}$,
\begin{equation} \label{E:def_Fs}
F_s(\rho) := \int_0^\rho \frac{\rho-r}{\left( 1+r^2\right)^{(d+1+2s)/2}} dr,
\end{equation}
we define the energy functional
\begin{equation}\label{E:NMS-Energy-Graph}
I_s[u] := \iint_{Q_{\Omega}} F_s\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{1}{|x-y|^{d+2s-1}} \;dxdy.
\end{equation}
In \cite[Chapter 4]{Lombardini-thesis} it is shown that finding nonlocal minimal graphs is equivalent to minimizing the energy $I_s$ over $\mathbb{V}^g$.
Existence of a solution $u$ follows from the existence of locally $s$-minimal sets \cite{Lomb16Approx}, while uniqueness is a consequence of $I_s$ being strictly convex. We also point out that for any function $v \colon {\mathbb{R}^d} \to \mathbb{R}$, its energy $I_s[v]$ is closely related to certain ${W^{2s}_1}$-seminorms \cite[Lemma 2.5]{BortLiNoch2019NMG-convergence}:
\begin{equation} \label{eq:norm_bound} \begin{aligned}
& |v|_{W^{2s}_1(\Omega)} \le C_1 + C_2 I_s[v],
& I_s[v] \le C_3 \iint_{Q_{\Omega}} \frac{|v(x)-v(y)|}{|x-y|^{d+2s}} dxdy .
\end{aligned} \end{equation}
To give a clearer picture of the nonlocal minimal graph problem, we compare it to its classical counterpart. Given a bounded domain $\Omega \subset {\mathbb{R}^d}$ with sufficiently smooth boundary, and $g \colon \partial \Omega \to \mathbb{R}$, the classical Plateau problem consists in finding $u \colon \Omega \to \mathbb{R}$ that minimizes the graph surface area functional
\begin{equation} \label{E:MS-Energy-Graph}
I [u] := \int_\Omega \sqrt{1 + |\nabla u (x)|^2 } \, dx
\end{equation}
among those functions $u \in H^1(\Omega)$ satisfying $u = g$ on $\partial \Omega$. By taking the first variation of $I$, it follows that the minimizer $u$ satisfies
\begin{equation} \label{eq:classical-MS}
\int_\Omega \frac{\nabla u(x) \cdot \nabla v (x)}{\sqrt{1 + |\nabla u (x)|^2 }} \, dx = 0 \quad \forall \, v \in H^1_0(\Omega).
\end{equation}
The left hand side in \eqref{eq:classical-MS} consists of an $H^1$-inner product between $u$ and $v$, with a possibly degenerate weight that depends on $u$. For the nonlocal problem, after taking the first variation of $I_s$ in \eqref{E:NMS-Energy-Graph}, we obtain that $u$ is a minimizer if and only if
\begin{equation}\label{E:WeakForm-NMS-Graph}
a_u(u,v) = 0 \quad \mbox{ for all } v \in \mathbb{V}^0,
\end{equation}
where the bilinear form $a_u \colon \mathbb{V}^g \times \mathbb{V}^0 \to \mathbb{R}$ is given by
\begin{equation} \label{E:def-a}
a_u(w,v) := \iint_{Q_{\Omega}} \widetilde{G}_s\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{d+1+2s}}dx dy,
\end{equation}
where $\widetilde{G}_s(\rho) = \int_0^1 (1+ \rho^2 r^2)^{-(d+1+2s)/2} dr$, which satisfies $\rho \widetilde{G}_s(\rho) = G_s(\rho) = F'_s(\rho)$. Similarly to \eqref{eq:classical-MS}, the left hand side $a_u(u,v)$ in \eqref{E:WeakForm-NMS-Graph} is a weighted $H^{s+\frac{1}{2}}$-inner product with a possibly degenerate weight depending on $u$.
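The identity $\rho \widetilde{G}_s(\rho) = G_s(\rho)$ follows from the change of variables $r \mapsto \rho r$; a quick numerical sanity check in Python (ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

d, s = 2, 0.25
p = (d + 1 + 2 * s) / 2  # kernel exponent appearing in F_s, G_s

G = lambda rho: quad(lambda r: (1 + r**2)**(-p), 0, rho)[0]          # G_s
Gt = lambda rho: quad(lambda r: (1 + rho**2 * r**2)**(-p), 0, 1)[0]  # G~_s

rho = 3.0
print(rho * Gt(rho), G(rho))  # both sides of rho * G~_s(rho) = G_s(rho)
\end{verbatim}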
As for the regularity of nonlocal minimal graphs, the following result is stated in \cite[Theorem 1.1]{CabreCozzi2017gradient} and builds on the arguments in \cite{Barrios2014bootstrap, Figalli2017regularity}.
\begin{theorem}[interior smoothness of nonlocal minimal graphs] \label{thm:smoothness}
Assume $E \subset \mathbb{R}^{d+1}$ is a locally $s$-minimal set in $\Omega' = \Omega \times \mathbb{R}$, given by the subgraph of a measurable function $u$ that is bounded in an open set $\Lambda \supset \Omega$. Then, $u \in C^\infty (\Omega)$.
\end{theorem}
\begin{remark}[stickiness]\label{rem:stickiness}
\Cref{thm:smoothness} does not address boundary regularity. By using \eqref{eq:norm_bound}, it can be easily proved that $u \in W^{2s}_1(\Omega)$ but, because $2s < 1$, this does not even guarantee that $u$ has a trace on $\partial\Omega$. In fact, nonlocal minimal graphs can develop discontinuities across $\partial\Omega$. Furthermore, nonlocal minimal graphs generically exhibit this {\em sticky} behavior \cite{DipiSavinVald17, DipiSavinVald19}.
\end{remark}
\subsection{Finite element discretization}
In this section, we review the finite element discretization of the nonlocal minimal graph problem proposed in \cite{BortLiNoch2019NMG-convergence} and discuss its convergence and error estimates.
For simplicity, we assume that $\mbox{supp}(g) \subset \Lambda$ for some bounded set $\Lambda$. As before, we take a family $\{\mathcal{T}_h \}_{h>0}$ of conforming, simplicial and shape-regular meshes on $\Lambda$, which we impose to mesh $\Omega$ exactly. To account for non-zero boundary data, we make a slight modification on \eqref{eq:FE_space} to define the discrete spaces
\[
\mathbb{V}_h := \{ v \in C(\Lambda) \colon v|_T \in \mathcal{P}_1 \; \forall T \in \mathcal{T}_h \},
\]
and we define
\[
\mathbb{V}_h^g := \{ v_h \in \mathbb{V}_h \colon \ v_h|_{\Lambda \setminus \Omega} = \Pi_h^c g\}, \quad
\mathbb{V}_h^0 := \{ v_h \in \mathbb{V}_h \colon \ v_h|_{\Lambda \setminus \Omega} = 0\},
\]
where $\Pi_h^c$ denotes the Cl\'ement interpolation operator in $\Omega^c$.
With the notation introduced above, the discrete problem seeks $u_h \in \mathbb{V}^g_h$ such that
\begin{equation}\label{E:WeakForm-discrete}
a_{u_h}(u_h, v_h) = 0 \quad \mbox{for all } v_h \in \mathbb{V}^0_h.
\end{equation}
Existence and uniqueness of solutions to this discrete problem follow directly from \eqref{eq:norm_bound} and the strict convexity of $I_s$. To prove the convergence of the finite element scheme, the approach in \cite{BortLiNoch2019NMG-convergence} consists in proving that the discrete energy is consistent and afterwards using a compactness argument.
\begin{theorem}[convergence for the nonlocal minimal graph problem] \label{thm:consistency}
Let $s \in (0,1/2)$, $\Omega$ be a bounded Lipschitz domain and $g$ be uniformly bounded and satisfying $\mbox{supp}(g) \subset \Lambda$ for some bounded set $\Lambda$. Let $u$ and $u_h$ be, respectively, the solutions to \eqref{E:WeakForm-NMS-Graph} and \eqref{E:WeakForm-discrete}. Then, it holds that
\[
\lim_{h \to 0} I_s[u_h] = I_s[u] \quad
\mbox{ and } \quad
\lim_{h \to 0} \| u - u_h \|_{W^{2r}_1(\Omega)} = 0 \quad \forall r \in [0,s).
\]
\end{theorem}
The theorem above has the important feature of guaranteeing convergence without any regularity assumption on the solution. However, it does not offer convergence rates. We now show estimates for a geometric notion of error that mimics the one analyzed in \cite{FierroVeeser03} for the classical Plateau problem (see also \cite{BaMoNo04, DeDzEl05}). Such a notion of error is given by
\begin{equation}\label{eq:def-e}
\begin{aligned}
e^2(u,u_h) & :=\int_{\Omega} \ \Big| \widehat{\nu}(\nabla u) - \widehat{\nu}(\nabla u_h) \Big|^2 \;\frac{Q(\nabla u) + Q(\nabla u_h)}{2} \ dx , \\
& = \int_{\Omega} \ \Big( \widehat{\nu}(\nabla u) - \widehat{\nu}(\nabla u_h) \Big) \cdot \ \Big( \nabla (u-u_h), 0 \Big) dx ,
\end{aligned}
\end{equation}
where $Q(\pmb{a}) = \sqrt{1+|\pmb{a}|^2}$, $\widehat{\nu}(\pmb{a}) = \frac{(\pmb{a},-1)}{Q(\pmb{a})}$.
Because $\widehat{\nu}(\nabla u)$ is the normal unit vector on the graph of $u$, the quantity $e(u,u_h)$ is a weighted $L^2$-discrepancy between the normal vectors.
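Given gradients of $u$ and $u_h$ sampled at quadrature points, $e(u,u_h)$ is straightforward to evaluate; a short Python sketch (ours, with hypothetical argument names):
\begin{verbatim}
import numpy as np

def geometric_error(grad_u, grad_uh, weights):
    """Discrete version of e(u, u_h): weighted L2 discrepancy of unit
    normals, from gradients of shape (n, d) sampled at quadrature
    points with weights of shape (n,)."""
    Q = lambda g: np.sqrt(1.0 + np.sum(g**2, axis=1))
    nu = lambda g: np.hstack([g, -np.ones((g.shape[0], 1))]) / Q(g)[:, None]
    diff2 = np.sum((nu(grad_u) - nu(grad_uh))**2, axis=1)
    return np.sqrt(np.sum(weights * diff2 * 0.5 * (Q(grad_u) + Q(grad_uh))))
\end{verbatim}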
For the nonlocal minimal graph problem, \cite{BortLiNoch2019NMG-convergence} introduced
\begin{equation}\label{eq:def-es}
{
\begin{aligned}
e_s(u,u_h) := \left( \widetilde C_{d,s} \iint_{Q_{\Omega}} \Big( G_s\left(d_u(x,y)\right) - G_s\left(d_{u_h}(x,y)\right) \Big) \frac{d_{u-u_h}(x,y)}{|x-y|^{d-1+2s}} dxdy \right)^{1/2} ,
\end{aligned}}
\end{equation}
where $G_s(\rho) = F_s'(\rho)$, the constant $\widetilde C_{d,s} = \frac{1 - 2s}{\alpha_{d}}$, $\alpha_{d}$ is the volume of the $d$-dimensional unit ball and $d_v$ is the difference quotient of the function $v$,
\begin{equation}\label{eq:def-d_u}
d_v(x,y) := \frac{v(x)-v(y)}{|x-y|}.
\end{equation}
In \cite{BortLiNoch2019NMG-convergence}, this novel quantity $e_s(u,u_h)$ is shown to be connected with a notion of nonlocal normal vector, and its asymptotic behavior as $s\to 1/2^-$ is established.
\begin{theorem}[asymptotics of $e_s$] \label{Thm:asymptotics-es}
For all $u,v \in H^1_0(\Lambda)$, we have
\[
\lim_{s \to {\frac{1}{2}}^-} e_s(u,v) = e(u,v).
\]
\end{theorem}
A simple `Galerkin orthogonality' argument allows one to derive an error estimate for $e_s(u,u_h)$ (cf. \cite[Theorem 5.1]{BortLiNoch2019NMG-convergence}).
\begin{theorem}[geometric error]\label{thm:geometric-error}
Under the same hypothesis as in \Cref{thm:consistency}, it holds that
\begin{equation} \label{eq:geometric-error} \begin{aligned}
e_s(u,u_h) &\le C (d,s) \, \inf_{v_h \in \mathbb{V}_h^g} \left( \iint_{Q_{\Omega}} \frac{|(u-v_h)(x)-(u-v_h)(y)|}{|x-y|^{d+2s}} dxdy \right) ^{1/2}.
\end{aligned} \end{equation}
\end{theorem}
Therefore, to obtain convergence rates with respect to $e_s(u,u_h)$, it suffices to prove interpolation estimates for the nonlocal minimizer. Although minimal graphs are expected to be discontinuous across the boundary, we still expect $u \in BV(\Lambda)$ in general. In that case, the error estimate \eqref{eq:geometric-error} leads to
\[
e_s(u,u_h) \le C(d,s) \, h^{1/2-s} |u|^{1/2}_{BV(\Lambda)}.
\]
\subsection{Numerical experiments}
We conclude by presenting several numerical experiments and discussing the behavior of nonlocal minimal graphs. The first example we compute is on a one-dimensional domain, and is proposed and theoretically studied in \cite[Theorem 1.2]{DipiSavinVald17} as an illustration of stickiness phenomena.
\begin{example}[stickiness in $1$D]\label{Ex:1d_stickiness}
Let $\Omega = (-1,1)$ and $g(x) = \textrm{sign}(x)$ for $x \in \Omega^c$. Discrete nonlocal minimal graphs for $s\in\{0.1, 0.25,0.4\}$ are shown in \Cref{F:Ex-stickiness} (left).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.4\linewidth]{1d_stickiness.eps}
%
\includegraphics[width=0.4\linewidth]{2d_annulus.png}
\end{center}
\caption{\small Left: plot of $u_h$ for $s = 0.1, 0.25, 0.4$ (from left to right) in \Cref{Ex:1d_stickiness}. Right: plot of $u_h$ for uniform meshes with $h = 2^{-5}$ in \Cref{Ex:2d_annulus}. }
\label{F:Ex-stickiness}
\end{figure}
Although our method requires discrete functions to be continuous across the boundary of $\Omega$, the computed $1$D graphs clearly suggest a stickiness phenomenon.
In addition, the plots indicate that stickiness becomes more pronounced as $s$ approaches $0$.
\end{example}
\begin{example}[stickiness in an annulus]\label{Ex:2d_annulus}
Let $\Omega = B_1 \setminus \overline{B}_{1/2} \subset \mathbb{R}^2$, where $B_r$ denotes an open ball with radius $r$ centered at the origin, and let $g = \chi_{B_{1/2}}$.
The discrete nonlocal minimal graph for $s = 0.25$ is plotted in \Cref{F:Ex-stickiness} (right).
In this example, stickiness is clearly observed on $\partial B_{1/2}$, while the stickiness on $\partial B_1$ is less noticeable.
\end{example}
\begin{example}[effect of $s$]\label{Ex:2d_circle}
Let $\Omega = B_1 \subset \mathbb{R}^2$, $g = \chi_{B_{3/2}\setminus\Omega}$. \Cref{F:NMS-Ex_multis} shows minimizers for several values of $s$.
As $s \to 1/2$, the nonlocal minimal graphs get closer to the classical minimal graph, which is trivially constant in $\Omega$. On the other hand, as $s$ decreases we observe a stronger jump across $\partial\Omega$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.85\linewidth]{2d_circle.png}
\end{center}
\caption{\small Plot of $u_h$ for $s = 0.01, 0.1, 0.25, 0.4, 0.49$ (from left to right) with uniform $h = 2^{-5}$ in \Cref{Ex:2d_circle}.}
\label{F:NMS-Ex_multis}
\end{figure}
\end{example}
\wl{We conclude with brief comments on computational aspects of the examples presented above. In all of our experiments for fractional minimal graphs, we use Newton's method to solve the nonlinear equation \eqref{E:WeakForm-discrete}. Although the Jacobian matrix $\mathbf{A}_{u_k}$ in the iterative process corresponds to an $H^{s+1/2}$-inner product $a_{u_k} (w,v)$ with degenerate weights, our experiments indicate that its condition number behaves like
$$
\kappa(\mathbf{A}_{u_k}) \approx \mathcal{O}\left( N^{2(s+\frac{1}{2})/d} \right)
$$
for iterates $u_k$ whose gradients blow up near the boundary $\partial\Omega$, quasi-uniform meshes and dimensions $d=1,2$. This behavior is the same as for linear fractional diffusion of order $s+\frac{1}{2}$. However, the degenerate weight does bring additional difficulties in preconditioning.
Another point worth mentioning is that the enforcement of the Dirichlet condition in our discrete space $\mathbb{V}_h^g$ requires the discrete functions $u_h$ to be continuous across $\partial \Omega$. Due to the stickiness phenomenon in \Cref{rem:stickiness}, this may not be true for the solution $u$ of the minimal graph problem. Fortunately, this does not preclude convergence in the `trace blind' fractional Sobolev spaces $W^{2s}_1(\Omega)$, and we are still able to capture discontinuities across the boundary in practice. While permitting discontinuities in the discrete space would be desirable, it conflicts with the use of Newton's method to solve \eqref{E:WeakForm-discrete}, because the bilinear form $a_u(w,v)$ in \eqref{E:def-a} may then not be well defined. The question of how to solve the nonlinear equation \eqref{E:WeakForm-discrete} efficiently while allowing discontinuities across $\partial \Omega$ is still under investigation.
}
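For completeness, here is a generic damped Newton loop of the kind used above, with the assembly routines passed as callbacks (Python sketch of ours; not the exact solver behind the reported experiments):
\begin{verbatim}
import numpy as np

def newton_graph(U0, residual, jacobian, tol=1e-8, max_iter=50):
    """Damped Newton iteration for the discrete equation F(U) = 0, where
    residual(U) returns the vector (a_{u_h}(u_h, phi_i))_i and
    jacobian(U) the matrix of its derivatives; both callbacks are
    assumed given."""
    U = U0.copy()
    for _ in range(max_iter):
        F = residual(U)
        nF = np.linalg.norm(F)
        if nF < tol:
            break
        dU = np.linalg.solve(jacobian(U), F)
        t = 1.0
        while np.linalg.norm(residual(U - t * dU)) > (1 - 0.5 * t) * nF \
                and t > 1e-10:
            t *= 0.5  # backtracking line search on the residual norm
        U -= t * dU
    return U
\end{verbatim}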
\section{Introduction} \label{sec:intro}
Diffusion, which is one of the most common physical processes, is the net movement of particles from a region of higher concentration to a region of lower concentration. The assumption that particles undergo Brownian motion leads to classical models of diffusion \cite{Albert}, which have been well studied for a long time.
Fick's first law states that the magnitude of the diffusive flux is proportional to the concentration gradient; by now, it is clear that such a constitutive relation is a questionable model for numerous phenomena \cite{MetzlerKlafter}. When the associated underlying stochastic process is not given by Brownian motion, the diffusion is regarded as \emph{anomalous}. In particular, anomalous superdiffusion refers to situations that can be modeled using fractional spatial derivatives or fractional spatial differential operators.
Integer-order differentiation operators are local because the derivative of a function at a given point depends only on the values of the function in an infinitesimal neighborhood of it. In contrast, fractional-order derivatives are nonlocal, integro-differential operators. A striking example of such an operator is the fractional Laplacian of order $s \in (0,1)$, which we will denote by $(-\Delta)^s$, and is given by
\begin{equation} \label{eq:def_Laps}
(-\Delta)^s u (x) := C_{d,s} \, \text{P.V.} \int_{\mathbb{R}^d} \frac{u(x)-u(y)}{|x-y|^{d+2s}} \; dy , \quad C_{d,s} := \frac{2^{2s} s \Gamma(s+\frac{d}{2})}{\pi^{d/2} \Gamma(1-s)} .
\end{equation}
We refer to \cite{Valdinoci} for an illustration of how the heat equation involving the fractional Laplacian arises from a simple random walk with jumps. The nonlocal structure of the operator \eqref{eq:def_Laps} is apparent: to evaluate $(-\Delta)^s u$ at a spatial point, information involving all spatial points is needed.
This work deals with fractional diffusion. Our main goal is to review finite element methods (FEMs) to approximate solutions of elliptic problems involving $(-\Delta)^s$ or related operators on bounded domains. We shall not discuss methods for the {\em spectral} fractional Laplacian; the surveys \cite{BBNOS18,Lischke_et_al18} offer a comparison between such an operator and the fractional Laplacian \eqref{eq:def_Laps} and review other numerical methods.
\rhn{We point out that the fractional Laplacian \eqref{eq:def_Laps} of order $s \in (0,1)$ is the infinitesimal generator of a $2s$-stable L\'evy process. In this regard, problems on a bounded domain with homogeneous Dirichlet boundary conditions arise when the process is killed upon exiting the domain.
}
Throughout this work, we assume that $\Omega$ is a bounded \rhn{Lipschitz domain. Whenever additional assumptions on $\partial\Omega$ are required, we shall state them explicitly.
Even though there is a wide variety of numerical methods for fractional-order problems available in the literature \cite{Lischke_et_al18}, in this work we shall focus on piecewise linear finite element methods as in \cite{BBNOS18}. We emphasize the interplay between regularity, including boundary behavior, and approximability. In fact, the convergence rates achievable for the fractional elliptic PDEs discussed below, both linear and nonlinear, are limited by the presence of an algebraic boundary layer regardless of the regularity of $\partial\Omega$ and the polynomial degree for shape regular elements.
}
The paper is organized as follows. \Cref{sec:linear} deals with the homogeneous Dirichlet problem for the fractional Laplacian in $\Omega$; we discuss regularity of solutions and address both theoretical and computational aspects of conforming finite element discretizations. We also comment on some recent applications of this approach and on an alternative
nonconforming FEM based on a Dunford-Taylor representation of the weak form of $(-\Delta)^s$.
Afterwards, in \Cref{sec:obstacle} we address the obstacle problem for the fractional Laplacian. To derive optimal convergence estimates, we focus on weighted Sobolev regularity, where the weight is a power of the distance to the boundary of $\Omega$. These estimates follow from a precise quantification of boundary regularity of solutions and how solutions detach from the obstacle. Finally, \Cref{sec:NMS} deals with fractional minimal graphs, which in fact are subgraphs that minimize a suitable nonlocal perimeter. We formulate a variational form for this problem, which is nonlinear and degenerate. We report on approximation properties of a conforming finite element scheme, and show convergence rates with respect to a novel geometric quantity. The paper concludes with a couple of computational explorations of the behavior of fractional minimal graphs for $d=2$.
\input{linear}
\input{obstacle}
\input{minimal_surfaces}
\section{Fractional Obstacle Problem} \label{sec:obstacle}
In this section we review finite element methods for the solution of the obstacle problem for the integral fractional Laplacian which, from now on, we shall simply refer to as the fractional obstacle problem.
The fractional obstacle problem appears, for example, in optimal stopping times for jump processes. In particular, it is used in the modeling of the rational price of perpetual American options \cite{ContTankov}. More precisely, if $u$ represents the rational price of a perpetual American option whose asset prices are modeled by a L\'evy process $X_t$, and $\chi$ denotes the payoff function, then $u$ solves a fractional obstacle problem with obstacle $\chi$.
An a posteriori error analysis of approximations of variational inequalities involving integral operators on arbitrary bounded domains was performed in \cite{NvPZ}. We also comment on two recent works related to the approach we review here. Reference \cite{BG18} deals with finite element discretizations to obstacle problems involving finite and infinite-horizon nonlocal operators. The experiments shown therein were performed on one-dimensional problems with uniform meshes, and indicate convergence with order $h^{1/2}$ in the energy norm. A theoretical proof of that convergence order was obtained in \cite{BoLeSa18}, where approximations using the
approach discussed in \Cref{sec:dunford-taylor} were considered. We also refer to \cite{gimperlein2019space} for computational comparisons between adaptive strategies and uniform and graded discretizations in two-dimensional problems.
\subsection{Variational formulation} \label{sec:formulation_obstacle}
As before, we assume that $\Omega \subset {\mathbb{R}^d}$ is an open and bounded domain and, for the sake of applying weighted regularity estimates, we assume that $\Omega$ has a Lipschitz boundary and satisfies the exterior ball condition.
Given $s \in (0,1)$ and functions $f: \Omega \to \mathbb{R}$ and $\chi : \overline\Omega \to \mathbb{R}$, with $\chi < 0$ on $\partial\Omega$, the obstacle problem is a constrained minimization problem on ${\widetilde H}^s(\Omega)$ associated with a quadratic functional. Defining the admissible convex set
\[
{\mathcal{K}} := \left\{ v \in {\widetilde H}^s(\Omega): v \geq \chi \mbox{ a.e.\ in } \Omega \right\},
\]
the solution to the fractional obstacle problem is $u=\textrm{argmin}_{v\in{\mathcal{K}}} \mathcal{J}(v)$
where
\[
\mathcal{J}(v) := \frac12 \| v \|_{{\widetilde H}^s(\Omega)}^2 - \langle f, v \rangle.
\]
Existence and uniqueness of solutions are standard. Taking the first variation of $\mathcal{J}$, we deduce that such a minimizer $u \in {\mathcal{K}}$ solves the variational inequality
\begin{equation}
\label{eq:obstacle}
(u,u-v)_s \leq \langle f, u-v \rangle \quad \forall v \in {\mathcal{K}}.
\end{equation}
It can be shown \cite{musina2017variational} that, if $f \in L^p(\Omega)$ for $p > d/(2s)$, then the solution to the obstacle problem is indeed a continuous function, and that it satisfies the complementarity condition
\begin{equation}
\label{eq:complementarity}
\min\left\{ \lambda, u-\chi \right\} = 0 \quad\mbox{ a.e.\ in } \Omega, \mbox{ where } \quad \lambda := (-\Delta)^s u - f.
\end{equation}
For our discussion, we assume that $f$ is such that the solution is defined pointwise, and consequently, we define the coincidence (or contact) and non-coincidence sets,
\[
\Lambda := \{ x \in \Omega : u(x) = \chi(x) \}, \quad N := \Omega \setminus \Lambda.
\]
The complementarity condition \eqref{eq:complementarity} can be succinctly expressed as $\lambda \ge 0$ in $\Lambda$ and $\lambda = 0$ in $N$. The set $\partial \Lambda$, where the solution detaches from the obstacle, is the {\it free boundary.}
\subsection{Regularity} \label{sec:regularity_obstacle}
The following regularity results for solutions to the fractional obstacle problem are instrumental for error analysis. We recall our assumption that the obstacle $\chi$ is a continuous function and strictly negative on $\partial\Omega$:
\begin{equation} \label{eq:cond_chi}
\varrho := \textrm{dist}\left( \{ \chi>0\}, \partial \Omega \right) > 0.
\end{equation}
Furthermore, we shall assume that $f \ge 0$. Heuristically, these assumptions should guarantee that the behavior of solutions near $\partial\Omega$ is dictated by a linear problem and that the nonlinearity is confined to the interior of the domain. Finally, to derive regularity estimates, we assume that the data satisfy
\begin{equation}
\label{eq:defofcalF}
\chi \in C^{2,1}(\Omega), \quad f \in \mathcal{F}_s(\overline\Omega) = \begin{dcases}
C^{2,1-2s+\epsilon}(\overline\Omega), & s\in \left(0,\frac12\right], \\
C^{1,2-2s+\epsilon}(\overline\Omega), & s \in \left(\frac12,1\right),
\end{dcases}
\end{equation}
where $\epsilon>0$ is sufficiently small, so that $1-2s+\epsilon$ is not an integer.
Under these conditions, Caffarelli, Salsa and Silvestre \cite{CaSaSi08} proved that the solution to the problem posed in the whole space (with suitable decay conditions at infinity) is of class $C^{1,s}({\mathbb{R}}^d)$. It is worth examining the limiting cases $s=1$ and $s=0$. The former corresponds to the classical obstacle problem whose solutions are of class $C^{1,1}({\mathbb{R}}^d)$. The latter reduces to $\min\{u-\chi,u-f\}=0$ whose solutions are just of class $C^{0,1}({\mathbb{R}}^d)$. The regularity of \cite{CaSaSi08} is thus a natural intermediate result.
We emphasize that deriving interior regularity estimates for \eqref{eq:obstacle} from this result, which is valid for problems posed in ${\mathbb{R}^d}$, is not as straightforward as for classical problems. Indeed, the nonlocal structure of $(-\Delta)^s$ implies that, if $0\le\eta\le1$ is a smooth cut-off function such that $\eta=1$ in $\{ \chi > 0 \}$, then
\[
(-\Delta)^s (\eta u) \ne \eta (-\Delta)^s u \quad \mbox{in } \{ \eta = 1 \}.
\]
To overcome this difficulty, reference \cite{BoNoSa18} proceeds as follows. Given a set $D$ such that $\{\chi>0\}\subset D \subset \Omega$, one can define a cutoff $\eta$ such that $D \subset \{ \eta = 1 \}$ and split the space roughly into a region where $\eta = 1$, a region where $\eta = 0$ and a transition region. In the first two regions, $(-\Delta)^s(\eta u)$ essentially coincides with a convolution operator with kernel $|z|^{-d-2s}$ but regularized at the origin, while the latter region is contained in the non-contact set $N$ and allows one to invoke interior regularity estimates for linear problems involving $(-\Delta)^s$. An important outcome is that solutions to fractional obstacle problems are more regular near the free boundary ($C^{1,s}$) than near the domain boundary ($C^{0,s}$). This is critical for approximation.
Alternatively, one may invoke the Caffarelli-Silvestre extension \cite{CS:07} to obtain local regularity estimates \cite{CaSaSi08}. Since the extension problem involves a degenerate elliptic equation with a Muckenhoupt weight of class $A_2$ that depends only on the extended variable, one needs to combine fine estimates for degenerate equations with the translation invariance in the $x$-variable of the Caffarelli-Silvestre weight.
Once the interior regularity of solutions is established, one can invoke the H\"older boundary estimates for linear problems \cite{RosOtonSerra} and perform an argument similar to the one in \cite{AcosBort2017fractional} to deduce weighted Sobolev regularity estimates \cite{BoNoSa18}.
\begin{theorem}[weighted Sobolev regularity for the obstacle problem]
\label{T:regularity_obstacle}
Let $\Omega$ be a bounded Lipschitz domain satisfying the exterior ball condition, $s \in (0,1)$, and $\chi \in C^{2,1}(\Omega)$ satisfying \eqref{eq:cond_chi}.
Moreover, let $0 \leq f \in \mathcal{F}_s(\overline\Omega)$ and $u \in {\widetilde H}^s(\Omega)$ be the solution to \eqref{eq:obstacle}. For every $\varepsilon >0$ we have that $u \in {\widetilde H}^{s+1-2\varepsilon}_{1/2-\varepsilon}(\Omega)$ with the estimate
\begin{equation} \label{eq:regularity_obstacle}
\|u\|_{{\widetilde H}^{s+1-2\varepsilon}_{1/2-\varepsilon}(\Omega)} \leq \frac{C(\chi, s, d, \Omega, \varrho, \| f \|_{\mathcal{F}_s(\overline\Omega)})}\varepsilon.
\end{equation}
\end{theorem}
We have stated the estimate in \Cref{T:regularity_obstacle} in weighted spaces because we are interested in the application of that result for finite element schemes over graded meshes. With the same arguments as in \cite{BoNoSa18}, it can be shown that the solution to the fractional obstacle problem \eqref{eq:obstacle} satisfies $u \in {\widetilde H}^{s+1/2-\varepsilon}(\Omega)$ and
\[
\|u\|_{{\widetilde H}^{s+1/2-\varepsilon}(\Omega)} \leq \frac{C(\chi, s, d, \Omega, \varrho, \| f \|_{\mathcal{F}_s(\overline\Omega)})}\varepsilon.
\]
A similar result, for the obstacle problem for a class of integro-differential operators, was obtained in \cite{BoLeSa18}. In the case of purely fractional diffusion (i.e., problems without a second-order differential operator), the estimate builds on \cite{Grubb} (cf. \Cref{T:reg_grubb}). Therefore, we point out that using \Cref{T:Besov_regularity}, the requirement that $\Omega$ be a $C^\infty$ domain in \cite[Cases A and B]{BoLeSa18} can be relaxed to $\Omega$ being Lipschitz.
\subsection{Finite element discretization} \label{sec:FE_obstacle}
We consider the same finite element setting as in \Cref{sec:FE_linear}: let $\mathbb{V}_h$ be linear Lagrangian finite element spaces as in \eqref{eq:FE_space} over a family of conforming and simplicial meshes ${\mathcal{T}_h}$. An instrumental tool in the analysis we review here is the interpolation operator $\Pi_h : L^1(\Omega) \to \mathbb{V}_h$ introduced in \cite{ChenNochetto} that, besides satisfying \eqref{eq:interpolation}, is {\it positivity preserving}: it satisfies $\Pi_h v \ge 0$ for all $v \ge 0$. Such a property yields that, for every $v \in {\mathcal{K}}$,
\begin{equation} \label{eq:admissible_projection}
\Pi_h v \ge \Pi_h \chi \ \mbox{ in } \Omega.
\end{equation}
We therefore define the discrete admissible convex set
\[
{\mathcal{K}}_h := \left\{ v_h \in \mathbb{V}_h: v_h \geq \Pi_h \chi \mbox{ in } \Omega \right\},
\]
and consider the discrete fractional obstacle problem: find $u_h \in {\mathcal{K}}_h$ such that
\begin{equation}
\label{eq:obstacle_discrete}
(u_h,u_h-v_h)_s \leq \langle f, u_h-v_h \rangle \quad \forall v_h \in {\mathcal{K}}_h.
\end{equation}
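Once a basis of $\mathbb{V}_h$ is fixed, \eqref{eq:obstacle_discrete} becomes a finite-dimensional variational inequality with a symmetric positive definite (and dense) stiffness matrix, and can be solved, for instance, by projected gradient iterations. The sketch below is ours and purely illustrative: it assumes that the fractional stiffness matrix \texttt{A}, the load vector \texttt{b} and the nodal obstacle values \texttt{chi} have already been assembled.
\begin{verbatim}
# Hedged sketch: solve the discrete variational inequality
#   u >= chi,  (A u - b) . (v - u) >= 0  for all v >= chi,
# by projected gradient descent; the assembly of A, b, chi is assumed.
import numpy as np

def projected_gradient(A, b, chi, tol=1e-10, maxit=100000):
    tau = 1.0 / np.linalg.norm(A, 2)   # step below 2/lambda_max: convergent
    u = np.maximum(chi, 0.0)           # feasible initial guess
    for _ in range(maxit):
        u_new = np.maximum(chi, u - tau * (A @ u - b))
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
\end{verbatim}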
We illustrate the delicate interplay between regularity and approximability next. We exploit that $u$ is both globally of class $C^{0,s}(\overline{\Omega})$, via graded meshes as in the linear problem, and locally of class $C^{1,s}(\Omega)$.
First, we split the error as
$\| u-u_h \|_{{\widetilde H}^s(\Omega)}^2 = (u - u_h, u - I_h u)_s + (u - u_h, I_h u - u_h)_s$, use the Cauchy-Schwarz inequality and the interpolation estimate \eqref{eq:global_weighted_interpolation} to deduce
\[
\frac12 \|u-u_h \|_{{\widetilde H}^s(\Omega)}^2 \le
C h^{2(1-2\varepsilon)} \|u \|^2_{\widetilde H^{1+s-2\varepsilon}_{1/2-\varepsilon}(\Omega)} + (u - u_h, I_h u - u_h)_s.
\]
This is a consequence of Theorem \ref{T:weighted_regularity} and the use of graded meshes with parameter $\mu=2$ as in the linear theory of Section \ref{sec:FE_linear}. For the remaining term we integrate by parts and utilize the discrete variational inequality \eqref{eq:obstacle_discrete} to arrive at
\[
\begin{aligned}
( u - u_h &, I_h u - u_h )_s \le \int_\Omega (I_h u - u_h) \big((-\Delta)^su -f \big) \\
& = \int_\Omega \Big[ (u -\chi) + \underbrace{(I_h \chi - u_h)}_{\le 0} + \big(I_h (u - \chi) - (u -\chi) \big) \Big] \Big(\underbrace{(-\Delta)^su -f}_{\ge 0} \Big).
\end{aligned}
\]
Invoking the complementarity condition \eqref{eq:complementarity}, we obtain
\[
(u - u_h, I_h u - u_h)_s \le \sum_{T \in {\mathcal{T}_h}} \int_T \big( I_h (u - \chi) - (u - \chi) \big) \big((-\Delta)^su -f \big).
\]
We next observe that the integrand is nonzero only on elements $T$ in the vicinity of the free boundary, namely those $T$ on which $u\ne\chi$ and $(-\Delta)^su \ne f$. Exploiting that $u\in C^{1,s}(\Omega)$, we infer that $(-\Delta)^su -f \in C^{0,1-s}(\Omega)$, whence
\[
\big| \big( (-\Delta)^s u -f \big) \big( I_h (u - \chi) -
(u - \chi) \big) \big| \le C h^{2} .
\]
This yields the following optimal energy error estimate. We refer to \cite{BoNoSa18} for details.
\begin{theorem}[error estimate for obstacle problem]
\label{thm:conv_rates}
Let $u$ and $u_h$ be the solutions to \eqref{eq:obstacle} and \eqref{eq:obstacle_discrete}, respectively. Assume that $\chi \in C^{2,1}(\Omega)$ satisfies \eqref{eq:cond_chi} and that $f \in \mathcal{F}_s(\overline\Omega)$. If $d=2$, $\Omega$ is a convex polygon, and the meshes satisfy the grading hypothesis \eqref{eq:H} with $\mu = 2$, then we have that
\[ \begin{aligned}
& \|u-u_h\|_{{\widetilde H}^s(\Omega)} \leq C h |\log h| & (s \ne 1/2), \\
& \|u-u_h\|_{\widetilde H^{1/2}(\Omega)} \leq C h |\log h|^2 & (s = 1/2),
\end{aligned} \]
where $C>0$ depends on $\chi$, $s$, $d$, $\Omega$, $\varrho$ and $\| f \|_{\mathcal{F}_s(\overline\Omega)}$.
\end{theorem}
We conclude this section with a computational example illustrating the qualitative behavior of solutions. Further experiments can be found in \cite{BoNoSa18}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.85\textwidth]{multi_obstacle.png}
\caption{Discrete solutions to the fractional obstacle problem for $s=0.1$ (left), $s=0.5$ (center) and $s=0.9$ (right) computed with graded meshes with $h = 2^{-5}$. Top: lateral view. Bottom: top view, with the discrete contact set highlighted.
} \label{fig:qualitative}
\end{figure}
\begin{example}[qualitative behavior]
Consider problem \eqref{eq:obstacle}, posed in the unit ball $B_1 \subset \mathbb{R}^2$, with $f = 0$ and the obstacle
\[
\chi (x_1, x_2) = \frac12 - \sqrt{ \left(x_1 - \frac14\right)^2 + \frac12 x_2^2}.
\]
\Cref{fig:qualitative} shows solutions for $s \in \{0.1, 0.5, 0.9 \}$ on meshes graded according to \eqref{eq:H} with $\mu =2$. The coincidence set $\Lambda$, which contains a neighborhood of the singular point $(1/4,0)$, is displayed in color in the bottom view. It can be observed that, while for $s=0.9$ the discrete solution resembles what is expected for the classical obstacle problem, the solution for $s=0.1$ is much flatter in the non-coincidence set $N$. Because $f \ge 0$, the solution $u$ satisfies $u \ge 0$. Therefore, the solution $u$ approaches $\chi_+$ in the limit $s \to 0$, while $u$ is expected to touch the obstacle only at the singular point and detach immediately in the diffusion limit $s=1$. The discrete nonlinear system has been solved using a semi-smooth Newton method.
\end{example}
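The semi-smooth Newton solver mentioned in the example can be realized as a primal-dual active set iteration on the discrete counterpart of the complementarity condition \eqref{eq:complementarity}. The following sketch is ours and glosses over implementation details (assembly, boundary nodes); it is a standard scheme of this type rather than the exact solver used for \Cref{fig:qualitative}.
\begin{verbatim}
# Hedged sketch of a primal-dual active set (semi-smooth Newton) iteration
# for the discrete complementarity system min(lambda, u - chi) = 0 with
# lambda = A u - b; assembly of A, b, chi is assumed.
import numpy as np

def pdas(A, b, chi, c=1.0, maxit=50):
    u = np.maximum(chi, 0.0)
    lam = A @ u - b
    for _ in range(maxit):
        active = lam + c * (chi - u) > 0        # predicted contact nodes
        u_new = np.empty_like(u)
        u_new[active] = chi[active]             # u = chi on the active set
        free = ~active
        rhs = b[free] - A[np.ix_(free, active)] @ chi[active]
        u_new[free] = np.linalg.solve(A[np.ix_(free, free)], rhs)
        lam = A @ u_new - b                     # lambda = 0 on the free set
        u = u_new
        if np.array_equal(active, lam + c * (chi - u) > 0):
            break                               # active set has stabilized
    return u, lam
\end{verbatim}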
\section{Introduction}
Many arithmetic functions of interest are multiplicative, for example, the M\"obius function, the Dirichlet characters, or the Liouville function, which is equal to $(-1)^k$ on integers with $k$ prime factors, counted with multiplicity. The behavior of such functions is far from being known with complete accuracy, even if partial results have been proven. This difficulty can be encoded by the corresponding Dirichlet series, which involve the Riemann zeta function. For example, the partial sum, up to $x$,
of the M\"obius function is known to be
negligible with respect to $x$, and it is conjectured to be negligible with respect to $x^{r}$ for all $r > 1/2$: the first statement can quite easily be proven to be equivalent to the prime number theorem, whereas the second is equivalent to the Riemann hypothesis.
It has been noticed that the same bound $x^r$ for all $r > 1/2$ is obtained if we take the partial sums of i.i.d., bounded and centered random variables. This suggests the naive idea to compare the M\"obius function on square-free integers with i.i.d. random variables on $\{-1,1\}$. However, a major difference between the two situations is that in the random case, we lose the multiplicativity of the function.
A less naive randomized version of the M\"obius function can be obtained as follows: one takes i.i.d. uniform random variables on $\{-1,1\}$ at prime numbers, the value $0$ at prime powers of order larger than or equal to $2$, and one completes the definition by multiplicativity.
In \cite{W}, Wintner considers a completely multiplicative function with i.i.d. values at primes, uniform on $\{-1,1\}$ (which corresponds to a randomized version of the Liouville function rather than the M\"obius function), and proves that we have almost surely the same bound $x^{r}$ ($r > 1/2$) for the partial sums, as for the sums of i.i.d. random variables, or for the partial sums of M\"obius function if the Riemann hypothesis is true. The estimate in \cite{W} has been refined by Hal\'asz in \cite{Hal}.
In order to get more general results, it can be useful to consider complex-valued random multiplicative functions. For example, it has been proven by Bohr and Jessen \cite{BJ} that for $\sigma >1/2$,
the law of $\zeta(\sigma + iTU)$, for $U$ uniformly distributed on $[0,1]$, tends to a limiting random variable when $T$ goes to infinity.
This limiting random variable can be written as $\sum_{n \geq 1} X_n n^{-\sigma}$, where $(X_n)_{n \geq 1}$ is a random completely multiplicative function such that $(X_p)_{p \in \mathcal{P}}$ are i.i.d. uniform on the unit circle, $\mathcal{P}$ denoting, in all the sequel of the present paper, the set of prime numbers. The fact that the series just above converges is a direct consequence (by partial summation) of the analog of the result of Wintner for the partial sums of $(X_n)_{n \geq 1}$: one can prove that $\sum_{n \leq x} X_n = o(x^r)$ for $r > 1/2$.
This discussion shows that it is often much less difficult to prove accurate results for random multiplicative functions than for the arithmetic functions which are usually considered. In some informal sense, the arithmetic difficulties are diluted into the randomization, which is much simpler to deal with.
In the present paper, we study another example of results which are stronger and less difficult to prove in the random setting than in the deterministic one.
The example we detail in this article is motivated by the following question, initially posed in the deterministic setting: for $k \geq 1$, what can we say about the distribution of the $k$-uples
$(\mu(n+1), \dots, \mu(n+k))$, or $(\lambda(n+1), \dots, \lambda(n+k))$, where $\mu$ and $\lambda$ are the M\"obius and the Liouville functions,
$n$ varies from $1$ to $N$, and $N$ tends to infinity? This question is only very partially solved. One knows (it is essentially a consequence of the prime number theorem) that for $k = 1$, the proportion of integers such that $\lambda$ is equal to $1$ or $-1$ tends to $1/2$. For the M\"obius function, the limiting proportions are $3/\pi^2$ for $1$ or $-1$ and $1 - (6/\pi^2)$ for $0$.
It has been proven by Hildebrand \cite{Hil} that for $k=3$, the eight possible values
of $(\lambda(n+1), \lambda(n+2), \lambda(n+3))$ appear infinitely often. This result has been recently improved by Matom\"aki, Radziwill and Tao \cite{MRT}, who prove that these eight values
appear with a positive lower density: in other words, for all $(\epsilon_1, \epsilon_2, \epsilon_3) \in \{-1,1\}^3$,
$$\underset{N \rightarrow \infty}{\lim \inf} \frac{1}{N} \sum_{n=1}^N
\mathds{1}_{\lambda(n+1) = \epsilon_1,
\lambda(n+2) = \epsilon_2,
\lambda(n+3) = \epsilon_3} > 0.$$
A similar result is proven for the nine possible values of $(\mu(n+1), \mu(n+2))$.
Such results remain open for the M\"obius function and $k \geq 3$, or the Liouville function and $k \geq 4$. A conjecture by Chowla \cite{Ch} states that for all $k \geq 1$, each possible pattern of $(\lambda(n+1), \dots, \lambda(n+k))$ appears with asymptotic density $2^{-k}$.
In the present paper, we prove results similar to this conjecture for random completely multiplicative functions $(X_n)_{n \geq 1}$. The random functions we will consider take i.i.d. values on the unit circle on prime numbers. Their distribution is then
entirely determined by the distribution of $X_2$. The two particular cases we will study in the largest part of the paper are the following: $X_2$ is uniform on the unit circle $\mathbb{U}$, and $X_2$ is uniform on the set $\mathbb{U}_q$ of $q$-th roots of unity, for $q \geq 2$. In these cases, we will show the following results: for all $k \geq 1$, and for all $n \geq 1$ large enough depending on $k$, the variables $X_{n+1}, \dots, X_{n+k}$ are independent, and exactly i.i.d. uniform on the unit circle if $X_2$ is uniform. Moreover, the empirical distribution
$$\frac{1}{N} \sum_{n=1}^N \delta_{(X_{n+1}, \dots, X_{n+k})}$$
tends almost surely to the uniform distribution on $\mathbb{U}^k$ if $X_2$ is uniform on $\mathbb{U}$, and to the uniform distribution on $\mathbb{U}_q^k$ if $X_2$ is uniform on $\mathbb{U}_q$.
In particular, the analog of Chowla's conjecture holds almost surely in the case where $X_2$ is uniform on $\{-1,1\}$. We have also an estimate on the speed of convergence of the empirical measure: in the case of the uniform distribution
on $\mathbb{U}_q$, each of the
$q^k$ possible patterns for $(X_{n+1}, \dots, X_{n+k})$ almost surely occurs with a proportion $q^{-k} + O(N^{-t})$ for $n$ running between $1$ and $N$, for all $t < 1/2$. We have a similar result in the uniform case, if the test functions we consider are sufficiently smooth.
It would be interesting to have similar results when the distribution of $X_2$ on the unit circle is not specified. For $k \geq 2$, we are unfortunately not able to show similar results, but we nevertheless can prove that the empirical distribution of $X_n$ almost surely converges to a limiting distribution for any distribution of $X_2$ on the unit circle. We specify this distribution, which is always uniform on $\mathbb{U}$ or uniform on $\mathbb{U}_q$ for some $q \geq 1$, and in the latter case, we give an estimate of the rate of convergence. This rate corresponds to a negative power of $\log N$, which is much slower than what we obtain when $X_2$ is uniform on $\mathbb{U}_q$.
The techniques we use in our proofs are elementary in general, mixing classical tools in probability theory and number theory. However, a part of our arguments need to use deep results on diophantine equations, in order to bound the number and the size of their solutions.
The sequel of the paper is organized as follows. In Sections \ref{uniform} and \ref{uniformq}, we study the law
of $(X_{n+1}, \dots, X_{n+k})$ for $n$ large depending on $k$, first in the case where $X_2$ is uniform on $\mathbb{U}$, then in the case where $X_2$ is uniform on $\mathbb{U}_q$. In Section \ref{uniform2}, we study the empirical measure of
$(X_{n+1}, \dots, X_{n+k})$ in the case of $X_2$ uniform on $\mathbb{U}$. In the proof of the convergence of this empirical measure, we need to estimate the second moment of sums of the form $\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j}$. The problem of estimating moments of order different from two for such sums is discussed in Section \ref{moments}. The proof of convergence of the empirical measure in the case of uniform variables on $\mathbb{U}_q$ is given in Section \ref{uniform2q}.
Finally, we consider the case of a general distribution for $X_2$ in Section \ref{general}.
\section{Independence in the uniform case} \label{uniform}
In this section, we suppose that $(X_p)_{p \in
\mathcal{P}}$ are i.i.d. uniform random variables on the unit circle. For convenience, we will extend our multiplicative function to positive rational numbers by setting $X_{p/q} := X_p/X_q$: the result is independent
of the choice of $p$ and $q$, and we have $X_r X_s = X_{rs}$ for all rationals $r, s > 0$. Moreover,
$X_r$ is uniform on the unit circle for all positive rational $r \neq 1$.
In this section, we will show that for fixed $k \geq 1$,
$(X_{n+1},\dots, X_{n+k})$ are independent if $n$ is sufficiently large. The following result gives
a criterion for such independence:
\begin{lemma} \label{1.1}
For all $n, k \geq 1$, the variables $(X_{n+1},\dots, X_{n+k})$ are independent if and only if
$\log(n+1), \dots, \log(n+k)$ are linearly independent on $\mathbb{Q}$.
\end{lemma}
\begin{proof}
Since the variables $(X_{n+1}, \dots, X_{n+k})$ are uniform on the unit circle, they are independent if and only
if
$$\mathbb{E}[X_{n+1}^{m_1} \dots X_{n+k}^{m_k}] = 0$$
for all $(m_1, \dots, m_k) \in \mathbb{Z}^k \backslash \{(0,0, \dots, 0)\}$.
This equality is equivalent to
$$ \mathbb{E}[X_{(n+1)^{m_1} \dots (n+k)^{m_k}}] = 0,$$
i.e. $$(n+1)^{m_1} \dots (n+k)^{m_k} \neq 1$$
or
\begin{equation} m_1 \log (n+1) + \dots + m_k \log (n+k) \neq 0. \label{1}
\end{equation}
\end{proof}
We then get the following result:
\begin{proposition} \label{1.2}
The variables $(X_{n+1}, \dots, X_{n+k})$ are i.i.d. as soon as $n \geq (100k)^{k+1} $. In particular, for $k$ fixed, this property
is true for all but finitely many $n$.
\end{proposition}
\begin{remark}
The same result is proven in \cite{T}, Theorem 3 (i),
with an asymptotically better bound, namely $n \geq e^{ck \log \log (k+2)/ \log (k+1)}$ where $c >0$ is a constant.
However, their proof uses a deep result by Shorey \cite{Sh} on linear forms in the logarithms of algebraic numbers, involving technical tools by Gelfond and Baker, whereas our proof is elementary. Moreover, the constant $c$ involved in \cite{T} is not given, even if it is explicitly computable.
\end{remark}
\begin{proof}
Let us assume that we have a linear dependence \eqref{1} between $\log (n+1), \dots, \log(n+k)$: necessarily $k \geq 2$.
Moreover, the integers $n+j$ for which $m_j \neq 0$ cannot be divisible by a prime $p$ larger than $k$: otherwise
this factor would remain in the product
$\prod_{\ell=1}^k (n+\ell)^{m_\ell}$, since none of the $n+\ell$ for $\ell \neq j$ can be divisible by $p$, and then the product cannot be equal to $1$.
We can rewrite the dependence as follows:
$$\log(n+j) = \sum_{\ell \in A} r_{\ell} \log (n+\ell),$$
for a subset $A$ of $\{1, \dots, k\} \backslash \{j\}$ and for $R := (r_{\ell})_{\ell \in A} \in \mathbb{Q}^A$. Let us assume that the cardinality $|A|$ is as small as possible. Taking the decomposition in prime factors, we get for all
$p \in \mathcal{P}$,
$$v_p(n+j) = \sum_{\ell \in A} v_p(n+\ell) r_{\ell}.$$
If $M := (v_p(n+\ell))_{p \in \mathcal{P}, \ell \in A}$, $V := (v_p(n+j))_{p \in \mathcal{P}}$, then we can write
these equalities in a matricial way $V = M R$. The minimality of $|A|$ ensures that the matrix $M$ has rank $|A|$.
Moreover, since all the prime factors of
$n+1, \dots, n+k$ involved in the dependence are at most $k$,
all the rows of $M$ indexed by prime numbers larger than $k$ are identically zero, and then the rank $|A|$ of $M$ is at most
$\pi(k)$, the number of primes smaller than or equal to $k$.
Moreover, we can extract a subset $\mathcal{Q}$ of $\mathcal{P}$ of cardinality $|A|$ such that the restriction
$M^{(\mathcal{Q})}$ of $M$ to the rows with indices in $\mathcal{Q}$ is invertible.
We have with obvious notation: $V^{(\mathcal{Q})} = M^{(\mathcal{Q})} R$, and then by Cramer's rule,
the entries of $R$ can be written as the quotients of determinants of matrices obtained from $M^{(\mathcal{Q})}$
by replacing one column by $V^{(\mathcal{Q})}$, by the determinant of $M^{(\mathcal{Q})}$.
All the entries involved in these matrices are $p$-adic valuations of integers smaller than or equal to $n+k$, so
they are at most $\log (n+k)/ \log 2$. By Hadamard's inequality, the absolute values of the
determinants are smaller than or equal to
$([\log (n+k)/\log(2)]^{|A|}) |A|^{|A|/2}$. Since $|A| \leq \pi(k)$, we deduce, after multiplying by $\det
(M^{(\mathcal{Q})})$,
that there exists a linear dependence between $\log (n+1), \dots, \log (n+k)$ involving only
integers of absolute value at most $D := [\sqrt{\pi(k)} \log (n+k)/\log2]^{\pi(k)}$:
let us keep the notation of \eqref{1} for this dependence.
Let $q$ be the smallest nonnegative integer such that $\sum_{j=1}^k j^q m_j \neq 0$: from the fact that the
Vandermonde matrices are invertible, one deduces that $q \leq k-1$. Using the fact that
$$\left| \log(n+j) - \left( \log n + \sum_{r = 1}^{q} (-1)^{r-1} \frac{j^r}{r n^r} \right) \right|
\leq \frac{j^{q+1}}{(q+1) n^{q+1}},$$
we deduce, by writing the dependence above:
$$ \left| \sum_{j=1}^k j^q m_j \right| \frac{1}{q n^q} \leq \sum_{j=1}^k \frac{|m_j| j^{q+1}}{(q+1) n^{q+1}}$$
if $q \geq 1$ and
$$ \left| \sum_{j=1}^k m_j \right| \log n \leq \sum_{j=1}^k \frac{|m_j| j}{n}$$
if $q = 0$.
Since the first factor in the left-hand side of these inequalities is a non-zero integer, it is at least $1$.
From the bounds we have on the $m_j$'s, we deduce
$$ \frac{1}{q n^q} \leq \frac{ D k^{q+2}}{(q+1) n^{q+1}}$$
for $q \geq 1$ and
$$ \log n \leq \frac{ D k^2}{n}.$$
for $q = 0$. Hence
$$1 \leq \left(\frac{q}{q+1} \vee \frac{1}{\log n} \right) \frac{ D k^{q+2}}{n} \leq \frac{ D k^{q+2}}{n}$$
if $n \geq 3$, which implies, since $q \leq k-1$,
$$n \leq D k^{k+1} \leq [\sqrt{\pi(k)} \log (n+k)/\log2]^{\pi(k)} k^{k+1}.$$
If $n \geq k \vee 3$, we deduce
$$ 2 n \leq 2 [\sqrt{\pi(k)} \log (2n)/\log2]^{\pi(k)} k^{k+1},$$
i.e.
$$ \frac{2n}{[\log (2n)]^{\pi(k)}} \leq 2 [\sqrt{\pi(k)}/\log2]^{\pi(k)} k^{k+1}
.$$
Now, one has obviously $\pi(k) \leq 2k/3$ for all $k \geq 2$, and then
$\sqrt{\pi(k)}/\log 2 \leq \sqrt{2 k}$ for all integers $k \geq 2$, and more accurately, it is known that
$(\pi(k) \log k )/k$, which tends to $1$ at infinity by the prime number theorem, reaches its maximum at $k = 113$.
Hence,
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\leq 2 (2 k)^{c k /\log k} k^{k+1}$$
where
$$c = \frac{1}{2} \, \frac{\pi(113) \log 113}{113} \leq 0.63$$
and then
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\leq 2 ( 2^{0.63 k/ \log 2}) k^{0.63 k/\log k} k^{k+1} \leq 2 e^{1.26 k} k^{k+1}
\leq (e^{1.26} k)^{k+1} \leq (3.6 k)^{k+1}.$$
Let us assume that $2n \geq (100k)^{k+1}$. The function
$x \mapsto x/\log^{\pi(k)}(x)$ is increasing
for $x \geq e^{\pi(k)}$. Moreover, by
studying the function $x \mapsto
\log \log (100 x) / \log (x+1)$ for $x \geq 2$, we check that $\log (100k) \leq (k+1)^{1.52}$ for all $k \geq 2$. Hence, since $\pi(k) \leq \pi(k+1)$,
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\geq \frac{(100k)^{k+1}}{
((k+1)\log (100k))^{\pi(k)}}
\geq \frac{(100k)^{k+1}}{(k+1)^{2.52 \pi(k+1)}}
$$ $$\geq \frac{(100k)^{k+1}}{(k+1)^{(2.52)(1.26) (k+1)/\log(k+1)}}
\geq (100 k e^{-3.18})^{k+1} \geq (4 k)^{k+1},$$
which contradicts the previous inequality.
Hence,
$$n \leq 2n \leq (100k)^{k+1},$$
and this bound is of course also available for $n \leq k \vee 3$.
\end{proof}
This result implies that theoretically, for fixed $k$, one can find all the values of $n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent by brute-force computation. In practice, the bound we have obtained is far from optimal, and is too poor to be directly usable except for very small values of $k$, for which a more careful reasoning can solve the problem directly. Here is an example for $k=5$:
\begin{proposition}
For $n \geq 1$, the variables $(X_{n+1},X_{n+2}, X_{n+3}, X_{n+4}, X_{n+5})$ are independent except if $n \in \{1,2,3,4,5,7\}$.
\end{proposition}
\begin{proof}
If $\prod_{j=1}^5 (n+j)^{m_j} = 1$ with integers $m_1, \dots m_5$ not all equal to zero,
then $m_j = 0$ as soon as $n+j$ has a prime factor larger than or equal to $5$: otherwise, this prime factor cannot be cancelled by the factors $(n+k)^{m_k}$ for $k \neq j$. Hence, the values of $n+j$ such that $m_j \neq 0$ have only prime factors $2$ and $3$, and at most one of them has both factors since it should then be divisible by $6$. Moreover, if $n \geq 4$, there can be at most one power of $2$ and one power of $3$ among $n+1, \dots, n+5$.
One deduces that dependence is only possible if among $n+1, \dots, n+5$, there are three numbers, respectively of the form
$2^k, 3^\ell, 2^r.3^s$, for integers $k, \ell, r, s > 0$. The quotient between two of these integers is between $1/2$ and $2$ since we here assume $n \geq 4$. Hence, $2^k \geq 2^r.3^s /2 \geq
2^r$ and then $k \geq r$. Similarly,
$3^{\ell} \geq 2^r.3^s /2 \geq 3^s$, which implies $\ell \geq s$. The numbers $2^k$ and $2^r.3^s$ are then both divisible by
$2^r$; since they differ by at most $4$,
$r \leq 2$. The numbers $3^{\ell}$ and
$2^r.3^s$ are both divisible by $3^s$, and then $s \leq 1$. Therefore,
$2^r.3^s \leq 12$ and $n \leq 11$.
By checking case by case the values of $n$ smaller than or equal to $11$, we get the desired result.
\end{proof}
The results above give an upper bound, for fixed $k$, of the maximal value of
$n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent. By considering two consecutive squares and their geometric mean, whose logarithms are linearly dependent, one deduces the lower bound
$ ([k/2]-1)^2 - 1 \geq (k-1)(k-5)/4$ for the maximal $n$.
As written in a note by Dubickas \cite{D}, this bound can be improved to a quantity equivalent to $(k/4)^3$,
by considering the identity:
$$(n^3 - 3n - 2)(n^3 - 3n +2) n^3
= (n^3 - 4n)(n^3 - n)^2.$$
In \cite{D}, as an improvement of a result of \cite{T}, it is also shown that for all $\epsilon > 0$, the lower bound
$e^{\log^2 k /[(4 + \epsilon) \log \log k]}$ occurs for infinitely many values of $k$.
A computer search
gives, for $k$ between $3$ and $13$, and
$n \leq 1000$, the following largest values for which we do not have independent variables: 1, 5, 7, 14, 23, 24, 47, 71, 71, 71, 239. For example, if $k = 13$ and $n = 239$, the five integers $240, 243, 245, 250, 252$ have only the four prime factors $2, 3, 5, 7$, so we necessarily have a dependence, namely:
$$240^{65} \cdot 243^{31} \cdot
245^{55} \cdot 250^{-40} \cdot
252^{-110} = 1.$$
It would remain to check if there are dependences for $n > 1000$.
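Such a search is straightforward to implement; a possible version (ours), which tests the $\mathbb{Q}$-linear dependence of $\log(n+1), \dots, \log(n+k)$ through the rank of the matrix of $p$-adic valuations, as in the proof of Lemma \ref{1.1}, is the following.
\begin{verbatim}
# Sketch of the computer search described above: n yields dependent
# variables iff the matrix of p-adic valuations of n+1, ..., n+k has
# rank < k, i.e. iff log(n+1), ..., log(n+k) are Q-linearly dependent.
from sympy import factorint, Matrix

def is_dependent(n, k):
    facs = [factorint(n + j) for j in range(1, k + 1)]
    primes = sorted(set().union(*facs))
    M = Matrix([[f.get(p, 0) for p in primes] for f in facs])
    return M.rank() < k

def last_dependent_n(k, N):
    return max((n for n in range(1, N + 1) if is_dependent(n, k)),
               default=None)

# last_dependent_n(5, 1000) returns 7, in accordance with the values above
\end{verbatim}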
\section{Independence in the case of roots of unity} \label{uniformq}
We now suppose that
$(X_p)_{p \in \mathcal{P}}$ are i.i.d., uniform on the set of $q$-th roots of unity, $q \geq 1$ being a fixed integer. If $q = 2$, we get symmetric Bernoulli random variables.
For all integers $s \geq 2$, we will denote
by $\mu_{s,q}$ the largest divisor $d$ of
$q$ such that $s$ is a $d$-th power.
The analog of
Lemma \ref{1.1} in the present setting is the following:
\begin{lemma}
For $n, k \geq 1$, the variables
$(X_{n+1}, \dots, X_{n+k})$ are all
uniform on the set of $q$-th roots of unity if and only if $\mu_{n+j,q} = 1$ for all $j$ between $1$ and $k$. They are
independent if and only if
the only $k$-uple $(m_1, \dots, m_k)$,
$0 \leq m_j < q/ \mu_{n+j,q}$ such that
$$\forall p \in \mathcal{P}, \quad \sum_{j=1}^k m_j v_p (n+j)
\equiv 0 \pmod q$$
is $(0,0,\dots,0)$.
\end{lemma}
\begin{proof}
For any $s \geq 2$, $\ell \in \mathbb{Z}$, we have
$$\mathbb{E}[X_s^\ell] =
\prod_{p \in \mathcal{P}} \mathbb{E} [X_p^
{\ell v_p(s)}],$$
which is equal to $1$ if
$\ell v_p(s)$ is divisible by $q$ for all $p \in \mathcal{P}$, and to $0$ otherwise.
The condition giving $1$ is equivalent
to the fact that $\ell$ is a multiple
of $q/\operatorname{gcd}(q, (v_p(s))_{p \in \mathcal{P}})$, which is $q/\mu_{s,q}$.
Hence, $X_s$ is a uniform $(q/\mu_{s,q})$-th root of unity, which implies the first part of the proposition.
The variables $(X_{n+1}, \dots, X_{n+k})$ are independent if and only if for all $m_1, \dots, m_k \in \mathbb{Z}$,
$$ \mathbb{E} \left[ \prod_{j=1}^k
X_{n+j}^{m_j} \right]
= \prod_{j=1}^k \mathbb{E}[ X_{n+j}^{m_j} ].$$
Since $X_{n+j}$ is a uniform $(q/\mu_{n+j,q})$-th root of unity, both sides of the equality depend only on the values of $m_j$ modulo $q/\mu_{n+j,q}$ for $1 \leq j \leq k$. This implies that we can assume, without loss of generality, that $0 \leq m_j < q/\mu_{n+j,q}$ for all $j$. If all the $m_j$'s are zero, both sides are obviously equal to $1$. Otherwise, the right-hand side is equal to zero, and then we have independence if and only if it is also the case of the left-hand side, i.e. for all
$(m_1, \dots, m_k) \neq (0,0, \dots, 0)$,
$0 \leq m_j < q/\mu_{n+j,q}$,
$$\mathbb{E} \left[ \prod_{j=1}^k
X_{n+j}^{m_j} \right]
= \mathbb{E} \left[ \prod_{p \in \mathcal{P}} X_p^{\sum_{1 \leq j \leq k}
m_j v_p(n+j)} \right]
= \prod_{p \in \mathcal{P}}
\mathbb{E} \left[ X_p^{\sum_{1 \leq j \leq k}
m_j v_p(n+j)} \right] = 0,$$
which is true if and only if
$$\exists p \in \mathcal{P}, \quad \sum_{j=1}^k m_j v_p (n+j)
\not\equiv 0 \pmod q.$$
\end{proof}
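In particular, the proof shows that $\mu_{s,q} = \operatorname{gcd}(q, \operatorname{gcd}_{p \in \mathcal{P}} v_p(s))$, which is immediate to compute; a small sketch (ours):
\begin{verbatim}
# mu_{s,q} = gcd(q, gcd of the p-adic valuations of s >= 2): the largest
# divisor d of q such that s is a d-th power; X_s is then a uniform
# (q / mu_{s,q})-th root of unity.
from math import gcd
from functools import reduce
from sympy import factorint

def mu(s, q):
    return gcd(q, reduce(gcd, factorint(s).values()))

# examples: mu(8, 6) == 3 (8 is a cube), mu(12, q) == 1 for every q
\end{verbatim}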
We then have the following result, similar to Proposition \ref{1.2}:
\begin{proposition}
For fixed $k, q \geq 1$, there exists an explicitly computable $n_0(k,q)$ such that $(X_{n+1}, \dots, X_{n+k})$ are independent as soon as $n \geq n_0(k,q)$.
\end{proposition}
The bound $n_0(k,q)$ can be deduced from bounds on the solutions of certain diophantine equations which are available in the literature: we do not take care of its precise value, which is anyway far too large to be of any use if we want to find in practice the values of $n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent.
\begin{proof}
For each value of $n \geq 1$ such that
$(X_{n+1}, \dots, X_{n+k})$ are dependent,
there exist $0 \leq m_j < q/\mu_{n+j,q}$, not all zero, such that
$$\forall p \in \mathcal{P}, \quad \sum_{j=1}^k m_j v_p (n+j)
\equiv 0 \pmod q.$$
There are finitely many choices, depending only on $k$ and $q$, for
the $k$-uples $(\mu_{n+j,q})_{1 \leq j \leq k}$ and
$(m_j)_{1 \leq j \leq k}$, so it is sufficient to show that the values of $n$ corresponding to each
choice of $k$-uples are bounded by an explicitly computable quantity.
At least two of the $m_j$'s are non-zero: otherwise $m_j v_p(n+j)$ is divisible by
$q$ for all $p \in \mathcal{P}$, $j$ being the unique index such that $m_j \neq 0$, and then $m_j$ is divisible by $q/\mu_{n+j,q}$: this contradicts the inequality
$0 < m_j < q/\mu_{n+j,q}$.
On the other hand, if $p$ is a prime larger than $k$, at most one of the terms
$m_j v_p(n+j)$ is non-zero, and then
all the terms are divisible by $q$, since
it is the case for their sum.
We deduce that $n+j$ is the product of a power of order $\rho_j := q/\operatorname{gcd}(m_j,q)$
and a number $A_j$ whose prime factors are all
smaller than or equal to $k$. Moreover, one can assume
that $A_j$ is ``$\rho_j$-th power free'', i.e. that all its $p$-adic valuations
are strictly smaller than $\rho_j$.
Hence there exist
$$A_j \leq \prod_{p \in \mathcal{P},
p \leq k} p^{\rho_j - 1} \leq (k!)^q$$
and an integer $B_j \geq 1$ such that
$n+j = A_j B_j^{\rho_j}$.
The values of the exponents $\rho_j$ are
fixed by the $m_j$'s, and at least two
of them are strictly larger than $1$, since at least two of the $m_j$'s are non-zero. Let us first assume that there exist distinct $j$ and $j'$ such that $\rho_j \geq 2$ and $\rho_{j'} \geq 3$.
One finds an explicitly computable bound on $n$ in this case as soon as we find an explicitly computable bound for the solutions of each diophantine equation in $x$ and $z$:
$$A z^{\rho_j} - A' x^{\rho_{j'}} = d$$
for each $A, A', d$ such that $1 \leq A, A' \leq (k!)^q$ and $-k < d < k$, $d \neq 0$.
These equations can be rewritten as: $y^{\rho_j} = f(x)$, where $y = Az$ and
$$f(x) = A^{\rho_j - 1} (A'x^{\rho_{j'}} + d).$$
This polynomial has all simple roots (the
$\rho_{j'}$-th roots of $-d/A'$) and then
at least two of them; it has at least three if $\rho_j = 2$ since $\rho_{j'}$ is supposed to be at least $3$ in this case.
By a result of Baker \cite{B}, all the solutions are bounded by an explicitly computable quantity, which gives the desired result (the same result with an ineffective bound was already proven by Siegel).
It remains to deal with the case where $\rho_j = 2$ for all $j$ such that $m_j \neq 0$. In this case, $q$ is even and $m_j$ is divisible by $q/2$, which implies that $m_j = q/2$ when $m_j \neq 0$. By looking at the prime factors larger than $k$, one deduces that for all $j$ such that $m_j \neq 0$,
$n+j$ is a square times a product of distinct primes smaller than or equal to $k$. If at least three of the $m_j$'s are non-zero, it then suffices to find an explicitly computable bound for the solutions of each system of diophantine equations:
$$ B y^2 = A t^2 + d_1, C z^2 = A t^2 + d_2$$
for $1 \leq A, B, C \leq k!$ squarefree, $-k < d_1, d_2 < k$, $d_1, d_2, d_1 - d_2 \neq 0$.
From these equations, we deduce, for $x = BC yz$:
$$x^2 = BC (At^2 + d_1)(At^2 + d_2).$$
The four roots of the right-hand side are the square roots of $-d_1/A$ and $-d_2/A$, which are all distinct since $d_1 \neq d_2$, $d_1 \neq 0$, $d_2 \neq 0$. Again by Baker's result, one deduces that the solutions are explicitly bounded, which then gives an explicit bound for $n$.
The remaining case is when exactly two of the $m_j$'s are non-zero, with $\rho_j = 2$, and then $m_j = q/2$.
The dependence modulo $q$ then means that
$(n+j)(n+j')$ is a square for distinct $j, j'$ between $1$ and $k$, which implies that
$(n+j)/g$ and $(n+j')/g$ are both squares
where $g = \operatorname{gcd}(n+j, n+j')$.
These squares have difference smaller than $k$, which implies that they are smaller than $k^2$. Moreover, $g$ divides $|j-j'| \leq k$, and then $g \leq k$, which gives $n \leq k^3$.
\end{proof}
Here, we explicitly solve a particular case:
\begin{proposition}
For $q = 2$, $(X_{n+1}, \dots, X_{n+5})$ are independent for all $n \geq 2$ and not for $n=1$.
\end{proposition}
\begin{proof}
A dependence means that there exists a product
of distinct non-square factors among $n+1, \dots, n+5$ which is a square.
For a prime $p \geq 5$, at most one $p$-adic valuation is non-zero, which implies that all the $p$-adic valuations are even.
Hence, the factors involved in the product are all squares multiplied by $2$, $3$ or $6$. Since they differ by at most $4$, no two of them can belong to the same of the three ``categories'', which implies, since the product is a square, that there exist three numbers, respectively of the form $2x^2$, $3y^2$, $6z^2$, in the interval between $n+1$ and $n+5$. Now, Hajdu and Pint\'er \cite{HP} have determined all the triples of distinct integers in intervals of length at most 12 whose product is a square. For length $5$, the only positive triple is $(2,3,6)$, which implies that the only dependence in the present setting is $X_2 X_3 X_6 = 1$.
\end{proof}
\begin{remark}
The list given in \cite{HP} shows that for $q = 2$, there are dependences for quite large values of $n$ as soon as $k \geq 6$. For example,
we have $X_{240} X_{243} X_{245} = 1$ for $k =6$ and $X_{10082}X_{10086}X_{10092} = 1$ for $k = 11$.
\end{remark}
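These dependences are easy to reproduce by a direct search; our illustrative sketch below looks, for $q=2$, for subsets of non-square integers among $n+1, \dots, n+k$ whose product is a square.
\begin{verbatim}
# For q = 2, a dependence among X_{n+1}, ..., X_{n+k} corresponds to a
# subset of non-square integers in {n+1, ..., n+k} with square product
# (squares s give X_s = 1 deterministically).
from math import isqrt
from itertools import combinations

def is_square(x):
    return isqrt(x) ** 2 == x

def dependences(k, N):
    hits = []
    for n in range(1, N + 1):
        nonsquares = [n + j for j in range(1, k + 1)
                      if not is_square(n + j)]
        for r in range(2, len(nonsquares) + 1):
            for combo in combinations(nonsquares, r):
                prod = 1
                for v in combo:
                    prod *= v
                if is_square(prod):
                    hits.append((n, combo))
    return hits

# dependences(6, 300) contains (239, (240, 243, 245)), as quoted above
\end{verbatim}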
\section{Convergence of the empirical measure in the uniform case} \label{uniform2}
In this section, $(X_p)_{p \in \mathcal{P}}$ are uniform on the unit circle, and $k \geq 1$ is a fixed integer. For $N \geq 1$, we consider the empirical measure of the $N$ first $k$-uples:
$$\mu_{k,N} := \frac{1}{N}\sum_{n=1}^N
\delta_{(X_{n+1}, \dots, X_{n+k})}.$$
It is reasonable to expect that $\mu_{k,N}$ tends to the uniform distribution on $\mathbb{U}^k$, which is the common distribution of
$(X_{n+1}, \dots, X_{n+k})$ for all but finitely many values of $n$.
In order to prove this result, we will estimate the second moment of the Fourier transform of $\mu_{k,N}$, given by
$$\hat{\mu}_{k,N}(m_1, \dots, m_k)
= \int_{\mathbb{U}^k} \prod_{j=1}^k
z_j^{m_j} d\mu_{k,N} (z_1, \dots, z_k).$$
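These Fourier coefficients are also easy to simulate; the following Monte Carlo sketch (ours; the parameters are arbitrary illustrations) samples a random completely multiplicative function and evaluates $\hat{\mu}_{k,N}(m)$, whose modulus is expected to be of order $N^{-1/2}$ by the proposition below.
\begin{verbatim}
# Simulation sketch: sample (X_p) i.i.d. uniform on the unit circle,
# extend multiplicatively, and evaluate hat{mu}_{k,N}(m).
import numpy as np
from sympy import factorint, primerange

rng = np.random.default_rng(0)
N, m = 10000, (1, 0, -2)       # illustrative choices, k = 3
k = len(m)

phase = {p: np.exp(2j * np.pi * rng.random())
         for p in primerange(2, N + k + 1)}
X = np.ones(N + k + 1, dtype=complex)
for n in range(2, N + k + 1):
    for p, e in factorint(n).items():
        X[n] *= phase[p] ** e

coeff = np.mean([np.prod([X[n + j] ** m[j - 1] for j in range(1, k + 1)])
                 for n in range(1, N + 1)])
print(abs(coeff))              # of order N^{-1/2} = 0.01
\end{verbatim}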
\begin{proposition} \label{momentorder2}
Let $m_1, \dots, m_k$ be integers, not all
equal to zero. Then, for all $N > N' \geq 0$,
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq k (N-N') $$
and there exists $C_{m_1, \dots, m_k} \geq 0$, independent of $N$ and $N'$, such that
$$N-N' \leq \mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq N - N' + C_{m_1, \dots, m_k}.$$
Moreover, under the same assumption,
$$\mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{k}{N},$$
$$\frac{1}{N} \leq \mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{1}{N} + \frac{C_{m_1, \dots, m_k}}{N^2}.$$
Finally, for $k \in \{1,2\}$, one can take
$C_{m_1}$ or $C_{m_1, m_2}$ equal to $0$,
and for $k = 3$, one can take
$C_{m_1, m_2,m_3} = 2$ if $(m_1,m_2,m_3)$ is proportional to $(2,1,-4)$ and $C_{m_1,m_2,m_3} = 0$ otherwise.
\end{proposition}
\begin{proof}
We have, using the completely multiplicative extension of $X_r$ to all $r \in \mathbb{Q}_+^*$:
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
= \sum_{N' < n_1, n_2 \leq N}
\mathbb{E} \left[X_{\prod_{j=1}^{k}
(n_1 + j)^{m_j} / (n_2 + j)^{m_j} } \right],$$
and then the left-hand side is equal to the number of couples $(n_1, n_2)$
in $\{N'+1, \dots, N\}^2$ such that
\begin{equation}\prod_{j=1}^k (n_1 + j)^{m_j}
= \prod_{j=1}^k (n_2 + j)^{m_j}.
\label{n1n2}
\end{equation}
The number of trivial solutions $n_1 = n_2 $ of this equation is equal to $N - N'$, which gives a lower bound on the second moment we have to estimate.
On the other hand, the derivative of the rational fraction $\prod_{j=1}^k (X + j)^{m_j}$ can be written as the product of $\prod_{j=1}^k (X + j)^{m_j - 1}$, which is strictly positive on $\mathbb{R}_+$, by the
polynomial
$$Q(X) = \prod_{j=1}^k (X+j) \left[
\sum_{j=1}^k \frac{m_j}{X + j} \right].$$
The polynomial $Q$ has degree at most $k-1$ and is non-zero, since $(m_1, \dots, m_k)
\neq (0, \dots, 0)$ and then $\prod_{j=1}^k
(X+j)^{m_j}$ is non-constant.
We deduce that $Q$ has at most $k-1$ zeros, and then on $\mathbb{R}_+$, $\prod_{j=1}^k (X+j)^{m_j}$ is strictly monotonic on each of at most $k$ intervals of $\mathbb{R}_+$, whose bounds are $0$, the positive zeros of $Q$ and $+\infty$. Hence, for each choice of $n_1$, there are at most $k$ values of $n_2$
satisfying \eqref{n1n2}, i.e. at most one in each interval, which gives the upper bound $k(N-N')$ for the moment we are estimating.
Moreover, since $\prod_{j=1}^k (X+j)^{m_j}$ is strictly monotonic on an interval
of the form $[A, \infty)$ for some $A > 0$, we deduce that for any non-trivial
solution $(n_1,n_2)$ of \eqref{n1n2}, the minimum of $n_1$ and $n_2$ is at most $A$.
Hence, there are finitely many possibilities for the common value of the two sides of
\eqref{n1n2}, and for each of these values, at most $k$ possibilities for $n_1$ and for $n_2$. Hence, for fixed $(m_1, \dots, m_k)$, the total number of non-trivial solutions of \eqref{n1n2} is finite, which gives the bound $N-N'+ C_{m_1, \dots, m_k}$ of the proposition.
The statement involving the empirical measure is deduced by taking $N' = 0$ and by dividing everything by $N^2$.
The claim for $k \leq 3$ is an immediate consequence of the following statement we will prove now: the only integers $n_1 > n_2 \geq 1$, $(m_1,m_2,m_3) \neq (0,0,0)$,
such that \begin{equation}
(n_1+1)^{m_1} (n_1+2)^{m_2}
(n_1+3)^{m_3} = (n_2+1)^{m_1} (n_2+2)^{m_2}
(n_2+3)^{m_3} \label{n1n22}
\end{equation}
are $n_1 = 7$, $n_2 = 2$, $(m_1,m_2,m_3)$ proportional to $(2,1,-4)$, which corresponds to the equality: $$
8^2 \cdot 9 \cdot 10^{-4} = 3^2 \cdot 4 \cdot 5^{-4}.$$
If $m_1, m_2, m_3$ have the same sign and are not all zero, $(n+1)^{m_1}(n+2)^{m_2}
(n+3)^{m_3}$ is strictly monotonic in $n \geq 1$, and then we cannot get a solution of \eqref{n1n22} with $n_1 > n_2$.
By changing all the signs if necessary, we may assume that one of the integers $m_1, m_2,m_3$ is strictly negative and the others are nonnegative. For $n \geq 1$,
the fraction obtained by writing
$(n+1)^{m_1}(n+2)^{m_2}
(n+3)^{m_3}$ can only be simplified by prime
factors dividing two of the integers $n+1, n+2, n+3$, and then only by a power of $2$.
If $m_2 < 0$ and then $m_1, m_3 \geq 0$, the numerator and the denominator have different parity, and then the fraction is irreducible for all $n$: we do not get any solution of \eqref{n1n22} in this case.
Otherwise, $m_1$ or $m_3$ is strictly negative. If $(n_1, n_2)$ solves \eqref{n1n22}, let us define $s := 1$ and $j := n_2 + 1$ if $m_1 < 0$, and $s := -1$ and $j := n_2 + 3$ if $m_3 < 0$.
The denominators of the two fractions corresponding to the two sides of \eqref{n1n22} are respectively a power of
$j$ and the same power of $n_1 + 2 - s$: if \eqref{n1n22} is satisfied, these denominators should differ only by a power of $2$, since the fractions can be only simplified by such a power.
Hence, $n_1 + 2 - s = 2^{\ell} j$ for some $\ell \geq 0$, and by looking at the numerators of the fractions, we deduce that there exists $r \geq 0$ such that
$$2^r (j+s)^{m_2} (j+2s)^{m_{2 + s}}
= (2^{\ell}j+s)^{m_2} (2^{\ell}j+2s)^{m_{2 + s}}.$$
If $\ell \geq 2$, the ratios
$(2^{\ell}j+s)/(j+s)$ and
$(2^{\ell}j+2s)/(j+2s)$ are at least
$(4\cdot2 + 2)/(2 + 2) = 5/2$ since $j \geq n_2 +1 \geq 2$ and $|2s| \leq 2$, and then
the ratio between the right-hand side
and the left-hand side of the previous equality is at least $(5/2)^{m_{2 + s} + m_2} 2^{-r}$,
which gives
$$ 2^r \geq (5/2)^{m_{2 + s} + m_2}.$$
On the other hand, the $2$-adic valuation
of the right-hand side is $m_{2+s}$ since
$2^{\ell} j + 2s \equiv 2$ modulo 4, whereas the valuation of the left-hand side is at least $r$, which gives
$$ 2^r \leq 2^{m_{2+s}}.$$
We then get a contradiction for $\ell \geq 2$, except in the case $m_{2+s} = m_2 = 0$,
where we already know that there is no solution of \eqref{n1n22}.
If $\ell = 1$, we get
$$2^r (j+s)^{m_2} (j+2s)^{m_{2 + s}}
= (2j+s)^{m_2} (2j+2s)^{m_{2 + s}}.$$
In this case, the prime factors of $2j+s$, which are odd ($|s| = 1$), should divide $j+s$ or $j+2s$, then $2j+2s$ or $2j+4s$, and finally $s$ or $3s$. Hence, $2j+s$ is a power of $3$.
Similarly, the odd factors of $j+2s$, and then of $2j + 4s$, should divide $2j+s$ or $2j+2s$, and then $s$ or $3s$: $2j+4s$ is the product of a power of $2$ and a power of $3$.
If we write $2j+s = 3^a$, $2j + 4s = 2^b 3^c$, we must have $|3^a - 2^b 3^c| = 3$.
If $a \leq 1$, we have $2j + s \leq 3$. If $s = 1$, we get $n_2 + 1 = j \leq 1$, and if $s = -1$, we get $n_2 + 3 = j \leq 2$, which is impossible.
If $a \geq 2$, $3^a$ is divisible by $9$, and then $2^b 3^c$ is congruent to $3$ or $6$ modulo $9$, which implies $c = 1$, and then $|3^{a-1} - 2^b| = 1$. Now, by induction, one proves that the order of $2$ modulo $3^{a-1}$ is equal to $2.3^{a-2}$ (i.e. $2$ is a primitive root modulo the powers of $3$), and this order should divide $2b$, which implies that $b \geq 3^{a-2}$ ($b = 0$ is not possible) and then
$2^{3^{a-2}} \leq 3^{a-1} + 1$, which implies $a \in \{2,3\}$.
If $a = 2$ and $s = 1$, we get $2j+1 = 9$,
$j= 4$, and then $n_1 = 7$, $n_2 = 3$.
We should solve $4^{m_1} 5^{m_2} 6^{m_3} =
8^{m_1} 9^{m_2} 10^{m_3}$. Taking the $3$-adic valuation gives $m_3 = 2 m_2$, taking the $5$-adic valuation gives $m_3 = m_2$, and then $m_2 = m_3 = 0$, which implies $m_1 = 0$.
If $a = 2$ and $s = -1$, we get $2j - 1 = 9$, $j=5$, $n_1 = 7$, $n_2 = 2$, which gives the equation $3^{m_1} 4^{m_2} 5^{m_3} =
8^{m_1} 9^{m_2} 10^{m_3}$.
Taking the $2$-adic valuation gives $2m_2 = 3m_1 + m_3$, taking the $3$-adic valuation gives $m_1 = 2 m_2$, and then $(m_1,m_2,m_3)$ should be proportional to $(2,1,-4)$: in this case, we get one of the solutions already mentioned.
If $a = 3$, $2^b$ should be $8$ or $10$, and then $b = 3$, $2j+ s = 27$, $2j + 4s = 24$, $j = 14$, $s = -1$, $n_1 = 25$, $n_2 = 11$. We have to solve $12^{m_1} 13^{m_2} 14^{m_3} = 26^{m_1} 27^{m_2} 28^{m_3}$.
Taking the $3$-adic valuation gives $m_1 = 3m_2$, taking the $13$-adic valuation gives $m_1 = m_2$, and then $m_1 = m_2 = m_3 = 0$.
\end{proof}
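The exceptional identity above is immediate to check in exact arithmetic:
\begin{verbatim}
# exact check of the exceptional solution for k = 3
from fractions import Fraction
assert (Fraction(8)**2 * 9 / Fraction(10)**4
        == Fraction(3)**2 * 4 / Fraction(5)**4)
\end{verbatim}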
\begin{corollary}
For all $(m_1, \dots, m_k) \in \mathbb{Z}^k$, $\hat{\mu}_{k,N}(m_1, \dots, m_k)$ converges
in $L^2$, and then in probability, to $\mathds{1}_{m_1 = \dots = m_k = 0}$, i.e. to the corresponding Fourier coefficient of the uniform distribution $\mu_{k}$ on $\mathbb{U}^k$. In other words, $\mu_{k,N}$ converges weakly in probability to $\mu_{k}$.
\end{corollary}
In this setting, we also have a strong law of large numbers, with an estimate of the
rate of convergence, for sufficiently smooth test functions. Before stating the corresponding result, we will show the following lemma, which will be useful:
\begin{lemma} \label{lemmaLLN}
Let $\epsilon > \delta \geq 0$, $C > 0$, and let $(A_n)_{n \geq 0}$ be a sequence of
random variables such that $A_0 = 0$ and
for all $N > N' \geq 0$,
$$\mathbb{E}[|A_{N} - A_{N'}|^2]
\leq C (N-N')N^{2 \delta}.$$
Then, almost surely, $A_N = O(N^{1/2 + \epsilon})$: more precisely, we have for $M > 0$,
$$\mathbb{P} \left(\sup_{N \geq 1}
|A_N|/(N^{1/2 + \epsilon}) \geq M
\right) \leq K_{\epsilon, \delta} C M^{-2},$$
where $K_{\epsilon, \delta} > 0$ depends
only on $\delta$ and $\epsilon$.
\end{lemma}
\begin{proof}
For $\ell, q \geq 0$, $M > 0$ and $\epsilon' := (\delta + \epsilon)/2
\in (\delta, \epsilon)$, we have:
\begin{align*}\mathbb{P} \left(|A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}| \geq M [(2\ell + 1).2^q]^{1/2 + \epsilon'} \right)
& \leq M^{-2} [(2\ell + 1).2^q]^{-1 -2 \epsilon'}
\mathbb{E} \left[|A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}|^2\right]
\\ & \leq M^{-2}.C.2^q.[(2\ell + 1).2^q]^{2 \delta -1 -2 \epsilon'}
\\ & \leq M^{-2}.C.2^{-2q (\epsilon' - \delta)}
(2\ell + 1)^{-1 - 2 (\epsilon' - \delta)}.
\end{align*}
Since $\epsilon' > \delta$, we deduce that,
with probability at least $1 - D CM^{-2}$, none of the events above occurs, where $D$ depends only on $\epsilon'$ and $\delta$, and then only on $\delta$ and $\epsilon$.
In this case, we have
$$ |A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}| \leq M [(2\ell + 1).2^q]^{1/2 + \epsilon'}$$
for all $\ell, q \geq 0$.
Now, if we take the binary expansion $N = \sum_{j=0}^{\infty} \delta_j
2^j$
with $\delta_j \in \{0,1\}$, and if $N_r
= \sum_{j=r}^{\infty} \delta_j 2^j$ for all $r \geq 0$, we get $|A_{N_r} - A_{N_{r+1}}|
= 0$ if $\delta_r = 0$, and
\begin{align*} |A_{N_r} - A_{N_{r+1}}|
& = |A_{2^r (2 (N_{r+1}/2^{r+1}) + 1)}
- A_{2^r (2 N_{r+1}/2^{r+1})}|
\\ & \leq M[2^r (2( N_{r+1}/2^{r+1}) + 1)]^{1/2 + \epsilon'}
= M (N_r)^{1/2 + \epsilon'} \leq M N^{1/2 + \epsilon'}
\end{align*}
if $\delta_r = 1$. Adding these inequalities from $r = 0$ to $\infty$, we deduce that $|A_N| \leq M \mu(N) N^{1/2
+ \epsilon'}$, where $\mu(N)$ is the number of $1$'s in the binary expansion of $N$. Hence,
$$|A_N| \leq M \left(1 + (\log N/\log 2)
\right) N^{1/2 + \epsilon'}
< B M N^{1/2 + \epsilon} ,$$
where $B > 0$ depends only on $\epsilon'$ and $\epsilon$ (recall that $\epsilon > \epsilon'$), and then only on $\delta$ and $\epsilon$.
We then have, for $M' := BM$:
$$\mathbb{P} \left(\exists N \geq 1,
|A_N| \geq M' N^{1/2 + \epsilon} \right)
\leq D C M^{-2} = D C B^{2} (M')^{-2},$$
which gives the desired result after replacing $M'$ by $M$.
\end{proof}
From this lemma, we deduce the following:
\begin{proposition}
Almost surely, $\mu_{k,N}$ weakly converges to $\mu_k$. More precisely, the following holds with probability one: for all $u > k/2$, for all continuous functions $f$ from
$\mathbb{U}^k$ to $\mathbb{C}$ such that
$$\sum_{m \in \mathbb{Z}^k} |\hat{f}(m)|
\, ||m||^{u} < \infty,$$
$|| \cdot ||$ denoting any norm on
$\mathbb{R}^k$,
and for all $\epsilon > 0$,
$$\int_{\mathbb{U}^k} f d \mu_{k,N}
= \int_{\mathbb{U}^k} f d \mu_{k}
+ O(N^{-1/2 + \epsilon}).$$
\end{proposition}
\begin{remark}
By the Cauchy-Schwarz inequality, we have
$$\sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)| (1+||m||)^{u}
\leq \left(\sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)|^2 (1 + ||m||)^{4u} \right)^{1/2}
\left(\sum_{m \in \mathbb{Z}^k}
(1+||m||)^{- 2u} \right)^{1/2},$$
which implies that the assumption on $f$ given in the proposition is satisfied for all $f$ in the Sobolev space $H^s$ as soon
as $s > k$.
Unfortunately, the proposition does not apply if $f$ is a product of indicators of arcs. The weak convergence implies that
$$\int_{\mathbb{U}^k} f d\mu_{k,N} \underset{N \rightarrow \infty}{\longrightarrow}
\int_{\mathbb{U}^k} f d\mu_{k} $$
even in this case, but we do not know at which rate this convergence occurs.
\end{remark}
\begin{proof}
From Proposition \ref{momentorder2}, and
Lemma \ref{lemmaLLN} applied to $\epsilon > 0$, $\delta = 0$ and
$$A_N := N \hat{\mu}_{k,N} (m),$$
we get, for all $m \in \mathbb{Z}^k \backslash \{0\}$,
$M > 0$,
$$\mathbb{P} \left(\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/N^{-1/2 + \epsilon} \geq M
\right) \leq k K_{\epsilon, 0} M^{-2}.$$
In particular, almost surely,
$$\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) < \infty$$
for all $m \in \mathbb{Z}^k \backslash \{0\}$, $\epsilon > 0$, $u > k/2$.
Moreover,
$$\mathbb{P} \left(\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/N^{-1/2 + \epsilon} \geq ||m||^{u} \right) \leq
k K_{\epsilon, 0} ||m||^{- 2u}.$$
Since $- 2u < -k$, we deduce, by the Borel-Cantelli lemma, that almost surely,
for all $\epsilon > 0$, $u > k/2$,
$$\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) \leq 1$$
for all but finitely many $m \in \mathbb{Z}^k \backslash \{0\}$. Therefore, almost surely, for all $\epsilon > 0$, $u > k/2$,
$$\sup_{m \in \mathbb{Z}^k \backslash \{0\}}
\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) < \infty,$$
i.e.
$$\hat{\mu}_{k,N} (m)
= O(N^{-1/2 + \epsilon} ||m||^u)$$
for $m \in \mathbb{Z}^k \backslash \{0\}$, $N \geq 1$.
Let us now assume that this almost sure property holds, and let $f$ be a function
satisfying the assumptions of the proposition. Since the Fourier coefficients of $f$ are summable (i.e. $f$ is in the Wiener algebra of $\mathbb{U}^k$), the corresponding Fourier series converges uniformly to a function which is necessarily equal to $f$, since it has the same Fourier coefficients. We can then
write:
$$f(z_1, \dots, z_k) =
\sum_{m_1, \dots, m_k \in \mathbb{Z}}
\hat{f}(m_1, \dots, m_k) \prod_{j=1}^k
z_j^{m_j},$$
which implies
$$\int_{\mathbb{U}^k} f d\mu_{k,N}
= \sum_{m \in \mathbb{Z}^k}
\hat{f}(m)
\hat{\mu}_{k,N} (m)
= \int_{\mathbb{U}^k} f d\mu_{k}
+ \sum_{m \in \mathbb{Z}^k \backslash \{0\}}
\hat{f}(m)
\hat{\mu}_{k,N} (m).$$
Now, for all $u > k/2$, the last sum is dominated by
$$N^{-1/2 + \epsilon} \sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)| \, ||m||^u,$$
which is finite, and hence $O(N^{-1/2 + \epsilon})$, for the exponent $u$ appearing in the assumption on $f$.
\end{proof}
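As a sanity check, the decay rate $N^{-1/2+\epsilon}$ of the empirical Fourier coefficients can be observed in simulation. The following Python script (our own illustration, not part of the proof; it uses \texttt{sympy} for factorizations) samples i.i.d. uniform phases $X_p$ on the primes, extends them multiplicatively, and prints $|\hat{\mu}_{2,N}(1,1)|$ against $N^{-1/2}$.
\begin{verbatim}
# Simulation of the decay of hat{mu}_{k,N}(m) for k = 2, m = (1,1),
# for a random multiplicative function with uniform phases on primes.
import numpy as np
from sympy import factorint

rng = np.random.default_rng(1)
phase = {}                       # X_p, i.i.d. uniform on the unit circle

def X(n):
    z = 1 + 0j
    for p, v in factorint(n).items():
        phase.setdefault(p, np.exp(2j * np.pi * rng.random()))
        z *= phase[p] ** v
    return z

NMAX = 20000
Xs = np.array([X(n) for n in range(1, NMAX + 3)])   # Xs[i] = X_{i+1}
for N in [1000, 4000, 16000]:
    coeff = np.mean(Xs[1:N + 1] * Xs[2:N + 2])      # (1/N) sum X_{n+1} X_{n+2}
    print(N, abs(coeff), N ** -0.5)
\end{verbatim}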
\section{Moments of order different from two} \label{moments}
Since we have a law of large numbers for
$\mu_{k,N}$, with a rate of decay of order $N^{-1/2+ \epsilon}$, it is natural to ask whether a central limit theorem holds.
One possibility for doing so is to study the moments of sums over $n$ of products of the variables $X_{n+1}, \dots, X_{n+k}$.
For the sums $\sum_{n=1}^N X_n$, it seems very likely that there is no convergence to a Gaussian random variable after normalization. Indeed, the second moment of the absolute value of the renormalized sum $\frac{1}{\sqrt{N}} \sum_{n=1}^N X_n$
is equal to $1$, so if the variable converges to a standard complex Gaussian variable, we need to have
$$\mathbb{E} \left[\frac{1}{\sqrt{N}}\left|\sum_{n=1}^N
X_n \right| \right] \underset{N \rightarrow \infty}{\longrightarrow}
\frac{\sqrt{\pi}}{2}.$$
There are (at least) two contradictory conjectures about the behavior of this first moment: a conjecture by Helson \cite{H}, saying that it should tend to $0$, and a conjecture by Heap and Lindqvist \cite{HL}, saying that it should tend to an explicit constant which is approximately $0.8769$, i.e. close to, but slightly smaller than, $\sqrt{\pi}/2$. In \cite{HL}, the authors show that the upper limit of the moment is smaller than $0.904$, whereas
in \cite{HNR}, Harper, Nikeghbali and Radziwi\l\l\ prove that the first moment decays at most like $(\log \log N)^{-3 + o(1)}$; they also conjecture that the moment tends to a non-zero constant.
Equivalents of the moments of even order are computed in \cite{HNR} and \cite{HL}, and they are not bounded with respect to $N$: the moment of order $2p$ is equivalent to an explicit constant times $(\log N)^{(p-1)^2}$.
In the case of sums different from $\sum_{n=1}^N X_n$, the moment computations involve arithmetic problems of a different nature. Here, we will study the example of the fourth moment of the absolute value of
$ \sum_{n=1}^N X_n X_{n+1}$: notice that the second moment is clearly equal to $N$.
We have the following result:
\begin{proposition}
We have
$$\mathbb{E}\left[ \left|\sum_{n=1}^N
X_n X_{n+1} \right|^4 \right]
= 2N^2 - N + 8 \mathcal{N}(N)
+ 4 \mathcal{N}_{=} (N),$$
where $\mathcal{N}(N)$ (resp. $\mathcal{N}_{=}(N)$) is the number of solutions of the diophantine equation
$a(a+1)d(d+1) = b(b+1)c(c+1)$ such that
the integers $a, b, c, d$ satisfy
$0 < a < b < c <d \leq N$ (resp. $0 < a < b = c < d \leq N$). Moreover, for all $\epsilon > 0$, there exists
$C_{\epsilon} > 0$, independent of $N$, such that for all $N \geq 8$,
$$ N/2 \leq 8 \mathcal{N}(N)
+ 4 \mathcal{N}_{=} (N) \leq C_{\epsilon} N^{3/2 + \epsilon}.$$
Hence,
$$\mathbb{E}\left[ \left|\sum_{n=1}^N
X_n X_{n+1} \right|^4 \right]
= 2N^2 + O_{\epsilon} (N^{3/2 + \epsilon}).$$
\end{proposition}
\begin{proof}
Expanding the fourth moment, we immediately obtain that
it is equal to the total number of solutions of the previous diophantine equation, with
$a, b, c, d \in \{1,2, \dots, N\}$.
One has $2N^2 - N$ trivial solutions: $N(N-1)$ for which $a = c\neq b = d$, $N(N-1)$ for which $a = b \neq c = d$, $N$ for which $a = b = c = d$. It remains to count the number of non-trivial solutions.
Such a solution has a minimal element among $a, b, c, d$. This element is unique: if two minimal elements are on the same side, then necessarily $a = b = c=d$, if two minimal elements are on different sides, then the other elements should be equal, which also gives a trivial solution.
Dividing the number of solutions by four, we can assume that $a$ is the unique smallest integer, which implies that $d$ is the largest one. For $b = c$, we get $\mathcal{N}_= (N)$ solutions, and for
$b \neq c$, we get $ 2 \mathcal{N} (N)$
solutions, the factor $2$ coming from the possible exchange between $b$ and $c$.
The lower bound $N/2$ comes from the solutions $(1,3,3,8)$ and $(1,2,5,9)$ for $8 \leq N \leq 24$, and from the solutions
of the form $(n,2n+1,3n,6n+2)$ for $N \geq 25$.
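These counts can be verified by brute force for small $N$; the following Python script (an independent numerical check of ours, not part of the proof) recomputes $\mathcal{N}(N)$ and $\mathcal{N}_{=}(N)$ and confirms that the parametric family indeed consists of solutions.
\begin{verbatim}
# Brute-force check of the counts N(N), N_=(N) of solutions of
# a(a+1)d(d+1) = b(b+1)c(c+1) with 0 < a < b <= c < d <= N,
# and of the parametric family (n, 2n+1, 3n, 6n+2).
import math

def f(x):
    return x * (x + 1)

def counts(N):
    lt = eq = 0
    for a in range(1, N + 1):
        for b in range(a + 1, N + 1):
            for c in range(b, N + 1):
                q, r = divmod(f(b) * f(c), f(a))
                if r:
                    continue
                d = (math.isqrt(4 * q + 1) - 1) // 2   # solve d(d+1) = q
                if c < d <= N and f(d) == q:
                    if b == c:
                        eq += 1
                    else:
                        lt += 1
    return lt, eq

for n in range(1, 6):
    a, b, c, d = n, 2 * n + 1, 3 * n, 6 * n + 2
    assert f(a) * f(d) == f(b) * f(c)

print(counts(30))   # (N(30), N_=(30))
\end{verbatim}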
Let us now prove the upper bound.
The odd integers $A = 2a + 1$, $B = 2b+1$, $C = 2c + 1$, $D = 2d + 1$ should satisfy:
$$(A^2 - 1)(D^2-1) = (B^2 - 1)(C^2 -1).$$
Since $B$ and $C$ are closer to each other than $A$ and $D$ are, while the two products above are equal, we deduce
$$A^2 - 1+ D^2-1 > B^2 - 1 + C^2 - 1,$$
and then
$$\delta := (AD - BC)/2 > 0,$$
since
$$A^2 D^2 - B^2 C^2 = A^2 + D^2 - B^2 - C^2 > 0.$$
Note that $\delta$ is an integer. The last equation gives
$$4 \delta (AD - \delta) = A^2 + D^2
- (B-C)^2 - 2AD + 4 \delta,$$
and in particular
$$A^2 - 2(2 \delta +1) AD + D^2 + 4 \delta (\delta +1 ) = (B-C)^2 \geq 0.$$
If $1 < D/A \leq 2 \delta + 2$, we deduce
$$AD \left( \frac{1}{2 \delta + 2}
- 4 \delta - 2 + 2\delta + 2 \right) + 4 \delta(\delta +1) \geq 0,$$
and then
$$AD \leq \frac{4 \delta(\delta + 1)}{ 2\delta - (1/4)} = 2 (\delta +1)
\left(1 - \frac{1}{8 \delta} \right)^{-1}
\leq 2 (\delta + 1) \left( 1 + \frac{1}{7 \delta} \right) \leq 2 \delta + 2 + (4/7),
$$
$AD \leq 2 \delta +1$ since it is an odd integer, and then $BC = AD - 2 \delta \leq 1$, which gives a contradiction.
Any solution should then satisfy $D/A > 2\delta +2$.
For $\delta > \sqrt{N}$, we necessarily have $A < D /( 2\sqrt{N} + 2)
\leq (2N+1)/(2 \sqrt{N} + 2) = O(\sqrt{N})$, hence $a = O(\sqrt{N})$,
so there are only $O(N^{3/2})$ possibilities for the pair $(a,d)$. Now, $b$ and $c$ should be divisors of $a(a+1)d(d+1) = O(N^4)$, and by the classical divisor bound, we deduce that there are $O(N^{\epsilon})$ possibilities for $(b,c)$ once $a$ and $d$ are chosen.
Hence, the number of solutions for $\delta > \sqrt{N}$ is bounded by the estimate we have claimed.
It remains to bound the number of solutions for $\delta \leq \sqrt{N}$.
We need
$$A^2 - 2(2 \delta +1) AD + D^2 + 4 \delta (\delta +1 ) = (B-C)^2,$$
i.e.
$$[D - (2 \delta + 1)A]^2 + 4\delta (\delta +1 )
= 4\delta(\delta + 1) A^2 + (B-C)^2.$$
We know that $D > A (2 \delta +2)$, hence $0 < D - (2 \delta + 1)A \leq D \leq 2N+1$, which gives, for each value of $\delta$,
$O(N)$ possibilities for $D - (2\delta +1)A$. For the moment, let us admit that for each of these possibilities, there are $O(N^{\epsilon})$ choices for $B-C$ and $A$. Then, for fixed $\delta$, we have
$O (N^{1+ \epsilon})$ choices for $(D -
(2\delta +1) A, A, B-C)$. For each choice, $B-C, A, D$ are fixed, and then also $BC = AD - 2 \delta$, and finally $B$ and $C$.
Hence, we have $O(N^{1+\epsilon})$ solutions for each $\delta \leq \sqrt{N}$, and then $O(N^{3/2 + \epsilon})$ solutions by counting all the possible $\delta$.
The claim we have admitted is a consequence of the following fact we will prove now: for $\epsilon > 0$,
the number of representations of $M$ in integers by the quadratic form $X^2 + P Y^2$ is $O(M^{\epsilon})$, uniformly in the strictly positive integer $P$. Indeed,
for such a representation, the ideal
$(X + Y \sqrt{-P})$ should be a divisor of
$(M)$ in the ring of integers $\mathcal{O}_P$ of $\mathbb{Q}[\sqrt{-P}]$, and each such ideal gives at most $6$ couples $(X,Y)$ representing $M$ (the number of elements of norm $1$ in an imaginary quadratic field is at most $6$). It is then sufficient to bound the number of divisors of $(M)$ in
$\mathcal{O}_P$ by $O(M^{\epsilon})$, uniformly in $P$. The number of divisors of $(M)$ is $\prod_{\mathfrak{p}} (v_\mathfrak{p}(M) + 1)$, where we have the prime ideal decomposition
$$(M) = \prod_{\mathfrak{p}} \mathfrak{p}^
{v_\mathfrak{p}(M)}.$$
Now, by considering the decomposition of prime numbers as products of ideals, we deduce:
$$(M) = \prod_{p \in \mathcal{P}, \, p \operatorname{inert}} (p)^{v_p(M)}
\prod_{p \in \mathcal{P}, \, p \operatorname{ramified}} \mathfrak{p}_p^{2 v_p(M)}
\prod_{p \in \mathcal{P}, \, p \operatorname{split}} \mathfrak{p}_p^{ v_p(M)} \overline{\mathfrak{p}_p}^{\, v_p(M)},
$$
$\mathfrak{p}_p$ denoting an ideal of norm $p$, and then the number of divisors of $(M)$ is
$$\prod_{p \in \mathcal{P}, \, p \operatorname{inert}} (v_p(M) + 1)
\prod_{p \in \mathcal{P}, \, p \operatorname{ramified}} (2 v_p(M) + 1)
\prod_{p \in \mathcal{P}, \, p \operatorname{split}} (v_p(M) + 1)^2
\leq \prod_{p \in \mathcal{P}}
(v_p(M) + 1)^2 = [\tau(M)]^2,
$$
where $\tau(M)$ is the number of divisors, in the usual sense,
of the integer $M$. This gives the desired bound $O(M^{\epsilon})$.
\end{proof}
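The asymptotics of the proposition can be observed numerically. The following Python script (a Monte Carlo illustration of ours, not part of the proof; the random multiplicative function is generated via \texttt{sympy} factorizations) estimates the fourth moment and compares it with $2N^2$.
\begin{verbatim}
# Monte Carlo estimate of E|sum_{n<=N} X_n X_{n+1}|^4, to be compared
# with the asymptotics 2 N^2 of the proposition.
import numpy as np
from sympy import factorint

def sample_sum(N, rng):
    phase = {}
    def X(n):
        z = 1 + 0j
        for p, v in factorint(n).items():
            phase.setdefault(p, np.exp(2j * np.pi * rng.random()))
            z *= phase[p] ** v
        return z
    Xs = np.array([X(n) for n in range(1, N + 2)])
    return np.sum(Xs[:-1] * Xs[1:])          # sum_{n=1}^{N} X_n X_{n+1}

rng = np.random.default_rng(2)
N, trials = 300, 400
m4 = np.mean([abs(sample_sum(N, rng)) ** 4 for _ in range(trials)])
print(m4 / (2 * N ** 2))                     # should be close to 1
\end{verbatim}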
\begin{remark}
Using the previous proof, one can show the following quite curious property: all the solutions of $a(a+1) d(d+1) = b(b+1) c (c+1)$ in integers $0 < a < b \leq c < d$ satisfy $d/a > 3 + 2 \sqrt{2}$. Indeed, let us assume the contrary. With the previous notation, $3 + 2 \sqrt{2} \geq d/a \geq D/A
> 2 \delta + 2$, and then $\delta = 1$, which gives $A^2 - 6AD + D^2 + 8 \geq 0$,
i.e.
$$(2a + 1)^2 - 6(2a+1)(2d+1) + (2d+1)^2
+ 8 \geq 0,$$
$$ 4 (a^2 - 6 ad + d^2) - 8a - 8d + 4 \geq 0,$$
a contradiction since $1 < d/a \leq 3 + 2 \sqrt{2}$ implies $a^2 - 6ad + d^2 \leq 0$.
The bound $3+2 \sqrt{2}$ is sharp, since we have the solutions of the form
$(u_{2k}, u_{2k+1}, u_{2k+1}, u_{2k+2})$, where
$$u_r := \frac{(1+\sqrt{2})^{r}
+ (1 - \sqrt{2})^{r} - 2}{4}.$$
\end{remark}
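One can check this sharpness numerically: the following Python script (illustration only) generates $u_r$ through the recursion $u_r = 2u_{r-1} + u_{r-2} + 1$, which follows from the closed form above, and verifies that $(u_{2k}, u_{2k+1}, u_{2k+1}, u_{2k+2})$ are solutions with $d/a \rightarrow 3+2\sqrt{2}$.
\begin{verbatim}
# Check that (u_{2k}, u_{2k+1}, u_{2k+1}, u_{2k+2}) solves
# a(a+1)d(d+1) = b(b+1)c(c+1), with d/a -> 3 + 2 sqrt(2).
import math

def f(x):
    return x * (x + 1)

u = [0, 0]                           # u_0 = u_1 = 0
for r in range(2, 16):
    u.append(2 * u[-1] + u[-2] + 1)  # recursion from the closed form

for k in range(1, 7):
    a, b, d = u[2 * k], u[2 * k + 1], u[2 * k + 2]
    assert f(a) * f(d) == f(b) ** 2
    print(k, d / a - (3 + 2 * math.sqrt(2)))
\end{verbatim}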
A consequence of the previous proposition is a bound on all the moments of order $0$ to $4$:
\begin{corollary}
We have, for all $q \in [0,2]$,
$$c_q + o(1) \leq \mathbb{E} \left[ \left| \frac{1}{\sqrt{N}} \sum_{n=1}^N X_n X_{n+1} \right|^{2q} \right] \leq C_q + o(1),$$
where $c_q = 2^{-(q-1)_{-}} \geq 1/2$ and
$C_q = 2^{(q-1)_{+}} \leq 2$.
\end{corollary}
\begin{proof}
This result is a direct consequence of the
previous estimates for $q = 1$ and $q = 2$ and H\"older's inequality.
\end{proof}
We have proven the same equivalent for the moments of order $4$ as for sums of i.i.d. variables with the same law as $X_n X_{n+1}$. This suggests that there are ``more chances'' for a central limit theorem than for the sums $\sum_{n=1}^N X_n$ discussed before. Indeed, we have the following:
\begin{proposition}
If for all integers $q \geq 1$, the number of non-trivial solutions
$(n_1, \dots, n_{2q}) \in \{1, \dots, N\}^{2q}$ of the diophantine equation
$$\prod_{r=1}^q n_r(n_r+1)
= \prod_{r=1}^q n_{q+r} (n_{q+r} + 1)$$
is negligible with respect to the number of trivial solutions when $N \rightarrow \infty$ (i.e., is $o(N^q)$), then we have the convergence in law
$$ \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n X_{n+1} \underset{N \rightarrow \infty}{\longrightarrow} \mathcal{N}_{\mathbb{C}},$$
where $\mathcal{N}_{\mathbb{C}}$ denotes a standard Gaussian complex variable, i.e.
$(\mathcal{N}_1 + i \mathcal{N}_2)/\sqrt{2}$ where $\mathcal{N}_1, \mathcal{N}_2$ are independent standard real Gaussian variables.
\end{proposition}
\begin{proof}
If $$Y_N := \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n X_{n+1},$$
then for integers $q_1, q_2 \geq 0$,
the moment $\mathbb{E}[Y_N^{q_1} \overline{Y_N}^{\, q_2}]$ is equal to $N^{-(q_1+ q_2)/2}$ times the number of solutions
$(n_1, \dots, n_{q_{1} + q_2}) \in \{1, \dots, N\}^{q_1 + q_2}$ of
$$\prod_{r=1}^{q_1} n_r(n_r+1)
= \prod_{r=1}^{q_2} n_{q_1+r} (n_{q_1+r} + 1).$$
If exactly one of $q_1$ and $q_2$ vanishes, there is no solution, so the moment is zero. If $0 < q_1 < q_2$, there are at most $N^{q_1}$ choices for $n_1, \dots, n_{q_1}$, and once these integers are fixed, at most $N^{o(1)}$ choices for $n_{q_1+1}, \dots, n_{q_1+q_2}$ by the divisor bound. Hence, the moment tends to zero when $N \rightarrow \infty$, and we have the same conclusion for $0 < q_2 < q_1$. Finally, if $0 < q_1 = q_2 = q$, by assumption, the moment is equivalent to $N^{-q}$ times the number of trivial solutions of the corresponding diophantine equation, i.e. to the corresponding moment for the sum of i.i.d. variables, uniform on the unit circle. By the central limit theorem,
$$\mathbb{E}[|Y_N|^{2q} ] \underset{N \rightarrow \infty}{\longrightarrow}
\mathbb{E}[|\mathcal{N}_{\mathbb{C}}|^{2q}].$$
We have then proven that for all integers $q_1, q_2 \geq 0$,
$$\mathbb{E}[Y_N^{q_1} \overline{Y_N}^{\, q_2} ] \underset{N \rightarrow \infty}{\longrightarrow}
\mathbb{E}[\mathcal{N}_{\mathbb{C}} ^{q_1} \overline{\mathcal{N}_{\mathbb{C}}}^{\, q_2} ],$$
which gives the claim, since the distribution of $\mathcal{N}_{\mathbb{C}}$ is determined by its moments.
\end{proof}
We have proven the assumption of the previous proposition for $q \in \{1,2\}$; however, our method does not generalize to larger values of $q$. The divisor bound immediately gives a domination by $N^{q + o(1)}$ for the number of solutions, so it seems reasonable to expect that the arithmetic constraints implied by the equation are sufficient to save at least a small power of $N$. Note that this saving is not possible for the sums $\sum_{n=1}^N
X_n$, which explains the different behavior.
The previous proposition, giving a ``conditional CLT'', can be generalized to sums of the form
$$\sum_{n=1}^N \prod_{j=1}^k X_{n + j}^{m_j},$$
when the $m_j$'s have the same sign. The situation is more difficult if the $m_j$'s have different signs since the divisor bound alone does not directly give a useful bound on the number of solutions.
\section{Convergence of the empirical measure in the case of roots of unity} \label{uniform2q}
Here, we suppose that $(X_p)_{p \in \mathcal{P}}$ are i.i.d. uniform on the set
$\mathbb{U}_q$ of $q$-th roots of unity, $q\geq 1$ being fixed. With the notation of the previous section, we now get:
\begin{proposition} \label{qboundL2}
Let $m_1, \dots, m_k$ be integers, not all
divisible by $q$, let $\epsilon > 0$ and let $N > N' \geq 0$. Then,
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq C_{q,k, \epsilon} (N-N') N^{\epsilon}$$
and
$$\mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{C_{q,k, \epsilon}}{N^{1-\epsilon}},$$
where $C_{q,k, \epsilon} > 0$ depends only on $q, k, \epsilon$.
\end{proposition}
\begin{proof}
We can obviously assume that $m_1, \dots, m_k$ are between $0$ and $q-1$, which gives finitely many possibilities for these integers, depending only on $q$ and $k$. We can then suppose that $m_1, \dots, m_k$ are fixed at the beginning.
We have to bound the number of couples $(n_1,n_2)$ in $\{N'+1, \dots, N\}^2$ such that
$$\frac{\prod_{j=1}^k (n_1 +j)^{m_j}}{
\prod_{j=1}^k (n_2 +j)^{m_j}} \in
(\mathbb{Q}_+^*)^q,$$
where, in this proof, $(\mathbb{Q}_+^*)^q$ denotes the set of $q$-th powers of positive rational numbers.
Now, any positive integer $r$
can be decomposed as a product of a ``smooth'' integer whose prime factors are all strictly smaller than $k$,
and a ``rough'' integer whose prime factors are all larger than or equal to $k$. If
the ``rough'' integer is denoted $\sharp_k(r)$, the condition just above implies:
$$\frac{\sharp_k \left(\prod_{j=1}^k (n_1 +j)^{m_j}\right)}{
\sharp_k \left(\prod_{j=1}^k (n_2 +j)^{m_j}\right)} \in
(\mathbb{Q}_+^*)^q.$$
Now, the numerator and the denominator of this expression can both be written in a unique way as a product of a $q$-th perfect power and an integer whose $p$-adic valuation is between $0$ and $q-1$ for all $p \in \mathcal{P}$. If the quotient is a $q$-th power, necessarily the numerator and the denominator have the same ``$q$-th power free'' part. Hence, there exists a $q$-th power free integer $g$ such that
$$\sharp_k \left(\prod_{j=1}^k (n_1 +j)^{m_j}\right), \;
\sharp_k \left(\prod_{j=1}^k (n_2 +j)^{m_j}\right) \in g \mathbb{N}^q,$$
$\mathbb{N}^q$ being the set of $q$-th powers of positive integers.
Hence, the number of couples $(n_1,n_2)$ we have to estimate is bounded by
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} [\mathcal{N}(q,k,g,N',N)]^2,
$$
where $\mathcal{N}(q,k,g,N',N)$ is the
number of integers $n \in \{N'+1, \dots, N\}$ such that
$$\sharp_k \left(\prod_{j=1}^k (n +j)^{m_j}\right) \in g \mathbb{N}^q.$$
If a prime number $p \in \mathcal{P}$ divides $n + j$ and $n+j'$ for $j \neq j' \in \{1,\dots, k\}$, it divides $|j-j'|
\in \{1, \dots, k-1\}$, and then $p < k$. Hence, the rough parts of $(n+j)^{m_j}$ are
pairwise coprime. Now, if $g_1, \dots, g_k$ are the $q$-th power free integers such that $\sharp_k[(n+j)^{m_j}] \in g_j \mathbb{N}^q$, we have $g_1 g_2 \dots g_k \in g (\mathbb{Q}_+^*)^q$, and since $g_1, \dots, g_k$ are coprime, $g_1 g_2 \dots g_k$ is $q$-th power free (as $g$), which implies $g_1 \dots g_k = g$.
Hence
$$\mathcal{N}(q,k,g,N',N)
\leq \sum_{g_1 g_2 \dots g_k = g}
\left| \left\{n \in \{N'+1, \dots N\},
\, \forall j \in \{1, \dots, k\}, \;
\sharp_{k}[(n+j)^{m_j}] \in g_j \mathbb{N}^q \right\} \right|. $$
Let us now fix an index $j_0$ such that $m_{j_0}$ is not a multiple of $q$. We have
$$\mathcal{N}(q,k,g,N',N)
\leq \sum_{g_1 g_2 \dots g_k = g}
\left| \left\{n \in \{N'+1, \dots N\},
\, \sharp_{k}[ (n +j_0)^{m_{j_0}}] \in g_{j_0} \mathbb{N}^q, \forall j \neq j_0, \;
\operatorname{rad}(g_j)|(n+j) \right\} \right|,$$
where $\operatorname{rad}(g_j)$ denotes the product of the prime factors of $g_j$.
The condition on $(n +j_0)^{m_{j_0}}$
means that for all $p \in \mathcal{P}$,
$p \geq k$,
$$m_{j_0} v_p (n+j_0) \equiv v_p (g_{j_0})
\, (\operatorname{mod. } q),$$
i.e. $v_p (g_{j_0})$ is divisible by $\operatorname{gcd}( m_{j_0}, q)$ and
$$(m_{j_0}/ \operatorname{gcd}( m_{j_0}, q)) v_p (n+j_0) \equiv
v_p (g_{j_0})/ \operatorname{gcd}( m_{j_0}, q)\, (\operatorname{mod. } \rho_{j_0}),$$
where $\rho_{j_0} := q/ \operatorname{gcd}( m_{j_0}, q) $. Since $m_{j_0}/ \operatorname{gcd}( m_{j_0}, q)$ is coprime
with $\rho_{j_0}$, the last congruence is
equivalent to a congruence modulo $\rho_{j_0}$ between $v_p(n+j_0)$ and a fixed integer, which is not divisible by
$\rho_{j_0}$ if and only if $p$ divides
$g_{j_0}$. We deduce that the
condition on $(n+j_0)^{m_{j_0}}$ implies that $\sharp_k (n+j_0) \in h(q,m_{j_0},g_{j_0}) \mathbb{N}^{\rho_{j_0}}$, i.e.
$$n+j_0 = \alpha h(q,m_{j_0},g_{j_0})
A^{\rho_{j_0}},$$
where $\alpha$ is a $\rho_{j_0}$-th power
free integer whose prime factors are strictly smaller than $k$, $A$ is an integer and $h(q,m_{j_0},g_{j_0})$
is an integer depending only on $q$, $m_{j_0}$ and $g_{j_0}$, which is divisible by
$\operatorname{rad} (g_{j_0})$.
For a fixed value of $\alpha$, the
values of $A$ should be in the interval
$$I = \left( \big((N' +j_0)/[\alpha h(q,m_{j_0}, g_{j_0})] \big)^{1/\rho_{j_0}},
\big((N +j_0)/[\alpha h(q,m_{j_0}, g_{j_0})] \big)^{1/\rho_{j_0}} \right],$$
whose size is at most
$$ [\operatorname{rad}(g_{j_0})]^{-1/\rho_{j_0}} [(N+j_0)^{1/\rho_{j_0}} -
(N'+j_0)^{1/\rho_{j_0}} ]
\leq 1+ \left(\frac{N - N'}{\operatorname{rad}(g_{j_0})} \right)^{1/2},
$$
by the concavity of the power $1/\rho_{j_0}$ and the fact that $\rho_{j_0}
\geq 2$ (since $m_{j_0}$ is not divisible by $q$), which implies
$x^{1/\rho_{j_0}} \leq 1 + \sqrt{x}$.
Now, the conditions on $n+j$ for $j \neq j_0$ imply a condition of congruence
for $\alpha h (q, m_{j_0}, g_{j_0}) A^{\rho_{j_0}}$, modulo all the primes dividing one of the $g_j$'s for $j \neq j_0$. These primes do not divide $\alpha$, since $\alpha$ has all its prime factors smaller than $k$, while $g_j$ divides $\sharp_k[(n + j)^{m_j}]$. They also do not divide $h (q, m_{j_0}, g_{j_0})$,
since this integer has the same prime factors as $g_{j_0}$, which is prime with $g_j$.
Hence, we get a condition of congruence
for $A^{\rho_{j_0}}$ modulo all primes dividing $g_j$ for some $j \neq j_0$.
For each of these primes, this gives
at most $\rho_{j_0} \leq q$ congruence classes, and then, by the Chinese remainder theorem, we get at most $q^{\omega\left(\prod_{j \neq j_0}
g_j\right)}$ classes modulo
$\prod_{j \neq j_0} \operatorname{rad}(g_j)$, where $\omega$ denotes the number of prime factors of an integer.
The number of integers $A \in I$ satisfying the congruence conditions is then at most:
$$q^{\omega\left(\prod_{j \neq j_0}
g_j\right)}
\left[ 1 + \frac{1}{\prod_{j \neq j_0}
\operatorname{rad}(g_j)} \left(
1 + \frac{N-N'}{\operatorname{rad}(g_{j_0})}
\right)^{1/2} \right]
\leq [\tau(g)]^{\log q/\log 2}
\left[ 1+ \left(1 + \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right],$$
where $\tau(g)$ denotes the number of divisors of $g$.
Now, $\alpha$ has prime factors smaller than $k$ and $p$-adic valuations smaller than $q$, which certainly gives $\alpha \leq
(k!)^q$. Hence, by considering all the possible values of $\alpha$, and all the possible $g_1, \dots, g_k$, which should divide $g$, we deduce
$$ \mathcal{N}(q,k,g,N',N)
\leq (k!)^q
[\tau(g)]^{k + (\log q/\log 2)}
\left[2 + \left( \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right].$$
If $\mathcal{N}(q,k,g,N',N) > 0$, we have necessarily $$g \leq \prod_{j=1}^k (N+j)^{m_j} \leq (N+k)^{kq}
\leq (1+k)^{kq} N^{kq}.$$ Using the divisor bound, we deduce that for all $\epsilon > 0$, there exists $C^{(1)}_{q,k,
\epsilon}$ such that for all $g \leq
(1+k)^{kq} N^{kq}$,
$$2 (k!)^q
[\tau(g)]^{k + (\log q/\log 2)}
\leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon},$$
and then
$$\mathcal{N}(q,k,g,N',N)
\leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\left[ 1 + \left( \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right],$$
i.e.
$$\left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)_+ \leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
(N - N')^{1/2} (\operatorname{rad}(g))^{-1/2}.$$
Summing the square of this bound for all possible $g$ gives
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} \left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)^2_+
\leq \left(C^{(1)}_{q,k,
\epsilon}\right)^2 N^{2 \epsilon} (N-N')
\sum_{g \geq 1, q\operatorname{-th \, power \, free}}
\frac{\mathds{1}_{g \leq (1+k)^{kq} N^{kq}}}{\operatorname{rad}(g)}.$$
Now, since all numbers up to
$(1+k)^{kq}N^{kq}$ have prime factors smaller than this quantity, we deduce, using the multiplicativity of the radical:
\begin{align*}\sum_{g \geq 1, q\operatorname{-th \, power \, free}}
\frac{\mathds{1}_{g \leq (1+k)^{kq} N^{kq}}}{\operatorname{rad}(g)}
& \leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
\sum_{j=0}^{q-1} \frac{1}{\operatorname{rad} (p^j)} \right)
\\ & \leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
1 + \frac{q-1}{p} \right) \,
\leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
1 - \frac{1}{p} \right)^{1-q}
\end{align*}
which, by Mertens' theorem, is smaller
than a constant, depending on $k$ and $q$,
times $\log^{q-1}(1+ N)$.
We deduce that there exists a constant
$C^{(2)}_{q,k,\epsilon} >0$, such that
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} \left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)^2_+
\leq C^{(2)}_{q,k,\epsilon}
N^{3\epsilon} (N-N').$$
Now, it is clear that
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}}\mathcal{N}(q,k,g,N',N)
= N-N',$$
since this sum counts all the integers $n$
from $N'+1$ to $N$, grouped according to
the $q$-th power free part of
$\sharp_k\left( \prod_{j=1}^k (n+j)^{m_j}
\right)$.
Using the inequality $x^2 \leq (x-a)_+^2 + 2ax$, valid for all $a, x \geq 0$, we deduce
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}}[\mathcal{N}(q,k,g,N',N)]^2
\leq C^{(2)}_{q,k,\epsilon}
N^{3\epsilon} (N-N') +
2 C^{(1)}_{q,k,
\epsilon} N^{\epsilon} (N-N').$$
This result gives the first inequality of the proposition, for
$$C_{q,k,\epsilon}
= C^{(2)}_{q,k,\epsilon/3}
+ 2C^{(1)}_{q,k,\epsilon/3}.$$
The second inequality is obtained by taking $N' = 0$ and dividing by $N^2$.
\end{proof}
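The decomposition into smooth and rough parts, and the grouping of the integers $n$ by the $q$-th power free part $g$, are easy to experiment with; the following Python helper (an illustration of ours, not part of the proof) computes $\sharp_k(r)$ and the fiber sizes $\mathcal{N}(q,k,g,0,N)$.
\begin{verbatim}
# Compute the k-rough part sharp_k(r), the q-th-power-free part, and the
# fiber sizes N(q, k, g, 0, N) appearing in the proof.
from math import prod
from collections import Counter
from sympy import factorint

def rough_part(r, k):
    # keep only the prime factors p >= k
    return prod(p ** v for p, v in factorint(r).items() if p >= k)

def qfree_part(r, q):
    # reduce every p-adic valuation modulo q
    return prod(p ** (v % q) for p, v in factorint(r).items())

k, q, N = 3, 2, 1000                # here m_1 = ... = m_k = 1
fibers = Counter(
    qfree_part(rough_part(prod(n + j for j in range(1, k + 1)), k), q)
    for n in range(1, N + 1))
print(fibers.most_common(5))        # largest fibers N(q, k, g, 0, N)
\end{verbatim}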
\begin{corollary}
For all $(m_1, \dots, m_k) \in \mathbb{Z}^k$, $\hat{\mu}_{k,N}(m_1, \dots, m_k)$ converges
in $L^2$, and then in probability, to the corresponding Fourier coefficient of the uniform distribution $\mu_{k,q}$ on $\mathbb{U}_q^k$. In other words, $\mu_{k,N}$ converges weakly in probability to $\mu_{k,q}$.
\end{corollary}
We also have a strong law of large numbers.
\begin{proposition}
Almost surely, $\mu_{k,N}$ weakly converges to $\mu_{k,q}$. More precisely, for all $(t_1, \dots, t_k) \in (\mathbb{U}_q)^k$, the proportion of $n \leq N$ such that
$(X_{n+1}, \dots, X_{n+k}) = (t_1, \dots, t_k)$ is almost surely $q^{-k} + O(N^{-1/2 + \epsilon})$ for all $\epsilon > 0$.
\end{proposition}
\begin{proof}
By Lemma \ref{lemmaLLN} and Proposition
\ref{qboundL2}, we deduce that almost surely, for all $\epsilon > 0$,
$0 \leq m_1, \dots, m_k \leq q-1$,
$(m_1, \dots, m_k) \neq (0,0, \dots, 0)$,
$$\hat{\mu}_{k,N}(m_1, \dots, m_k)
= O( N^{-1/2 + \epsilon}).$$
Since we have finitely many values of $m_1, \dots, m_k$, we can take the $O$ uniform in $m_1, \dots, m_k$. Then, by inverting the discrete Fourier transform on $\mathbb{U}_q^k$, we deduce the claim.
\end{proof}
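The convergence of the pattern frequencies to $q^{-k}$ is easy to visualize in simulation; the following Python script (our own illustration) samples $X_p$ uniformly on $\mathbb{U}_q$ and prints the deviations of the empirical frequencies from $q^{-k}$.
\begin{verbatim}
# Empirical pattern frequencies for X_p i.i.d. uniform on the q-th roots
# of unity: each pattern should have frequency q^{-k} + O(N^{-1/2+eps}).
import numpy as np
from collections import Counter
from sympy import factorint

rng = np.random.default_rng(3)
q, k, N = 3, 2, 20000
expo = {}                            # X_p = exp(2 pi i expo[p] / q)

def e(n):                            # exponent of X_n in Z/qZ
    s = 0
    for p, v in factorint(n).items():
        expo.setdefault(p, int(rng.integers(q)))
        s += expo[p] * v
    return s % q

es = [e(n) for n in range(1, N + k + 1)]
pat = Counter(tuple(es[n + j] for j in range(k)) for n in range(N))
for t in sorted(pat):
    print(t, pat[t] / N - q ** (-k))
\end{verbatim}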
\section{More general distributions on the unit circle} \label{general}
In this section, $(X_p)_{p \in \mathcal{P}}$ are i.i.d., with any distribution on the unit circle. We will study the empirical distribution of $(X_n)_{n \geq 1}$, but not of the patterns $(X_{n+1}, \dots, X_{n+k})_{n \geq 1}$ for $k \geq 2$.
More precisely, the goal of the section is to prove a strong law of large numbers for $N^{-1} \sum_{n=1}^N \delta_{X_n}$ when $N$ goes to infinity. We will use the following result, due to Hal\'asz, Montgomery and Tenenbaum (see \cite{GS}, \cite{M}, \cite{Te} p. 343):
\begin{proposition} \label{HMT}
Let $(Y_n)_{n \geq 1}$ be a multiplicative function such that $|Y_n| \leq 1$ for all $n \geq 1$. For $N \geq 3, T > 0$, we set
$$M(N,T) := \underset{|\lambda| \leq 2T} {\min}
\sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re (Y_p p^{-i\lambda})}{p}.$$
Then:
$$\left|\frac{1}{N} \sum_{n=1}^N Y_n \right|
\leq C \left[(1 + M(N,T)) e^{- M(N,T) } + T^{-1/2} \right],$$
where $C > 0$ is an absolute constant.
\end{proposition}
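To make the quantity $M(N,T)$ concrete, the following Python sketch (an illustration of ours) evaluates it on a grid of $\lambda$ for the deterministic choice $Y_p = -1$, i.e. $Y_n$ the Liouville function; the grid minimum is of course only an approximation of the true minimum.
\begin{verbatim}
# Grid approximation of M(N, T) = min_{|lambda| <= 2T}
#   sum_{p <= N} (1 - Re(Y_p p^{-i lambda})) / p, here with Y_p = -1.
import numpy as np
from sympy import primerange

def M(N, T, Yp=-1.0, grid=2001):
    ps = np.array(list(primerange(2, N + 1)), dtype=float)
    best = np.inf
    for lam in np.linspace(-2 * T, 2 * T, grid):
        val = np.sum((1 - np.real(Yp * ps ** (-1j * lam))) / ps)
        best = min(best, val)
    return best

print(M(10_000, 10.0))
\end{verbatim}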
From this result, we show the following:
\begin{proposition}
Let $(Y_n)_{n \geq 1}$ be a random multiplicative function such that $(Y_p)_{p \in \mathcal{P}}$ are i.i.d., with $\mathbb{P} [|Y_p| \leq 1] = 1$, and $\mathbb{P} [Y_p = 1] < 1$. Then, almost surely,
for all $c \in (0, 1 - \mathbb{E}[\Re(Y_2)])$,
$$\frac{1}{N} \sum_{n=1}^N Y_n
= O((\log N)^{-c}).$$
\end{proposition}
\begin{proof}
First, we observe that for $1 < N' < N$ integers, $\lambda > 0$,
$$\sum_{p \in \mathcal{P}, N' < p \leq N}
p^{-1 - i\lambda} =
\int_{N'}^{N} \frac{d \theta(x)}{x^{1+i \lambda} \log x} = \left[ \frac{\theta(x)}{x^{1+i \lambda} \log x} \right]_{N'}^N
+ \int_{N'}^{N} \left( \frac{(1+i \lambda)}{x^{2 + i \lambda} \log x}
+ \frac{1}{x^{1 + i \lambda} \, x \log^2 x} \right) \theta(x) dx,$$
where, by a classical refinement of the prime number theorem,
$$\theta(x) := \sum_{p \in \mathcal{P},
p \leq x} \log p = x + O_A(x/\log^{A} x)$$
for all $A > 1$.
The bracket is dominated by $1/\log (N')$, the second part of the last integral is dominated by
$$\int_{N'}^{\infty} \frac{dx}{x \log^2 x} = \int_{\log N'}^{\infty} \frac{dy}{y^2} = 1/\log (N'),$$
and the error term of the first part is dominated by $(1+\lambda)/ \log^{A} (N')$. Hence
$$\sum_{p \in \mathcal{P}, N' < p \leq N} p^{-1-i\lambda}= I_{N',N, \lambda} + O_A \left( \frac{1}{\log N'} +
\frac{\lambda}{ \log^{A} N'} \right),$$
where
$$I_{N',N, \lambda} = (1+i\lambda) \int_{N'}^{N} \frac{dx}{x^{1 + i \lambda} \log x} =
(1+i\lambda) \int_{\lambda \log N'}^{ \lambda \log N}
\frac{e^{-i y}}{y} dy.
$$
Now, for all $a \geq 1$,
$$\int_{a}^{\infty} \frac{e^{-i y}}{y} dy = \left[ \frac{e^{-i y}}{-iy} \right]_a^{\infty}
- \int_{a}^{\infty} \frac{e^{-i y}}{iy^2} dy = O(1/a),$$
which gives
$$I_{N',N, \lambda}
= \int_{\lambda \log N'}^{ \lambda \log N}
\frac{e^{-i y}}{y} dy
+ O(1/\log N').$$
Now, the integral of $(\sin y)/y$ on $\mathbb{R}_+$ is conditionally convergent, so its partial integrals over any interval are uniformly bounded, which implies
$$\Im(I_{N',N,\lambda}) = O(1).$$
We deduce
$$\Im \left(\sum_{ p \in \mathcal{P},
N' < p \leq N} p^{-1-i\lambda} \right)
= O_A \left( 1 + \frac{\lambda}{\log^A N'} \right).$$
Bounding the sum on primes smaller than $N'$ by taking the absolute value, we get:
$$\left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq \log \log (3+N') + O_A \left( 1 + \frac{\lambda}{\log^A N'} \right),$$
and then by taking $N' = e^{(\log N)^{10/A}}$, for $N$ large enough depending on $A$,
$$\left| \Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq \frac{ 10\log \log N}{A} + O_A \left( 1 + \frac{\lambda}{\log^{10} N} \right),
$$
$$\underset{N \rightarrow \infty}{\lim \sup} \sup_{0 < \lambda \leq \log^{10} N}
(\log \log N)^{-1} \left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq 10/A,$$
and then by letting $A \rightarrow \infty$ and using the symmetry of the imaginary part for $\lambda \mapsto -\lambda$,
$$ \sup_{|\lambda| \leq \log^{10} N} \left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
= o(\log \log N)$$
for $N \rightarrow \infty$.
Now, for all $\rho$ whose real part
is in $[-1,1)$, we have
\begin{align*}\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho \, p^{-i \lambda})}{p}
& \geq
\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho) \Re( p^{-i \lambda})}{p}
\\ & - \max_{|\lambda| \leq \log^{10} N} \left| \sum_{p \in \mathcal{P}, p \leq N}
\frac{\Im(\rho) \Im( p^{-i \lambda})}{p} \right|.
\end{align*}
The first term is at least $\sum_{p \in \mathcal{P}, p \leq N} (1 - \Re(\rho))/p$, hence at least $[1 - \Re(\rho) + o(1)] \log \log N$. The second term is $o(\log \log N)$ by the previous discussion. Hence,
\begin{equation}\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho \, p^{-i \lambda})}{p} \geq [1 - \Re(\rho) + o(1)] \log \log N. \label{eqrho}
\end{equation}
In fact, we have equality, as we check by taking $\lambda = 0$.
Now, let $\rho := \mathbb{E}[Y_2]$, and
$Z_{p,\lambda} := \Re[(Y_p - \rho)
p^{-i\lambda}]$. The variables
$(Z_{p,\lambda})_{p \in \mathcal{P}}$ are centered, independent,
bounded by $2$. By Hoeffding's lemma,
for all $u \geq 0$,
$$\mathbb{E}[e^{u Z_{p,\lambda}/p}]
\leq e^{2(u/p)^2},$$
and then by independence,
$$\mathbb{E}[e^{u \sum_{p \in \mathcal{P}, p \leq N} Z_{p,\lambda}/p}] \leq e^{2u^2 \sum_{p \in \mathcal{P}, p \leq N} p^{-2}} \leq e^{2u^2 (\pi^2/6)} \leq e^{4u^2},$$
and then, by Markov's inequality applied to $e^{uS}$ with $S := \sum_{p \in \mathcal{P}, p \leq N} Z_{p,\lambda}/p$ and $u = (1/8)(\log \log N)^{3/4}$,
\begin{align*} \mathbb{P} \left[
\sum_{p \in \mathcal{P}, p \leq N} \frac{Z_{p,\lambda}}{p}
\geq (\log \log N)^{3/4} \right]
& \leq e^{-(\log \log N)^{3/2} / 8}
\mathbb{E} \left[ e^{(1/8)(\log \log N)^{3/4} \sum_{p \in \mathcal{P}, p \leq N} \frac{Z_{p,\lambda}}{p} } \right]
\\ & \leq e^{-(\log \log N)^{3/2} / 8}
e^{4 [(1/8)(\log \log N)^{3/4}]^2}
= e^{- (\log \log N)^{3/2} / 16}.
\end{align*}
Applying the same inequality to $-Z_{p, \lambda}$, we deduce
$$\mathbb{P} \left[ \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} \right]
\leq 2 e^{- (\log \log N)^{3/2} / 16},$$
$$\mathbb{P} \left[ \max_{|\lambda| \leq
\log^{10} N, \lambda \in (\log^{-1} N) \mathbb{Z} } \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} \right]
= O\left( \log^{11}N \, e^{- (\log \log N)^{3/2} /16} \right).$$
The derivative with respect to $\lambda$ of the last sum over $p$ is dominated by
$$ \sum_{p \in \mathcal{P}, p \leq N}
\frac{\log p}{p} = O(\log N)$$
and then the sum cannot vary more than
$O(1)$ when $\lambda$ runs between two consecutive multiples of $\log^{-1} N$.
Hence,
\begin{align*}
\mathbb{P} \left[ \max_{|\lambda| \leq
\log^{10} N} \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} + O(1) \right]
& = O \left( (\log N)^{11 - \sqrt{\log \log N}/16} \right)
\\& = O(\log^{-10} N).
\end{align*}
If we define, for $k \geq 1$, $N_k$ as the integer part of $e^{k^{1/5}}$, we deduce, by Borel-Cantelli lemma, that almost surely, for all but finitely many $k \geq 1$,
$$\max_{|\lambda| \leq
\log^{10} N_k} \left|
\sum_{p \in \mathcal{P}, p \leq N_k} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right| \leq (\log \log N_k)^{3/4} + O(1).$$
If this event occurs, we deduce, using \eqref{eqrho},
$$\min_{|\lambda| \leq \log^{10} N_k} \sum_{p \in \mathcal{P}, p \leq N_k}
\frac{1 - \Re(Y_p \, p^{-i \lambda})}{p} \geq [1 - \Re(\rho) + o(1)] \log \log N_k.$$
Then, by Proposition \ref{HMT}, we get
$$\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n \right|
\leq C \left[ \left( 1 +
[1 - \Re(\rho) + o(1)] \log \log N_k
\right) (\log N_k)^{-(1 - \Re(\rho)) + o(1)} + \sqrt{2} \, \log^{-5} N_k \right].$$
Since $- (1 - \Re(\rho)) \geq -2 > -5$, we deduce
$$\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n \right| =
O((\log N_k)^{-(1 - \Re(\rho)) + o(1)}),$$
which gives the claimed result along the sequence $(N_k)_{k \geq 1}$.
Now, if $N \in [N_k, N_{k+1}]$, we have, since all the $Y_n$'s have modulus at most $1$,
\begin{align*}
\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n
- \frac{1}{N} \sum_{n=1}^{N} Y_n
\right|
& \leq
\left| \frac{1}{N} \sum_{n=N_k+1}^N Y_n
\right| + \left( \frac{1}{N_k} - \frac{1}{N} \right) \left|\sum_{n=1}^{N_k} Y_n \right|
\leq \frac{N - N_k}{N} + N_k \left(\frac{1}{N_k} - \frac{1}{N} \right)
\\ & = \frac{2(N - N_k)}{N}
\leq \frac{2 (e^{(k+1)^{1/5}} - e^{k^{1/5}} + 1)}{e^{k^{1/5}} - 1}
= O \left(e^{(k+1)^{1/5} - k^{1/5}} - 1 + e^{-k^{1/5}} \right)
\\ & = O( k^{-4/5} ) = O(\log^{-4} N).
\end{align*}
This allows us to remove the restriction to the sequence $(N_k)_{k \geq 1}$.
\end{proof}
Using Fourier transform, we deduce a law of large numbers for the empirical measure $\mu_{N} = \frac{1}{N} \sum_{n=1}^N \delta_{X_n}$, under the assumptions of this section.
\begin{proposition}
If for all integers $q \geq 1$, $\mathbb{P} [X_2 \in \mathbb{U}_q] < 1$, then almost surely, $\mu_N$ tends to the uniform measure on the unit circle.
\end{proposition}
\begin{proof}
For all $m \neq 0$, $X_2^m$ takes its values on the unit circle, and it is not a.s. equal to $1$. Applying the previous proposition to $Y_n = X_n^m$, we deduce that $\hat{\mu}_N(m)$ tends to zero almost surely, which gives the desired result.
\end{proof}
\begin{proposition}
If for $q \geq 2$, $X_2 \in \mathbb{U}_q$ almost surely, but $\mathbb{P} [X_2 \in \mathbb{U}_r] < 1$ for all strict divisors $r$ of $q$, then almost surely, $\mu_N$ tends to the uniform measure on $\mathbb{U}_q$. More precisely, almost surely, for all $t \in \mathbb{U}_q$, the proportion of $n \leq N$ such that $X_n = t$ is $q^{-1} + O((\log N)^{-c})$, as soon as
$$c < \inf_{1 \leq m \leq q-1}
\left(1 - \mathbb{E}[\Re(X_2^m)] \right),$$
this infimum being strictly positive.
\end{proposition}
\begin{proof}
The infimum is strictly positive since by assumption, $\mathbb{P}[X_2^m = 1] < 1$ for all $m \in \{1, \dots, q-1\}$.
Now, we apply the previous result to $Y_n = X_n^m$ for all $m \in \{1, \dots, q-1\}$, and we get the claim after doing a discrete Fourier inversion.
\end{proof}
\section{Introduction}
Physical systems with stochastic or chaotic properties have some randomness in the setup of the fundamental hamiltonian, which can be effectively simulated in the context of random matrix theory. When choosing an ensemble from random matrix theory for a chaotic hamiltonian, we often need to consider the symmetries in the dynamics of the related physical system. The choice of standard matrix ensembles from symmetries historically comes from the invention of Dyson \cite{testbook}, the so-called three-fold way classifying the Gaussian Unitary Ensemble (GUE), the Gaussian Orthogonal Ensemble (GOE), and the Gaussian Symplectic Ensemble (GSE). For a more general symmetry discussion of interacting systems, the Altland-Zirnbauer theory gives a more complete description as a ten-fold classification \cite{Zirnbauer1996,AZ1997}. In practical usage, one of the most celebrated applications is the ten-fold classification of interactions inside topological insulators and topological phases \cite{ludwig,ki}.
\\
\\
In recent studies, the rising interest in the Sachdev-Ye-Kitaev (SYK) model gives another profound application of the random matrix theory classification. The SYK model \cite{kitaev,Sachdev:1992fk} is a
microscopic quantum hamiltonian with random Gaussian non-local
couplings among Majorana fermions. Being maximally chaotic and nearly
conformal, this model can be treated as a holographic dual of a
quantum black hole with $\text{AdS}_2$ horizon through the (near)
AdS/CFT correspondence
\cite{Almheiri:2014cka,Cvetic:2016eiv,Fu:2016yrv,Polchinski:2016xgd,Jevicki:2016bwu,Maldacena:2016hyu,Jensen:2016pah,Jevicki:2016ito,Bagrets:2016cdf,Maldacena:2016upp}. In
the recent research people have also discussed several generalizations
of the SYK model
\cite{Gu:2016oyy,Gross:2016kjj,Berkooz:2016cvq,Fu:2016vas}, such as
higher dimensional generalizations and supersymmetric
constraints. Some other related issues and similar models are
discussed in
\cite{old,Sachdev2,Sannomiya:2016mnj,Sachdev:2010um,Garcia-Alvarez:2016wem,Hayden:2007cs,Anninos:2013nra,Sachdev:2015efa,Perlmutter:2016pkf,Anninos:2016szt,Danshita:2016xbo,Roberts:2016hpo,Betzios:2016yaq,Witten:2016iux,Patel:2016wdy,Klebanov:2016xxf,Blake:2016jnn,Nishinaka:2016nxg,Davison:2016ngz,Anninos:2016klf,Liu:2016rdi,Magan:2016ehs,Peng:2016mxj,Krishnan:2016bvg,Turiaci:2017zwd,Ferrari:2017ryl,Garcia-Garcia:2017pzl,Bi:2017yvx,Ho:2017nyc}. In the recent discussions, people have discovered that the SYK
hamiltonian has a clear correspondence with the categories of the three-fold standard
Dyson ensembles (unitary, orthogonal and symplectic) in random matrix theory \cite{You:2016ldz,Garcia-Garcia:2016mno,Dyer:2016pou,Cotler:2016fpe}. In the recent work, \cite{Dyer:2016pou,Cotler:2016fpe}, it is understood that the time-dependent quantum dynamics of the temperature-dependent spectral form factor, namely, the combination of partition functions with a special analytic continuation in the SYK model, is computable in the late time by form factors in the random matrix theory with the same analytic continuation, as a probe of the discrete nature of the energy spectrum in a quantum black hole, and also a solid confirmation of the three-fold classification \cite{Cotler:2016fpe}.
\\
\\
In the route towards Dyson's classification, one only considers the set of simple unitary or anti-unitary operators as symmetries when commuting or anticommuting with the hamiltonian. An interesting question would be: what is the influence of supersymmetry, the symmetry between fermions and bosons in the spectrum, on the classification of symmetry classes?
\\
\\
As illuminated by past research, supersymmetry \cite{Sohnius:1985qm} has several crucial influences on the study of disordered systems and statistical physics \cite{Efetov:1997fw}, and could be emergent from condensed matter models \cite{Lee:2010fy}. Originating from particle physics, supersymmetry enlarges the global symmetry group of a theory, carries a rich algebraic structure used in several models of quantum mechanics and quantum field theory, and is extremely useful to simplify and clarify classical or quantized theories. In the recent study of the SYK model, the supersymmetric generalization of the original SYK model has been discussed in detail in \cite{Fu:2016vas}, which shows several different behaviors arising from the supersymmetric extension. This model might give some implications for the quantum gravity structure of two-dimensional black holes in a supersymmetric theory, and is also related to a conjecture in \cite{Cotler:2016fpe} for spectral form factors and correlation functions in super Yang-Mills theory.
\\
\\
In order to explore the supersymmetric constraints on the random matrix theory classification, in this paper we study the symmetry classification and random matrix behavior of the $\mathcal{N}=1$ supersymmetric extension of the SYK model given by Fu-Gaiotto-Maldacena-Sachdev's prescription \cite{Fu:2016vas}. The effects of supersymmetry on the symmetry classification can be summarized in the following aspects,
\begin{itemize}
\item Supersymmetry forces the hamiltonian to take a quadratic form; namely, we can write $H$ as the square of $Q$. This condition greatly changes the distribution of the eigenvalues. In random matrix language \cite{statistical}, if $Q$ is a Gaussian random matrix, then $H$ is a Wishart-Laguerre random matrix, with the eigenvalue distribution changing from Wigner's semi-circle to the Marchenko-Pastur distribution. In another sense, the quadratic structure folds the eigenvalues of $Q$ and imposes a positivity condition on all eigenvalues: if the eigenvalues of $Q$ come in pairs with positive and negative signs, squaring $Q$ causes larger degeneracies and a folded structure in the energy eigenvalues. Moreover, the degree of the coupling changes when considering $Q$ instead of $H$. For instance, in the $\mathcal{N}=1$ extended SYK model, $Q$ is a non-local three-point coupling, which is odd. This changes the previous classification of the hamiltonian based on the representation of the Clifford algebra from a mathematical point of view.
\item We find that the Witten index or Witten parity operator $(-1)^F$, which is well known as a criterion for supersymmetry breaking \cite{Sohnius:1985qm,Witten:1981nf,Cooper:1982dm,Cooper:1994eh}, is crucial in classifying the symmetry class of the supercharge $Q$. Some evidence for this point can also be found in other models or setups. For instance, Witten parity is the Klein operator which separates the bosonic and fermionic sectors in $\mathcal{N}=2$ supersymmetric systems \cite{oikonomou:2014kba,Oikonomou:2014jea}. \cite{MateosGuilarte:2016mxm} provides a more nontrivial example, where the odd parity operators are used to move states along a chain of different fermion sectors. Conversely, in some systems where one can define a graded algebra, the Klein operator serves as a key factor in realizing supersymmetry, which is helpful in models of bosonization and higher spin theories, etc. \cite{Brink:1993sz,Plyushchay:1994re,Plyushchay:1996ry,Bekaert:2005vh}.
For example, \cite{Plyushchay:1996ry} constructs the bosonized Witten supersymmetric quantum mechanics by realizing the Klein operator as a parity operator, and \cite{Plyushchay:1994re} realizes a Bose-Fermi transformation with the help of the deformed Heisenberg algebra which involves a Klein operator.
Another interesting application of Witten operator is \cite{VanHove:1982cc}, where the author argues that incorporating the Witten operator is crucial in some computation in supersymmetric systems with finite temperature. In the supersymmetric SYK model we are considering, Witten parity and the anti-unitary operator together become a new anti-unitary operator, which will significantly enlarge the set of symmetries in the hamiltonian, and change the eight-fold story for supercharge $Q$ and hamiltonian $H$.
\end{itemize}
These aspects will be investigated in a clearer and more detailed way in the paper.
\\
\\
This paper is organized as follows. In Section \ref{models} we review the model construction and thermodynamics of the SYK model and its
supersymmetric extensions. In Section \ref{RMT} we discuss the random matrix classification for these models, especially the supersymmetric extensions of the SYK model. In Section \ref{data} we present our numerical confirmation of the symmetry classifications from exact diagonalization, including the computation of the density of states and spectral form factors. In Section \ref{conclu}, we arrive at a conclusion and discuss directions for future work. In the appendix, we review some background to make this paper self-contained, including basics of the Altland-Zirnbauer theory and a calculation of the random matrix theory measure.
\section{Introduction on models}\label{models}
In this paper, we will mostly focus on SYK models and their extensions. Thus, before the main investigation, we provide a brief introduction to the relevant models in order to be self-contained.
\subsection{The SYK model}
In this part, we briefly review the SYK model, mainly following \cite{Maldacena:2016hyu}. The SYK model is a microscopic model with some properties of a quantum black hole. The hamiltonian\footnotemark is given by
\begin{align}
H=\sum\limits_{i<j<k<l}{{{J}_{ijkl}}{{\psi }^{i}}{{\psi }^{j}}{{\psi }^{k}}{{\psi }^{l}}}
\end{align}
where $\psi^i$ are Majorana fermions coupled by four-point random couplings with Gaussian distribution
\begin{align}
\left\langle {{J}_{ijkl}} \right\rangle =0~~~~~~\left\langle J_{ijkl}^{2} \right\rangle =\frac{6J_{\text{SYK}}^{2}}{{{N}^{3}}}=\frac{12\mathcal{J}_{\text{SYK}}^{2}}{{{N}^{3}}}
\end{align}
where $J_\text{SYK}$ and $\mathcal{J}_\text{SYK}$ are positive constants, and $J_\text{SYK}=\sqrt{2}\mathcal{J}_\text{SYK}$. The large $N$ partition function is given by
\begin{align}
Z(\beta )\sim\exp (-\beta {{E}_{0}}+N{{s}_{0}}+\frac{cN}{2\beta })
\end{align}
where $E_0$ is the total ground state energy, proportional to $N$, roughly $E_0=-0.04 N$ \cite{Cotler:2016fpe}, and $s_0$ is the ground state entropy per fermion, which can be estimated theoretically \cite{Maldacena:2016hyu},
\begin{align}
{{s}_{0}}=\frac{G}{2\pi }+\frac{\log 2}{8}=0.2324
\end{align}
where $G$ is Catalan's constant, and $c$ is the specific heat, which can be computed as
\begin{align}
c=\frac{4{{\pi }^{2}}{{\alpha }_{S}}}{\mathcal{J}_\text{SYK}}=\frac{0.3959}{J_\text{SYK}}
\end{align}
and $\alpha_S=0.0071$ is a positive constant. This contribution $c/\beta$ is from the Schwarzian, the
quantum fluctuation near the saddle point of the effective action in the SYK model. The Schwarzian partition function is
\begin{align}
{{Z}_\text{Sch}}(\beta)\sim\int{\mathcal{D}\tau (u)}\exp \left( -\frac{\pi N{{\alpha }_{S}}}{\beta \mathcal{J}_\text{SYK}}\int_{0}^{2\pi }{du\left( \frac{\tau '{{'}^{2}}}{\tau {{'}^{2}}}-\tau {{'}^{2}} \right)} \right)
\end{align}
where the path integral is taken over all possible reparametrizations $\tau (u)$ of the thermal circle in different equivalence classes of the $\text{SL}(2,\mathbb{R})$ symmetry. The Schwarzian corresponds to the broken reparametrization symmetry of the SYK model. One can compute the one-loop correction from the soft mode of the broken symmetry,
\begin{align}
{{Z}_{\text{Sch}}}(\beta )\sim\frac{1}{{{(\beta J_\text{SYK})}^{3/2}}}\exp \left( \frac{cN}{2\beta } \right)
\end{align}
As a result, one can account for the correction from the soft mode by including an overall one-loop factor $(\beta J_\text{SYK})^{-3/2}$. The density of states can also be predicted from the contour integral of the partition function as
\begin{align}
\rho (E)\sim\exp (N{{s}_{0}}+\sqrt{2cN(E-{{E}_{0}})})
\end{align}
\footnotetext{One could also generalize the SYK model to general $q$ point non-local interactions where $q$ are even numbers larger than four. The hamiltonian should be
\begin{align}
H=i^{q/2}\sum\limits_{{{i}_{1}}<{{i}_{2}}<\ldots <{{i}_{q}}}{{{J}_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{q}}}}{{\psi }^{{{i}_{1}}}}{{\psi }^{{{i}_{2}}}}\ldots {{\psi }^{{{i}_{q}}}}}
\end{align}
where
\begin{align}
\left\langle {{J}_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{q}}}} \right\rangle =0~~~~~~\left\langle J_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{q}}}^{2} \right\rangle =\frac{{J_\text{SYK}^2}(q-1)!}{{{N}^{q-1}}}=\frac{{{2}^{q-1}}}{q}\frac{{\mathcal{J}_{\text{SYK}}^{2}}(q-1)!}{{{N}^{q-1}}}
\end{align}
Sometimes we will discuss the general $q$ in this paper but we will mainly focus on the $q=4$ case.
}
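Before turning to the supersymmetric extension, we note that the statements above can be probed by exact diagonalization at small $N$. The following Python sketch (a minimal illustration of ours, not the code used for the numerics of Section \ref{data}) builds Majorana fermions satisfying $\{\psi^i,\psi^j\}=\delta^{ij}$ through a Jordan-Wigner construction and diagonalizes one realization of the $q=4$ hamiltonian.
\begin{verbatim}
# Minimal exact diagonalization of the SYK model at small (even) N.
import numpy as np
from functools import reduce
from itertools import combinations

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(N):                      # {psi_i, psi_j} = delta_ij
    nd = N // 2
    return [reduce(np.kron, [Z] * a + [s] + [I2] * (nd - a - 1)) / np.sqrt(2)
            for a in range(nd) for s in (X, Y)]

def syk_hamiltonian(N, J=1.0, seed=0):
    psi, rng = majoranas(N), np.random.default_rng(seed)
    dim = 2 ** (N // 2)
    H = np.zeros((dim, dim), dtype=complex)
    sigma = np.sqrt(6 * J ** 2 / N ** 3)   # <J_{ijkl}^2> = 6 J^2 / N^3
    for i, j, k, l in combinations(range(N), 4):
        H += rng.normal(0, sigma) * psi[i] @ psi[j] @ psi[k] @ psi[l]
    return H

E = np.linalg.eigvalsh(syk_hamiltonian(12))
print(E.min() / 12, E.max() / 12)          # energy per fermion
\end{verbatim}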
\subsection{$\mathcal{N}=1$ supersymmetric extension}
Following \cite{Fu:2016vas}, in the supersymmetric extension of the SYK model, we first define the supercharge\footnotemark
\begin{align}
Q=i\sum\limits_{i<j<k}{{{C}_{ijk}}{{\psi }^{i}}{{\psi }^{j}}{{\psi }^{k}}}
\end{align}
for Majorana fermions $\psi^i$, where the coupling $C_{ijk}$ is a random tensor with Gaussian distribution,
\begin{align}
\left\langle {{C}_{ijk}} \right\rangle =0 ~~~~~~\left\langle C_{ijk}^{2} \right\rangle =\frac{2J_{\mathcal{N}=1}}{{{N}^{2}}}
\end{align}
where $J_{\mathcal{N}=1}$ is also a constant with mass dimension one. The square of the supercharge will give the hamiltonian of the model
\begin{align}\label{susyhamiltonian}
H={{E}_{c}}+\sum\limits_{i<j<k<l}{{{J}_{ijkl}}{{\psi }^{i}}{{\psi }^{j}}{{\psi }^{k}}{{\psi }^{l}}}
\end{align}
where
\begin{align}\label{susyenergy}
& {{E}_{c}}=\frac{1}{8}\sum\limits_{i<j<k}{C_{ijk}^{2}} ~~~~~~{{J}_{ijkl}}=-\frac{1}{8}\sum\limits_{a}{{{C}_{a[ij}}{{C}_{kl]a}}}
\end{align}
where $[\cdots]$ denotes the sum over all antisymmetric permutations. Besides the shifted constant $E_c$, the distribution of $J_{ijkl}$ is different from the original SYK model because $J_{ijkl}$ is not a free Gaussian variable, which changes the large $N$ behavior of this model. In the large $N$ limit, the model has an unbroken supersymmetry with a bosonic superpartner $b^i$. The Lagrangian of this model is given by
\begin{align}
L=\sum\limits_{i}{\left( \frac{1}{2}{{\psi }^{i}}{{\partial }_{\tau }}{{\psi }^{i}}-\frac{1}{2}{{b}^{i}}{{b}^{i}}+i\sum\limits_{j<k}{{{C}_{ijk}}{{b}^{i}}{{\psi }^{j}}{{\psi }^{k}}} \right)}
\end{align}
In this model, the Schwarzian is different from the original SYK model. We also have the expansion for the large $N$ partition function
\begin{align}
Z(\beta )\sim\exp (-\beta {{E}_{0}}+N{{s}_{0}}+\frac{cN}{2\beta })
\end{align}
But the results for $E_0$ and $s_0$ are different (while the specific heat is the same for the two models). In the large $N$ limit, the supersymmetry is preserved; thus the ground state energy is $E_0=0$. The zero temperature entropy is given by
\begin{align}
{{s}_{0}}=\frac{1}{2}\log (2\cos \frac{\pi }{6})=\frac{1}{4}\log 3=0.275
\end{align}
Moreover, the one-loop correction from the Schwarzian action is different. As a result of the supersymmetry constraint, the one-loop factor is $(\beta J_{\mathcal{N}=1})^{-1/2}$
\begin{align}
{{Z}_\text{Sch}}(\beta )\sim\frac{1}{{{(\beta J_{\mathcal{N}=1})}^{1/2}}}{{e}^{N{{s}_{0}}+cN/2\beta }}
\end{align}
which predicts a different behavior for the density of states
\begin{align}
\rho (E)\sim\frac{1}{({EJ_{\mathcal{N}=1}})^{1/2}}{{e}^{N{{s}_{0}}+\sqrt{2c N E} }}
\end{align}
\footnotetext{For a generic odd positive integer $\hat{q}$ we can also define the $\mathcal{N}=1$ supersymmetric extension with a non-local interaction of $2\hat{q}-2$ fermions. The supercharge should be
\begin{align}
Q={{i}^{\frac{\hat{q}-1}{2}}}\sum\limits_{{{i}_{1}}<{{i}_{2}}<\ldots <{{i}_{{\hat{q}}}}}{{{C}_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{{\hat{q}}}}}}{{\psi }^{{{i}_{1}}}}{{\psi }^{{{i}_{2}}}}\ldots {{\psi }^{{{i}_{{\hat{q}}}}}}}
\end{align}
where
\begin{align}
\left\langle {{C}_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{{\hat{q}}}}}} \right\rangle =0~~~~~~\left\langle C_{{{i}_{1}}{{i}_{2}}\ldots {{i}_{{\hat{q}}}}}^{2} \right\rangle =\frac{(\hat{q}-1)!J_{\mathcal{N}=1}}{{{N}^{\hat{q}-1}}}=\frac{{{2}^{\hat{q}-2}}(\hat{q}-1)!\mathcal{J}_{\mathcal{N}=1}}{\hat{q}{{N}^{\hat{q}-1}}}
\end{align}
And $\hat{q}=3$ will recover the case in the main text.
}
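The supersymmetric model can be constructed at small $N$ in the same way, by building the supercharge first and squaring it; the following sketch (again a minimal illustration of ours, with the same Jordan-Wigner Majoranas as above) checks the positive semidefiniteness of $H = Q^2$ discussed above.
\begin{verbatim}
# N = 1 supersymmetric SYK at small N: Q = i sum C_ijk psi_i psi_j psi_k,
# and H = Q^2 is positive semidefinite by construction.
import numpy as np
from functools import reduce
from itertools import combinations

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(N):                      # {psi_i, psi_j} = delta_ij
    nd = N // 2
    return [reduce(np.kron, [Z] * a + [s] + [I2] * (nd - a - 1)) / np.sqrt(2)
            for a in range(nd) for s in (X, Y)]

def supercharge(N, J=1.0, seed=1):
    psi, rng = majoranas(N), np.random.default_rng(seed)
    Q = np.zeros((2 ** (N // 2),) * 2, dtype=complex)
    sigma = np.sqrt(2 * J / N ** 2)    # <C_ijk^2> = 2 J / N^2
    for i, j, k in combinations(range(N), 3):
        Q += 1j * rng.normal(0, sigma) * psi[i] @ psi[j] @ psi[k]
    return Q                           # Hermitian by construction

Q = supercharge(12)
E = np.linalg.eigvalsh(Q @ Q)          # spectrum of H = Q^2
print(E.min())                         # >= 0 up to round-off
\end{verbatim}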
\section{Random matrix classification}\label{RMT}
It is established that the SYK model is classified by random matrix theory,
in that the random interacting SYK hamiltonian falls into one of the
three standard Dyson ensembles following the eight-fold way
\cite{You:2016ldz,Garcia-Garcia:2016mno,Dyer:2016pou,Cotler:2016fpe}. It
is natural to believe that the supersymmetric extension can also be
described by random matrix theory. To sharpen the argument, we derive
the exact correspondence between each SYK hamiltonian and some random
matrix ensembles, in other words, the eight-fold rule for the
supersymmetric case. A priori, the supersymmetric SYK hamiltonian
should lead to a random matrix theory description different from the
original case. Superficially, the original SYK theory and its
supersymmetric cousin have two major differences, which
have also been mentioned in the previous discussions.
\begin{itemize}
\item The degeneracy of the two hamiltonian matrices is different. The degeneracy of the supersymmetric SYK model is also investigated in \cite{Fu:2016vas}, which we derive again using a different argument in Section~\ref{sec:RMTH}. The degeneracy space is enlarged by supersymmetry. Generally, the energy level distribution of random matrices is sensitive to the degeneracy and is thus sensitive to the supersymmetric extension.
\item Another difference is the apparent positive semidefiniteness of the hamiltonian being the square of the supercharge. We will see later that the positive constraint leads to a new eigenvalue distribution different from those of Gaussian ensembles.
\end{itemize}
Symmetry analysis is crucial in classifying the random matrix statistics of hamiltonian matrices. \cite{You:2016ldz,Cotler:2016fpe} argue that the particle-hole symmetry operator determines the class of random matrix theory statistics. The random matrix classification dictionary is determined by the degeneracy and the special relations required by having the symmetry. The systematic method of random matrix classification is established as the Altland-Zirnbauer theory \cite{Zirnbauer1996,AZ1997}, reviewed in appendix \ref{AZ}. The anti-unitary operators play a central role in the classifications. The Altland-Zirnbauer theory also applies to extended ensembles different from the three standard Dyson ensembles, which we find useful in classifying the supersymmetric SYK theory. In Section \ref{sec:RMTSYK} we derive again the eight-fold way classification of the original SYK hamiltonian using Altland-Zirnbauer theory and find unambiguously the matrix representations of the hamiltonian in each mod-eight sector. We notice that the matrix representation of the hamiltonian takes block diagonal form with each block being a random matrix from a certain ensemble. This block diagonal form is also found by \cite{You:2016ldz} in a different version.
\\
\\
Naively one would apply the same argument to the supersymmetric hamiltonian, since it also enjoys the particle-hole symmetry. But this is not the full picture. First, one needs to take into account that the hamiltonian is the square of the supercharge and is thus not totally random. In Section~\ref{sec:RMTQ} we argue that the supercharge $Q$ has a random matrix description which falls into one of the extended ensembles. Using the Altland-Zirnbauer theory on $Q$ we obtain its matrix representation in block diagonal form and use it to determine the matrix representation of the hamiltonian in Section~\ref{sec:RMTH}. Second, in order to obtain the correct classification one needs to consider the full set of symmetry operators. Apparently the particle-hole symmetry alone is not enough, since supersymmetry enlarges the SYK degeneracy space. We argue that the Witten index operator, $(-1)^F$, is crucial in the symmetry analysis of any system with supersymmetry. Incorporating $(-1)^F$ we obtain the full set of symmetry operators. Finally, the squaring operation will change the properties of the random matrix theory distribution of the supercharge $Q$, from Gaussian to Wishart-Laguerre. The quantum mechanics and statistics of supersymmetric SYK models, based on the main investigation in this paper, might be a non-trivial and compelling example of a supersymmetric symmetry class.
\subsection{SYK}\label{sec:RMTSYK}
Now we apply the Altland-Zirnbauer classification theory (see appendix \ref{AZ} for the necessary background) to the
original SYK model \cite{You:2016ldz,Garcia-Garcia:2016mno,Dyer:2016pou,Cotler:2016fpe}. This is accomplished by finding the symmetries of the theory (as already discussed in other works, see \cite{You:2016ldz,Cotler:2016fpe}). First, one can change the Majorana fermion operators to creation and annihilation operators $c^\alpha$ and $\bar {c}^\alpha$ by
\begin{align}
\psi^{2\alpha}=\frac{c^{\alpha}+\bar{c}^{\alpha}}{\sqrt{2}}\,,\qquad
\psi^{2\alpha-1}=\frac{i\,(c^{\alpha}-\bar{c}^{\alpha})}{\sqrt{2}}
\end{align} where $\alpha = 1,2,\cdots,N_d=N/2$. The fermionic number
operator $F=\sum_\alpha\bar{c}^\alpha c^\alpha$ divides the total
Hilbert space into two charge-parity sectors. One can define the
particle-hole operator
\begin{align}
P=K\prod\limits_{\alpha=1}^{N_d}{({{c}^{\alpha}}+{{\bar{c}}}^{\alpha})}
\end{align} where $K$ is the complex conjugate operator ($c^{\alpha}$
and $\bar{c}^{\alpha}$ are real). The operation of $P$ on fermionic
operators is given by
\begin{align} P\,c^{\alpha}P=\eta\, c^{\alpha}\,,\qquad P\,\bar{c}^{\alpha}P=\eta\, \bar{c}^{\alpha}\,,\qquad P\,\psi^{i}P=\eta\, \psi^{i}
\end{align} where
\begin{align} \eta =(-1)^{[3N_{d}/2-1]}
\end{align} From these commutation relations it follows that
\begin{align} [H,P]=0
\end{align} To compare with the Altland-Zirnbauer classification, we
need to know the square of $P$, which follows from a direct calculation:
\begin{align}
P^{2}=(-1)^{[N_{d}/2]}=\begin{cases} +1 & N\bmod 8=0 \\ +1 & N\bmod 8=2 \\ -1 & N\bmod 8=4 \\ -1 & N\bmod 8=6 \end{cases}
\end{align} Now we see that $P$ can be treated as a $T_+$
operator, and it completely determines the class of the
hamiltonian. Before we list the result, it should be mentioned that
the degeneracy of the hamiltonian can be read off from the properties of $P$:
\begin{itemize}
\item $N\text{ mod }8=2\text{ or }6$:\\
The symmetry $P$ exchanges the parity sectors of a state, so there is
a two-fold degeneracy. However, $P$ induces no further symmetries
within each block. Thus the hamiltonian is a combination of two GUEs,
where the two copies are degenerate.
\item $N\text{ mod }8=4$: \\
The symmetry $P$ is a parity-preserving mapping and $P^2=-1$, so
there is a two-fold degeneracy. There are no further independent
symmetries. From Altland-Zirnbauer theory we know that each
parity block is a GSE matrix, and the two copies
of GSEs are independent.
\item $N\text{ mod }8=0$: \\
The symmetry $P$ is a parity-preserving mapping and $P^2=1$. There are
no further symmetries, so the degeneracy is one. From
Altland-Zirnbauer theory we know that each parity block is
a GOE matrix, and the two copies of GOEs are independent.
\end{itemize}
We summarize this information in the following table for the
SYK model,
\begin{center}
\begin{tabular}{ c | c | c | c | c | c }
$N \bmod 8$ & Deg. & RMT & Block & Type & Level stat.\\
\hline
0 & 1 & $\text{GOE}$ & $\left( \begin{matrix}
A & 0 \\
0 & B \\
\end{matrix} \right)\text{ }A,B \text{ real symmetric}$
&$\mathbb{R}$& GOE\\
2 & 2 & $\text{GUE}$ &$\left( \begin{matrix}
A & 0 \\
0 & \bar{A} \\
\end{matrix} \right)\text{ }A \text{ Hermitian}$
&$\mathbb{C}$& GUE\\
4 & 2 & $\text{GSE}$ &$\left( \begin{matrix}
A & 0 \\
0 & B \\
\end{matrix} \right)\text{ }A,B \text{ Hermitian quaternion}$ &$\mathbb{H}$& GSE\\
6 & 2 & $\text{GUE}$ &$\left( \begin{matrix}
A & 0 \\
0 & \bar{A} \\
\end{matrix} \right)\text{ }A\text{ Hermitian}$ &$\mathbb{C}$& GUE\\
\end{tabular}
\end{center}
where the level statistics column indicates the typical numerical
diagnostics of random matrix behavior, for instance the Wigner
surmise, number variance, or $\Delta_3$ statistics. Although the SYK
hamiltonian decomposes into two parity sectors, we can treat it as a
standard Dyson random matrix as a whole, because the two sectors are
either independent or degenerate. (The only subtlety arises when
investigating the level statistics of two independent sectors: mixing
the two sectors produces many-body-localized (Poisson) statistics
instead of chaotic statistics, as originally discussed in
\cite{You:2016ldz}.) In the following we also numerically test the
random matrix behavior, and for the numerically tested range of $N$ we
can summarize the following table for practical usage.
\begin{center}
\begin{tabular}{ c | c | c | c | c | c | c| c| c| c | c }
$N$ & 10 & 12 & 14 & 16 &18 & 20 & 22 & 24 & 26 & 28 \\
\hline
Ensemble & GUE& GSE& GUE & GOE& GUE& GSE & GUE & GOE & GUE & GSE\\
\end{tabular}
\end{center}
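As a trivial but handy lookup, this table can be encoded programmatically; the following minimal Python sketch (the function name is ours) reproduces it:
\begin{verbatim}
def syk_ensemble(N):
    """Dyson class of the original SYK hamiltonian, from N mod 8."""
    assert N % 2 == 0
    return {0: "GOE", 2: "GUE", 4: "GSE", 6: "GUE"}[N % 8]

assert [syk_ensemble(N) for N in (10, 12, 14, 16)] == \
       ["GUE", "GSE", "GUE", "GOE"]
\end{verbatim}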
\subsection{$\mathcal{N}=1$ supersymmetric classification}
Supersymmetry algebra is a $\mathbb{Z}_2$-graded algebra, in which states and operators are subdivided into two distinct parity sectors. In such an algebra there may exist a Klein operator \cite{JunkerGeorg:2012} which anti-commutes with all odd-parity operators and commutes with all even-parity operators. The Klein operator of the supersymmetry algebra is naturally the Witten index operator.
\\
\\
The Witten index plays a role in the symmetry structure and block decomposition of supersymmetric quantum mechanics. A simple example \cite{JunkerGeorg:2012} is the $\mathcal{N}=2$ supersymmetry algebra. Let $W$ denote the Witten operator. It has eigenvalues $\pm 1$ and separates the Hilbert space into two parity sectors
\begin{equation}
\mathcal{H} = \mathcal{H}^+ \oplus \mathcal{H}^-~.
\end{equation}
We can also define the projection operators $P^\pm = \tfrac{1}{2}(1\pm W)$. In the parity representation these operators take the $2 \times 2$ block diagonal form
\begin{equation}
W = \left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right)~, ~~~
P^+ = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array}
\right)~, ~~~
P^- = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1
\end{array}
\right)~.
\end{equation}
Because of $Q^2=0$ and $\{Q,W\}=0$ the complex supercharges are necessarily of the form
\begin{equation}\label{eq:Z2gradingN2Generic}
Q = \left(
\begin{array}{cc}
0 & A \\
0 & 0
\end{array}
\right)~, ~~~
Q^\dagger = \left(
\begin{array}{cc}
0 & 0 \\
A^\dagger & 0
\end{array}
\right)~,
\end{equation}
which imply
\begin{equation}
Q_1 = \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
0 & A \\
A^\dagger & 0
\end{array}
\right)~, ~~~
Q_2 = \frac{i}{\sqrt{2}}\left(
\begin{array}{cc}
0 & -A \\
A^\dagger & 0
\end{array}
\right)~.
\end{equation}
In the above equation, $A$ maps $\mathcal{H}^- \rightarrow\mathcal{H}^+$ and its adjoint $A^\dagger$ maps $\mathcal{H}^+ \rightarrow\mathcal{H}^-$. The supersymmetric hamiltonian becomes block diagonal in this representation
\begin{equation}\label{eq:Z2gradedH}
H=\left(
\begin{array}{cc}
AA^\dagger & 0 \\
0 & A^\dagger A
\end{array}
\right)~.
\end{equation}
In this construction, the Hilbert space is divided by the Witten parity operator, and the hamiltonian is shown to take the block diagonal, positive semidefinite form without even referring to its explicit construction. It is remarkable that the above computation closely parallels our work in Sections \ref{sec:RMTQ} to \ref{sec:RMTH}. Applications of this property can be found in \cite{oikonomou:2014kba,Oikonomou:2014jea}, which describe a supersymmetric quantum mechanics system where fermions scatter off domain walls. There the supercharges are defined as a differential operator and its adjoint; from (\ref{eq:Z2gradedH}) the number of ground states of each $\mathbb{Z}_2$ sector is simply the dimension of the kernel of the differential operator, and the Witten index can be computed. A more non-trivial example is provided by \cite{MateosGuilarte:2016mxm}. In this work, the Hilbert space is an $N$-fermion Fock space, so the hamiltonian can be expressed as a direct sum over fermion-number sectors. The ladder operators $Q$ and $Q^\dagger$ are odd operators and move states between different sectors.
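The block structure of (\ref{eq:Z2gradingN2Generic}) and (\ref{eq:Z2gradedH}) is easy to verify numerically. The following minimal Python sketch (with a generic random $A$; all names are ours) checks that $H=QQ^\dagger+Q^\dagger Q=\mathrm{diag}(AA^\dagger,A^\dagger A)$ is positive semidefinite and that the two blocks are isospectral:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = np.zeros((n, n))
Q = np.block([[O, A], [O, O]])                 # nilpotent: Q @ Q = 0
H = Q @ Q.conj().T + Q.conj().T @ Q            # = diag(A A^+, A^+ A)
assert np.allclose(np.linalg.eigvalsh(A @ A.conj().T),
                   np.linalg.eigvalsh(A.conj().T @ A))  # isospectral blocks
assert np.linalg.eigvalsh(H).min() > -1e-12    # positive semidefinite
\end{verbatim}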
\\
\\
The argument also works in reverse: hidden supersymmetry can be found in a bosonic system such as a Calogero-like model
\cite{Calogero:1969xj}, a system of one-dimensional harmonic oscillators with inverse-square interactions, and its extensions. What makes the supersymmetry manifest is the Klein operator. The model and its various extensions are studied in \cite{Brink:1993sz,Plyushchay:1994re,Plyushchay:1996ry,Bekaert:2005vh,Brink:1992xr}. A simple harmonic oscillator has the algebra $ \left[ a^- , a^+ \right] = 1$, describing a bosonic system. A $\mathbb{Z}_2$ grading is realized by introducing the operator $K = \cos (\pi a^+ a^-)$, which anti-commutes with $a^-$ and $a^+$ and is thus a Klein operator. Based on the Klein operator one can construct the projection operators onto both sectors, and also the supercharge. In this way the simple harmonic oscillator is ``promoted'' to have supersymmetry. A generalization of the simple harmonic oscillator is the deformed Heisenberg algebra, $\left[ a^- , a^+ \right] = 1 + \nu K$; the corresponding system is an $\mathcal{N}=2$ supersymmetric extension of the 2-body Calogero model. The Klein-operator construction is also used to considerably simplify the Calogero model.
\\
\\
These observations strongly support the argument that supersymmetry changes the symmetry-class classification of quantum mechanical models. In the following we show that the symmetry class of the supersymmetric SYK model can be constructed explicitly, and that it indeed changes the random matrix ensemble classification.
\subsubsection{Supercharge in $\mathcal{N}=1$ SYK}
\label{sec:RMTQ}
In the $\mathcal{N}=1$ supersymmetric model it is more
convenient to consider the spectrum of $Q$ instead of $H$, because $H$
is the square of $Q$. Although $Q$ is not a hamiltonian, we only
care about its matrix type, and since the Altland-Zirnbauer theory is purely
mathematical, $Q$ can be treated as a hamiltonian. Similar to the
original SYK model, we are concerned with the symmetries of the
theory. We notice that the Witten index $(-1)^F$ is
\begin{align}
(-1)^F =(-2i)^{N_d}\prod _{i=1}^{N}\psi ^i= \prod
_{\alpha=1}^{N_d}(1-2\bar {c}^\alpha c^\alpha)
\end{align} which is the fermionic parity operator up to a sign
$(-1)^{N_d}$. The Witten index and the particle-hole symmetry have the
following commutation relation:
\begin{align} P(-1)^F=(-1)^{N_d} (-1)^F P
\end{align} Now we define a new operator, $R=P(-1)^F$. It has a
compact form
\begin{align} R=K\prod _{\alpha=1}^{N_d} (c^\alpha-\bar {c}^\alpha)
\end{align} $R$ and $P$ are both anti-unitary operators, with the
following (anti)commutation relations with $Q$:
\begin{center}
\begin{tabular}{ c | c | c }
$N$ mod 8 & $P$ & $R$ \\ \hline
0 & $[P,Q]=0$ & $\{R,Q\}=0$\\
2 & $\{P,Q\}=0$ & $[R,Q]=0$\\
4 & $[P,Q]=0$ & $\{R,Q\}=0$\\
6 & $\{P,Q\}=0$ & $[R,Q]=0$\\
\end{tabular}
\end{center} and squares
\begin{align} P^{2}=(-1)^{[N_{d}/2]}\,,\qquad R^2=(-1)^{[N_d/2]+N_d}
\end{align} Thus, for different values of $N$, the two operators $P$
and $R$ behave differently and play the roles of $T_+$ and $T_-$ in
the Altland-Zirnbauer theory. We can now list the classification of
the matrix ensembles of the $\mathcal N=1$ supersymmetric SYK model
\begin{center}
\begin{tabular}{ c | c | c | c | c | c} $N$ mod 8 & $T_+^2$ & $T_-^2$
& $\Lambda^2$ & Cartan Label & Type \\ \hline 0 & $P^2=1$ & $R^2=1$ &
1& BDI (chGOE) & $\mathbb{R}$ \\ 2 & $R^2=-1$ & $P^2=1$ & 1& DIII
(BdG) & $\mathbb{H}$ \\ 4 & $P^2=-1$ & $R^2=-1$ & 1& CII (chGSE) &
$\mathbb{H}$\\ 6 & $R^2=1$ & $P^2=-1$ & 1& CI (BdG) & $\mathbb{R}$ \\
\end{tabular}
\end{center} One can also write down the block representation of
$Q$. Notice that the basis of the block decomposition is given by the
$\pm 1$ eigenspaces associated with the anti-unitary operators,
namely, the decomposition is by parity.
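For quick reference, the table above can also be encoded as a lookup (a minimal Python sketch; the function name is ours):
\begin{verbatim}
def susy_charge_class(N):
    """Cartan label of the supercharge Q in the N=1 SUSY SYK model."""
    assert N % 2 == 0
    return {0: "BDI (chGOE)", 2: "DIII (BdG)",
            4: "CII (chGSE)", 6: "CI (BdG)"}[N % 8]
\end{verbatim}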
\subsubsection{Hamiltonian in $\mathcal{N}=1$ theory}\label{sec:RMTH}
Now that we have obtained the random matrix type of the supercharge,
the structure of the square of $Q$ can be considered case by
case. Before that, we note two general properties. First, unlike the
GOE or GSE sectors of the original SYK model, in the supersymmetric
model the supercharge $Q$, which contains an odd number of Dirac
fermions, is a symmetry of $H$ and therefore always changes the
parity. Thus the spectrum of $H$ always decomposes into two degenerate
blocks. Second, the spectrum of $H$ is always non-negative because $Q$
is Hermitian and $H=Q^2\ge 0$. Thus the random matrix class of the
$\mathcal{N}=1$ model will be one of the standard classes, up to a
positivity constraint.
\begin{itemize}
\item $N=0 \bmod 8$:
In this case $Q$ is a BDI (chGOE) matrix. Thus we can write down the block decomposition as
\begin{align}
Q=\left( \begin{matrix}
0 & A \\
{{A}^{T}} & 0 \\
\end{matrix} \right)
\end{align}
where $A$ is a real matrix. Thus the hamiltonian is obtained by
\begin{align}
H=\left( \begin{matrix}
A{{A}^{T}} & 0 \\
0 & {{A}^{T}}A \\
\end{matrix} \right)
\end{align}
Since $ AA^{T}$ and $A^T A$ share the same eigenvalues ($\{R,Q\}=0$,
so $R$ flips the sign of the eigenvalues of $Q$, but after squaring
the two eigenvalues with opposite signs become equal), and since
there is no internal structure in $A$ (in this case $P$ is a symmetry
of $Q$, $[P,Q]=0$, but $P^2=1$, so $P$ cannot provide any further
degeneracy), we conclude that $H$ has a two-fold degeneracy. Moreover,
because $A A^T$ and $A^T A$ are both real positive-definite symmetric
matrices without any further structure, $H$ is nothing but a subset of
the GOE symmetry class with a positivity condition. The two sectors
are exactly degenerate (see the numerical sketch after this list).
\item $N=4\text{ mod }8$: In this case $Q$ is a CII (chGSE)
matrix. Thus we can write down the block decomposition as
\begin{align} Q=\left( \begin{matrix} 0 & B \\ {{B}^{\dagger }} & 0 \\
\end{matrix} \right)
\end{align} where $B$ is a quaternion Hermitian matrix. Thus after
squaring we obtain
\begin{align} H=\left( \begin{matrix} B{{B}^{\dagger }} & 0 \\ 0 &
{{B}^{\dagger }}B \\
\end{matrix} \right)
\end{align}
Since $ B B^{\dagger}$ and $B^{\dagger}B$ share the same
eigenvalues, and each block has a natural two-fold degeneracy by the
properties of quaternions (physically this is because $\{R,Q\}=0$, so
$R$ flips the sign of the eigenvalues of $Q$, but after squaring the
two eigenvalues with opposite signs become equal; also, in this
case $P$ is a symmetry of $Q$, $[P,Q]=0$, with $P^2=-1$), we get a
four-fold degeneracy in the spectrum of $H$. Because $B B^\dagger$ and
$B^\dagger B$ are quaternion Hermitian matrices when $B$ is quaternion
Hermitian\footnotemark, $B B^\dagger=B^\dagger B$ is a quaternion
Hermitian positive-definite matrix without any further structure. As a
result, it is nothing but a subset of the GSE symmetry class with a
positivity condition. The two sectors are exactly degenerate.
\item $N=2\text{ mod }8$:
In this case $Q$ is a DIII (BdG) matrix. Thus we can write down the block decomposition as
\begin{align}
Q=\left( \begin{matrix}
0 & Y \\
-\bar{Y} & 0 \\
\end{matrix} \right)
\end{align}
where $Y$ is a complex, skew-symmetric matrix. Thus after squaring we obtain
\begin{align}
H=\left( \begin{matrix}
-Y\bar{Y} & 0 \\
0 & -\bar{Y}Y \\
\end{matrix} \right)
\end{align}
Let us first look at the degeneracy. Since $-Y\bar{Y}$ and $-\bar{Y}Y$ share the same eigenvalues, and each block has a natural two-fold degeneracy because the eigenvalues of a skew-symmetric matrix come in pairs which coincide after squaring (physically this is because $\{P,Q\}=0$, so $P$ flips the sign of the eigenvalues of $Q$, but after squaring the two eigenvalues with opposite signs become equal; also, in this case $R$ is a symmetry of $Q$, $[R,Q]=0$, with $R^2=-1$), we obtain a four-fold degeneracy in the spectrum of $H$.
\\
\\
Now take the operator $Q$ as a whole. From the previous discussion, we note that it is quaternion Hermitian, since one easily verifies that $Q\Omega=\Omega Q$ and $Q^\dagger=Q$. Thus $Q^2=H$ must be a quaternion Hermitian matrix (alternatively, one can take the block decomposition in the other definition of quaternion Hermitian, square it, and check the definition again). Moreover, $H$ has a two-fold degenerate parity decomposition, so each block is also a quaternion Hermitian matrix. Because the total matrix is in a subset of the GSE symmetry class (with a positivity constraint), each degenerate parity sector is also in a subset of the positive-definite GSE symmetry class (one can see this by applying the total measure to the two degenerate parts).
\item $N=6\text{ mod }8$:
In this case $Q$ is a CI (BdG) matrix. Thus we can write down the block decomposition as
\begin{align}
Q=\left( \begin{matrix}
0 & Z \\
{\bar{Z}} & 0 \\
\end{matrix} \right)
\end{align}
where $Z$ is a complex symmetric matrix. Thus after squaring we obtain
\begin{align}
H=\left( \begin{matrix}
Z\bar{Z} & 0 \\
0 & \bar{Z}Z \\
\end{matrix} \right)
\end{align}
Since $Z\bar{Z}$ and $\bar{Z}Z$ share the same eigenvalues ($\{P,Q\}=0$, so $P$ flips the sign of the eigenvalues of $Q$, but after squaring the two eigenvalues with opposite signs become equal), and there is no internal structure in $Z$ (in this case $R$ is a symmetry of $Q$, $[R,Q]=0$, but $R^2=1$, so $R$ cannot provide any further degeneracy), we conclude that $H$ has a two-fold degeneracy.
\\
\\
Similar to the previous $N \bmod 8=2$ case, we can take the operators $Q$ and $H$ as whole matrices instead of blocks. For $H$ we notice that transposition exchanges the two sectors. However, the statement that a matrix is symmetric is basis-dependent. Formally, in analogy with the quaternion Hermitian case, we can extend the definition of a symmetric matrix as follows. Define
\begin{align}
\Omega'=\left( \begin{matrix}
0 & 1 \\
1 & 0 \\
\end{matrix} \right)
\end{align}
and a matrix $M$ is symmetric real (or symmetric Hermitian) if and only if $M^\dagger=M$ and $M^T\Omega'=\Omega' M$ (where $\Omega'$ implements the exchange of the two sectors). One easily checks that $Q$ satisfies this condition, and thus so does $Q^2=H$. We therefore conclude that the total matrix $H$ is in a subset of the GOE symmetry class (with a positivity constraint).
\end{itemize}
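As referenced in the first item above, the following minimal Python sketch (using a generic real matrix $A$ with no SYK structure; purely illustrative) checks the two general properties for the $N\bmod 8=0$ case, namely positivity and exact two-fold degeneracy of $H=Q^2$ for a BDI supercharge:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))                 # generic real matrix
O = np.zeros((n, n))
Q = np.block([[O, A], [A.T, O]])            # BDI (chGOE) block form
H = Q @ Q                                   # = diag(A A^T, A^T A)
w = np.sort(np.linalg.eigvalsh(H))
assert np.all(w >= -1e-12)                  # positivity from H = Q^2
assert np.allclose(w[0::2], w[1::2])        # exact two-fold degeneracy
\end{verbatim}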
From the symmetry point of view, the hamiltonian of the $\mathcal{N}=1$ model is therefore classified into subsets of the standard Dyson ensembles. But what exactly are these subsets? In fact, the special structure of squaring $Q$ into $H$ changes the eigenvalue distribution from Gaussian to Wishart-Laguerre \cite{Wishart,statistical,MIT} (although there are some differences in the powers of the terms appearing in the eigenvalue distributions). We will loosely call them LOE/LUE/LSE, as is customary in the random matrix theory literature. Some more details are summarized in appendix \ref{Dist}.
\\
\\
However, the difference in the details of the distribution, beyond numerical tests of the one-point eigenvalue distribution, will not be important for some physical tests, such as spectral form factors and level statistics (e.g., the Wigner surmise). The reason is as follows. From the supercharge point of view, because $Q$ follows an Altland-Zirnbauer distribution with non-trivial $\tilde{\alpha}$ (see appendix \ref{Dist}), the squaring operation does not change level statistics such as the Wigner surmise and spectral form factors (which we also verify numerically below). From the physical point of view, as explained in \cite{You:2016ldz}, the details of the distribution (even if not Gaussian) cannot change the universal properties dictated by the symmetries.
\\
\\
Finally, we summarize these statements in the following classification table (the degeneracies were already calculated in \cite{Fu:2016vas}),
\begin{center}
\begin{tabular}{ c | c | c | c | c | c }
$N \bmod 8$ & Deg. & RMT & Block & Type & Level stat.\\
\hline
0 & 2 & $\text{LOE}$ & $\left( \begin{matrix}
AA^T & 0 \\
0 & A^TA \\
\end{matrix} \right)\text{ }A \text{ real}$
&$\mathbb{R}$& GOE\\
2 & 4 & $\text{LSE}$ &$\left( \begin{matrix}
-Y\bar{Y} & 0 \\
0 & -\bar{Y}Y \\
\end{matrix} \right)\text{ }Y \text{ complex skew-symmetric}$
&$\mathbb{H}$& GSE\\
4 & 4 & $\text{LSE}$ &$\left( \begin{matrix}
BB^\dagger & 0 \\
0 & B^\dagger B \\
\end{matrix} \right)\text{ }B \text{ Hermitian quaternion}$ &$\mathbb{H}$& GSE\\
6 & 2 & $\text{LOE}$ &$\left( \begin{matrix}
Z\bar{Z} & 0 \\
0 & \bar{Z}Z \\
\end{matrix} \right)\text{ }Z \text{ complex symmetric}$ &$\mathbb{R}$& GOE\\
\end{tabular}
\end{center}
For practical computational usage, we summarize in the following table the supersymmetric SYK random matrix correspondence for different $N$. As we show in the next section, for $N \ge 14$ these theoretical considerations fit the level statistics perfectly.
\begin{center}
\begin{tabular}{ c | c | c | c | c | c | c| c| c| c | c }
$N$ & 10 & 12 & 14 & 16 &18 & 20 & 22 & 24 & 26 & 28 \\
\hline
RMT & LSE & LSE & LOE & LOE& LSE& LSE & LOE & LOE & LSE & LSE\\
Universal Stat.& GSE & GSE & GOE & GOE & GSE & GSE & GOE & GOE & GSE & GSE
\end{tabular}
\end{center}
\footnotetext{We say a matrix $M$ is a quaternion Hermitian matrix if and only if \[M=\left( \begin{matrix}
A+iB & C+iD \\
-C+iD & A-iB \\
\end{matrix} \right)\] for some real $A,B,C,D$ in a given basis, where $A$ is symmetric while $B,C,D$ are skew-symmetric. There is an equivalent definition: defining
\[\Omega=\left( \begin{matrix}
0 & 1 \\
-1 & 0 \\
\end{matrix} \right)\]
$M$ is a quaternion Hermitian matrix if and only if $M^\dagger=M$ and $M\Omega=\Omega M$. It then follows directly that if $M$ is quaternion Hermitian then $(M M^\dagger)^\dagger=M M^\dagger$ and $M M^\dagger \Omega= M (M \Omega)=M \Omega M=\Omega M^2=\Omega M M^\dagger$, so $M M^\dagger=M^2=M^\dagger M$ is still a quaternion Hermitian matrix.
}
\section{Exact Diagonalization}\label{data}
In this section we present the main numerical results testing the random matrix theory classification of the previous sections. One can diagonalize the hamiltonian exactly using the following representation of the Clifford algebra. For operators acting on $N_d=N/2$ qubits, one can define
\begin{align}
& \gamma_{2\zeta -1}=\frac{1}{\sqrt{2}}\left( \prod_{p=1}^{\zeta-1}\sigma _{p}^{z} \right)\sigma _{\zeta}^{x} \nonumber\\
& \gamma_{2\zeta}=\frac{1}{\sqrt{2}}\left( \prod _{p=1}^{\zeta-1}\sigma _{p}^{z} \right)\sigma _{\zeta}^{y}
\end{align}
where $\sigma_p$ denotes a standard Pauli matrix acting on the $p$-th qubit (tensored with the identity on the remaining qubits), and $\zeta=1,2,\ldots,N_d$. This construction gives a representation of the Clifford algebra
\begin{align}
\left\{ {{\gamma }_{a}},{{\gamma }_{b}} \right\}={{\delta }_{ab}}
\end{align}
One can then diagonalize the hamiltonian exactly by replacing the Majorana fermions with gamma matrices and computing the energy eigenvalues. Thus all quantities are computable by brute force in the energy eigenstate basis.
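A minimal implementation of this construction is sketched below in Python (function names and the Gaussian normalization of the couplings are our choices, made for illustration; the brute-force triple loop becomes expensive at large $N$):
\begin{verbatim}
import numpy as np
from functools import reduce

def gammas(N):
    """Majorana operators gamma_1..gamma_N with {g_a, g_b} = delta_ab."""
    Nd = N // 2
    I = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Y = np.array([[0., -1j], [1j, 0.]])
    Z = np.diag([1., -1.])
    gs = []
    for site in range(Nd):
        jw = [Z] * site                      # Jordan-Wigner string
        for S in (X, Y):                     # gamma_{2z-1}, gamma_{2z}
            ops = jw + [S] + [I] * (Nd - site - 1)
            gs.append(reduce(np.kron, ops) / np.sqrt(2))
    return gs

def susy_charge(N, J=1.0, seed=0):
    """Q = i sum_{i<j<k} C_ijk g_i g_j g_k with Gaussian couplings."""
    rng = np.random.default_rng(seed)
    gs = gammas(N)
    Q = np.zeros(gs[0].shape, dtype=complex)
    for i in range(N):
        for j in range(i + 1, N):
            for k in range(j + 1, N):
                C = rng.normal(0.0, np.sqrt(2.0 * J / N**2))
                Q = Q + 1j * C * (gs[i] @ gs[j] @ gs[k])
    return Q   # Hermitian; H = Q @ Q is positive semidefinite
\end{verbatim}
For $N=10$, for instance, $Q$ is a $32\times 32$ Hermitian matrix, and the spectrum of $H=Q^2$ is non-negative with degeneracies following the tables above.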
\\
\\
The main results of the following investigation are as follows. The densities of supercharge eigenvalues and of energy eigenvalues in the supersymmetric SYK model behave quite differently, but both agree with our expectations from the random matrix theory classification: the spectral density of the supercharge $Q$ clearly shows the features of the extended Altland-Zirnbauer ensembles, while the spectral density of the energy $H$ shows a clear Marchenko-Pastur distribution, characteristic of Wishart-Laguerre statistics. Moreover, because both $Q$ and $H$ belong to the universal level-statistics classes of GOE, GUE and GSE, the numerics for the Wigner surmise and the spectral form factor directly exhibit the eight-fold features.
\subsection{Density of states}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{figSYKHEigenvalues}~~~
\includegraphics[width=0.3\textwidth]{figSUSYHEigenvalues}~~~
\includegraphics[width=0.3\textwidth]{figQEigenvalues}~~~
\caption{\label{rho} The density of states for original SYK model
Hamiltonian (left), supersymmetric SYK Hamiltonian (middle) and
SUSY SYK supercharge operators treated as Hamiltonian (right) by
exact diagonalization. Density of states from $N=10$ to $N=28$ are
plotted in colors from light blue to dark blue. The eigenvalues have
been rescaled as $E(Q)/NJ$, and the density of states has been
rescaled so that it integrates to 1.}
\end{figure}
The densities of states of the SYK model and its supersymmetric
extension are shown in Figure \ref{rho} for comparison. For each
realization of the random hamiltonian we compute all eigenvalues;
after collecting a large number of samples one can plot the histogram
over all samples as the function $\rho(E)$. For the density of states
of the SYK model, at small $N$ tiny oscillations are present, while at
large $N$ the distribution converges to a Gaussian apart from small
tails. In the supersymmetric SYK model, however, the energy eigenvalue
structure is totally different. All energy eigenvalues are
non-negative because $H=Q^2\ge 0$. Because of supersymmetry the lowest
energy eigenvalues approach zero at large $N$, and the figure
converges to a limiting distribution. The shape of this distribution
matches the eigenvalue distribution of the Wishart-Laguerre ensembles,
which in the large $N$ limit is the Marchenko-Pastur distribution
\cite{lec}. For the supercharge matrices, as $N$ becomes larger the
curve acquires a dip at zero, which is a clear feature of extended
ensembles and matches the averaged density of eigenvalues of random
matrices in the CI, DIII \cite{AZ1997} and chiral
\cite{Jackson:1996xt} ensembles at large $N$.
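As a qualitative cross-check of the Marchenko-Pastur shape, one can histogram the spectrum of a plain Wishart matrix (a minimal Python sketch; it uses a generic Gaussian matrix rather than the SYK supercharge, so it only illustrates the limiting density):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 500
A = rng.normal(size=(n, n)) / np.sqrt(n)
w = np.linalg.eigvalsh(A @ A.T)   # LOE-type spectrum
# For square A the limiting density is Marchenko-Pastur on [0, 4]:
#   rho(x) = sqrt(4/x - 1) / (2 * pi)
hist, edges = np.histogram(w, bins=60, range=(0.0, 4.0), density=True)
\end{verbatim}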
\\
\\
For numerical details, we compute $N=10$ (40000 samples), $N=12$ (25600 samples),
$N=14$ (12800 samples), $N=16$ (6400 samples), $N=18$ (3200 samples),
$N=20$ (1600 samples), $N=22$ (800 samples), $N=24$ (400 samples),
$N=26$ (200 samples), and $N=28$ (100 samples). The results for the
original SYK model perfectly match the densities of states obtained in
previous works (e.g., \cite{Maldacena:2016hyu,Cotler:2016fpe}).
\subsection{Wigner surmise}\label{sec:wigner}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{wignerstandard}
\caption{\label{wignerstandard} The theoretical Wigner surmises for the three standard ensembles. The lower (blue), middle (red) and higher (green) curves correspond to the GOE, GUE and GSE universal classes, respectively.}
\end{figure}
There exists a practical way to test whether random matrices arising from a theory belong to some specific ensemble. For a random realization of the hamiltonian we have a collection of energy eigenvalues $E_n$. Arranging them in ascending order $E_n<E_{n+1}$, we define the level spacing $\Delta E_n= E_n- E_{n-1}$ and compute the ratio of nearest-neighbour spacings $r_n=\Delta E_n/\Delta E_{n+1}$. For matrices from the standard Dyson ensembles, the distribution of the level spacing ratio satisfies the Wigner-Dyson statistics \cite{wignerSurmise}, known as the \emph{Wigner surmise},
\begin{align}
p(r)=\frac{1}{Z}\frac{{{(r+{{r}^{2}})}^{\tilde{\beta} }}}{{{(1+r+{{r}^{2}})}^{1+3\tilde{\beta} /2}}}
\end{align}
for the GOE universal class, $\tilde{\beta}=1$ and $Z=8/27$; for the GUE universal class, $\tilde{\beta}=2$ and $Z=4\pi/(81\sqrt{3})$; for the GSE universal class, $\tilde{\beta}=4$ and $Z=4\pi/(729\sqrt{3})$ (in fact, these are labels for the field of the representation; see the appendices for more details). In practice we often change variables from $r$ to $\log r$, and the distribution after this transformation is $P(\log r)=r\, p(r)$. The standard Wigner surmises are shown in Figure \ref{wignerstandard}. Ref.~\cite{You:2016ldz} computed the nearest-neighbour level spacing distribution of the SYK model, which perfectly matches the prediction from the eight-fold classification.
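The corresponding numerical test takes only a few lines (a minimal Python sketch; names are ours):
\begin{verbatim}
import numpy as np

def spacing_ratios(eigs):
    """r_n = Delta E_n / Delta E_{n+1} from one disorder realization.
    Exact degeneracies must be removed first by restricting to a
    single symmetry sector (see the comments below)."""
    E = np.sort(eigs)
    s = np.diff(E)
    return s[:-1] / s[1:]

def wigner_surmise(r, beta, Z):
    """p(r) for the standard Dyson ensembles, (beta, Z) as in the text."""
    return (r + r**2)**beta / (Z * (1.0 + r + r**2)**(1.0 + 1.5 * beta))
\end{verbatim}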
\\
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{hstatArray}\vspace{-3.8cm}
\flushright
\begin{minipage}{0.62\textwidth}
\caption{\label{wignersusy} The nearest-neighbor level spacing
distribution for hamiltonian of $\mathcal{N}=1$ supersymmetric
SYK model for different $N$. The lower (blue), middle (red) and
higher (green) curves are theoretical predictions of Wigner
surmises from GOE, GUE and GSE respectively. The black dashed
curves are distributions for all $r$s from a large number of
samples.
}
\end{minipage}
\end{figure}
\\
What is the story for the $\mathcal{N}=1$ supersymmetric SYK model? A
numerical investigation shows a different correspondence for the
eight-fold classification, which is given by
Figure \ref{wignersusy}. One can clearly see the new correspondence in
the eight-fold classification for supersymmetric SYK models, as has
been predicted in the previous discussions.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{qstatArray}\vspace{-3.8cm}
\flushright
\begin{minipage}{0.62\textwidth}
\caption{\label{fig:qstat} The nearest-neighbour level spacing
distribution for the supercharge matrix $Q$ of $\mathcal{N}=1$
supersymmetric SYK model for different $N$. The lower (blue),
middle (red) and higher (green) curves are theoretical
prediction of Wigner surmises from GOE, GUE and GSE,
respectively. The black dashed curves are distributions for all
$r$s from a large number of samples. }
\end{minipage}
\end{figure}
\\
\\
Some comments are in order regarding this prediction. First, there are
some subtleties in obtaining the correct $r$s. Since there are two
different parities in the SYK hamiltonian ($F \text{ mod } 2$), each
parity sector should appear only once in the statistics of
$r_n$. For $N \text{ mod } 8= 0,4$ in SYK, the particle-hole operator
$P$ maps each sector to itself; thus if we take all $r_n$ the
distribution is ruined and resembles a many-body-localized
(Poisson) distribution. For $N \text{ mod } 8= 2,6$
in SYK, the particle-hole operator $P$ maps even and odd parities to
each other, and one can take all possible $r$s in the distribution
because the fermionic parity sectors are degenerate. The same behavior
is observed for all even $N$ in the supersymmetric SYK model. As
mentioned before, the reason is that the supercharge $Q$ is a symmetry
of $H$ which always changes the particle number, because it is an
odd-point coupling term. Moreover, standard ensemble behavior is
only observed for $N \ge 14$; for smaller $N$ there is no
clear correspondence. A similar situation occurs in the original SYK
model, where the correspondence works only for $N \ge 5$, because
there is no thermalization if $N$ is too small \cite{You:2016ldz}.
However, the threshold for obtaining a standard random matrix in the
$\mathcal{N}=1$ supersymmetric extension is much larger.
\\
\\
In Section~\ref{sec:RMTQ} we argued that the supercharge operator $Q$ of the $\mathcal{N}=1$ supersymmetric SYK theory is also a random matrix in one of the extended ensembles \cite{AZ1997,Zirnbauer1996}. We compute the level statistics of $Q$ and compare them with the Wigner surmises of the three standard Dyson ensembles for different $N$. The result is presented in Figure~\ref{fig:qstat}. We see that the level statistics of the $Q$ matrices match the same ensembles as the corresponding hamiltonian. This confirms the relationship between $Q$'s random matrix ensemble and that of the corresponding $H$. That we do not see the extended ensembles in $Q$'s level statistics is because the level statistics do not capture all the information in the ensembles.
\subsection{Spectral form factors}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figHSFF}
\caption{\label{spec_SUSY} The spectral form factors $g(t)$, $g_c(t)$ and $g_d(t)$ in the supersymmetric SYK model with $J_{\mathcal{N}=1}=1$, $\beta=0,5,10$ respectively.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figQSFF}
\caption{\label{fig:QSFF} The ``spectral form factors'' $g(t)$, $g_c(t)$ and $g_d(t)$ in the supersymmetric SYK model, treating the supercharge matrix as the Hamiltonian, with $J_{\mathcal{N}=1}=1$, $\beta=0,5,10$ respectively.}
\end{figure}
Before presenting the numerical results for spectral form factors, we review the discreteness of the spectrum and the spectral form factor following \cite{Cotler:2016fpe}. For a quantum mechanical system, the partition function
\begin{align}
Z(\beta )=\text{Tr}({{e}^{-\beta H}})
\end{align}
can be analytically continued to
\begin{align}
Z(\beta ,t)=Z(\beta +it)=\text{Tr}({{e}^{-\beta H-iHt}})
\end{align}
The analytically continued partition function $Z(\beta,t)$ is an important quantity for understanding a discrete energy spectrum. Typically one computes a time average to understand the late-time behavior; however, $Z(\beta,t)$ oscillates around zero at late times, so its time average vanishes. Thus, we often compute ${{\left| \frac{Z(\beta ,t)}{Z(\beta )} \right|}^{2}}$ instead. For a discrete energy spectrum, we have
\begin{align}
{{\left| \frac{Z(\beta ,t)}{Z(\beta )} \right|}^{2}}=\frac{1}{Z{{(\beta )}^{2}}}\sum\limits_{m,n}{{{e}^{-\beta ({{E}_{m}}+{{E}_{n}})}}{{e}^{i({{E}_{m}}-{{E}_{n}})t}}}
\end{align}
It is hard to say anything general for an arbitrary spectrum, but one can use the long-time average
\begin{align}
\frac{1}{T}\int_{0}^{T}{{\left| \frac{Z(\beta ,t)}{Z(\beta )} \right|}^{2}}dt=\frac{1}{Z(\beta )^{2}}\sum\limits_{E}{n_{E}^{2}\,e^{-2\beta E}}
\end{align}
for large enough $T$ ($n_E$ denotes the degeneracy of the level $E$). For a non-degenerate spectrum, the long-time average takes the simple form
\begin{align}
\overline{{\left| \frac{Z(\beta ,t)}{Z(\beta )} \right|}^{2}}=\frac{Z(2\beta )}{Z(\beta )^{2}}
\end{align}
However, for a continuous spectrum this quantity has a vanishing long-time average, so it serves as an important criterion for detecting the discreteness of the spectrum. In this paper we use the closely related quantities known as spectral form factors
\begin{align}
& g(t,\beta )=\frac{\left\langle Z(\beta +it)Z(\beta -it) \right\rangle }{{{\left\langle Z(\beta ) \right\rangle }^{2}}} \nonumber\\
& {{g}_{d}}(t,\beta )=\frac{\left\langle Z(\beta +it) \right\rangle \left\langle Z(\beta -it) \right\rangle }{{{\left\langle Z(\beta ) \right\rangle }^{2}}} \nonumber\\
& {{g}_{c}}(t,\beta )=g(t,\beta )-{{g}_{d}}(t,\beta )=\frac{\left\langle Z(\beta +it)Z(\beta -it) \right\rangle -\left\langle Z(\beta +it) \right\rangle \left\langle Z(\beta -it) \right\rangle }{{{\left\langle Z(\beta ) \right\rangle }^{2}}}
\end{align}
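Numerically, the disorder averages $\langle\cdot\rangle$ are estimated over a finite set of realizations; a minimal Python sketch (names are ours) reads:
\begin{verbatim}
import numpy as np

def spectral_form_factors(samples, beta, ts):
    """g, g_d, g_c from a list of eigenvalue arrays (one per realization)."""
    Zt = np.array([[np.sum(np.exp(-(beta + 1j * t) * E)) for t in ts]
                   for E in samples])
    Zb = np.array([np.sum(np.exp(-beta * E)) for E in samples])
    denom = np.mean(Zb) ** 2
    g   = np.mean(np.abs(Zt) ** 2, axis=0) / denom
    g_d = np.abs(np.mean(Zt, axis=0)) ** 2 / denom   # real spectrum assumed
    return g, g_d, g - g_d
\end{verbatim}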
In the SYK model these quantities have predictions similar to those obtained by replacing the hamiltonian with a random matrix from the corresponding Dyson ensemble. For example, for a random matrix ensemble of large dimension $L$, the ensemble-averaged analytically continued partition function is
\begin{align}
{{Z}_{\text{rmt}}}(\beta ,t)=\frac{1}{{{\mathcal{Z}}_{\text{rmt}}}}\int{d{{M}_{ij}}}\exp \left( -\frac{L}{2}\operatorname{Tr}({{M}^{2}}) \right)\text{Tr(}{{e}^{-\beta M-iMt}}\text{)}
\end{align}
where
\begin{align}
{{\mathcal{Z}}_{\text{rmt}}}=\int{d{{M}_{ij}}}\exp \left( -\frac{L}{2}\operatorname{Tr}({{M}^{2}}) \right)
\end{align}
The properties of the spectral form factor in random matrix theory, $g_\text{rmt}(t)$, have been studied in \cite{Cotler:2016fpe}. There are three characteristic regimes of $g_\text{rmt}(t)$. First, the spectral form factor decays quickly to a minimum at the \emph{dip time} $t_d$; then, after a period of growth (the \emph{ramp}) up to the \emph{plateau time} $t_p$, $g_\text{rmt}(t)$ settles onto a constant plateau. This pattern is extremely similar to that of the SYK model. Theoretically, at early times (before $t_d$), $g(t)$ is not captured by $g_\text{rmt}(t)$ because of the different initial dependence on the energy, while at late times the two systems are conjectured to coincide \cite{Cotler:2016fpe}.
\\
\\
With the energy eigenvalue data one can compute the spectral form factors, shown in Figure \ref{spec_SUSY} for the supersymmetric SYK model. We perform the calculation of the three functions $g(t)$, $g_d(t)$ and $g_c(t)$ with $\beta=0, 5, 10$ and several values of $N$. Clear patterns similar to the random matrix theory predictions appear in these numerical simulations: one can directly see the dip, ramp and plateau regimes. For small $\beta$ there are some small oscillations at early times, while for large $\beta$ this effect disappears. The function $g_d$ fluctuates strongly because we have only a finite number of samples; an infinite number of samples would smooth out the noisy randomness of the curves.
\\
\\
A clear eight-fold correspondence is visible in the spectral form factor. Near the plateau time of $g(t)$ one expects roughly a smooth corner for GOE-type, a kink for GUE-type, and a sharp peak for GSE-type behavior. Indeed, we observe roughly smooth corners for $N=14,16,22,24$ and sharp peaks for $N=18,20,26,28$ (although the peaks are not very clear because of the finite sample size). For $N=10,12$, as shown in Figure \ref{wignersusy}, there is no clear random matrix correspondence because $N$ is too small, and we only observe some oscillations near the plateau time.
\\
\\
We also perform a similar test on the supercharge $Q$, plotted in Figure \ref{fig:QSFF}.
In Section~\ref{sec:wigner} we numerically tested the nearest-neighbour level statistics of $Q$, which match perfectly the statistics of the corresponding $H$. The spectral form factors of $Q$ are slightly different from those of $H$, yet they show exactly the same eight-fold behavior.
\subsection{Dip time, plateau time and plateau height}
More quantitative data can be read off from the spectral form factors. In Figures \ref{diptime}, \ref{platime} and \ref{platheight} we present our numerical results for the dip time $t_d$ of $g(t)$, the plateau time $t_p$ of $g(t)$, and the plateau height $g_p$ of $g_c(t)$, respectively. As for numerical technique, we use a linear fit in the ramp regime, and the plateau is fitted by a straight line parallel to the time axis. The dip time is read off as the averaged minimum at the end of the dip regime, and the error bar is computed as the standard deviation.
\\
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{diptime}
\caption{\label{diptime} The dip time $t_d$ for the supersymmetric SYK model. In the left figure we evaluate three different temperatures and compute the dip time as a function of $N$, where the error bar is given by the standard deviation in the estimate of $t_d$, owing to the large noise around the minimum of $g(t)$. In the middle figure we fit the dip time $t_d(N)$ by polynomial and exponential functions at temperature $\beta=5$. In the right figure we separately fit the dip time for the two different random matrix classes, at the same temperature $\beta=5$ and with the same fitting functions.}
\end{figure}
\\
It was claimed in \cite{Cotler:2016fpe} that polynomial and exponential fits can be used to describe the dip time as a function of $N$ at fixed temperature. We apply the same method to the supersymmetric extension. However, we find that in the supersymmetric extension the fit is much better if we fit the GOE-type group ($N \bmod 8=0,6$) and the GSE-type group ($N \bmod 8=2,4$) separately. On the other hand, although we cannot rule out a polynomial fit, the exponential fit performs relatively better. In the exponential fits for the two degeneracy groups, the coefficients multiplying $N$ are roughly the same ($0.24N$ for $\beta=5$) while the overall constants differ. This indicates that the eight-fold degeneracy class, or random matrix class, may influence the overall factor in the exponential expression for the dip time.
\\
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{plateautime}
\caption{\label{platime} The plateau time $t_p$ for the supersymmetric SYK model. We choose three different temperatures and evaluate the plateau time as a function of $N$, fitting $t_p(N)$ with an exponential function. In the left figure we fit all $N$s together, while in the right figure we fit the two random matrix classes separately.}
\end{figure}
\\
One can also read off the plateau time and fit it exponentially. As with the dip time, we fit the plateau time separately for the two random matrix classes, and again we find a difference in the overall factors of the two groups, while the coefficients multiplying $N$ are the same. There is a non-trivial check here: random matrix theory predicts that the plateau time scales as $t_p\sim e^{S(2\beta)}$ \cite{Cotler:2016fpe}. In the large $\beta$ limit the entropy is roughly the ground state entropy, predicted analytically to be $S(\beta=\infty)=N s_0=0.275 N$. For the largest $\beta$ we consider ($\beta=10$), we read off an entropy of $0.277N$ (GSE-type), $0.275N$ (GOE-type), or $0.277N$ (both groups together), which matches the expectation very well.
\\
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{plateauheight}
\caption{\label{platheight} The plateau height $g_p$ for supersymmetric SYK model. In the left figure we choose several temperatures and fix $N$ in each curve, while in the right we fix $\beta$ and evaluate $g_p(N)$.}
\end{figure}
\\
For the plateau height one can clearly see the eight-fold structure. From the previous discussion the plateau height should equal $Z(2\beta)/Z(\beta)^2$ times a contribution from the degeneracy, which is clearly visible in the figure. For $N =14,16,22,24$ (GOE-type) the degeneracy is two, so the points lie on the lower line, while for $N=18,20,26,28$ (GSE-type) the degeneracy is four, so the points lie on the upper line. These observations match the random matrix theory predictions.
\section{Conclusion and outlook}\label{conclu}
In this paper we have used analytic arguments and numerical evidence to explore the constraints of supersymmetry on random matrix theory symmetry classes. We focused on the $\mathcal{N}=1$ supersymmetric SYK model, a supersymmetric generalization of non-locally coupled Majorana fermions with chaotic behavior similar to that of a two-dimensional quantum black hole.
\\
\\
Using the direct classification from random matrix theory, we showed that the $\mathcal{N}=1$ supersymmetric SYK model has a different $N \text{ mod } 8$ structure. These arguments may hold more generally: supersymmetry can directly change the universal class of the hamiltonian (GOE/GUE/GSE) through the symmetry class of the supercharge, where combinations of the Witten index and anti-unitary operators produce new anti-unitary symmetries; moreover, the quadratic structure of the hamiltonian changes the original type of distribution from Gaussian to Wishart-Laguerre. The same may happen in generic supersymmetric statistical physics models.
\\
\\
We also used a numerical method, exact diagonalization, to confirm the random matrix theory classification of the hamiltonian and the supercharge of the supersymmetric SYK model. Examining the spectral density, the supercharge $Q$ shows a clear feature of the one-point function of extended random matrix ensembles, while the hamiltonian shows the feature of a squared semicircle (Marchenko-Pastur). For level statistics (e.g., the Wigner surmise and the spectral form factor), the universal classes GOE/GSE capture the important physical features, and the new eight-fold rule is verified.
\\
\\
Several future directions could be investigated. First, one may consider stronger supersymmetry constraints on the SYK model, such as the $\mathcal{N}=2$ generalization. Many thermodynamic and field-theoretic properties of SYK theories with higher supersymmetry are non-trivial, and it would be interesting to connect them to random matrix theory. Moreover, to understand the spectral form factor with supersymmetry constraints, one could also study superconformal field theory partition functions at late times. Finally, introducing supersymmetry into the symmetry classification of phases in condensed matter theory could bring more understanding at the interface of condensed matter and high energy physics. We leave these interesting possibilities to future work.
\section*{Acknowledgments}
We thank Xie Chen, Kevin Costello, Liam Fitzpatrick, Davide Gaiotto, Yingfei Gu, Nicholas Hunter-Jones, Alexei Kitaev, Andreas Ludwig, Evgeny Mozgunov, Alexandre Streicher for valuable discussions. We thank Takuya Kanazawa for comments on the draft. JL is deeply grateful to Guy Gur-Ari for communications on the symmetry of the original and supersymmetric SYK models. TL, JL, YX and YZ are supported by graduate student programs of University of Nebraska, Caltech, Boston University and Perimeter Institute.
\section{Introduction} \label{sec:intro}
Corner Transfer Matrices (CTMs) were introduced by Baxter in the context of exactly solvable models in 2d.
{ In his 1968 paper \cite{baxterCTM} he laid, without noticing it, some of the basics of CTMs, together with those of the density matrix renormalization group (DMRG) and matrix product states (MPS), when dealing with the dominant eigenvector of a 1d transfer matrix.}
CTMs are a key ingredient in the exact solution of several statistical-mechanical models \cite{CTMstat}, and have also inspired many advances in the study of quantum many-body entanglement \cite{PeschelCTM, dirCTM, 3dCT}.
CTMs have also been important for the numerical simulation of lattice systems, both classical and quantum.
{ In retrospect, Baxter proposed in 1978 a variational method over CTMs \cite{BaxterVarCTM}, inspired by an earlier numerical method from 1941 by Kramers and Wannier \cite{KWappro}.}
This, in turn, was one of the inspirations of Nishino and Okunishi's CTM Renormalization Group method (CTMRG) \cite{CTMRG}.
(For the avid reader, a good source of information about this history can be found in Ref.\cite{ctmnote}.)
Similar numerical CTM techniques are also currently used in the calculation of low-energy properties of infinite-size quantum lattice systems in 2d \cite{dirCTM, ffUpdate, frankCTM}, for which they have become one of the standard tools in the approximate calculation of physically-relevant quantities such as expectation values of local observables and low energy excitations. CTMs and their algorithms have also been generalized to 3d by the so-called Corner Tensors (CTs) \cite{CT, 3dCT}, in turn allowing to explore higher-dimensional systems with Tensor Network (TN) methods.
Still, CTMs and CTs contain a great amount of holographic information about the bulk properties of the system which, a bit surprisingly, has not yet been fully exploited in the context of numerical simulations. Apart from being a useful object in the calculation of observables, the corner objects also contain, by themselves, information about the universal properties of the simulated model, providing a nice instance of the bulk-boundary correspondence for Tensor Networks (TNs) \cite{tn}. Bulk information is encoded holographically at the ``boundary" corners, in a way similar to the study of the so-called ``entanglement spectrum" and ``entanglement Hamiltonians" \cite{eHam}.
For instance, Peschel, Kaulke and Legeza~\cite{PeschelCTM} showed that the entanglement spectrum of a quantum spin chain (w.r.t. a partition into two semi-infinite segments) is identical, up to a normalization constant, to the spectrum of some CTM in 2d, which can be computed exactly in some cases. This is the case for the Ising and Heisenberg quantum spin chains in a transverse field, for which they were able to compute such an entanglement spectrum exactly as the eigenvalues of a ``corner Hamiltonian", which here we call ``corner energies". Nevertheless, and in spite of these results, the study of the physical information encoded holographically in CTMs and CTs has traditionally been overlooked in numerical simulations, especially in the case of 2d quantum lattice systems, despite being a quite natural thing to do.
In this paper we explore the fingerprints of universal physics that are encoded holographically in numerical CTMs and CTs. We do this by studying the eigenvalue spectra of these objects or, more precisely, of \emph{contractions} of these objects, together with their associated entropy, in a way to be explained later. We provide several examples of this both for classical and quantum systems, including classical and quantum Ising, XY, XXZ and $N$-state Potts models, as well as several instances of 2d Projected Entangled Pair States (PEPS) \cite{PEPS} describing perturbed $\mathbb{Z}_2$, $\mathbb{Z}_3$, symmetry-protected, and chiral topological orders \cite{Z2, IsingPEPS, Z3_1, Z3_2, spt, chiralPEPS, chiral1, chiral2}. To achieve this goal we use a variety of TN methods for CTMs and CTs. For the case of ground-state properties of a quantum Hamiltonian $H_q$ in $d$ dimensions, we set up a corner method for a $d+1$ dimensional TN as described in Ref.~\cite{3dCT} via the imaginary-time evolution operator $e^{-\tau H_q}$ for large enough $\tau$. From a broad perspective, some of our results can be understood as a generalization of the work by Peschel, Kaulke and Legeza \cite{PeschelCTM} to 2d quantum systems. Additionally, whenever we have direct access to the ground-state wavefunction $|\psi_G\rangle$ in the form of a TN (e.g., a PEPS), we can also study the CTMs originating from the TN for the norm $\langle \psi_G|\psi_G\rangle$, which can be regarded as the partition function of some fictitious 2d classical model with complex weights. Throughout this paper we shall refer to this setup as {\it reduction} CTM (rCTM), since it is a scheme that ``reduces" the wavefunction to a partition function. Such CTMs are, in fact, readily available in several TN algorithms (such as the full update and fast full update for infinite PEPS \cite{iPEPS, ffUpdate}). Along the way, we also compare different schemes for the classical-quantum correspondence, and provide some pedagogical derivations.
When the quantum state $|\psi_G\rangle$ is explicitly given by a TN, we can directly obtain its associated CTs. To do this we propose a new scheme for quantum state renormalization. In this case, the entanglement spectrum of a partition (of infinite size) can be readily obtained by diagonalizing a contraction of CTs, as we shall explain.
{ First we use the Ising PEPS in the disordered phase as an example to demonstrate how to obtain the CT entanglement spectrum. }
Then we also apply this quantum state renormalization to two cases of chiral topologically ordered states, with $SU(2)_k$ edge modes (for $k=1,2$), and find that the degeneracy pattern in the entanglement spectrum matches that of the corresponding conformal tower for the vacuum of the $SU(2)_k$ WZW model.
Our work is organized as follows: in Sec.(\ref{sec:CornerIntro}) we provide a reminder on CTMs, CTs, some of their properties, as well as a summary of previous relevant results. In Sec.(\ref{Sec3}) we provide a summary of the TN numerical methods used to study the 1d, 2d and 3d classical and quantum lattice systems explored in this paper. Moreover, we also provide a new numerical scheme for quantum state renormalization in 2d using CTs. In Sec.(\ref{Sec4}) we analyze, as a first test, several models in the universality class of the quantum Ising spin chain in a transverse field. In Sec.(\ref{Sec5}) we show how the quantum-classical correspondence can be identified from corner properties, for 1d quantum vs 2d classical and 2d quantum vs 3d classical models. In this section we also review the theory behind several approaches for the quantum-classical correspondence, namely, the partition function approach, Peschel's approach, and Suzuki's approach for the XY model \cite{SuzukiXY}. Then, in Sec.(\ref{Sec6}) we provide further examples where the calculation of corner properties is useful. In particular, we show how corner properties can be used to pinpoint phase transitions in quantum systems ``almost for free" in common tensor network numerical algorithms, without the need to compute observables explicitly. We show this for several PEPS with topological order, including symmetry-protected, as well as for the 2d XXZ model. In Sec.(\ref{Sec7}) we show how CTs can be used to compute the entanglement spectrum of several bipartitions of an infinite 2d system. In particular, we apply the idea to chiral topological PEPS \cite{chiral1}, showing that the obtained spectra encode the expected symmetries of the chiral conformal field theory (CFT) describing its gapless edge, specifically, $SU(2)_k$ WZW models for $k=1,2$. Finally, in Sec.(\ref{sec:Conclusion}) we wrap up with a summary of the results, conclusions and perspectives.
\section{Corner objects} \label{sec:CornerIntro}
\subsection{Corner transfer matrices}
CTMs are objects that can be defined for any 2d tensor network. Here, for simplicity, we assume the case of a 2d TN on a square lattice. Such a TN could be, e.g., the partition function of a classical lattice model, the time-evolution of a 1d quantum system, or the norm of a 2d PEPS. To define what a CTM is, we notice that the contraction of the 2d TN can be obtained, at least theoretically, by multiplying four matrices $C_1,C_2,C_3$ and $C_4$, one for each corner (see Fig.~\ref{fig:CTMmethod}a). Therefore, one has that
\begin{equation}
Z = {\rm tr} \left(C_1C_2C_3C_4 \right) ,
\label{ctmm}
\end{equation}
where $Z$ is the scalar resulting from the contraction. Matrices $C_1,C_2,C_3$ and $C_4$ are the \emph{Corner Transfer Matrices} of the system. They correspond to the (sometimes approximate) contraction of all the tensors in each one of the four corners of the 2d TN. In some cases, when the appropriate lattice symmetries are present, the four CTMs are equal, i.e., $C \equiv C_1 = C_2 = C_3 = C_4$. For the sake of simplicity, in this section we shall assume that this is the case, though in the following sections the four CTMs are different when computed numerically.
It is also convenient to define diagonal CTMs $C_d = P C P^{-1}$. Depending on the symmetries of the system (and thus of $C$), matrix $P$ may be arbitrary, unitary or orthogonal. Let us call the eigenvalues $\nu_{\alpha}$, with $\alpha = 1, 2, \ldots, \chi$, and $\chi$ the \emph{bond dimension} of the CTM. Then, the contraction of the full TN reads
\begin{equation}
Z = {\rm tr} \left(C_d^4 \right) = \sum_{\alpha=1}^\chi \nu_{\alpha}^4 .
\label{ctm2}
\end{equation}
In fact, one can understand this as the trace of the exponential of a ``corner Hamiltonian" $H_C$, i.e.,
\begin{equation}
Z = {\rm tr} \left(e^{-H_C} \right),
\label{ctm3}
\end{equation}
with
\begin{equation}
H_C \equiv - \log{ \left(C_d^4 \right) }.
\label{ctm4}
\end{equation}
Notice that a similar Hamiltonian can also be defined individually for each one of the corners.
\begin{figure}
\includegraphics[width=0.5\textwidth]{1_CTMmethod.pdf}
\caption{ [Color online] (a) The contraction of a 2d square lattice of tensors results in a scalar $Z$, understood as the trace of the product of four CTMs, one for each corner. (b) A reduced density matrix $\rho$ of a system with a CTM at every corner.}
\label{fig:CTMmethod}
\end{figure}
Depending on the symmetries of the CTMs, $H_C$ may be a Hermitian operator or not. From the point of view of quantum states of 1d quantum lattice systems, it is well known \cite{3dCT} that the operator $e^{-H_C}$ is related to the reduced density matrix of half an infinite chain (with $H_C$ Hermitian in this case), see Fig.~\ref{fig:CTMmethod}b. In fact, the spectrum of Schmidt coefficients $\lambda_\alpha$ of half an infinite quantum chain in its ground state is given by $\lambda_{\alpha} = \nu_{\alpha}^2$. These Schmidt coefficients are related to the eigenvalues $\omega_\alpha$ of the reduced density matrix of half an infinite quantum system (the so-called ``entanglement spectrum" \cite{eHam}) by $\omega_\alpha = \lambda_\alpha^2 = \nu_\alpha^4$, which are known to encode universal information about the system when close enough to criticality \cite{PeschelCTM}. In terms of $\omega_\alpha$, the contraction of the 2d TN reads $Z = \sum_{\alpha = 1}^\chi \omega_\alpha$. Additionally, the eigenvalues $\varepsilon_\alpha$ of the corner Hamiltonian $H_C$ read
\begin{equation}
\varepsilon_\alpha \equiv - \log \omega_\alpha.
\end{equation}
In this paper we call these eigenvalues $\varepsilon_\alpha$'s \emph{corner energies}.
\subsection{Corner Tensors}
Similarly to CTMs for 2d TNs, one can define corner objects for higher dimensions, which we generically call \emph{Corner Tensors} (CT). Formally speaking, a CT is the (sometimes approximate) contraction of all the tensors at one of the corners of a TN. For instance, for a TN on a 3d cubic lattice, one would have that its contraction $Z$ is equivalent to the contraction of eight CTs, i.e.,
\begin{equation}
Z = f(C_1, C_2, C_3, C_4, C_5, C_6, C_7, C_8),
\end{equation}
with $C_i$ ($i=1, \ldots, 8$) eight three-index tensors (the CTs), and $f(\cdot)$ a function specifying the contraction pattern, see Fig.~\ref{fig:CT}.
For the case of systems with CTs it is also possible to define corner Hamiltonians. For instance, contractions such as the ones in Fig.~\ref{fig:CT} correspond, for the case of a 2d quantum lattice system, to tracing over three quarters or half of the infinite system. For quantum systems described by a 2d PEPS, it is possible to obtain these types of contractions by using the quantum state renormalization scheme from Sec.(\ref{Sec3}). In such cases, these contractions correspond to the reduced density matrices $\rho$ of either one quarter or half an infinite 2d system, with eigenvalues $\omega_\alpha$, $\alpha = 1, \ldots, \chi$ (entanglement spectrum). The contraction of the full 3d TN thus amounts to $Z = \sum_{\alpha = 1}^\chi \omega_\alpha$, as in the lower-dimensional case of CTMs. Again, it is possible to define a corner Hamiltonian $H_C$ and corner energies $\varepsilon_\alpha$ in an analogous way as for CTMs.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{2_3Dcorner.png}
\caption{[Color online] 3d corner tensors which correspond to tracing over, respectively, (a) three quarters and (b) half of a given 2d quantum system.}
\label{fig:CT}
\end{figure}
\subsection{Previous results}
CTMs and CTs have proven to be important in a variety of contexts, both for theory and numerics. In statistical mechanics they were used to solve the hard hexagon model and many others \cite{baxterCTM, CTMstat}. From the perspective of quantum information, it is well known that the corner Hamiltonian $H_C$ is related to a quantum system which, in some cases, can be diagonalized exactly \cite{PeschelCTM}. Numerically, Baxter developed a variational method to approximate the partition function per site of a 2d classical lattice model by truncating in the eigenvalue spectrum of the CTM \cite{BaxterVarCTM}. This was later refined by Nishino and Okunishi, who developed the Corner Transfer Matrix Renormalization Group method (CTMRG) \cite{CTMRG}. Alternative truncation schemes for CTMRG have also been studied, based on a directional approach and with a direct application in infinite-PEPS algorithms \cite{iPEPS, dirCTM}. In fact, CTMs have been applied extensively in the calculation of effective environments in infinite-PEPS simulations \cite{oruscorboz}. Moreover, they have been used as well in the generalization to 2d of the time-dependent variational principle \cite{frankCTM}, which is also useful in the calculation of 2d excitations. As for generalizations, CTMs have also been used in other 2d geometries, including lattice discretizations of AdS manifolds \cite{ads}. Numerical methods with CTMs were also implemented in systems with periodic boundary conditions \cite{pbc} as well as in stochastic models \cite{stoch}. Methods targeting directly the corner Hamiltonian have also been considered \cite{Hc, Kim2016}. Finally, the higher-dimensional generalization to corner tensors has also been used to develop new numerical simulation algorithms \cite{CT, 3dCT}.
\section{Approach and methods}
\label{Sec3}
\subsection{Generalities}
In the following sections we shall show how the spectrum of eigenvalues $\omega_\alpha$, or equivalently the spectrum of corner energies $\varepsilon_\alpha$, encodes useful universal information when computed numerically for a variety of classical and quantum lattice systems. This is also true for the ``corner entropy" given by
\begin{equation}
S \equiv - \sum_\alpha \omega_\alpha \log \omega_\alpha .
\end{equation}
In particular, we will show explicitly how the spectrum as well as the entropy coincide exactly between some $d$-dimensional quantum and $(d+1)$-dimensional classical spin systems, as expected from the quantum-classical correspondence. Moreover, we will also study them for a variety of other models, including several instances of topologically-ordered states. We will see that this can be useful to pinpoint phase transitions as well as to study the edge physics of chiral topological states.
Concerning numerical algorithms, in our simulations we have used the following, depending on the nature of the system to be studied:
\begin{figure}
\includegraphics[width=0.5\textwidth]{3_effective_WF.pdf}
\caption{[Color online] 2d PEPS on a square lattice and its renormalized version with CTs.}
\label{fig:effective_WF}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{4_qsrg_ctm.pdf}
\caption{[Color online] 2d quantum state renormalization with corner tensors: a left move, where one column is absorbed to the left. The procedure is the same as in the directional CT approach from Ref.~\cite{3dCT}, but on a single layer of PEPS tensors instead of two layers. Consequently, at every step we need to renormalize with isometries not just the bond indices, but also the physical indices, which proliferate at every iteration.
Several prescriptions are possible for the calculation of the isometries, e.g., one could consider higher-order singular value decompositions of the resulting tensors \cite{genSVD}, or compute the reduced density operators of the indices to be truncated \cite{CTMRG}.}
\label{fig:qsrg_ctm}
\end{figure}
\begin{enumerate}
\item{\emph{For 1d quantum:} the infinite Time-Evolving Block Decimation (iTEBD) \cite{itebd} to approximate ground states. The spectrum $\omega_\alpha$ obtained from CTMs is easily related \cite{PeschelCTM} to the Schmidt coefficients $\lambda_\alpha$ of a bipartition, readily available from iTEBD or iDMRG \cite{idmrg}, as $\omega_\alpha = \lambda_\alpha^2$. In some instances we also use the simplified one-directional 1d method from Ref.~\cite{3dCT}.}
\item{\emph{For 2d classical:} 2d directional CTM approach \cite{dirCTM}. }
\item{\emph{For 2d quantum:} if a quantum Hamiltonian is given, then we use the 3d directional CTM approach~\cite{dirCTM} to compute properties of CTs, as well as infinite-PEPS (iPEPS) \cite{iPEPS} to approximate ground states. If the ground state $\ket{\psi_G}$ is given, then we use the directional CTM approach for the double-layer tensors of the norm \cite{dirCTM} to compute
the ``reduced" spectrum $\omega^{(r)}_\alpha$ from rCTM. Moreover, we also use the 2d quantum state renormalization described in the next section to compute properties of CTs. As we shall see, this method is single-layer and targets directly the quantum state.}
\item{\emph{For 3d classical:} simplified one-directional 2d method~\cite{3dCT}.}
\end{enumerate}
\subsection{2d quantum state renormalization with CTs}
\label{qsr}
The procedure of quantum state renormalization is important in 2d to obtain the contractions from Fig.~\ref{fig:CT} in the quantum case, which give the reduced density matrix obtained by tracing out the spins in three quadrants or in a half-infinite plane. The entanglement spectrum can then be obtained from the eigenvalues of this reduced density matrix. We have implemented our own approach for the case of a 2d PEPS, using CTs and single-layer contractions. This procedure, which is an independent algorithm by itself, is explained in detail in what follows.
The quantum state renormalization group (QSRG) transformation acts directly on a quantum state and aims to extract a fixed-point wave function encoding universal properties \cite{qsrg}. The basic idea is to remove non-universal short-range entanglement related to the microscopic details of the system. After many rounds of QSRG, the original ground state flows to a simpler fixed-point state, from which one can identify the phase to which the system belongs.
In order to determine the fixed-point wave function we make use of CTs, see Fig.~\ref{fig:effective_WF}. The distinction from the usual QSRG is that here the fixed-point wave function will be encoded in these CTs.
The procedure is similar to the directional CTM approach from Ref.~\cite{dirCTM}, but this time acting directly on the PEPS, which is single-layer, and not on the TN for the norm, which is double-layer. An example of a left-move is in Fig.~\ref{fig:qsrg_ctm}, where we also show a simple option to obtain the isometries needed for the coarse-grainings. We follow this procedure by absorbing rows and columns towards the left, up, right and down directions until convergence is reached. In the end, the corner tensors $C$ represent the renormalization of one quadrant of the 2d PEPS, and the half-row/half-column tensors $T$ the renormalization of half an infinite row or column of tensors in the PEPS. One then follows the contractions in Fig.~\ref{fig:CT} to obtain the corresponding reduced density matrix and hence the entanglement spectrum.
\section{First test: the 1d quantum Ising universality class}
\label{Sec4}
In order to build some intuition about the numerical information contained in the spectrum $\varepsilon_\alpha$ of corner energies, we have first performed a series of numerical tests on systems belonging to the universality class of the 1d quantum Ising model in a transverse field. The analyzed models undergo a second-order quantum or classical phase transition, with the critical point being described by an effective $(1+1)$-dimensional CFT of a free fermion \cite{cftFreeFermion}. The models and methods considered are:
\medskip\underline{\emph{(i) 1d quantum Ising:}} the quantum Hamiltonian is given by
\begin{equation}
H_q= - \sum_i \sigma_x^{[i]} \sigma_x^{[i+1]} - h \sum_i \sigma_z^{[i]},
\end{equation}
with $\sigma_\alpha^{[i]}$ the corresponding $\alpha$-Pauli matrices at site $i$, and $h$ the transverse magnetic field, with critical point at $h_c = 1$. We use iTEBD to approximate the ground state by a Matrix Product State (MPS) \cite{mps}, from which the squared Schmidt coefficients $\lambda^2_\alpha$ (and hence the entanglement spectrum) are readily obtained. We also use the simplified one-directional 1d method from Ref.~\cite{3dCT} to obtain the corner spectrum $\omega_\alpha$. As argued in Ref.~\cite{PeschelCTM}, we expect and verify that $\{\lambda_\alpha^2\}$ agrees with $\{\omega_\alpha\}$.
\medskip\underline{\emph{(ii) 2d classical Ising:}} the partition function is given by
\begin{equation}
Z_c = \sum_{\{ s \}}e^{- \beta H_c(\{ s \})},
\end{equation}
with classical Hamiltonian
\begin{equation}
H_c(\{s \})= -\sum_{\langle i, j \rangle } s^{[i]} s^{[j]},
\label{cIsing}
\end{equation}
where $\beta = 1/T$ is the inverse temperature, $s^{[i]} = \pm 1$ is a classical spin variable at site $i$, $\{ s \}$ is a spin configuration, and the sum in the Hamiltonian runs over nearest neighbours on the square lattice. The model is exactly solvable, and the critical point satisfies $\beta_c = \frac{1}{2}\log { \left( 1+\sqrt{2} \right) }$. It is well known \cite{itebd} that the partition function $Z_c$ can be written as an exact 2d tensor network with tensors on the sites of a square lattice. The approximate contraction is therefore amenable to tensor network methods. We use the directional CTM approach to compute the corner spectra and corner entropy from the tensors defining the partition function of the model.
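To make the procedure concrete, the following minimal single-site sketch in the spirit of CTMRG \cite{CTMRG} (in Python with \texttt{numpy}) computes the corner spectrum and corner entropy of the isotropic model; it is an illustrative toy version with arbitrary temperature, bond dimension and iteration count, not the implementation used for the results below:
\begin{verbatim}
import numpy as np

beta, chi_max, n_steps = 0.4, 16, 40       # illustrative values

# Square root W of the Boltzmann matrix, so that a = sum_s W W W W.
B = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
e, Q = np.linalg.eigh(B)
W = Q @ np.diag(np.sqrt(e)) @ Q.T
a = np.einsum('su,sl,sd,sr->uldr', W, W, W, W)   # local tensor of Z_c

C = np.einsum('sr,sd->rd', W, W)          # initial corner (free boundary)
T = np.einsum('sx,sy,sp->xyp', W, W, W)   # initial edge tensor

for _ in range(n_steps):
    chi = C.shape[0]
    # grow the corner by one site plus one piece of each adjacent edge
    Cp = np.einsum('ab,axu,byl,ulqp->xpyq', C, T, T, a)
    Cp = Cp.reshape(2 * chi, 2 * chi)
    Cp = 0.5 * (Cp + Cp.T)
    w, U = np.linalg.eigh(Cp)
    idx = np.argsort(-np.abs(w))[:chi_max]     # truncate the spectrum
    w, U = w[idx], U[:, idx]
    # grow the edge tensor and renormalize it with the same isometry
    Tp = np.einsum('xyu,ulqp->xlypq', T, a).reshape(2 * chi, 2 * chi, 2)
    T = np.einsum('ai,bj,abq->ijq', U, U, Tp)
    C = np.diag(w / np.max(np.abs(w)))         # renormalized corner
    T = T / np.max(np.abs(T))

nu = np.diag(C)
omega = nu ** 4                   # rho ~ C^4 for four equal corners
omega = omega / omega.sum()
omega = omega[omega > 1e-20]
print(-np.sum(omega * np.log(omega)))   # corner entropy S
\end{verbatim}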
\medskip\underline{\emph{(iii) 2d Ising PEPS:}} as explained in Ref.~\cite{IsingPEPS}, it is actually possible to write an exact Projected Entangled Pair State (PEPS) \cite{PEPS} with bond dimension $D=2$ whose expectation values are the ones of the 2d classical Ising model. The way to construct this PEPS is simple: one starts by considering the quantum state
\begin{equation}
\ket{\psi (\beta)}=\frac{1}{\sqrt{Z_c}}e^{\left( \frac{\beta}{2} \sum_{\langle i,j \rangle} \sigma_z^{[i]} \sigma_z^{[j]} \right)} \ket{+, +, \cdots, +},
\end{equation}
with $\beta$ some inverse temperature and $\ket{+}$ the $+1$ eigenstate of $\sigma_x$. It is easy to see that the expectation values of this quantum state match the ones of the 2d classical Ising model, e.g.,
\begin{equation}
\bra{\psi (\beta)} \sigma_z^{[i]} \sigma_z^{[j]} \ket{\psi (\beta)} = \frac{1}{Z_c} \sum_{ \{ s \} }s^{[i]} s^{[j]} e^{-\beta H_c(\{ s \})} = \langle s^{[i]} s^{[j]} \rangle_\beta,
\end{equation}
with $H_c(\{ s \} )$ the classical Hamiltonian in Eq.(\ref{cIsing}), and $\langle \cdot \rangle_\beta$ the expectation value in the canonical ensemble at inverse temperature $\beta$.
For a square lattice, one can also see \cite{IsingPEPS} that the state $\ket{\psi (\beta) }$ can be written exactly as a 2d PEPS with bond dimension $D=2$. If $A$ is the tensor defining the PEPS, its non-zero coefficients are given by
\begin{eqnarray}
A_{0000}^+ &=& \left( \cosh (\beta/2) \right)^4 \nonumber \\
A_{0010}^- &=& \left( \cosh (\beta/2) \right)^3 \left( \sinh (\beta/2) \right) \nonumber \\
A_{0110}^+ &=& \left( \cosh (\beta/2) \right)^2 \left( \sinh (\beta/2) \right)^2 \nonumber \\
A_{1110}^- &=& \left( \cosh (\beta/2) \right) \left( \sinh (\beta/2) \right)^3 \nonumber \\
A_{1111}^+ &=& \left( \sinh (\beta/2) \right)^4
\end{eqnarray}
and permutations thereof. In the above equations, the convention for the PEPS indices is $A_{\alpha \beta \gamma \delta}^i$, with $\alpha, \beta, \gamma, \delta$ the left, up, right and down indices, and $i$ the physical index (this time in the $+/-$ basis). By construction, this PEPS is critical at the same $\beta_c$ as the classical Ising model, and belongs to the same universality class. For the numerical simulations it is sometimes convenient to parametrize the PEPS in terms of $g = \frac{1}{2} \arcsin (e^{-\beta})$, and therefore $g_c \approx 0.349596$. For this state, we computed the corner spectra and entropy from the double-layer TN defining its norm, using the directional CTM approach \cite{dirCTM}.
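As a quick numerical consistency check of this parametrization (a sketch, \texttt{numpy} assumed), the classical critical point indeed reproduces the quoted value of $g_c$:
\begin{verbatim}
import numpy as np

beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))   # 2d Ising critical point
print(0.5 * np.arcsin(np.exp(-beta_c)))     # ~0.349596, i.e. g_c
\end{verbatim}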
\begin{figure}
\includegraphics[width=0.5\textwidth]{5_1dqising_2d.pdf}
\caption{[Color online] (a) Entanglement spectra $\lambda_{\alpha}^2$ and the entanglement entropy obtained from iTEBD of 1d quantum Ising model with parameter $t (h)$ as the function of transverse field $h$.
The corner spectra $\omega_\alpha$ and the corner entropy $S$ of: (b) also the 1d quantum Ising model with parameter $t (h)$ as the function of transverse field $h$, but computed with the simplified one-directional 1d method \cite{3dCT}; (c) 2d classical Ising model with temperature $t= T/T_c$; (d) 2d quantum Ising PEPS with parameter $t(g)$ as the function of $g$. In (c,d) the corner tensors are obtained from the rCTM setting, see also examples in Sec.~\ref{Sec6}. In all cases, the bond dimension of the CTMs - equivalent to the bond dimension of the MPS in case (a) - is $\chi=40$.}
\label{fig:1dqising_2d}
\end{figure}
For these three models and the methods mentioned we have computed the spectrum $\omega_\alpha$ as a function of the relevant parameter (magnetic field, inverse temperature, perturbation...), as well as the corner entropy $S = - \sum_\alpha \omega_\alpha \log \omega_\alpha$.
The results are shown in Fig.~\ref{fig:1dqising_2d}.
The differences between the models correspond to rescalings in the defining variables and parameters that map the different models onto each other. More specifically, we can rescale the parameters $h$ and $g$ using the 2d classical Ising reduced temperature $t=T/T_c$ as the basic variable, which is related to the magnetic field $h$ of the 1d quantum model by $t = T_c/\arcsinh \sqrt{1/h}$, and to the parameter $g$ of the 2d Ising PEPS by $ t = - T_c/ \log \left(\sin(2g) \right)$. As shown in the plots, in all cases one can see that the entropy $S$ tends to have the same type of divergence. Concerning the corner spectra $\omega_\alpha$, we see that all the models reproduce the same type of branches in both the symmetric and the symmetry-broken phases.
As expected, all spectra match perfectly between the different calculations, since the different models can be mapped into each other exactly.
\section{Benchmarking the quantum-classical correspondence}
\label{Sec5}
In this section we consider the corner energies for a variety of quantum and classical systems, which allows us to study in detail the correspondence between quantum spin systems in $d$ dimensions and classical systems in $d+1$ dimensions. There are several approaches, and here we focus mainly on three of them, which we shall refer to as the partition-function method~\cite{partition}, Peschel's method~\cite{PeschelMap}, and Suzuki's method~\cite{SuzukiXY}, respectively. We will give a pedagogical treatment, specializing to a few models, and show numerical results for a variety of 1d and 2d quantum as well as 2d and 3d classical models.
\subsection{Partition-function approach}
We now review the standard procedure behind the partition-function approach for the quantum-classical mapping and then examine such correspondence in terms of entanglement and corner spectra. The main idea is that, for a $d$-dimensional quantum Hamiltonian $H_q$ at inverse temperature $\beta$, the canonical quantum partition function $Z_q = \mathop{\mathrm{tr}} (e^{-\beta H_q})$ can be evaluated by writing it as a path integral in imaginary time, i.e.,
\begin{align}
Z_q &= \mathop{\mathrm{tr}} \left(e^{-\beta H_q} \right) = \sum_{m} \langle m | e^{-\beta H_q} | m \rangle,
\end{align}
with $\ket{m}$ a given basis of the Hilbert space. Introducing resolutions of the identity at intermediate steps in imaginary time one has
\begin{align}
Z_q = \sum_{\{ m \} } \langle m_0 | U | m_{L-1} \rangle
\langle m_{L-1} | U | m_{L-2} \rangle \cdots \langle m_{1} |U | m_{0} \rangle,
\end{align}
with $U \equiv e^{-\delta \tau H_q}$, $\delta \tau \equiv \beta/L \ll 1$ (smaller than all time scales of $H_q$), and where the sum runs over all configurations of $m_\alpha$, $\alpha = 0, 1, \ldots, L-1$, with $m_L = m_0$, i.e., periodic boundary conditions in imaginary time.
As such, this way of writing the partition function can be interpreted in some cases as the one of a \emph{classical} model with some variables $m_\alpha$ along an extra dimension emerging from the imaginary-time evolution. In what follows, we make this specific for the quantum Ising and Potts models, and benchmark the theory with numerical simulations using CTMs and CTs computing the corner spectra and corner entropy.
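This construction is straightforward to verify numerically. The following sketch (assuming \texttt{numpy} and \texttt{scipy}; the system size, field and inverse temperature are illustrative choices) compares the exact canonical partition function of a two-site transverse-field Ising chain with its first-order Trotter approximation:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
h = 0.8
Hz = -np.kron(sz, sz)                          # interaction part
Hx = -h * (np.kron(sx, I2) + np.kron(I2, sx))  # field part
H = Hz + Hx

beta, L = 1.0, 1000
dt = beta / L
U = expm(-dt * Hz) @ expm(-dt * Hx)            # one imaginary-time slice
Z_trotter = np.trace(np.linalg.matrix_power(U, L))
Z_exact = np.trace(expm(-beta * H))
print(Z_exact, Z_trotter)                      # agree up to O(dt)
\end{verbatim}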
\subsubsection{Transverse-field quantum Ising model in $d$ dimensions}
\emph{\underline{(i) Mapping via the partition function:}} let us consider the quantum Ising model with a transverse field in $d$ dimensions for $L$ spins. For convenience, we use now the following notation for its Hamiltonian:
\begin{align}
H_q = -J_z \sum_{\langle i, j \rangle} \sigma_z^{[i]} \sigma_z^{[j]} - J_x \sum_{i} \sigma_x^{[i]}
= H_z + H_x ,
\end{align}
where $\sigma_\alpha^{[i]} $ is the $\alpha$th Pauli matrix on site $i$, $J_z$ is the interaction coupling, $J_x$ the field strength, and the sum $\langle i, j \rangle$ runs over nearest neighbors. The canonical quantum partition function of this model is given by
\begin{align}
Z_q &= \mathop{\mathrm{tr}} \left(e^{-\beta H_q} \right) = \sum_{\eta_z } \Big{\langle} \{\eta_z \} \Big{|} e^{-\beta H_q } \Big{|} \{\eta_z \} \Big{\rangle},
\end{align}
with $ \Big{|} \{ \eta_z \} \Big{\rangle} \equiv | \eta_z^{[1]}, \eta_z^{[2]}, \cdots, \eta_z^{[L]} \rangle $ the diagonal z-basis of the $L$ spins, so that $\eta_z^{[i]} = \pm 1$, $i = 1,2,...,L$. Splitting the imaginary time $\beta$ into infinitesimal time steps $\delta \tau$ we obtain
\begin{align}
& \Big{\langle} \{ \eta_z(\tau +\delta \tau) \} \Big{|} e^{-\delta \tau H_q } \Big{|} \{ \eta_z(\tau) \} \Big{\rangle} \notag \\
& \approx
\Big{\langle} \{ \eta_z(\tau+\delta \tau) \} \Big{|} e^{-\delta \tau H_x } e^{-\delta \tau H_z } \Big{|} \{ \eta_z(\tau) \} \Big{\rangle} \notag \\
& = e^{ - \delta \tau H_z (\{\eta_z(\tau) \} ) }
\Big{\langle} \{\eta_z (\tau+\delta \tau)\} \Big{|} e^{-\delta \tau H_x } \Big{|} \{\eta_z(\tau)\} \Big{\rangle},
\label{split}
\end{align}
where in the first line we performed a first-order Trotter approximation with $O(\delta \tau^2)$ error. Next, we consider the term with Hamiltonian $H_x$. In the single-site z-basis this can be written as
\begin{align}
& \langle \eta_z^{[i]} ({\tau+\delta \tau}) | e^{ \delta \tau J_x \sigma_x^{[i]} } | \eta_z^{[i]}{(\tau)} \rangle \notag\\
& = \sum_{\eta_x^{[i]}=\pm 1} \langle \eta_z^{[i]} ({\tau+\delta \tau}) | e^{ \delta \tau J_x \sigma_x^{[i]}} |\eta_x^{[i]} \rangle \langle \eta_x^{[i]} | \eta_z^{[i]}{(\tau)} \rangle \notag\\
& = \sum_{\eta_x^{[i]}=\pm 1} e^{ \delta \tau J_x \eta_x^{[i]} } \langle \eta_z^{[i]} ({\tau+\delta \tau}) |\eta_x^{[i]} \rangle \langle \eta_x^{[i]} | \eta_z^{[i]}{(\tau)} \rangle.
\end{align}
We can now use the overlap relation
\begin{equation}
\langle \eta_x^{[i]} | \eta_z^{[i]} \rangle = \frac{1}{\sqrt{2}}e^{ i \pi\left(\frac{1-\eta_x^{[i]}}{2} \right) \left( \frac{1-\eta_z^{[i]}}{2} \right)},
\end{equation}
and define $\eta_z^{\prime [i]} \equiv \eta_z^{[i]} ({\tau+\delta \tau}) $, $\eta_z^{[i]} \equiv \eta_z^{[i]} ({\tau}) $. Using this notation, we now have
\begin{align}
& \langle \eta_z^{\prime [i]} | e^{ \delta \tau J_x \sigma_x^{[i]}} | \eta_z^{[i]} \rangle \notag\\
& = \sum_{\eta_x^{[i]}=\pm 1} e^{ \delta \tau J_x \eta_x^{[i]} } \times \frac{1}{2}
e^{ i \pi \left( \frac{1-\eta_x^{[i]}}{2} \right) \left(\frac{1-\eta_z^{\prime [i]}}{2} + \frac{1-\eta_z^{[i]}}{2} \right) } \notag \\
& = \frac{1}{2} \left( e^{ \delta \tau J_x} + e^{ -\delta \tau J_x} \eta_z^{\prime [i]} \eta_z^{[i]} \right) \notag \\
& = \frac{1}{2} e^{ \delta \tau J_x} \left( 1 + e^{ -2 \delta \tau J_x} \eta_z^{\prime [i]} \eta_z^{[i]} \right).
\label{CIrepresentation}
\end{align}
Moreover, we have the alternative representation
\begin{align}
\langle \eta_z^{\prime [i]} | e^{ \delta \tau J_x \sigma_x^{[i]} } | \eta_z^{[i]} \rangle &= C e^{J_{\tau} \eta_z^{\prime [i]} \eta_z^{[i]} } \notag \\
& = C \left( \cosh (J_{\tau}) + \sinh (J_{\tau}) \eta_z^{\prime [i]} \eta_z^{[i]} \right) \notag \\
& = C \cosh (J_{\tau}) \left(1 + \tanh (J_{\tau}) \eta_z^{\prime [i]} \eta_z^{[i]} \right),
\label{QIrepresentation}
\end{align}
with $C$ a normalization constant. Comparing Eqs.~(\ref{CIrepresentation}) and~(\ref{QIrepresentation}), we obtain the relation
$ \tanh (J_{\tau}) = e^{ -2 \delta \tau J_x}$. Finally, the partition function $Z_q$ of the transverse-field quantum Ising model can be written as
\begin{align}
\label{eq:dQpartition}
Z_q \approx \sum_{ \{ \eta \}} C'e^{J_s \sum_{\alpha, \langle i,j \rangle } \eta_z^{[i]}(\tau_\alpha) \eta_z^{[j]}(\tau_\alpha)} \nonumber \\
\times e^{J_{\tau} \sum_{\alpha, i } \eta_z^{[i]}(\tau_{\alpha + 1}) \eta_z^{[i]}(\tau_\alpha)},
\end{align}
where the ``coupling constants" along the imaginary-time ($\tau$) and space ($s$) directions are given by
\begin{eqnarray}
J_{\tau} &=& \tanh^{-1}\left( e^{-2 \delta \tau J_x} \right) \nonumber \\
J_s &=& J_z \delta \tau .
\end{eqnarray}
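The matrix elements above are easy to check numerically. The following sketch (with illustrative parameter values) verifies that $e^{a\sigma_x}$, with $a = \delta\tau J_x$, indeed takes the classical form $C\, e^{J_\tau \eta' \eta}$ of Eq.~(\ref{QIrepresentation}) with $\tanh J_\tau = e^{-2a}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

a = 0.3                                  # a = delta_tau * J_x, illustrative
sx = np.array([[0., 1.], [1., 0.]])
M = expm(a * sx)                         # elements <eta'|e^{a sx}|eta>
J_tau = np.arctanh(np.exp(-2.0 * a))     # tanh(J_tau) = exp(-2a)
C = M[0, 0] / np.exp(J_tau)              # fixes the normalization constant
eta = np.array([[1., -1.], [-1., 1.]])   # eta' * eta for the four elements
print(np.allclose(M, C * np.exp(J_tau * eta)))   # True
\end{verbatim}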
\begin{figure}
\includegraphics[width=0.3\textwidth]{6_2dlattice.pdf}
\caption{[Color online] Coupling constants for a 2d classical Ising model. In connection with the quantum-classical correspondence, the vertical direction corresponds to imaginary-time.}
\label{fig:2dlattice}
\end{figure}
Therefore, the canonical quantum partition function of a $d$-dimensional quantum Ising model with a transverse field at inverse temperature $\beta$ can be approximately represented by the classical partition function of a $(d+1)$-dimensional classical Ising model of size $\beta$ in the imaginary-time direction.
The exact correspondence is recovered when the number of sites $L$ in the imaginary-time direction is taken to infinity, giving $\delta \tau=\beta/L \to 0$; the corresponding classical model then has couplings $J_s\to 0$ and $J_\tau\to \infty$. In Monte Carlo simulations, tricks can be used to deal with such a limit~\cite{BloteDeng}. For our simulations using the correspondence from this partition-function approach, we have to take $\delta \tau$ increasingly small to approach the exact correspondence of the spectrum.
\begin{figure}
\includegraphics[width=0.5\textwidth]{7_diagonalT.pdf}
\caption{The diagonal transfer matrix of the square lattice.}
\label{fig:diagonalT}
\end{figure}
Re-parametrizing the derived classical 2d anisotropic Ising model (see Fig.~\ref{fig:2dlattice}) we have
\begin{align}
\label{eq:d+1C_Hamiltonian}
\beta H_c = -\sum_{\langle i,j \rangle} \left(K_x s^{[i,j]} s^{[i,j+1]} + K_y s^{[i,j]} s^{[i+1,j]} \right),
\end{align}
where $K_x , K_y$ are respectively the horizontal and vertical couplings, $s^{[i,j]}=\pm1$ are classical spins at site $[i,j]$, and the sum runs over nearest neighbors on a square lattice. The classical canonical partition function of this model is given by
\begin{align}
Z_c = \sum_{\{s \}} e^{\left(\sum_{\langle i,j \rangle} K_x s^{[i,j]} s^{[i,j+1]} + K_y s^{[i,j]} s^{[i+1,j]} \right)}.
\label{eq:d+1C_partition}
\end{align}
Comparing Eq.~(\ref{eq:dQpartition}) with Eq.~(\ref{eq:d+1C_partition}) we then have the relations
\begin{align}
K_x= J_{s}= J_z \delta\tau, \quad K_y= J_{\tau} = \tanh^{-1}(e^{-2 \delta \tau J_x}),
\label{relations}
\end{align}
where we can set $J_z=1$ and $J_x=h$. We thus obtain the relation between $h$ and $K_x,K_y$,
\begin{align}
\tanh K_y= e^{-2 K_x h}.
\end{align}
The exact mapping is obtained in the limit
$K_x \to 0$ and $K_y\to \infty$.
The case of a 3d classical Ising model on a cubic lattice, analogous to a 2d quantum Ising model in a transverse field on the square lattice, only introduces one more relation in addition to those in Eq.~(\ref{relations}), for an extra coupling along a spatial direction, i.e., \begin{align}
& K_x= J_{s}= J_z \delta\tau, \quad K_y= J_{s}= J_z \delta\tau, \notag \\
& K_z= J_{\tau} = \tanh^{-1}(e^{-2 \delta \tau J_x}).
\label{2dQCrelations}
\end{align}
Such a $d$-dimensional quantum Ising model is mapped to a corresponding $(d+1)$-dimensional classical Ising model, which has homogeneous couplings along $d$ spatial dimensions, and is anisotropic in the extra (imaginary) temporal dimension.
\medskip\emph{\underline{(ii) Peschel's mapping in 2d:}}
In a work by Peschel~\cite{PeschelMap}, it was shown that a 2d classical Ising model with an isotropic coupling $K$ is in exact correspondence to a 1d quantum spin chain with Hamiltonian
\begin{align}
\label{PeschelIsing}
H_q = - \sum_{i=1}^{L-1} \sigma_x^{[i]} -\delta \sigma_x^{[L]} - \lambda \sum_{i=1}^{L-1} \sigma_z^{[i]} \sigma_z^{[i+1]},
\end{align}
where $\delta = \cosh 2K$ and $\lambda =\sinh^{2} 2K $, by using a transfer matrix technique.
The transverse-field term with strength $\delta$ at the right end can be neglected for large $L$.
One then arrives at the usual homogeneous chain.
Let us briefly review how this is derived. Consider the classical Hamiltonian of the 2d isotropic Ising model given by
\begin{align}
\beta H_c = -\sum_{i,j} K (s^{[i,j]} s^{[i,j+1]} + s^{[i,j]} s^{[i+1,j]} ),
\end{align}
where $s^{[i,j]} = \pm 1$ is a classical spin at site $[i,j]$ and $\beta$ is the inverse temperature.
The partition function is given by
\begin{align}
Z_c = \sum_{\{s\}} e^{ \left( K \sum_{i,j} (s^{[i,j]} s^{[i,j+1]} + s^{[i,j]} s^{[i+1,j]} ) \right)}.
\end{align}
Firstly, by drawing the lattice diagonally (i.e., rotating the square lattice by 45 degrees), the sites form rows as shown in Fig.~\ref{fig:diagonalT}, and these rows can be classified into two types: open circles and solid circles. This means that the number of rows must be even.
Let now $N$ be the number of rows and $M$ the number of sites in each row.
Moreover, let $\phi_r$ denote all spins in row $r$, with $2^M$ possible values.
In particular, the partition function can be represented by the diagonal-to-diagonal transfer matrices $D_1$ and $D_2$ as follows:
\begin{align}
Z_c = \sum_{\phi_1} \sum_{\phi_2}\cdots \sum_{\phi_N} &(D_1)_{\phi_1,\phi_2} (D_2)_{\phi_2,\phi_3} (D_1)_{\phi_3,\phi_4} \notag \\
&\cdots(D_1)_{\phi_{N-1},\phi_N} (D_2)_{\phi_N,\phi_1}.
\end{align}
Here, $(D_1)_{\phi_{j},\phi_{j+1}}$ contains all Boltzmann weight factors of the spins (from open circles to solid circles) in the adjacent rows $j$ and $j+1$. Similarly, $(D_2)_{\phi_{j},\phi_{j+1}}$ contains the other type of spins (from solid circles to open circles).
We now consider three rows labeled as $\phi, \phi', \phi''$, where $\phi = \{ s_1,s_2,...,s_M \}$ are the spins in the lower row and similarly for $\phi'$ and $\phi''$. Then the diagonal-to-diagonal transfer matrix is given by
\begin{align}
& (D_1)_{\phi,\phi'} = e^{ K ( \sum_{j=1}^M (s_{j+1} s_j' + s_{j} s_j' ) ) }, \notag \\
& (D_2)_{\phi',\phi''} = e^{ K ( \sum_{j=1}^M (s'_{j} s_j'' + s'_{j} s_{j+1}'' ) ) }.
\end{align}
The partition function can thus be written as $Z_c = {\rm tr} \left( D_1D_2 \cdots D_1D_2 \right) = {\rm tr}\left(D_1D_2\right)^{N/2} = {\rm tr} \left( V \right) ^{N/2}$, with $V \equiv D_1 D_2$.
One can verify that $[H_q,V] = 0$ if the couplings are chosen to satisfy
\begin{align}
\delta = \cosh 2K, \quad \lambda =\sinh^{2}K.
\end{align}
If the lattice size is large enough, then the single boundary term $\delta \sigma_x^{[L]}$ can be neglected. In this case the Hamiltonian can be written as a 1d quantum Ising chain in a transverse field $h$, $H_q /\lambda= - h \sum_{i=1}^{L-1} \sigma_x^{[i]} - \sum_{i=1}^{L-1} \sigma_z^{[i]} \sigma_z^{[i+1]}$, with $h = 1/\lambda=1/\sinh^{2} 2K$. It is worth mentioning that the mapping is exact, in the sense that no limit in any parameter needs to be taken (in contrast to, e.g., the partition-function approach, where we had $\delta\tau \rightarrow 0$).
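A one-line consistency check of this mapping (a sketch, \texttt{numpy} assumed) is that the classical critical coupling is sent exactly to the quantum critical field:
\begin{verbatim}
import numpy as np

K_c = 0.5 * np.log(1.0 + np.sqrt(2.0))  # classical 2d Ising critical coupling
lam = np.sinh(2.0 * K_c) ** 2           # lambda = sinh^2(2K)
print(1.0 / lam)                        # -> 1.0, critical field of the chain
\end{verbatim}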
\begin{figure}
\includegraphics[width=0.5\textwidth]{8_1dising_2dC_specn.pdf}
\caption{[Color online] (a) Entanglement spectra and entanglement entropy of the 1d quantum Ising model in a transverse field $h$ as obtained with iTEBD. (b,c,d) Corner spectra and corner entropy of: (b) the 2d classical isotropic Ising model, as a function of $h$, with isotropic coupling $K$ satisfying $1/h =\sinh^2 2K$; (c,d) 2d anisotropic classical Ising model with fixed $K_x=0.1$ (c), $K_x=0.01$ (d), and $K_y$ as a function of $h$ satisfying $\tanh K_y = e^{-2 K_x h}$. The corner bond dimension is $\chi=20$ in all cases.}
\label{fig:1dising_2dC_specn}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{9_2dising_3d_spec.pdf}
\caption{[Color online]
Corner spectra and corner entropy of:
(a) the 2d quantum Ising model in a transverse field $h$ by using the simplified one-directional 2d method \cite{3dCT}; (b,c,d) 3d anisotropic classical Ising model (also with the same method) with fixed $K_x=K_y=0.1$ (b), $K_x=K_y=0.05$ (c), and $K_x=K_y=0.01$ (d), and $K_z$ as a function of $h$ satisfying $\tanh K_z = e^{-2 K_x h}$.
The corner bond dimension for the CTs is $\chi = 4$ in all cases.}
\label{fig:2dising_3d_spec}
\end{figure}
\medskip\emph{\underline{(iii) Numerical results:}} according to the mapping described above, we have computed the corner spectra $\omega_\alpha$ and the associated corner entropy for Ising models, first comparing the 1d quantum and 2d classical cases, and then the 2d quantum and 3d classical ones, using the numerical techniques mentioned earlier. On the one hand, the comparison of 1d quantum vs 2d classical is shown in Fig.~\ref{fig:1dising_2dC_specn}, where we also include in the second panel the mapping to the \emph{isotropic} classical Ising model by Peschel~\cite{PeschelMap}. Regarding the anisotropic classical model, the mapping becomes more and more precise as $\delta \tau \rightarrow 0$, i.e., as $K_x$ becomes smaller. In our results, when plotted with respect to the same variables, we see an essentially perfect agreement of all the numerical values of $\omega_\alpha$ and $S$ among all the models. On the other hand, we show in Fig.~\ref{fig:2dising_3d_spec} our results comparing the 2d quantum vs 3d classical (anisotropic) case. The match in this case is not as perfect as in the 1d vs 2d case, but it is nevertheless still quite remarkable, especially considering the inner workings and associated errors of the higher-dimensional numerical algorithms that we used.
\begin{figure} [t]
\includegraphics[width=0.5\textwidth]{10_1dqising_2dcising_entropy.pdf}
\caption{[Color online] Corner entropy of the 1d quantum Ising model, 2d classical isotropic Ising model $(1/h =\sinh^2 2K)$, and 2d classical anisotropic Ising model with fixed $K_x=0.1$, $K_x=0.01$, and $K_x=0.001$ $(\tanh K_y = e^{-2 K_x h})$ as a function of the transverse field $h$ with bond dimension $\chi=20$. In the inset we show the difference $\Delta$ between the 2d corner entropies and the 1d entanglement entropy.}
\label{fig:1dqising_2dcising_entropy}
\end{figure}
\begin {figure} [ht]
\includegraphics[width=0.5\textwidth]{11_1dising}
\caption{[Color online] Corner entropy of the 2d classical anisotropic Ising model with fixed (upper) $h=0.8$ and (lower) $h=1.2$ as a function of $K_x$ with corner dimension $\chi=20$. The blue dashed lines show the entanglement entropy of the ground state of the corresponding 1d quantum Ising model obtained by using the iTEBD method.}
\label{fig:1dising}
\end{figure}
\begin{figure} [h]
\includegraphics[width=0.5\textwidth]{12_2dqisinf_3dcising_entropy.pdf}
\caption{[Color online] Corner entropy of the 2d quantum Ising model and 3d classical anisotropic Ising model with fixed $K_x=K_y=0.1$, $0.05$, and $0.01$ $(\tanh K_z = e^{-2 K_x h})$ as a function of the transverse field $h$ with corner dimension $\chi=4$. }
\label{fig:2dqisinf_3dcising_entropy}
\end{figure}
To understand further the data obtained from the corners, we show in Fig.~\ref{fig:1dqising_2dcising_entropy} the corner entropies in more detail, as well as the difference between the 2d classical corner entropy and the one for the 1d quantum case (which equals the entanglement entropy). One can see in a more precise way that the entropy in the classical anisotropic case tends to the quantum one as the coupling $K_x$ tends to zero, as expected from Eq.~(\ref{relations}) for $\delta \tau \ll 1$. The effect of a finite $K_x$ is better appreciated in Fig.~\ref{fig:1dising}, where one can see clearly how the classical value tends to match as a limiting case the quantum value as $K_x \rightarrow 0$. Finally, in Fig.~\ref{fig:2dqisinf_3dcising_entropy} we show a comparison of the entropies for the 2d quantum vs 3d classical case. Again, as expected, the agreement between the quantum and the classical case improves as $K_x$ gets closer to zero.
\subsubsection{Transverse-field quantum $N$-state Potts model in 1 dimension}
\emph{\underline{(i) Mapping:}} we now consider the quantum $N$-state Potts model in 1d for $L$ sites.
The corresponding 1d quantum Potts Hamiltonian is given by
\begin{align}
\label{potts}
H_q
& =- \sum_{i=1}^{L-1} \left( \sum_{n=1}^{N-1} \left( Z^{[i] \dagger} Z^{[i+1]} \right)^n \right)
- h \sum_{i=1}^L \left( \sum_{n=1}^{N-1} \left(X^{[i]} \right)^n \right) \\
& = H_z+ H_x, \notag
\end{align}
where operators $Z$ and $X$ at every site satisfy
\begin{equation}
Z \ket{q} = \omega^q \ket{q}, ~~~ X\ket{q} = \ket{q-1},
\end{equation}
with $\omega = e^{i 2 \pi / N}$ and $q \in \mathbb{Z}_N$.
Similar to the case of the Ising model, the quantum canonical partition function is given again by
\begin{align}
Z_q &= \mathop{\mathrm{tr}} \left(e^{-\beta H_q} \right) = \sum_{\eta_z } \Big{\langle} \{\eta_z \} \Big{|} e^{-\beta H_q } \Big{|} \{\eta_z \} \Big{\rangle},
\end{align}
but this time $ \Big{|} \{ \eta_z \} \Big{\rangle} \equiv | \eta_z^{[1]}, \eta_z^{[2]}, \dots, \eta_z^{[L]} \rangle $ is the diagonal basis of $Z$ for the $L$ spins, so that $\eta_z^{[i]} = 0,1,2,\dots,N-1, i = 1,2,...,L$. Proceeding as for the Ising model in the previous section, now we have a similar expression as in Eq.~(\ref{split}), but with $H_z$ and $H_x$ being the ones in Eq.~(\ref{potts}). For the Hamiltonian term $H_z$ we find
\begin{align}
\label{eq:potts_Zterm}
& \Big{\langle} \eta_z^{\prime [i]} \eta_z^{\prime [i+1]} \Big{|}
e^{\delta \tau \left( \sum_{n=1} ^{N-1} (Z^{\dagger [i]} Z^{[i+1]})^n \right) }
\Big{|} \eta_z^{[i]} \eta_z^{[i+1]} \Big{\rangle} \notag\\
& = e^{\delta \tau \vartheta_z } \delta_{ \eta_z^{[i]} \eta_z^{\prime [i]}} \delta_{ \eta_z^{[i+1]} \eta_z^{\prime [i+1]}} ,
\end{align}
where $\eta_z^{\prime [i]} \equiv \eta_z^{[i]} ({\tau+\delta \tau}) $ and $\eta_z^{[i]} \equiv \eta_z^{[i]} ({\tau}) $. The coefficient $ \vartheta_z$ is $ \vartheta_z=N-1$ if $ \eta_z^{[i]} = \eta_z^{[i+1]}$, and $\vartheta_z=-1$ otherwise. Additionally, for the term $H_x$ one has
\begin{align}
\label{eq:potts_Xterm}
& \Big{\langle} \eta_z^{\prime [i]} \Big{|} e^{\delta \tau h ( \sum_{n=1}^{N-1} ( X^{[i]})^n ) } \Big{|} \eta_z^{[i]} \Big{\rangle} \notag\\
& = \Big{\langle} \eta_z^{\prime [i]} \Big{|} \cosh(\delta \tau h) \mathbb{I} + \sinh(\delta \tau h) ( \sum_{n=1}^{N-1} ( X^{[i]})^n ) \Big{|} \eta_z^{[i]} \Big{\rangle} \notag\\
& =
\begin{cases}
\cosh(\delta \tau h) & \quad \text{if } \eta_z^{[i]} = \eta_z^{\prime [i]} \\
\sinh(\delta \tau h) & \quad \text{otherwise}. \\
\end{cases}
\end{align}
The operator identity used in the second line holds exactly for $N=2$, where $\big( \sum_{n=1}^{N-1} (X^{[i]})^n \big)^2 = \mathbb{I}$; for general $N$ the diagonal and off-diagonal matrix elements agree with the expressions above to leading order in $\delta \tau$, which is the relevant regime for the mapping.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{13_1d_2d_Potts}
\caption{[Color online] Entanglement spectra and entanglement entropy for the 1d quantum $N$-state Potts model in a transverse field $h$ for (a) $N=2$, (c) $N=3$, (e) $N=4$, and (g) $N=5$, obtained using the iTEBD method. Corner spectra and corner entropy for the 2d classical $N$-state Potts model as a function of $h$, where $h$ is a function of $K_y$ as in Eq.~(\ref{QCpotts}), with $K_x=0.01$, for (b) $N=2$, (d) $N=3$, (f) $N=4$, and (h) $N=5$, computed with the 2d directional CTM method. The corner bond dimension is $\chi = 20$ in all cases.}
\label{fig:1d_2d_Potts}
\end{figure*}
For the classical case, the Hamiltonian of the 2d classical N-state Potts model on a square lattice is defined by
\begin{align}
\label{eq:classical_potts}
\beta H_c = -\sum_{\langle i,j \rangle} \left( K_x \delta_{s^{[i,j]},s^{[i,j+1]} } + K_y \delta_{s^{[i,j]},s^{[i+1,j]} } \right),
\end{align}
with ``Potts spin variables" $s^{[i,j]}=0,1,2,...,N-1$ at each site.
The classical partition function is then
\begin{align}
\label{eq:classical_potts_partition}
Z_c =\sum_{ \{ s \} } e^{\left( \sum_{\langle i,j \rangle } K_x \delta_{s^{[i,j]},s^{[i,j+1]} }
+ K_y \delta_{s^{[i,j]},s^{[i+1,j]} } \right)}.
\end{align}
From Eqs.~(\ref{eq:potts_Zterm}), ~(\ref{eq:potts_Xterm}), and (\ref{eq:classical_potts_partition}) one finds the relations
\begin{align}
\label{QCpotts}
K_x = N \delta \tau, ~~ \tanh(\delta \tau h ) = e^{-K_y},
\end{align}
which establish the quantum-classical mapping (exactly for $N=2$, and to leading order in $\delta \tau$ for general $N$, which is sufficient since the mapping requires $\delta \tau \to 0$ anyway).
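In numerical practice these relations are used as follows (a minimal sketch, with an illustrative value of $\delta\tau$; the function name is ours):
\begin{verbatim}
import numpy as np

def potts_couplings(N, h, delta_tau=0.01):
    # classical couplings of Eq. (QCpotts) for given N and field h
    K_x = N * delta_tau
    K_y = -np.log(np.tanh(delta_tau * h))   # tanh(dt*h) = exp(-K_y)
    return K_x, K_y

print(potts_couplings(N=3, h=1.0))
\end{verbatim}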
\medskip\emph{\underline{(ii) Numerical results:}} as we did for the case of the Ising model, we have benchmarked the quantum-classical correspondence by computing numerically the corner spectra $\omega_\alpha$ and their associated corner entropy for several quantum and classical Potts models. Our results are summarized in Fig.~\ref{fig:1d_2d_Potts}, where we show the corner spectra and corner entropy for the 1d quantum and 2d classical $N$-state Potts models for $N$=2, 3, 4 and 5. Again, we find a remarkable, almost-perfect match of the corner properties as computed with different methods for 1d quantum and 2d classical systems, once the parameters in the models are rescaled according to the relations found in the previous section. The spectra for the 2-state Potts model coincide with those of the Ising model, as expected. As $N$ increases, we find small variations in the corner spectra for different values of $N$, even though the branches corresponding to the lowest corner spectra seem to be very similar for all the computed $N$.
\subsection{Suzuki's approach for the quantum XY model}
In a work by Suzuki \cite{SuzukiXY} it was proven that a 2d classical Ising model in the absence of a magnetic field and with anisotropic couplings is ``equivalent", in the sense of having the same expectation values and physical properties, to the ground state of an XY quantum spin chain. Unlike the partition-function approach, which maps a quantum model to a classical model in one dimension higher, Suzuki's approach works in the other direction: it starts from the $(d+1)$-dimensional classical partition function, and then builds a $d$-dimensional quantum model with the same physics. We note that the mapping is exact and does not involve taking any limit. However, if one uses the quantum XY model to study the transverse-field Ising model, then a similar limit needs to be taken. Moreover, it is known that there is a range of couplings in the quantum XY model for which there is no valid classical correspondence (see the ``O" region in Fig.~\ref{fig:pd_1dXY}).
\medskip\emph{\underline{(i) The mapping:}} let us review the theory behind this approach by considering first the classical Hamiltonian of the anisotropic 2d Ising model, i.e.,
\begin{align}
\beta H_c = -\sum_{i,j} \left( K_x s^{[i,j]} s^{[i,j+1]} + K_y s^{[i,j]} s^{[i+1,j]} \right),
\end{align}
where indices $i,j$ denote respectively rows and columns, $K_x, K_y$ are the horizontal and vertical couplings, $s^{[i,j]} = \pm 1$ are classical spin variables at each site, and $\beta$ is the inverse temperature. For concreteness let us imagine that we have a finite periodic square lattice with $M \times N$ sites ($M$ rows and $N$ columns).
The canonical partition function is given by
\begin{align}
Z_c = \sum_{\{s \}} e^{\left(K_x \sum_{i,j} s^{[i,j]} s^{[i,j+1]} + K_y \sum_{i,j} s^{[i,j]} s^{[i+1,j]} \right)}.
\end{align}
The first sum inside the brackets in the exponential is over horizontal edges, and the second over the vertical ones. It is convenient to build the transfer matrix along the horizontal direction, i.e., from column to column. Let $N$ be the number of columns and $M$ the number of sites in each column. Now let $\phi_r$ denote all spins in column $r$, so that $\phi_r$ has $2^M$ possible values.
The partition function can thus be thought of as a function of $\phi_1, \ldots, \phi_N$, and can be rewritten as
\begin{align}
Z_c = \sum_{\phi_1}\cdots\sum_{\phi_N} T_{ {\phi_1},{\phi_2}}\cdots T_{{\phi_{N-1}},{\phi_N}} T_{{\phi_N},{\phi_1}}.
\end{align}
Here $T_{\phi_i ,\phi_{i+1} }$ is the 1d transfer matrix of the system, which contains all the Boltzmann weight factors of the spins in two adjacent columns.
Let $\phi= \{ s_{1},s_{2},\ldots,s_{M} \}$ be the spins in a given column, and $\phi' = \{ s'_{1},s'_{2},\ldots,s'_{M} \}$ the ones in the following column. Then the transfer matrix is given by
\begin{align}
T_{\phi,\phi' }
&= e^{ \left(K_y \sum_{i} s_{i} s_{i+1} +K_x\sum_{i} s_{i} s'_{i} \right) } \notag \\
& = e^{K_y \sum_{i} s_{i} s_{i+1}}\times e^{K_x\sum_{i} s_{i} s'_{i}} \notag \\
& \equiv V_2 V_1,
\end{align}
where $V_1$ contains the couplings between neighboring columns and $V_2$ the couplings within each column.
Here $V_1$ factorizes into a product of single-site $2 \times 2$ matrices,
\begin{align}
(V_1)_{s_i,s'_{i}} =
\begin{pmatrix}
e^{K_x} & e^{-K_x} \\
e^{-K_x} & e^{K_x}
\end{pmatrix},
\end{align}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{14_pd_1dXY.pdf}
\caption{[Color online] Phase diagram of (a) 1d \cite{XYmodel} and (b) 2d \cite{2dXYmodel} quantum XY model. There are three phases: oscillatory (O), ferromagnetic (F), and paramagnetic (P). The equation on top is the Barouch-McCoy circle~\cite{BarouchMcCoy} that sets the boundary between the oscillatory and non-oscillatory ferromagnetic regions (which is only a crossover). The separation between F and P in (a) is at $h=1$ and in (b) the exact location is not known and only indicated schematically.}
\label{fig:pd_1dXY}
\end{figure}
\begin{figure*}
\includegraphics[width=1.0\textwidth]{15_1dQXY}
\caption{[Color online] Corner spectra and entropy for the 1d XY quantum spin chain, with (a) $\gamma = 0.5$, (c) $\gamma = 0.9$, and (e) $\gamma = 0.99$, together with the corresponding anisotropic 2d classical Ising model (b), (d), (f), respectively, with $K_x$ and $K_y$ being functions of $h$ as described in Eq.(\ref{rela}). The corner bond dimension (equivalent to the MPS bond dimension in the 1d quantum case) is $\chi = 40$ in both cases. The correspondence of parameters has a solution only for values of $h$ larger than (b) $h \approx 0.85$, (d) $h \approx 0.4$, (f) $h \approx 0.1$, and therefore the left hand side of each plot in the lower panel is empty.}
\label{fig:1dQXY_gamma}
\end{figure*}
which can also be written as
\begin{align}
(V_1)_{s_i,s'_{i}} &= e^{K_x}\mathbb{I} + e^{-K_x}\sigma_x \notag \\
&= e^{K_x} ( \mathbb{I} + e^{-2K_x}\sigma_x ) \notag \\
& = (2\sinh2K_x )^{1/2} e^{K_x^*\sigma_x } \equiv V_1(i),
\end{align}
with $\mathbb{I}$ the $2 \times 2$ identity matrix, $\sigma_x$ the x-Pauli matrix, and where we define $\tanh K_x^* \equiv e^{-2K_x} $ (and $\tanh K_x \equiv e^{-2K_x^*}$), as well as use the relation $\sinh 2K_x \sinh 2K_x^* = 1 $. Moreover, $V_2$ is diagonal, with $4 \times 4$ blocks $ (V_2)_{s_i,s_{i+1};s'_i,s'_{i+1}} \, \delta_{s_i,s'_i} \delta_{s_{i+1},s'_{i+1}} $ given by
\begin{align}
& (V_2)_{s_i,s_{i+1};s'_i,s'_{i+1}} \notag\\
&= \bordermatrix{~
& (+1,+1) & (+1,-1) & (-1,+1) & (-1,-1) \cr
& e^{K_y} & 0& 0& 0\cr
& 0 & e^{-K_y} & 0& 0\cr
& 0 & 0 & e^{-K_y} & 0\cr
& 0& 0& 0 & e^{K_y} \cr
} \notag\\
&= \exp (K_y \sigma_z^i \sigma_z^{i+1} ) \notag\\
& = \cosh K_y \mathbb{I}^{i} \mathbb{I}^{i+1}+ \sinh K_y \sigma_z^i \sigma_z^{i+1} \notag\\
& \equiv V_2(i,i+1) .
\end{align}
It is clear that the partition function is the trace of a matrix product, given by
\begin{align}
Z_c = \mathop{\mathrm{tr}} (V_1V_2...V_1V_2 ) =\mathop{\mathrm{tr}} (V_1V_2)^N.
\end{align}
Thus, $Z_c$ can also be written as
\begin{align}
Z_c = \mathop{\mathrm{tr}} (V_2^{1/2} V_1V_2^{1/2} )^N =\mathop{\mathrm{tr}} (V)^N,
\end{align}
or
\begin{align}
Z_c = \mathop{\mathrm{tr}} (V_1^{1/2} V_2V_1^{1/2} )^N =\mathop{\mathrm{tr}} (V')^N,
\end{align}
where
\begin{align}
V_1 = (2\sinh2K_x )^{M/2} e^{\left(K_x^* \sum_{i=1}^M \sigma_x^{[i]} \right)},
\end{align}
and
\begin{align}
V_2= e^{\left(K_y \sum_{i=1}^M \sigma_z^{[i]} \sigma_z^{[i+1]} \right)}.
\end{align}
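This operator rewriting can be verified directly. The following sketch (assuming \texttt{numpy} and \texttt{scipy}; the lattice sizes and couplings are arbitrary illustrative choices, with periodic boundaries in both directions) compares ${\rm tr} \left( V_2 V_1 \right)^N$ with a brute-force evaluation of $Z_c$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from itertools import product

M, N, Kx, Ky = 3, 3, 0.3, 0.5

sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])

def op(single, site):
    out = np.eye(1)
    for k in range(M):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

Kx_star = np.arctanh(np.exp(-2.0 * Kx))
V1 = (2.0 * np.sinh(2.0 * Kx)) ** (M / 2.0) \
     * expm(Kx_star * sum(op(sx, i) for i in range(M)))
V2 = expm(Ky * sum(op(sz, i) @ op(sz, (i + 1) % M) for i in range(M)))
Z_tm = np.trace(np.linalg.matrix_power(V2 @ V1, N))

Z_bf = 0.0
for conf in product([1, -1], repeat=M * N):
    s = np.array(conf).reshape(M, N)      # s[i, j]: row i, column j
    E = Kx * np.sum(s * np.roll(s, -1, axis=1)) \
        + Ky * np.sum(s * np.roll(s, -1, axis=0))
    Z_bf += np.exp(E)
print(Z_tm, Z_bf)                         # the two values coincide
\end{verbatim}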
The next step is to show that $V$ and the quantum Hamiltonian $H_q$ of the 1d quantum XY model can be made to commute, and therefore share common eigenvectors. The usual XY quantum spin chain is defined by the Hamiltonian
\begin{align}
H_q = - \sum_i \left( J_x \sigma_x^{[i]} \sigma_x^{[i+1]} +J_y \sigma_y^{[i]} \sigma_y^{[i+1]} \right) +h \sum_{i} \sigma_z^{[i]},
\end{align}
where $\gamma = (J_x-J_y)$ is the anisotropy, and $h$ the magnetic field.
The phase diagram of the model is well known \cite{XYmodel} and is sketched in Fig.~\ref{fig:pd_1dXY}a.
To prove that the commutator of $V$ and $H_q$ can sometimes be zero, we first define $v_2(i,i+1) \equiv V_2(i,i+1)^{1/2}$. The 1d quantum Hamiltonian is a sum of two-body operators, $H_q= \sum_i h(i,i+1)$. Thus, the commutator reads
\begin{align}
[V, H_q] &= \sum_i [V, h(i,i+1) ] \notag \\
& = \sum_i \big( \cdots \big[\, v_2(i-1,i)\, v_2(i,i+1)\, v_2(i+1,i+2) \notag \\
& \quad \times V_1(i)\, V_1(i+1)\, v_2(i-1,i)\, v_2(i,i+1)\, v_2(i+1,i+2), \notag \\
& \quad h(i,i+1) \,\big] \cdots \big) =0.
\end{align}
The last equality imposes a constraint on the couplings of the classical and quantum models in order for the commutator to vanish. One can see that this implies the relations between the couplings
\begin{align}
& \frac{J_y}{J_x} = e^{-4 K_x},~~ \frac{h}{J_x}= 2e^{-2 K_x} \coth(2K_y),
\label{rela}
\end{align}
which make explicit the quantum-classical mapping. Importantly, for fixed $h, J_x$ and $J_y$, these equations do not have a real solution in the oscillatory phase of Fig.~\ref{fig:pd_1dXY}, so that the mapping is only valid outside of that phase. Finally, the mapping can also be extended easily to the 3d classical vs 2d quantum case, by considering a 2d homogeneous coupling $K_x = K_y = K$ and adding an extra equation for $K_z$, i.e.,
\begin{align}
& \frac{J_y}{J_x} = e^{-4 K},~~ \frac{h}{J_x}= 2e^{-2 K} \coth(2K_z).
\end{align}
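In practice, Eq.~(\ref{rela}) is inverted numerically for the classical couplings. A sketch follows (function name is ours), where we assume the normalization $J_x + J_y = 1$, so that $J_x = (1+\gamma)/2$ and $J_y = (1-\gamma)/2$, consistent with $\gamma = J_x - J_y$; this convention is our assumption for illustration:
\begin{verbatim}
import numpy as np

def suzuki_couplings(gamma, h):
    J_x, J_y = (1.0 + gamma) / 2.0, (1.0 - gamma) / 2.0
    K_x = -np.log(J_y / J_x) / 4.0             # J_y/J_x = exp(-4 K_x)
    arg = 2.0 * J_x * np.exp(-2.0 * K_x) / h   # = tanh(2 K_y)
    if arg >= 1.0:
        return None    # no real solution: oscillatory region
    return K_x, 0.5 * np.arctanh(arg)

print(suzuki_couplings(0.5, 0.8))   # None: h below ~0.87
print(suzuki_couplings(0.5, 1.2))   # valid (K_x, K_y)
\end{verbatim}
With this convention, no real solution exists for $\gamma = 0.5$ below $h \approx 0.87$, in line with the empty regions of Fig.~\ref{fig:1dQXY_gamma}.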
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{16_2dqXY_3dcising_gamma07.pdf}
\caption{[Color online] Corner spectra and corner entropy of: (a) 2d quantum XY model with $\gamma = 0.7$ in a transverse field $h$ by using the simplified one-directional 2d method \cite{3dCT}; (b) the corresponding 3d anisotropic classical Ising model as a function of $h$ satisfying Eq.~(\ref{rela}). The corner bond dimension is $\chi=4$ in all cases. The correspondence of parameters has a solution only for values of $h$ larger than $h \approx 1.45$, and therefore the left hand side of the plot in the lower panel is empty.}
\label{fig:2dqXY_3dcising07}
\end{figure}
\medskip\emph{\underline{(ii) Numerical results:}} we have explicitly checked this equivalence by computing numerically the corner spectra and the associated corner entropy for the quantum XY models and the corresponding classical Ising models in 1d, 2d and 3d. For the 1d quantum vs 2d classical case, this is shown in Fig.~\ref{fig:1dQXY_gamma} for different values of the anisotropy in the quantum XY model. The expressions in Eq.~(\ref{rela}) have a real solution for $K_x, K_y$ only if the value of $h$ lies outside of the oscillatory phase, as shown in the plots. We can see that the agreement between the quantum and classical corner spectra and corner entropy is remarkably good, both qualitatively and quantitatively, with a slightly larger error around the critical region $h = 1$.
The comparison between 2d quantum vs 3d classical can be found in Fig.~\ref{fig:2dqXY_3dcising07}.
Again in this case the match between the numerically-computed classical and quantum values is quite remarkable, considering the different numerical techniques that were used in this case.
\section{2d corner phase transitions}
\label{Sec6}
We now show how the study of corner properties can provide other useful information when studying a quantum or classical many-body system. In particular, we show how the corner spectra and corner entropy from 2d rCTMs (i.e., the CTMs obtained from the 2d TN for the norm) are useful in determining phase transitions without the need to compute physical observables.
The usual way to study quantum and classical phase transitions is through the study of observables, which have specific properties at the transition point (e.g., the singular behavior of the observable). The study of entanglement and correlations in many-body systems has shown us that it is actually possible to study these transitions from properties of the state only, such as entanglement entropy, fidelities \cite{fidelity}, entanglement spectra \cite{eHam}, and similar quantities. Following this trend, in this section we show that one can assess phase transitions from properties of the corners only, in particular the rCTM that we introduced in Sec.~\ref{sec:intro}. This is very useful in the context of numerical simulations of, e.g., 2d quantum many-body systems, since such corner objects are produced ``for free" (e.g., in the infinite-PEPS method with a full or fast-full update \cite{iPEPS, ffUpdate}). In what follows we show three practical examples where phase transitions, both topological and non-topological, can be clearly pinpointed by looking only at the corner objects.
\subsection{2d quantum XXZ model}
First we consider the 2d quantum XXZ model for spin-1/2 on an infinite square lattice, under the effect of a uniform magnetic field $h$ along the z-axis. Its Hamiltonian is given by
\begin{align}
H_q = -\sum_{\langle i,j \rangle} \left( \sigma_x^{[i]} \sigma_x^{[j]} + \sigma_y^{[i]} \sigma_y^{[j]} -\Delta \sigma_z^{[i]} \sigma_z^{[j]} \right) - h\sum_i \sigma_z^{[i]},
\end{align}
where as usual the sum $\langle i,j \rangle$ runs over nearest neighbors on the 2d square lattice, and $\Delta$ is the anisotropy. For $\Delta>1$, it has been shown \cite{xxz} that a first-order transition takes place at some point $h_1$ from a N\'{e}el phase to a spin-flipping phase. As the field increases further, another phase transition occurs at $h_2 = 2(1+\Delta)$ towards the fully polarized phase.
Here we consider the case with $\Delta= 1.5$. We have approximated the ground state of the model using the iPEPS algorithm with simple update and bond dimension $D=2$ \cite{su}, and then computed the reduced corner spectra $\omega^{(r)}_\alpha$ and entropy of the double-layer tensor defining the norm via the directional CTM approach, as a function of $h$. Our results are shown in Fig.~\ref{fig:2DXXZ}, where one can see that the two phase transitions are clearly pinpointed by the spectrum and the entropy. In particular, we observe the first transition at $h_1 \approx 1.8$, and the second one at $h_2=5.0$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{17_2DXXZ.pdf}
\caption{[Color online] Corner spectra $\omega^{(r)}_\alpha$ for the norm of the numerical $D=2$ PEPS for the XXZ model in a field, at $\Delta = 1.5$, on the square lattice with $\chi=40$, together with the corner entropy computed from the corner spectra. }
\label{fig:2DXXZ}
\end{figure}
\subsection{Perturbed $\mathbb{Z}_N$ topological order}
Here we consider exact wavefunctions that exhibit topological phase transitions for $\mathbb{Z}_2$ and $\mathbb{Z}_3$ topological order.
\medskip\underline{\emph{(i) 2d perturbed $\mathbb{Z}_2$ Toric Code PEPS:}}
we consider the 2d PEPS on a square lattice for the Toric Code ground state \cite{TC, IsingPEPS}, perturbed by a string tension $g$. This can be represented by a tensor $A_{\alpha \beta \gamma \delta}^{i,j,k,l}$ with four physical indices $i,j,k,l=0,1$ and four virtual indices $\alpha,\beta, \gamma,\delta=0,1$. The coefficients of the tensor are given by
\begin{align}
A_{i,j,k,l}^{i,j,k,l} = \left\{
\begin{array}{l l}
g^{i+j+k+l}, & \quad \text{if $i+j+k+l=0$ mod 2}, \\
0, & \quad \text{otherwise}.
\end{array} \right.
\end{align}
The norm of this state can be described by a double-layer 2d TN on a square lattice, where at every site one has the tensor $\mathbb{T}_{ijkl}^{ijkl} \equiv \mathbb{T}[ijkl]$, with coefficients
\begin{align}
& \mathbb{T}[0000] =1, \quad \mathbb{T}[1111]= g^8, \notag \\
&\mathbb{T}[0011] =\mathbb{T}[0110]= \mathbb{T}[1100]=\mathbb{T}[1001] = g^4\notag \\
&\mathbb{T}[0101] = \mathbb{T}[1010] =g^4.
\label{db}
\end{align}
Parameter $g$ tunes the state between a topological and a trivial phase. For $g=1$ the state reduces to the ground state of the Toric Code model with $\mathbb{Z}_2$ topological order.
For $g=0$ it reduces to the polarized state $\ket{0, 0, \cdots, 0}$. There is a quantum phase transition between these two phases which, as shown in Ref.~\cite{Z2_deform}, occurs at $g_c\approx 0.802243$.
One can see, moreover, that the double tensor $\mathbb{T}$ consists of two copies of the partition function of the 2d classical Ising model in Eq.~(\ref{cIsing}). In fact, one also finds the relation $g=(\tanh \beta)^{1/4} $, with $g$ the perturbation parameter of the Toric Code and $\beta$ the inverse temperature of the Ising model. Both models, therefore, belong to the same universality class. In this case we have implemented the directional CTM method on the norm tensor $\mathbb{T}$ \cite{dirCTM} to study the corner properties.
This is shown in Fig.~\ref{fig:ZNTO}a, where one can see that the corner spectrum and its associated entropy clearly pinpoint the quantum phase transition.
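Both the double tensor of Eq.~(\ref{db}) and the location of the transition are simple to reproduce numerically (a sketch, with an illustrative value of $g$):
\begin{verbatim}
import numpy as np

g = 0.9
A = np.zeros((2,) * 4)
for idx in np.ndindex(2, 2, 2, 2):
    if sum(idx) % 2 == 0:          # even-parity (closed-loop) components
        A[idx] = g ** sum(idx)
T = A * A                          # physical = virtual indices, so |A|^2
print(T[1, 1, 1, 1], T[0, 0, 1, 1])        # g**8 and g**4, cf. Eq. (db)

beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))
print(np.tanh(beta_c) ** 0.25)             # ~0.802243, the quoted g_c
\end{verbatim}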
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{18_ZNTO.pdf}
\caption{[Color online] Corner spectra and corner entropy of the (a) $\mathbb{Z}_2$ and (b) $\mathbb{Z}_3$ topological PEPS with perturbation $g$ on the square lattice with CTM bond dimension $\chi=20$. In (b) the lines show the first transition point at $g_1 \approx 0.944$ as well as the second transition point at $g_2 \approx 1.238$ \cite{Z3_2}.}
\label{fig:ZNTO}
\end{figure}
\medskip\underline{\emph{(ii) 2d perturbed $\mathbb{Z}_3$ topological order:}} furthermore, we consider a 2d PEPS with $\mathbb{Z}_3$ topological order under perturbations described by deformations $\{q_0,q_1,q_2\}$. The PEPS is given by a tensor $A_{\alpha,\beta, \gamma,\delta}^{i,j,k,l}$ with four physical indices $i,j,k,l=0,1,2$ and four virtual indices $\alpha,\beta, \gamma,\delta=0,1,2$, with coefficients
\begin{align}
A_{i,j,k,l}^{i,j,k,l} = \left\{
\begin{array}{l l}
q_0^{n_0} q_1^{n_1} q_2^{n_2}, & \quad \text{if $i+j+k+l=0$ mod 3}, \\
0, & \quad \text{otherwise},
\end{array} \right.
\end{align}
where $n_0, n_1, n_2$ denote the number of virtual indices equal to 0, 1, and 2, respectively.
We first study the case $q_0=1, q_1=0, q_2=g$. In such a case, the bond indices of the wavefunction live in an effective 2d Hilbert space spanned by $|0\rangle$ and $|2\rangle$. At $g=0$, the remaining tensor represents a product state with every site in the state $|0\rangle$. Therefore, the region near $g=0$ is a trivial phase that is adiabatically connected to a product state. At $g> 0$, the nonzero components of the double-layer tensor $\mathbb{T}$ for the norm are
\begin{align}
&\mathbb{T}[0222] = \mathbb{T}[2022] = \mathbb{T}[2202]= \mathbb{T}[2220]= g^6 \notag \\
& \mathbb{T}[0000]= 1,
\end{align}
where we used the same notation as in Eq.~(\ref{db}). For $g \gg 1$ one can neglect the component $\mathbb{T}[0000]$, and the tensor becomes mathematically equivalent to the one for the classical dimer model at the Rokhsar-Kivelson (RK) point, which is critical \cite{RK}, and where the topologically degenerate ground state is an equal-weight superposition of all possible configurations in a given winding parity sector on the square lattice. It was shown in Ref.~\cite{Z3_2} that for $0.944 \leq g < 1.238$ the PEPS belongs to the $\mathbb{Z}_3$ topologically ordered phase \cite{Z3_1}, whereas for $g > 1.238$ the state is critical.
We have computed the rCTM spectra obtained by contracting the TN for the norm using the directional CTM approach \cite{dirCTM}, as a function of the deformation $g$. This is shown in Fig.~\ref{fig:ZNTO}b. The corner spectra show different patterns depending on the phase: in the trivial phase only one eigenvalue is non-zero, whereas more eigenvalues become populated in the topological and critical phases. The two transitions are also clearly pinpointed in the spectrum, as a change of behavior in the numerically-computed values (in particular, the spectrum remains almost constant as a function of $g$ in the critical phase). In Fig.~\ref{fig:ZNTO}b we also show the associated corner entropy, which clearly signals the phase transitions as well. In particular, we observe that for $g>1.2$ the corner entropy depends strongly on $\chi$, which is a clear signal of the critical phase.
\begin{figure}
\includegraphics[width=0.5\textwidth]{19_Z2SPT.pdf}
\caption{[Color online] Corner spectra $\omega^{(r)}_\alpha$ for the norm of the $\mathbb{Z}_2$ SPT PEPS with deformation $g$ on the square lattice with $\chi=40$, together with the corner entropy computed from the corner spectra.}
\label{fig:Z2SPT}
\end{figure}
\subsection{Perturbed SPT order}
Next, we study the quantum phase transition between two different $\mathbb{Z}_2$ symmetry-protected topological (SPT) phases on the 2d square lattice. The fixed point wave function from the 3-cocycle condition can be described by a 2d PEPS \cite{spt}, defined by a tensor $A^{i,j,k,l}_{\alpha\alpha',\beta\beta',\gamma\gamma',\delta\delta'}\equiv A[ijkl]$ satisfying $i=\alpha=\alpha'$, $j=\beta=\beta'$, $k=\gamma = \gamma'$, and $l = \delta = \delta'$, as follows:
\begin{align}
& A[0000] = A[1111] = A[0011] = A[1100] = 1 \notag \\
& A[1001] = A[0110] = A[0101] = A[1010] = 1 \notag \\
& A[0001] = A[1110] = A[0100] = A[1011] = 1 \notag \\
& A[1000] = A[0111] = g \notag \\
& A[0010] = A[1101] = |g|.
\end{align}
At $g=1$, this tensor represents a fixed-point wave function for the trivial $\mathbb{Z}_2$ SPT phase. At $g=-1$, it is the fixed-point wave function of the nontrivial $\mathbb{Z}_2$ SPT phase.
As a function of $g$, the tensor smoothly interpolates between the two phases.
For large $|g|$ the state is also in an ordered phase.
We have computed the corner spectra $\omega^{(r)}_\alpha$ and corner entropy for the double-layer norm tensor of this state by using rCTM, which we show in Fig.~\ref{fig:Z2SPT}. We can see clearly that both the spectrum and the entropy pinpoint all the phase transitions mentioned above. We find the transition to the ordered phase at $|g|=1.7$, in agreement with the results from Ref.~\cite{spt}.
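As an illustration, a minimal Python sketch of this deformed tensor (following the component labelling in the equation above; an illustrative implementation, not the code behind our results) could read:
\begin{verbatim}
import numpy as np

def z2_spt_tensor(g):
    # A[ijkl]: twelve components equal to 1, two equal to g, two to |g|.
    A = np.zeros((2, 2, 2, 2))
    for s in ('0000 1111 0011 1100 1001 0110 0101 1010 '
              '0001 1110 0100 1011').split():
        A[tuple(map(int, s))] = 1.0
    for s in ('1000', '0111'):
        A[tuple(map(int, s))] = g
    for s in ('0010', '1101'):
        A[tuple(map(int, s))] = abs(g)
    return A

A_trivial = z2_spt_tensor(1.0)    # trivial Z2 SPT fixed point
A_spt     = z2_spt_tensor(-1.0)   # nontrivial Z2 SPT fixed point
\end{verbatim}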
\section{Chiral topological corner entanglement spectrum}
\label{Sec7}
We have seen earlier that, given a 2d Hamiltonian, we can use CTs (in a 3d setup) to obtain the entanglement spectrum of a bipartite cut separating two semi-infinite planes. This entanglement spectrum can be obtained with the 2d quantum state renormalization approach based on CTs described earlier.
In this section, we first consider the so-called Ising PEPS \cite{isingpeps} which, by construction, has a quantum phase transition that corresponds to the classical Ising transition, which was studied earlier in Sec.~\ref{Sec5} using the rCTM method. Here we use this state to benchmark the method, and we show the entanglement spectrum in the disordered phase. Then, we use this approach to study the boundary theory of 2d chiral topological quantum spin liquids that can be exactly described as a PEPS.
\begin{figure}
\includegraphics[width=0.475\textwidth]{20_isingPEPS.pdf}
\caption{[Color online] Entanglement spectra $\omega_{\alpha}(\rho_r)$ of one half of a 2d quantum system (see Fig.~\ref{fig:CT}) for the Ising PEPS model in the disordered phase from Ref.~\cite{isingpeps}, for bond dimension (a) $\chi = 30$, (b) $\chi = 40$, and (c) $\chi = 50$.}
\label{fig:isingPEPS}
\end{figure}
\subsection{The disordered phase: the Ising PEPS}
Let us first consider the Ising PEPS~\cite{isingpeps} on the
square lattice with tensor
$ A=|0\rangle \langle \theta, \theta, \theta, \theta|+|1\rangle \langle \bar{\theta},\bar{\theta}, \bar{\theta}, \bar{\theta}|$, where the ket (bra) corresponds to the physical (virtual) degrees of freedom, and
$|\theta \rangle =\cos \theta |0\rangle + \sin \theta|1\rangle$ as well as
$|\bar{\theta} \rangle =\sin \theta |0\rangle + \cos \theta|1\rangle$ with $\theta \in [0,\pi/4]$. A corresponding local
Hamiltonian can be written down that has this PEPS as a ground state (not shown here)~\cite{isingpeps}.
In Ref.~\cite{isingpeps} it was shown that there is a second-order quantum phase transition from the ordered to the disordered phase occurring at $\theta_c \approx 0.349596$.
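A minimal Python sketch of this tensor (assuming the index ordering $A[s,i,j,k,l]$, with the physical index $s$ first) is:
\begin{verbatim}
import numpy as np

def ising_peps_tensor(theta):
    t  = np.array([np.cos(theta), np.sin(theta)])   # |theta>
    tb = np.array([np.sin(theta), np.cos(theta)])   # |theta_bar>
    A = np.zeros((2, 2, 2, 2, 2))
    A[0] = np.einsum('i,j,k,l->ijkl', t, t, t, t)    # |0><theta|^{x4}
    A[1] = np.einsum('i,j,k,l->ijkl', tb, tb, tb, tb)
    return A

A = ising_peps_tensor(0.5)  # disordered phase, theta > theta_c ~ 0.3496
\end{verbatim}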
To illustrate that our method is not limited by the use of corner tensors, we include results for the 2d Ising PEPS in the disordered phase with $\theta=0.5$ in Fig.~\ref{fig:isingPEPS}.
This was studied previously in finite systems on a cylinder~\cite{isingpeps}.
We observe, first, that there is a unique lowest entanglement eigenvalue (i.e., one unique largest eigenvalue of the corresponding transfer matrix), which is clearly identified by our method.
Second, it is known that the low-lying entanglement spectrum forms one-dimensional bands (as a function of momentum).
Because of the effective size introduced by the finite bond dimension, the effective momenta are discrete, and we expect our CT entanglement spectrum to show closely spaced values within one band, separated by a large gap from other bands. The number of such discrete values depends on the bond dimension (see Fig.~\ref{fig:isingPEPS}): the larger the bond dimension, the more points are picked up within a band. This is exactly what we observe.
\begin{figure}
\includegraphics[width=0.475\textwidth]{21_spectra-chiral.pdf}
\caption{[Color online] Entanglement spectra $\omega_{\alpha}(\rho_r)$ of (a) one quarter and (b) one half of a 2d quantum system (see Fig.~\ref{fig:CT}) for the chiral topological state from Ref.~\cite{chiralPEPS}, for bond dimension $\chi = 50$. In (b) the largest spectral values are mostly converged and coincide with the expected degeneracies of the vacuum Virasoro tower of the $SU(2)_1$ WZW model describing the chiral gapless edge.}
\label{fig:spectra-chiral}
\end{figure}
\begin{figure}
\includegraphics[width=0.475\textwidth]{22_SU2_2.pdf}
\caption{[Color online] Entanglement spectra $\omega_{\alpha}(\rho_r)$ of (a) one quarter and (b) one half of a 2d quantum system (see Fig.~\ref{fig:CT}) for the chiral topological state from Ref.~\cite{chiralPEPS}, for bond dimension $\chi = 40$. In (b) the largest spectral values are mostly converged and coincide with the expected degeneracies of the vacuum Virasoro tower of the $SU(2)_2$ WZW model describing the chiral gapless edge.}
\label{fig:spectra-SU2_2chiral}
\end{figure}
\subsection{$SU(2)_1$ WZW chiral edge state}
We have first studied the exact 2d PEPS with $D=3$ on a square lattice corresponding to a chiral topological quantum spin liquid with $SU(2)$ symmetry from Ref.~\cite{chiral1}.
The state is known to be critical, and has a chiral gapless edge described by an $SU(2)_1$ Wess-Zumino-Witten (WZW) CFT. The gapless edge state has been characterized previously by studying the entanglement spectrum of the PEPS on an infinitely-long but finite-circumference cylinder \cite{eHam, chiral1, chiral2}.
In that calculation it was actually possible to find the degeneracies of the different Virasoro towers of $SU(2)_1$ corresponding to each of the highest weight states.
If no parity or topological sector is explicitly fixed, the numerical calculation of the entanglement spectrum naturally produces the Virasoro tower of the CFT vacuum state \cite{chiral2}.
This wave function can be given by a PEPS tensor $A^{s}_{i,j,k,l}$ with $s=\pm 1/2$ and $i,j,k,l=0,1,2$, with non-zero coefficients as follows:
\begin{align}
& A^{-1/2}_{2,0,1,1} = \!\!-\lambda_1-i \lambda_2,
\; A^{-1/2}_{2,1,1,0} = \!\!-\lambda_1+i \lambda_2,
\; A^{-1/2}_{2,1,0,1} = \!\!-\lambda_0 ; \notag\\
& A^{-1/2}_{1,1,2,0} = \!\! -\lambda_1-i \lambda_2,
\; A^{-1/2}_{1,0,2,1} = \!\!-\lambda_1+i \lambda_2,
\; A^{-1/2}_{0,1,2,1} = \!\!-\lambda_0 ; \notag\\
& A^{-1/2}_{1,2,0,1} = \,\; \lambda_1+i \lambda_2,
\; A^{-1/2}_{0,2,1,1} = \,\; \lambda_1-i \lambda_2,
\; A^{-1/2}_{1,2,1,0} = \,\; \lambda_0 ; \notag\\
& A^{-1/2}_{0,1,1,2} = \,\; \lambda_1+i \lambda_2,
\; A^{-1/2}_{1,1,0,2} = \,\; \lambda_1-i \lambda_2,
\; A^{-1/2}_{1,0,1,2} = \,\; \lambda_0 ; \notag\\
& A^{1/2}_{2,1,0,0} = \,\; \lambda_1+i \lambda_2,
\; A^{1/2}_{2,0,0,1} = \,\; \lambda_1-i \lambda_2,
\; A^{1/2}_{2,0,1,0} = \,\; \lambda_0 ; \notag\\
& A^{1/2}_{0,0,2,1} = \,\; \lambda_1+i \lambda_2,
\; A^{1/2}_{0,1,2,0} = \,\; \lambda_1-i \lambda_2,
\; A^{1/2}_{1,0,2,0} = \,\; \lambda_0 ; \notag\\
& A^{1/2}_{0,2,1,0} = \!\!-\lambda_1-i \lambda_2,
\; A^{1/2}_{1,2,0,0} = \!\!-\lambda_1+i \lambda_2,
\; A^{1/2}_{0,2,0,1} = \!\!-\lambda_0 ; \notag\\
& A^{1/2}_{1,0,0,2} = \!\!-\lambda_1-i \lambda_2,
\; A^{1/2}_{0,0,1,2} = \!\!-\lambda_1+i \lambda_2,
\; A^{1/2}_{0,1,0,2} = \!\!-\lambda_0,
\label{TNS_CSL}
\end{align}
where $\lambda_0=-2$, $\lambda_1=1$, and $\lambda_2=1$.
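For reference, a minimal Python sketch storing these coefficients (with the assumed layout $A[s,i,j,k,l]$, where $s=0,1$ labels spin $-1/2,+1/2$) reads:
\begin{verbatim}
import numpy as np

l0, l1, l2 = -2.0, 1.0, 1.0
a, b, c = l1 + 1j*l2, l1 - 1j*l2, l0     # recurring coefficients

entries = {  # (s, i, j, k, l): coefficient
    (0,2,0,1,1): -a, (0,2,1,1,0): -b, (0,2,1,0,1): -c,
    (0,1,1,2,0): -a, (0,1,0,2,1): -b, (0,0,1,2,1): -c,
    (0,1,2,0,1): +a, (0,0,2,1,1): +b, (0,1,2,1,0): +c,
    (0,0,1,1,2): +a, (0,1,1,0,2): +b, (0,1,0,1,2): +c,
    (1,2,1,0,0): +a, (1,2,0,0,1): +b, (1,2,0,1,0): +c,
    (1,0,0,2,1): +a, (1,0,1,2,0): +b, (1,1,0,2,0): +c,
    (1,0,2,1,0): -a, (1,1,2,0,0): -b, (1,0,2,0,1): -c,
    (1,1,0,0,2): -a, (1,0,0,1,2): -b, (1,0,1,0,2): -c,
}
A = np.zeros((2, 3, 3, 3, 3), dtype=complex)
for idx, val in entries.items():
    A[idx] = val
\end{verbatim}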
Here we have computed the entanglement spectrum of this PEPS wave function using the quantum state renormalization approach explained previously. Our results are shown in Fig.~\ref{fig:spectra-chiral} for CTs with bond dimension $\chi = 50$. In the case of the entanglement spectrum of a quadrant, we see that the eigenvalues obey an almost flat distribution with a sudden drop. However, the spectrum of half an infinite system tends to obey the expected degeneracies of the Virasoro tower for the vacuum (which has angular momentum $j=0$) of the $SU(2)_1$ WZW model that describes the edge physics of this state. More specifically, the degeneracies of the 5 largest multiplets of eigenvalues are well converged and equal to $1, 3, 4, 7$, and $13$, exactly matching the first 5 degeneracies of the Virasoro tower for the vacuum of the $SU(2)_1$ WZW model \cite{chiral1, chiral2}. We suspect that we are able to see a discrete spectrum, rather than a continuous one, because of the effective size that the finite bond dimension introduces, even though we are using the infinite setting of the PEPS description. However, we do not see the degeneracy corresponding to the angular-momentum $j=1/2$ tower.
\subsection{$SU(2)_2$ WZW chiral edge state}
Moreover, we have considered the calculation of the entanglement spectrum from the corner properties for the double-layer chiral topological PEPS from Ref.~\cite{chiral2}, which has gapless edge modes described by an $SU(2)_2$ WZW model. The PEPS is constructed simply from two layers of the tensors in Eq.~(\ref{TNS_CSL}) by symmetrizing the physical indices (i.e., projecting onto the total spin-1 subspace). Our results are shown in Fig.~\ref{fig:spectra-SU2_2chiral} for CTs with bond dimension $\chi = 40$. Once again we see an almost flat spectrum with a sudden drop when we consider one quadrant. However, for half an infinite system, we see that the degeneracies of the 4 largest multiplets of eigenvalues tend to be $1, 3, 9$, and $15$, in agreement with the first 4 degeneracies of the Virasoro tower for the vacuum of the $SU(2)_2$ WZW model \cite{chiral2}.
Furthermore, our results on chiral topological states obtained from CTs agree well with the studies using cylindrical geometry~\cite{chiral1,chiral2}. Both in those studies and in ours, the (discrete) degeneracy patterns show up in the low-lying entanglement spectrum and agree with the counting from conformal field theory.
\section{Conclusions}
\label{sec:Conclusion}
In this paper we have shown that CTMs and CTs encode universal properties of bulk physics in classical and quantum lattice systems, and that these properties can be computed efficiently with current state-of-the-art numerical methods. We have seen this for a wide variety of models in 1d, 2d, and 3d, both classical and quantum. First, we have checked the structure of the corner energies and corner entropy for three models in the universality class of the 1d quantum Ising model. Then, we have used this formalism to check explicitly the correspondence between quantum systems in $d$ dimensions and classical systems in $(d+1)$ dimensions. In this context, we have first used the partition function approach to do this mapping, and checked numerically the correspondence between the 1d quantum Ising and quantum Potts models and the 2d classical anisotropic Ising and Potts models. Then, we have reviewed an approach by Suzuki mapping the 2d anisotropic classical Ising model to the 1d quantum XY model, for which the corner energies and entropies showed a perfect match between the models. For completeness we have also reviewed Peschel's approach to the quantum-classical mapping. We have also shown that corner properties can be used to pinpoint phase transitions in quantum lattice systems without the use of observable quantities. We have shown this for the 2d quantum XXZ model, perturbed 2d PEPS with $\mathbb{Z}_2$ and $\mathbb{Z}_3$ topological order, and a PEPS with perturbed SPT order.
Perhaps more surprising is that the corner objects can be used to obtain entanglement spectra of 2d systems, even with chiral topological order and gapless $SU(2)_k$ edge modes, which we demonstrated for $k=1,2$. For this we have proposed a new quantum state RG in the setting of corner matrices and tensors, which can be applied very generally to cases where the wavefunction can be written in PEPS form. This enables the efficient computation of entanglement spectra for infinite 2d systems, which is much harder than in the 1d case. Our state RG algorithm can also be straightforwardly generalized to 3d systems. All in all, we have shown that CTMs and CTs, apart from being useful numerical tools, also encode by themselves very relevant physical information that can be retrieved in a natural way from usual implementations of numerical TN algorithms.
The results in this paper can be extended in a number of ways. For instance, it would be interesting to check how dynamical properties affect corner properties. A similar analysis should also be possible for dissipative systems and steady states of 2d quantum systems \cite{diss}, as well as for models with non-abelian topological order. Concerning the calculation of 2d entanglement spectra, two further considerations are in order. First, notice that one could in principle compute the ``usual'' entanglement spectrum on half an infinite cylinder from the half-row and half-column tensors obtained from rCTM, wrapping them around a cylinder of finite width and proceeding as usual with the calculation of the reduced density matrix. Second, notice that a limitation of our calculation with corner tensors is that it does not provide a ``natural'' way of labelling the different eigenvalues in terms of a momentum quantum number. We believe, however, that this may be possible by defining appropriate translation operators on CTMs. This idea will be pursued in future works.
\acknowledgements
This work was partially supported by the National Science Foundation under Grant No. PHY 1314748 and Grant No. PHY 1620252. R.O. acknowledges the C. N. Yang Institute for Theoretical Physics for hosting him during the time that this work was initiated.
\section{Introduction} \label{sec:intro}
Ego-centric social networks (ego-networks) map the interactions that occur between the social contacts of individual people. Because they provide the view of the social world from a personal perspective, these structures are fundamental information blocks to understand how individual behaviour is linked to group life and societal dynamics. Despite the growing availability of interaction data from online social media, little research has been conducted to unveil the structure and evolutionary dynamics of ego-networks~\cite{arnaboldi12analysis,kikas13bursty}. In an online context, people expand their social circles also as a result of automatic recommendations that are offered to them, which makes it harder to disentangle spontaneous user behavior from algorithmically-induced actions.
We aim to provide an all-round description of how ego-networks are formed and how automated contact recommendations might bias their growth. We do so by analyzing the full longitudinal traces of 170M ego-networks from Flickr and Tumblr ($\S$\ref{sec:dataset}), answering several open research questions about the shape of their boundaries, their community structure, and the process of neighbor selection in time ($\S$\ref{sec:analysis}). The richness of the data we study allows for the identification of those Tumblr links that have been created as a result of recommendations served by the platform, which places us in a unique position to investigate the impact of link recommender systems on the process of network growth.
Some of our key findings are:
\vspace{-1pt}
\begin{itemize}[leftmargin=*]
\itemsep1pt
\item The backbone of a typical ego-network is shaped within the initial month of a node's activity and within the first $\sim$50-100 links created. In that period, new contacts are added in larger batches and the main communities emerge. Unlike in global social networks, whose diameter shrinks in time, the average distance between nodes in ego-networks expands rapidly and then stabilizes.
\item The selection criteria of new neighbors change as new contacts are added, with popular contacts being more frequently followed in earlier stages of the ego's life, and friends-of-friends being selected in later stages. The neighbor selection is also heavily driven by the ego-network's community structure, as people tend to grow different sub-groups sequentially, with an in-depth exploration strategy.
\item The link recommender system skews the process of ego-network construction towards more popular contacts, while at the same time restraining the growth of its diameter, compared to spontaneous behavior. With a matching experiment aimed at detecting causal relationships from observational data, we find that the bias introduced by the recommendations fosters diversity: people exposed to recommendations end up creating pools of contacts that are more different from each other compared to those who were not exposed.
\end{itemize}
The outcomes of our analysis have theoretical implications in network science and find direct application in link recommendation and prediction tasks. We run a prediction experiment ($\S$\ref{sec:prediction}) to show that simple temporal signals could be crucial features to improve link prediction performance, as the criteria of ego-network expansion vary as the ego grows older. In a second experiment, we test the algorithmic capability to tell apart spontaneous links from recommendation-induced ones. This ability opens up the way to train link predictors that mitigate existing algorithmic biases by suggesting links whose properties better adhere to the natural criteria that people follow when connecting to others.
\section{Related work} \label{sec:related}
\noindent \textbf{Structure and dynamics of social networks.}
For decades, network science research has explored extensively the structural and evolutionary properties of online social graphs and of the communities they encompass~\cite{garton97studying,barabasi02evolution,kossinets06empirical,backstrom06group,palla07quantifying,mislove07measurement,wilson09user,yang11patterns,tan15all}, unveiling universal patterns of their dynamics. Individual connectivity and activity are broadly distributed~\cite{mislove07measurement,ugander11anatomy}; the creation of new links is driven by reciprocation, preferential attachment~\cite{mislove08growth}, triangle closure~\cite{leskovec08microscopic}, and homophily~\cite{aiello12tweb,yuan14exploiting}. Globally, the number of edges in a social network grows superlinearly with its number of nodes, and the average path length shrinks with the addition of new nodes~\cite{leskovec05graphs}, after an initial expansion phase~\cite{ahn07analysis}. The regular patterns that drive the link creation process have enabled the development of accurate methods for link prediction and recommendation~\cite{hasan11survey} based on either local~\cite{libennowell03link} or global structural information~\cite{bahmani10fast,backstrom11supervised,shin15tumblr}. Fine-grained temporal traces of user activity in online social platforms opened up new avenues to investigate in detail the impact of time on network growth~\cite{zignani14link}. For example, the relationship between the node age and its connectivity has been measured in several online social graphs including Flickr~\cite{leskovec08microscopic,yin11link}.
\vspace{4pt}
\noindent \textbf{Ego-networks.}
To date, not much research has been conducted on how nodes build their \textit{local} social neighborhoods in time. Research by Arnaboldi et al. has looked into ego-networks of online-mediated relationships including the Facebook friendship network~\cite{arnaboldi12analysis} and the Twitter follow graph~\cite{arnaboldi13ego,arnaboldi16ego}, as well as professional relationships such as Google Scholar's co-authorship network~\cite{arnaboldi16analysis}. Using community detection, hierarchical clusters are discovered, in agreement with Robin Dunbar's theory on the hierarchical arrangement of social ego-circles~\cite{zhou05discrete}. Similar findings have been confirmed by independent studies on the Facebook network~\cite{desalve16impact}. In an attempt to compare the properties of the global network with those of ego-networks, recent studies found that local structural attributes are characterized by local biases~\cite{gupta15structural} that are direct implications of the friendship paradox~\cite{feld91friends}. Multiple techniques have been proposed to discover social or topical sub-groups within ego-networks~\cite{weng14topic,mcauley14discovering,muhammad15duke,biswas15community}, but with little attention to the dynamics of their growth. Kikas et al. conducted one of the few studies touching upon the temporal evolution of ego-networks, using a dataset of Skype contacts~\cite{kikas13bursty}. They find that most edges are added in short bursts separated by long inactivity intervals.
\vspace{4pt}
\noindent \textbf{Effect of social recommender systems.}
In the past years, computer scientists developed increasingly effective contact recommender systems for online social media~\cite{gupta13wtf}. Only recently has the community adopted a more critical standpoint with respect to the \textit{effects} that those recommendations may have on the collective user dynamics. Algorithms based on network proximity are better suited to find contacts that are already known by the user, whereas algorithms based on similarity of user-generated content are stronger at discovering new friends~\cite{chen09make}. Surveys administered to members of corporate social networks revealed that contact recommendations with a high number of common neighbors are usually well-received~\cite{daly10network}. Recommendation-induced link creations have a substantial effect on the growth of the social graph; for example, the introduction of the ``people you may know'' service in Facebook increased considerably the number of links created and the ratio of triangle closures~\cite{zignani14link}. A recent study on Twitter compared the link creation activity before and after the introduction of the ``Who To Follow'' service~\cite{su16effect}, showing that popular nodes are those who benefit most from recommendations. On a wider perspective, the debate around recommender systems fostering or limiting access to novel information is still open. On one hand, recommenders may give rise to a filter bubble effect by providing information that increasingly reinforces existing viewpoints~\cite{pariser12filter}. On the other hand, recent research has pointed out that individual choices, more than the effect of algorithms, limit exposure to cross-cutting content~\cite{bakshy15exposure}. It has been argued that recommender systems have a limited effect in influencing people's free will. Observational studies on Amazon found that $75\%$ of click-throughs on recommended products would likely have occurred also in the absence of recommendations~\cite{sharma15estimating}. In the context of link recommendation systems, a key open question is how they affect ego-network diversity.
\section{Dataset and preliminaries} \label{sec:dataset}
We study two social media platforms that differ in both scope and usage. The data includes only interactions between users who voluntarily opted-in for research studies. All the analysis has been performed in aggregate and on anonymized data.
\subsection{Tumblr} \label{sec:dataset:tumblr}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.50\columnwidth]{tumblr_recs_screenshot.png}
\caption{Tumblr's contact recommender system.}
\label{fig:tumblr_recs_screenshot}
\end{center}
\end{figure}
Tumblr is a popular social blogging platform. The types of user-generated content range from simple textual messages to multimedia advertising campaigns~\cite{chang14what,grbovic15gender}. Users might own multiple blogs, but for the purpose of this study we consider blogs as users, and we will use the two terms interchangeably. Users receive updates from the blogs they follow; the following relationship is directional and might not be reciprocated. In Tumblr, 326 million blogs and 143 billion posts have been created\footnote{\url{https://www.tumblr.com/about} (Dec 2016)} since the release of the platform to the public in July 2007. We extracted a large random sample of the social network in October 2015, which includes almost $7B$ follow links created between $130M$ public blogs over approximately 8 years. All the social links are marked with the exact timestamps of their creation, which allows for a fine-grained longitudinal analysis of the network evolution.
In October 2012, Tumblr launched a new version of its \textit{recommended blogs} feature. On the web interface, a shortlist of four recommended blogs is displayed in a panel next to the user's feed (Figure~\ref{fig:tumblr_recs_screenshot}). Users can get more recommendations by clicking on ``explore''. At every page refresh, the shortlist may change according to a randomized reshuffling strategy that surfaces new recommended contacts from the larger pool. Tumblr's link recommendation algorithm is not publicly disclosed, but it considers a mixture of two signals: topical preferences and network structure. The user's tastes are estimated since the onboarding phase, in which registrants are asked to indicate their preference on a set of pre-determined topics organized in a taxonomy (e.g., sports, football). The topical profile helps to overcome the cold-start problem. As the number of contacts grows, new blogs are recommended following the triangle closure (friend-of-a-friend) principle.
For all the links created after January 2015, we can reliably estimate whether they have been created as the effect of a recommendation. This information is inferred by combining the log of recommendation impressions (i.e., when recommendations are visualized by the user) with the log of link creations. When a link is created shortly after the recommendation is displayed, we count the link creation as triggered by a recommendation.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{tumblr_degree.pdf}
\includegraphics[width=0.49\columnwidth]{flickr_degree.pdf}
\caption{Degree distributions in Tumblr ($\mu_{in}=108$, $\mu_{out}=58$) and Flickr ($\mu_{in}=19$, $\mu_{out}=21$).}
\label{fig:degree_distr}
\vspace{2mm}
\end{center}
\end{figure}
\subsection{Flickr} \label{sec:dataset:flickr}
Flickr is a popular photo-sharing platform in which users can upload a large amount (up to 1 TB) of pictures and share them with friends. Users can establish directed social links by following other users and get updates on their activity. Since its release in February 2004, the platform has gathered almost 90 million registered members who upload more than 3.5 million new images daily\footnote{\url{http://www.theverge.com/2013/3/20/4121574/flickr-chief-markus-spiering-talks-photos-and-marissa-mayer}}. We collected a sample of the follower network composed by the nearly $40M$ public Flickr profiles that are opted-in for research studies and by the $500M+$ links that connect them. Links carry the timestamp of their creation and they span approximately 12 years ending March 2016. Similar to Tumblr, Flickr has a contact recommendation module. However, we do not have access to recommendation data and we cannot measure their effect on the link creation process.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.95\columnwidth]{tumblr_timeline.pdf}\\
\includegraphics[width=0.95\columnwidth]{flickr_timeline.pdf}
\caption{Number of links, triangle-closing links, and recommended links (Tumblr only) created each day, by the nodes in our sample, during the whole lifespan of the platforms.}
\label{fig:timelines}
\end{center}
\end{figure}
\subsection{Concepts and notation} \label{sec:dataset:notation}
\noindent \textbf{Graph and ego-network.} Consider a follower graph $\mathcal{G}$ composed by a set of nodes $\mathcal{N}$ and a set of directed edges\footnote{We will use the terms \textit{directed edges}, \textit{edges}, and \textit{links} interchangeably to indicate a directional connection between nodes.} $\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$. When building the follower graph, we draw an edge from node $i$ to node $j$ if $i$ has followed $j$ at any time. The \textit{ego-network} $\mathcal{G}_i$ of node $i \in \mathcal{N}$ is the subgraph induced by $i$'s out-neighbors $\Gamma_{out}(i)$~\cite{freeman82centered}. Formally: $\mathcal{G}_i = (\mathcal{N}_i, \mathcal{E}_i)$, where $\mathcal{N}_i = \Gamma_{out}(i)$ and $\mathcal{E}_i = \{ (j,l) \in \mathcal{E} \mid j \in \Gamma_{out}(i) \wedge l \in \Gamma_{out}(i) \}$. Note that the ego-network does \textit{not} include the links between the ego $i$ and its neighbors.
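As a minimal sketch (assuming the follower graph is held in memory as a NetworkX \texttt{DiGraph}; the function name is ours), the ego-network can be extracted as:
\begin{verbatim}
import networkx as nx

def ego_network(G, ego):
    alters = set(G.successors(ego))    # Gamma_out(ego)
    return G.subgraph(alters).copy()   # links among alters only

G = nx.DiGraph([('ego','a'), ('ego','b'), ('a','b'), ('b','ego')])
print(ego_network(G, 'ego').edges())   # [('a', 'b')]: ego excluded
\end{verbatim}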
\vspace{4pt}
\noindent \textbf{Structural graph metrics in time.} The temporal trace of link creations allows us to build a time graph~\cite{kumar03bursty} and to recover the structural properties of nodes and links at any point in time. The superscript $t$ applied to any indicator means that the metric refers to a snapshot of the graph at time $t$. For example, the neighbor set of node $i$ at time $t$ is denoted as $\Gamma^t(i)$ and its degree as $k^t(i)$. When studying the evolution of ego-networks in isolation, we will consider time on a discrete scale where each event corresponds to the $n^{th}$ node being added to the ego-network. We will use the letter $n$ to denote time passing on this discrete scale. All the graphs we consider are directed, so we use the definition of triangle closure adapted to directed graphs~\cite{romero10directed}: a new link created between $i$ and $j$ at time $t$ closes a directed triangle if $\exists l \in \mathcal{N} | l \in \Gamma^t_{out}(i) \wedge l \in \Gamma^t_{in}(j)$.
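A hypothetical helper implementing this directed triangle-closure test on a graph snapshot could read:
\begin{verbatim}
import networkx as nx

def closes_directed_triangle(G_t, i, j):
    # G_t: snapshot just before the link (i, j) is created.
    # True if some l is an out-neighbor of i and an in-neighbor of j.
    return any(l in G_t.pred[j] for l in G_t.succ[i])
\end{verbatim}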
\vspace{4pt}
\noindent \textbf{Spontaneous vs. recommended links.} We distinguish links that are created as the effect of a recommendation from those that are not. We call the links in the first group \textit{recommended} and the ones in the latter \textit{spontaneous}.
\subsection{Data overview} \label{sec:dataset:overview}
The (in/out)degree distributions together with their average values ($\mu$) are shown in Figure~\ref{fig:degree_distr}. As expected, all distributions are broad, with values spanning several orders of magnitude. The out-degree distribution in Tumblr is capped at 5000 because the platform imposes an upper bound on the number of blogs a user can follow. On average, Tumblr users are more connected than Flickr users, with average in- and out-degree 5 and 3 times larger than Flickr, respectively.
Figure~\ref{fig:timelines} plots the number of links created over the course of the platforms' life. Both networks have experienced a noticeable growth. For Tumblr, we can plot the time series of recommended link creations occurred after January 2015. We also calculate the set of links that close at least one triangle. In Tumblr, the first sharp increase in the number of triangle-closing links is found between 2012 and 2013. That is determined by the introduction of a new link recommender system. A similar pattern has been observed in Facebook after the introduction of the ``people you may know'' module~\cite{zignani14link}. About $27\%$ of recommended links do not close any triangle: those are recommendations based on the user's topical profile only.
\section{Evolution of ego-networks} \label{sec:analysis}
\subsection{Diameter and connected components} \label{sec:analysis:diameter}
The growth of social networks is associated with three changes in their macroscopic structure: densification, diameter shrinking, and inclusion of almost all nodes in a single giant connected component~\cite{leskovec05graphs}. It is unknown whether the same properties hold at the ego-network level.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{ego_numlinks_vs_k.pdf}
\includegraphics[width=0.49\columnwidth]{ego_distance_vs_k.pdf}
\caption{Left: average number of ego-network links vs. number of nodes; best fitting power-law exponent is reported as reference. Right: average network distance after the $n^{th}$ node is added; the black dotted line is obtained considering only Tumblr ego-networks with at least 1000 nodes.}
\label{fig:density_diameter}
\end{center}
\end{figure}
\vspace{4pt}
\noindent \textbf{Q1: How do density, diameter, and component structure evolve as the ego-network grows?}
Like global networks, ego-networks become denser in time. Ego-networks obey a densification power law, for which the number of links scales superlinearly with the number of nodes $|\mathcal{E}_i| \sim |\mathcal{N}_i|^{\gamma}$ (Figure~\ref{fig:density_diameter}, left). The exponent that best defines the scaling in both platforms is $\gamma=1.87$.
More surprisingly, densification does not always lead to the emergence of a single giant connected component covering the whole graph. On average\footnote{\small Results are qualitatively similar when considering the median.}, the largest component's size relative to the network size grows as new nodes join (Figure~\ref{fig:components}, left), but it stabilizes around $0.8$ for networks of 200 nodes or more. The number of components grows sublinearly with the number of nodes (not shown). More notably, the diameter shows little sign of shrinking. The \textit{network distance}, computed as the average distance between all pairs of nodes,\footnote{\small Computed on an undirected version of the graph. Similar results are obtained using diameter or effective diameter.} experiences a three-phase evolution (Figure~\ref{fig:density_diameter}, right). First, at the beginning of the ego-network life, it expands rapidly (\textit{exploration}); then, it starts shrinking slightly (\textit{consolidation}) before asymptotically converging to a stable value (\textit{stabilization}). This trend is very different from the sharp diameter decline that characterizes global social graphs. We speculate that the consolidation phase might be connected with the intrinsic human limitation to maintain large social groups, as theorized by Robin Dunbar~\cite{dunbar98grooming}. When the ego's social neighborhood exceeds the size that is cognitively manageable by a person (roughly, 150 to 200 individuals), a compensation effect might be triggered: new contacts are no longer sought further away from the social circles that have already been established, putting an end to the exploration phase. This happens at $n=140$ in Flickr and $n=190$ in Tumblr, values that are compatible with Dunbar's theory.
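The densification exponent can be estimated with a simple log-log fit; a sketch (assuming per-ego-network arrays of node and link counts) is:
\begin{verbatim}
import numpy as np

def densification_exponent(num_nodes, num_links):
    logn = np.log(np.asarray(num_nodes, dtype=float))
    loge = np.log(np.asarray(num_links, dtype=float))
    gamma, _ = np.polyfit(logn, loge, 1)   # slope of log|E| vs log|N|
    return gamma

# e.g. densification_exponent(nodes_per_ego, links_per_ego) ~ 1.87
\end{verbatim}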
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{giantcomponent.pdf}
\includegraphics[width=0.49\columnwidth]{p_newcomponent.pdf}
\caption{Left: average ratio of nodes in the giant weakly connected component (GCC) as new nodes are added to the ego network. Right: probability that the $n^{th}$ node included in the ego-network spawns a new disconnected component, computed for spontaneous and recommended nodes.}
\label{fig:components}
\end{center}
\end{figure}
The addition of recommended nodes has a different effect on the ego-network expansion, compared to spontaneous ones. New recommended contacts tend to be closer to existing ego-network members. At fixed network size, the addition of new recommended nodes increases the network distance $5\%$ less, on average, than a spontaneous node addition. Ego-networks with at least a recommended node have smaller network distance than ego-networks that grew fully spontaneously; this difference varies with the size, being only $2\%$ smaller for networks under 50 nodes up to $10\%$ smaller for networks with 200 nodes or more. Also, spontaneous nodes have far higher chances, compared to recommended ones, to spawn a new component disconnected from the rest of the ego-network (Figure~\ref{fig:components}, right).
\vspace{4pt}\noindent\textbf{Accounting for amalgamation effects.} To explore evolutionary trends of ego-networks, we rely on aggregate analysis: an indicator is measured on an ego-network when its $n^{th}$ node is added and then it is averaged across all ego-networks. Trends are discovered as $n$ grows (e.g., diameter in Figure~\ref{fig:density_diameter}, right). This approach may yield misleading results because averages computed at different values of $n$ are obtained from different sample sets. This problem is known as \textit{Simpson's paradox}~\cite{simpson51interpretation} and it is usually addressed by fixing the sample set~\cite{barbosa16averaging}. To account for it, every time we perform an evolutionary analysis as $n$ varies in $[1,n_{max}]$, we compare results obtained in two settings: the first using the full dataset and the second considering only the subset of ego-networks that reached at least size $n_{max}$. The results are only slightly different across the two settings, for all the indicators analyzed. For the sake of brevity, we report just one example of such comparison. In Figure~\ref{fig:density_diameter} right, the diameter evolution for Tumblr ego-networks that reached at least size 1000 is very similar to the trend found when all ego-networks are considered.
\subsection{Popularity vs. similarity} \label{sec:analysis:predictors}
The process of link creation in online social networks is driven by two main factors: popularity (that leads to preferential attachment~\cite{mislove08growth}) and similarity (that leads to homophily~\cite{aiello12tweb}). At network scale, their relative weight in predicting the creation of new links might vary depending on the type of social network~\cite{hasan11survey}. At microscopic scale, it is still unclear how popularity and similarity impact the selection of new nodes in ego-networks, and how their relative importance varies in time.
\vspace{4pt}
\noindent \textbf{Q2: How do the criteria of neighbor selection change as the ego-network grows?}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{distr_kin.pdf}
\includegraphics[width=0.49\columnwidth]{distr_CN.pdf}\\
\caption{Distribution of nodes' popularity (indegree) and similarity with ego (common neighbors) at the time of their inclusion in the ego-network.}
\label{fig:predictors_distr}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{CN_vs_numlinks.pdf}
\includegraphics[width=0.49\columnwidth]{PA_vs_numlinks.pdf}\\
\includegraphics[width=0.49\columnwidth]{jaccard_vs_numlinks.pdf}
\includegraphics[width=0.49\columnwidth]{kin_vs_numlinks.pdf}
\caption{Average similarity and popularity indicators of the $n^{th}$ node added to the ego-network: common neighbors between the ego and the newly added node, Jaccard similarity between their neighbor sets, preferential attachment indicator ($k_{out}(ego) \cdot k_{in}(new node)$), and indegree of the new node.}
\label{fig:predictors_vs_time}
\end{center}
\end{figure}
We select two simple (yet widely-used) proxies of popularity and similarity. Given an ego $i$ who has added $j$ as its neighbor at time $t$, we consider the alter's indegree $k^t_{in}(j)$ as an indicator of its popularity and the number of common neighbors between the ego $i$ and the alter $j$, $CN^t(i,j) = |\Gamma^t_{out}(i) \cap \Gamma^t_{in}(j)|$, as a measure of similarity. Drawing the distributions of $k^t_{in}$ and $CN^t(i,j)$ (Figure~\ref{fig:predictors_distr}), we observe that the range of values is very broad in both platforms. The $CN$ distributions suffer from cut-offs (around $CN=200$) caused by the scarcity of nodes with hundreds of common neighbors or more. Recommended Tumblr nodes yield distributions that are skewed towards higher values because the recommender picks by design those profiles that are popular and well-connected to the ego's neighbors.
As new nodes are added to the ego-network, the number of their common connections with the ego naturally increases (and so does the preferential attachment indicator, as expected). The Jaccard similarity between their neighbor sets oscillates, increasing when the ego-network's size is in the interval $[100,1000]$ and decreasing otherwise. The popularity of new ego-network members, computed as their indegree in the social network, decreases as the ego-network grows. A summary of all the indicators is given in Figure~\ref{fig:predictors_vs_time}.
All the trends are similar, yet shifted towards higher values, when considering recommended nodes only. In short, recommended nodes tend to share more contacts with the ego and to be more popular, which corroborates previous observations about link recommendations being beneficial mostly to popular nodes~\cite{su16effect}. The indicator that differs the most is the Jaccard similarity, that increases monotonically with $n$ for recommended contacts.
\subsection{Temporal activity} \label{sec:analysis:batches}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{distr_batchsize.pdf}
\includegraphics[width=0.49\columnwidth]{distr_batch_hours_after_previous.pdf}
\caption{Distributions of batch size ($s_b$) and batch interarrival time ($\tau_b$). Best fitting power law exponents reported as reference.}
\label{fig:batches_distr}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{batchsize_vs_numbatch.pdf}
\includegraphics[width=0.49\columnwidth]{batchsize_vs_daysfromfirst.pdf}
\caption{Batch size as a function of time, measured as the number of batches created or as the ego's age measured in number of days. The Tumblr Recs curves summarize the trend for batches containing at least one recommended node.}
\label{fig:batches_vs_time}
\end{center}
\end{figure}
Creation of links is not uniform in time. Previous literature found evidence that, globally, the creation of links happens in bursts~\cite{kumar03bursty,kikas13bursty}. At a local level, we aim to learn how often and in which phases of the ego-network life users select new neighbors.
\vspace{4pt}
\noindent \textbf{Q3: When do ego-networks expand?}
To measure how much node additions to an ego-network are concentrated in short periods of time, we resort to basic session analysis to group together temporally-contiguous events. As is standard practice in the analysis of browsing behaviour~\cite{spink06multitasking}, we split user sessions by timeout: a session starts when a new node is added to the ego-network and ends when no other node has been added for 25 minutes. We call \textit{batch} a set of nodes added in a single session.
We compute the average \textit{batch size} $s_b$ and the session \textit{interarrival time} $\tau_b$, namely the time (hours) elapsed from a session's end to the next session's start. The process of batch creation is bursty when $i)$ there are strong temporal heterogeneities in the interarrival time, and $ii)$ consecutive link creations are not independent events. A standard practice to assess those conditions is to measure the decay of the probability density functions of $s_b$ and $\tau_b$: power-law decays of the form $P(x) \sim x^{-\gamma}$ indicate burstiness~\cite{karsai12universal}. The distribution of batch size $s_b$ follows a power-law trend, with exponents $2.2$ and $2.45$ in Tumblr and Flickr, respectively (Figure~\ref{fig:batches_distr} left). In Flickr, the size scales freely as there are no boundaries preventing the addition of any number of contacts. In Tumblr, we observe a sharp cutoff at $200$ as the service policy enforces a maximum limit of 200 link creations per user per day. The decay of $\tau_b$ is similar on both platforms, with initial intervals fitting a power law with exponent $\gamma=0.4$, followed by exponential cutoffs due to the finite time window (Figure~\ref{fig:batches_distr}, right).
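A minimal sketch of the session-splitting procedure (timestamps in seconds; interarrival times can then be converted to hours as in the text) reads:
\begin{verbatim}
TIMEOUT = 25 * 60   # 25-minute session timeout, in seconds

def batches(timestamps, timeout=TIMEOUT):
    sessions, current = [], [timestamps[0]]
    for prev, ts in zip(timestamps, timestamps[1:]):
        if ts - prev > timeout:
            sessions.append(current)
            current = []
        current.append(ts)
    sessions.append(current)
    return sessions

sess  = batches([0, 60, 5000, 5100])   # -> [[0, 60], [5000, 5100]]
sizes = [len(s) for s in sess]                            # s_b
inter = [b[0] - a[-1] for a, b in zip(sess, sess[1:])]    # tau_b
\end{verbatim}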
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.90\columnwidth]{link_perc_vs_days.pdf}
\caption{Average portion of links created in the first 100 days of the ego's life, relative to the final ego-network size. Only nodes who have created links for at least 6 months are considered.}
\label{fig:link_perc_vs_days}
\vspace{-8pt}
\end{center}
\end{figure}
If we consider only sequences of batches containing at least one recommended link, we see that recommendations are associated with the creation of fewer links per session, but at a higher rate. The average batch size is $2.18$ in Flickr and $2.67$ in Tumblr; Tumblr batches with recommended links are $12\%$ smaller (average size $2.35$). The median interarrival time is relatively high on both platforms ---12 days in Tumblr, 2 weeks in Flickr--- but only 5 days for pairs of consecutive batches containing recommended links. No causal claim connecting recommendations and the rate of link creation can be made, as a number of confounding factors could influence this trend (e.g., users who are more active might more naturally engage with recommendations). However, this result provides partial evidence that recommendations might alter the natural time scale of link creation.
The average batch size decreases as time passes and the ego-network grows (Figure~\ref{fig:batches_vs_time}). After the first 30 days (or the first 20-30 batches created), the batch size stabilizes around 2. A similar decreasing trend is also found for the interarrival time $\tau_b$ (not shown). This suggests that nodes tend to build most of their ego-network in the first stages of their life. To confirm that, we compute the average daily fraction of the ego-network's final nodes that is added during each of the first 100 days of the ego's life. We only consider users whose link creation activity spans at least 6 months, to avoid biases introduced by users with short lifespans. As expected, a big chunk of nodes is typically added in the first days of activity (Figure~\ref{fig:link_perc_vs_days}). This finding adds nuance to previous work on temporal graphs. Studies on Flickr using a coarser temporal granularity found that the raw number of new links created by the ego is uniform over time~\cite{leskovec08microscopic}. Here we find that the uniform trend starts only after an initial spike of link creations.
\subsection{Community formation} \label{sec:analysis:community}
Ego-networks have a clear community structure because people tend to interact with multiple social circles (e.g., school friends, family members) that are typically weakly connected to one another~\cite{mcauley14discovering}. We ask about the role of these communities in the graph evolution.
\vspace{4pt}
\noindent \textbf{Q4: Is the ego-network growth driven by the boundaries of its communities?}
The ego-network may follow a depth-first expansion pattern with respect to communities, in which the ego preferentially connects to nodes belonging to a community before exploring others. In Flickr, for example, a person could first follow all the accounts of family members and then those of a photography club. Alternatively, the ego-network may expand either breadth-first, picking new nodes in a round-robin fashion, or regardless of the community structure. In social network analysis research, there is little evidence in support of any of these scenarios. In the context of web navigation and search, in-depth exploration of content is often most effective and cognitively more natural~\cite{debra94information,tauscher97how}; we hypothesize that the same holds for community exploration.
Measuring the extent to which any of these scenarios reflects people's behaviour is challenging, as in reality communities can be overlapping and change their boundaries as the graph grows. Leaving more advanced measurements for future work, we assume a static, hard partitioning of nodes in communities. For all ego-networks, we compute non-overlapping communities\footnote{\small We use the community detection algorithm by Waltman and Eck~\cite{waltman13smart} that is an optimization of the Louvain method~\cite{blondel08fast}.} at time $t=T_{end}$ (the most recent snapshot in our data), heuristically filtering out small ego-networks with fewer than 5 members. Ego-networks are often composed of few communities, rarely more than 10 (Figure~\ref{fig:community_no_distr}, left).
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{community_no_distr.pdf}
\includegraphics[width=0.49\columnwidth]{community_size_distr.pdf}
\caption{Left: distribution of number of communities in ego-networks. Right: average size of communities as they appear in the ego-network; 95\% confidence intervals are shown.}
\label{fig:community_no_distr}
\vspace{-8pt}
\end{center}
\end{figure}
To assess to what extent communities emerge over time in an orderly fashion, we rank them by the time the ego has created connections with their nodes. Specifically, we first sort all the nodes by the time they are added to the ego-network (e.g., $[j_1,j_2,j_3,j_4]$). We then replace nodes with the communities they belong to (e.g., $[c_1,c_1,c_2,c_1]$), rank communities by the median position $p$ of their occurrences in that vector (e.g., $p(c_1)=2, p(c_2)=3$; $c_1$ is ranked first, $c_2$ is ranked last), and finally replace the communities with their respective ranks, thus obtaining a sequence of community ranks $\mathcal{R}$ (e.g., [1,1,2,1]). The intuition is that a community half of whose members have been added to the ego-network by time $t$ comes temporally before any other whose majority of nodes are still outside the ego-network at the same time $t$. If the exploration of communities is purely in-depth, $\mathcal{R}$ is fully sorted. The sortedness of a list $L=\{x_1, ..., x_m\}$ can be measured by its \textit{inversion score}:
\begin{equation*}
inv(L) = 1 - \frac{2 \cdot |\{(x_i,x_j) | i < j \wedge x_i > x_j\}|}{\binom{|L|}{2}} \in [-1,1]
\end{equation*}
$inv=1$ indicates sortedness, $inv=-1$ inverse ordering, and $inv=0$ randomness. In Figure~\ref{fig:community_inversions} we plot the average inversion score of $\mathcal{R}$ against the ego-network size. To account for the community size heterogeneity, we compare it with a null-model where the elements in $\mathcal{R}$ are randomly reshuffled. The inversion score quickly stabilizes as the network grows and it has values that are consistently higher (double or more) than the null-model's, supporting the hypothesis that communities tend to be explored in depth, one after the other.
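A direct (quadratic-time) Python sketch of the inversion score is:
\begin{verbatim}
from itertools import combinations

def inversion_score(R):
    pairs = list(combinations(range(len(R)), 2))
    inversions = sum(R[i] > R[j] for i, j in pairs)
    return 1 - 2 * inversions / len(pairs)

print(inversion_score([1, 1, 2, 1]))   # 0.667: mostly sorted
\end{verbatim}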
Further evidence can be provided by measuring the probability that the $n^{th}$ node added to the ego-network belongs to the $k^{th}$ community in the ranking, normalized by the probability in the null-model; values higher than 1 indicate above-chance likelihood of a node being in a given community. Figure~\ref{fig:communities_vs_time} shows the average normalized likelihood of the first 50 nodes to belong to the first 5 communities in the rank. Curves for increasing values of $k$ emerge above the randomness threshold one after the other, which backs the hypothesis of communities being explored in-depth.
The size of a community (measured at time $T_{end}$) varies with its temporal rank (Figure~\ref{fig:community_no_distr}, right). On average, people create increasingly larger communities up to the fifth one; from the sixth one on, new communities added become smaller and smaller.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{tumblr_community_inversions.pdf}
\includegraphics[width=0.49\columnwidth]{flickr_community_inversions.pdf}
\caption{Average inversion score of ego-network communities as new nodes are added, compared to a randomized null-model.}
\label{fig:community_inversions}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{tumblr_communities_vs_time.pdf}
\includegraphics[width=0.49\columnwidth]{flickr_communities_vs_time.pdf}
\caption{Average likelihood (over random chance) that the $n^{th}$ node in a ego-network belongs to the $k^{th}$ community.}
\label{fig:communities_vs_time}
\vspace{-15pt}
\end{center}
\end{figure}
\subsection{Recommendations diversity} \label{sec:analysis:matching}
Our analysis shows that the statistical properties of recommended links are different from those of spontaneous ones. It is known that recommender systems positively affect user engagement, in terms of time spent, content consumption, and user contribution~\cite{freyne09increasing}. In agreement with established knowledge, we have found that users exposed to recommendations create more links, more frequently. It is harder to assess whether recommendations foster or limit access to diverse types of content. The academic debate about recommendations being the bane or boon of social media is still very lively~\cite{pariser12filter,bakshy15exposure,sharma15estimating}, with evidence brought in support of both views. We aim to provide further evidence to shed light on this point in the context of link recommenders.
\vspace{4pt}
\noindent \textbf{Q5: Do link recommendations foster diversity?}
It is hard to infer causality from observational data. Matching is a statistical technique that is used to evaluate the effect of a treatment on a dependent variable by comparing individuals who have received the treatment with others with similar observable features who did not receive it. The more similar the paired individuals and the higher the number of pairs, the higher the confidence in estimating the effect of the treatment on the dependent variable.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.75\columnwidth]{matching.pdf}
\caption{Matching experiment. Average entropy of the neighbor sets at step $k+1$ for groups sharing a matching sequence of length $k$. 95\% confidence intervals are shown.}
\label{fig:matching}
\end{center}
\end{figure}
We conduct a matching experiment to measure if people who follow recommendations end up having ego-networks more similar to one another than if they were to ignore recommendations. We arrange people in \textit{matching groups} containing users who are nearly identical in terms of their local connectivity. We assign to the same matching group users who $i)$ registered to Tumblr less than 30 days apart, and $ii)$ whose very first $k$ neighbors are the same and have been added in the same order to their ego-networks. Each matching group is then split in two subgroups: a \textit{treatment} group of users whose $k+1^{st}$ contact is a recommended one, and a \textit{control} group of all the remaining users, whose $k+1^{st}$ contact has been created spontaneously. The variety of contacts created at step $k+1$ is our dependent variable. If the variety measured in one group is significantly different from the other, we can attribute the divergence to the effect of the recommendation, as the initial conditions of the two groups are virtually identical. For example, if we found that the variety of nodes in the treatment group is lower than the one measured on the control group, we would conclude that recommendations conform the process of link creation by inducing users to follow a more restricted set of accounts compared to what would happen by spontaneous user behavior. We measure diversity of contacts through their entropy. To ensure a fair comparison that accounts for size heterogeneity, we use normalized entropy $\widehat{H}$. Given a bag of nodes $X$ of size $N$, where $p(x)$ is the number of occurrences of node $x \in X$ divided by $N$, the normalized entropy is defined as:
\begin{equation*}
\widehat{H}(X) = -\sum_{x \in X} \frac{p(x) \cdot \log_2(p(x))}{\log_2(N)} \in [0,1].
\end{equation*}
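A minimal Python implementation of this quantity (treating $X$ as a bag of contacts) reads:
\begin{verbatim}
import math
from collections import Counter

def normalized_entropy(X):
    N = len(X)
    if N < 2:
        return 0.0
    H = -sum((c / N) * math.log2(c / N) for c in Counter(X).values())
    return H / math.log2(N)

print(normalized_entropy(['a', 'a', 'b', 'c']))   # 0.75
\end{verbatim}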
Figure~\ref{fig:matching} shows the results for a total of approximately $25K$ matching groups, for $k \in [1,5]$; requiring $k$ identical links is too strong a requirement for larger $k$. Every point is the average over all the matching groups for a given $k$. First, we observe that the higher the $k$, the lower the entropy. That is expected: the higher the number of common neighbors, the more likely the next selected neighbor will be the same. Second, and most importantly, the variety of the treatment group is always higher; this indicates that recommendations foster diversity. Although it is difficult to pin down the exact reasons why this happens, we provide a possible interpretation. Even if the list of recommended contacts were the same for all the users in the treatment group, the reshuffling of the top recommended contacts that Tumblr implements in the link recommender widget introduces asymmetries across users. More generally, we could hypothesize that the recommender system exposes users to a wider set of potential contacts than the ones they would be exposed to by browsing or searching on the site, thus providing a wider spectrum of options and, in turn, a more diverse set of individual choices.
To test the robustness of the results, we explored some possible alternatives in the setup of the matching experiment. Specifically, we: $i)$ measured the entropy at $k+2$ instead of $k+1$ (we leave the $k+n$ generalization for future work); $ii)$ selected only control and treatment groups with at least $m \in [2,15]$ members (the results reported are for $m=5$); $iii)$ randomly downsampled the larger group to match the size of the smaller one, to balance the sizes of the two; $iv)$ ran two independent experiments including in the treatment group only users whose recommended contact had 1) at least one common neighbor (i.e., the recommendation is provided based on network topology features) or 2) no common neighbors (i.e., the recommendation is provided on a topical basis). The absolute values vary slightly across setups, but the qualitative results remain the same.
\section{Impact on link prediction} \label{sec:prediction}
\begin{figure}
\begin{floatrow}
\capbtabbox
{
\begin{tabular}{l|cc}
\hline
\multicolumn{1}{c}{\textsf{Model}} & \multicolumn{1}{c}{\textsf{AUC}} & \multicolumn{1}{c}{\textsf{F-Score}}\\
\hline
Baseline & 0.893 & 0.813 \\
+ age & 0.938 & 0.864 \\
+ $k_{out}$ & 0.897 & 0.817 \\
All & $\mathbf{0.943}$ & $\mathbf{0.87}$ \\
\hline
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}\\
\end{tabular}
}
{\captionsetup{width=.9\linewidth}
\caption{Link prediction results.}
\label{tab:prediction}
}
\hfill
\hspace{-1cm}
\ffigbox
{
\includegraphics[width=0.40\textwidth]{feature_importance.pdf}
}
{
\captionsetup{width=.9\linewidth}
\caption{Feature importance.}
\label{fig:featureImportance}
}
\end{floatrow}
\end{figure}
Our analytical results have direct implications for how link recommender systems can be enhanced to provide more effective suggestions. Next, we discuss two prediction experiments that aim to answer two research questions.
\vspace{4pt}
\noindent \textbf{Q6: To what extent do temporal features improve the ability to predict new links?}
The previous analysis showed that egos add new neighbors to their network according to criteria that change over time. The resulting hypothesis is that link recommendations that adapt to the current evolutionary stage of the ego-network could be more effective. To test this hypothesis, we run a prediction experiment.
We consider a snapshot of the Tumblr social network at an arbitrary time $t$ (January $1^{st}$ 2015). To build a training set, we sample $200K$ node pairs $(i,j)$ that are not directly connected but have at least one directed common neighbor (i.e., there is a directed path of length 2 from $i$ to $j$). Half of the pairs will be directly connected by a link from $i$ to $j$ before $T_{end}$ (positive examples); the remaining half will remain disconnected (negative examples). For every pair, we extract six simple features: $i$'s outdegree ($k_{out}(i)$), $j$'s indegree ($k_{in}(j)$), preferential attachment (PA~$= k_{out}(i) \cdot k_{in}(j)$), common neighbors (CN~$ = |\Gamma^{t}_{out}(i) \cap \Gamma^{t}_{in}(j)|$), Jaccard similarity between neighbor sets (Jaccard = $\frac{|\Gamma^{t}_{out}(i) \cap \Gamma^{t}_{in}(j)|}{|\Gamma^{t}_{out}(i) \cup \Gamma^{t}_{in}(j)|}$), and $i$'s age, measured as the number of days elapsed from $i$'s profile creation to $t$. Age and $i$'s outdegree are the two temporal features whose effectiveness we want to investigate: one measures time on a continuous scale, the other on the discrete scale of link creation events.
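As an illustration, the features for a single candidate pair could be computed as follows (a minimal sketch assuming the snapshot is stored as a \texttt{networkx} \texttt{DiGraph} with a \texttt{created\_at} node attribute; all names are ours, not those of the production pipeline):
\begin{verbatim}
import networkx as nx

def pair_features(G, i, j, t):
    # Structural and temporal features for a candidate link (i, j),
    # computed on the snapshot G taken at time t.
    out_i = set(G.successors(i))   # Gamma_out(i) at time t
    in_j = set(G.predecessors(j))  # Gamma_in(j) at time t
    cn = len(out_i & in_j)         # common neighbors
    union = len(out_i | in_j)
    return {
        "k_out": G.out_degree(i),
        "k_in": G.in_degree(j),
        "PA": G.out_degree(i) * G.in_degree(j),
        "CN": cn,
        "Jaccard": cn / union if union else 0.0,
        "age": (t - G.nodes[i]["created_at"]).days,
    }
\end{verbatim}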
The simple features above can predict the node pair class very accurately; a summary of this evaluation (10-fold cross-validation using a random forest) is given in Table \ref{tab:prediction}. The model trained on the full set of features yields an AUC of 0.943 and an F-measure of 0.87, an improvement of $5.6\%$/$7\%$ in terms of AUC/F-measure over a baseline model that does not consider temporal features. Adding the feature $k_{out}$ to the baseline model yields only a slight improvement in accuracy ($0.45\%$/$0.49\%$), while adding age improves the accuracy more substantially ($5.04\%$/$6.27\%$). Figure \ref{fig:featureImportance} summarizes the importance of the features in this link prediction setting by measuring the \textit{mean decrease Gini}, the average gain of purity achieved when splitting on a given variable (the higher, the better)~\cite{louppe13understanding}. In line with previous work~\cite{yang12predicting}, this analysis further confirms the importance of the temporal features: time matters when recommending new contacts.
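The evaluation pipeline is standard; a minimal sketch with \texttt{scikit-learn} (the hyperparameters are illustrative, and \texttt{X}, \texttt{y}, \texttt{feature\_names} are assumed to be built from the sampled pairs above) could look as follows:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 10-fold cross-validation, reporting AUC and F-score
scores = cross_validate(clf, X, y, cv=10, scoring=["roc_auc", "f1"])
print("AUC:", np.mean(scores["test_roc_auc"]))
print("F-score:", np.mean(scores["test_f1"]))

# Feature importance as mean decrease in Gini impurity
clf.fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(name, round(importance, 3))
\end{verbatim}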
\vspace{4pt}
\noindent \textbf{Q7: Is it possible to limit the bias of the recommender system while keeping its high accuracy?}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.49\columnwidth]{feature_analysis_spon.pdf}
\includegraphics[width=0.49\columnwidth]{feature_analysis_rec.pdf}
\caption{Distribution of structural features of spontaneous and recommended links. Given directed links in the form $(i,j)$ we show the boxplots of the distributions of $k_{in}(j)$, $PA(i,j)$, $CN(i,j)$, and Jaccard$(i,j)$.}
\label{fig:feature_analysis}
\vspace{-10pt}
\end{center}
\end{figure}
In Section~\ref{sec:analysis:predictors} we observed that the distributions of node popularity and of structural similarity with the ego are statistically different for recommended vs. spontaneous links. To further investigate this difference, we analyze the distribution of all structural features over a balanced sample ($200K$, half recommended and half spontaneous links).
For the sake of presentation, we normalize the value of each feature to a scale from 0 to 1, by dividing the raw feature value by its maximum value in the whole $200K$ sample. As shown in Figure~\ref{fig:feature_analysis}, spontaneous links tend to exhibit a lower degree of structural overlap than recommended links (the median Jaccard value on recommended links is $4$ times larger than the value recorded on spontaneous ones). The same observation holds for node popularity: the median in-degree of the target node on recommended links is one order of magnitude higher than the corresponding value computed on spontaneous links.
Learning to what extent it is possible to automatically tell recommended and spontaneous links apart would allow us to train new recommender systems to suggest links whose properties better adhere to the natural criteria that people follow when adding new contacts. To gauge this possibility, we run a second prediction experiment with the same features and setup as the previous one, but with a different selection of positive and negative examples.
We pick $100K$ pairs that will be connected in the future through a spontaneous link as positive examples, and as many pairs that will be connected through a recommended link as negative examples. A random forest classifier is able to distinguish the two classes quite accurately (AUC~$=0.795$, F-measure~$=0.721$).
In a more realistic scenario, the recommender should learn to separate spontaneous links both from recommended links and from truly negative examples (links that are never formed). To model that, we add to the training set $100K$ negative pairs that will not be connected at any time in the future. This setting achieves a better performance (AUC~$=0.823$, F-measure~$=0.771$), with around $90\%$ accuracy on the negative class and $55\%$ on the positive one. In short, even if very basic structural and temporal features are used, it is possible to effectively use the output of current link recommenders to train new recommenders that reduce the algorithmic bias and produce suggestions that better simulate the spontaneous process of link selection.
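A minimal sketch of this augmented setup, reusing the classifier and the feature pipeline sketched above (the array names are ours):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import cross_validate

# X_spon: pairs later joined spontaneously (positive class);
# X_rec:  pairs later joined through a recommendation, and
# X_never: pairs never connected -- both treated as negatives.
X2 = np.vstack([X_spon, X_rec, X_never])
y2 = np.concatenate([np.ones(len(X_spon)),
                     np.zeros(len(X_rec) + len(X_never))])
scores = cross_validate(clf, X2, y2, cv=10, scoring=["roc_auc", "f1"])
\end{verbatim}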
\section{Conclusions} \label{sec:conclusions}
We have provided a large-scale analysis of ego-network evolution on two online platforms, exposing the dynamics of their bursty evolution, community-driven growth, diameter expansion, and selection of new nodes based on a time-varying interplay between similarity and popularity. By studying the set of Tumblr links created as a result of algorithmic suggestions, we find that recommended links have different statistical properties than spontaneously-generated ones. We also find evidence that link recommendations foster network diversity by leading nodes that are structurally similar to choose different sets of new neighbors.
Our work has some limitations. Flickr and Tumblr are mainly interest networks, where people follow each other based on topical tastes. Some of the results we report here might not generalize to social networks that aim mostly at connecting people who know each other in real life (e.g., Facebook, LinkedIn). Also, we only consider the network structure and disregard any notion of node profile, including posting activity in time and user-generated content; that information would help to further detail the dynamics of ego-network expansion with respect to other dimensions such as topical similarity between profiles.
Our results have a number of practical implications. We provide further evidence that in online social networks not all links are created equal; network analysts who produce network growth models based on the observation of online social networks' longitudinal traces should consider weighting links that emerge spontaneously differently from those that are created algorithmically. Through our prediction experiments, we also provide a hint about how link recommender systems could incorporate signals on the ego-network's evolutionary stage to improve the quality of suggestions. We hope our work provides yet another step towards a better understanding of the evolutionary dynamics of social networks.
\section{Acknowledgments}
We would like to thank \textbf{Martin Saveski} for his valuable suggestions and the anonymous reviewers for helpful comments.
\balance
\small
\bibliographystyle{abbrv}
\section{Introduction}
Isoperimetric inequalities are of ancient interest in mathematics. In general, an isoperimetric inequality gives a lower bound on the `boundary-size'
of a set of a given `size', where the exact meaning of these words varies according to the problem. In the last fifty years, there has been a great deal of interest in {\em discrete} isoperimetric inequalities. These deal with the `boundary' of a set $A$ of vertices in a graph $G=(V,E)$ -- either the {\em edge boundary} $\partial A$, which consists of the set of edges of $G$ that join a vertex in $A$ to a vertex in $V \setminus A$, or the {\em vertex boundary} $b(A)$, which consists of the set of vertices of $V \setminus A$ that are adjacent to a vertex in $A$.
\subsection{The edge isoperimetric inequality for the discrete cube, and some stability versions thereof}
A specific discrete isoperimetric problem which attracted much interest due to its numerous applications is the edge isoperimetric problem for the $n$-dimensional discrete cube, $Q_n$. This is the graph with vertex-set $\{0,1\}^n$, where two 0-1 vectors are adjacent if they differ in exactly one coordinate. The edge isoperimetric problem for $Q_n$ was solved by Harper \cite{Harper}, Lindsey \cite{Lindsey}, Bernstein \cite{Bernstein}, and Hart \cite{Hart}. Let us describe the solution. We may identify $\{0,1\}^n$ with the power-set $\mathcal{P}\left(\left[n\right]\right)$ of $[n]: = \{1,2,\ldots,n\}$, by identifying a 0-1 vector $(x_1,\ldots,x_n)$ with the set $\{i \in [n]:\ x_i=1\}$. We can then view $Q_n$ as the graph with vertex set $\mathcal{P}\left(\left[n\right]\right)$, where two sets $S, T \subset [n]$ are adjacent if $|S \Delta T|=1$. The {\em lexicographic ordering} on $\mathcal{P}\left(\left[n\right]\right)$ is defined by $S > T$ iff $\min(S \Delta T) \in S$. If $m \in [2^n]$, the {\em initial segment of the lexicographic ordering on $\mathcal{P}\left(\left[n\right]\right)$ of size $m$} (or, in short, the {\em lexicographic family of size $m$}) is simply the $m$ largest elements of $\mathcal{P}\left(\left[n\right]\right)$ with respect to the lexicographic ordering.
Harper, Bernstein, Lindsey and Hart proved the following.
\begin{thm}[The `full' edge isoperimetric inequality for $Q_n$]
\label{thm:edge-iso}
If $\mathcal{F} \subset \mathcal{P}([n])$ then $|\partial \mathcal{F}| \geq |\partial \mathcal{L}|$, where $\mathcal{L} \subset \mathcal{P}\left(\left[n\right]\right)$ is the initial segment of the lexicographic ordering of size $|\mathcal{F}|$.
\end{thm}
A weaker, but more convenient (and, as a result, more widely-used) lower bound, is the following:
\begin{cor}[The weak edge isoperimetric inequality for $Q_n$]
\label{cor:edge-iso}
If $\mathcal{F} \subset \mathcal{P}([n])$ then
\begin{equation}\label{Eq:weak-iso}
|\partial \mathcal{F}| \geq |\mathcal{F}|\log_2(2^n/|\mathcal{F}|).
\end{equation}
\end{cor}
Equality holds in (\ref{Eq:weak-iso}) iff $\mathcal{F}$ is a subcube, so (\ref{Eq:weak-iso}) is sharp only when $|\mathcal{F}|$ is a power of 2.
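To verify the `if' direction, let $\mathcal{C}$ be a subcube of codimension $d$, obtained by fixing $d$ of the $n$ coordinates. Then $|\mathcal{C}| = 2^{n-d}$, and every element of $\mathcal{C}$ lies on exactly $d$ boundary edges (one for each fixed coordinate), so
$$|\partial \mathcal{C}| = d \cdot 2^{n-d} = |\mathcal{C}|\log_2(2^n/|\mathcal{C}|).$$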
When an isoperimetric inequality is sharp, and the extremal sets are known, it is natural to ask whether the inequality is also `stable' --- i.e., if a set has boundary of size `close' to the minimum, must that set be `close in structure' to an extremal set?
For Corollary~\ref{cor:edge-iso}, this problem was studied in several works. Using a Fourier-analytic argument, Friedgut, Kalai and Naor~\cite{FKN} obtained a stability result for sets of size $2^{n-1}$, showing that if $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$ with $|\mathcal{F}| = 2^{n-1}$ satisfies $|\partial \mathcal{F}| \leq (1+\epsilon)2^{n-1}$, then $|\mathcal{F} \Delta \mathcal{C}|/2^n= O(\epsilon)$ for some codimension-1 subcube $\mathcal{C}$. (The dependence upon $\epsilon$ here is almost sharp, viz., sharp up to a factor of $\Theta(\log(1/\epsilon))$). Bollob\'as, Leader and Riordan (unpublished) proved an analogous result for $|\mathcal{F}| \in \{2^{n-2},2^{n-3}\}$, also using a Fourier-analytic argument. Samorodnitsky \cite{Samorodnitsky09} used a result of Keevash~\cite{Keevash08} on the structure of $r$-uniform hypergraphs with small shadows, to prove a stability result for all $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$ with $\log_2|\mathcal{F}| \in \mathbb{N}$ (i.e., all sizes for which Corollary~\ref{cor:edge-iso} is tight), under the rather strong condition $|\partial \mathcal{F}| \leq (1+O(1/n^4))|\partial \mathcal{L}|$. In \cite{Ellis}, the first author proved the following stability result (which implies the above results), using a recursive approach and an inequality of Talagrand \cite{Talagrand} (which was proved via Fourier analysis).
\begin{thm}[\cite{Ellis}]
\label{thm:e}
There exists an absolute constant $c>0$ such that the following holds. Let $0 \leq \delta < c$. If $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$ with $|\mathcal{F}| = 2^{d}$ for some \(d \in \mathbb{N}\), and $|\mathcal{F} \Delta \mathcal{C}| \geq \delta 2^d$ for all $d$-dimensional subcubes $\mathcal{C} \subset \mathcal{P}\left(\left[n\right]\right)$, then
$$|\partial \mathcal{F}| \geq |\partial \mathcal{C}| +2^d \delta \log_{2}(1/\delta).$$
\end{thm}
As observed in \cite{Ellis}, this result is best-possible (except for the condition $0 \leq \delta < c$, which was conjectured to be unnecessary in \cite{Ellis}).
In \cite{LOL}, we obtain the following stability version of Theorem \ref{thm:edge-iso}, which applies to families of arbitrary size (not just a power of 2), and which is sharp up to an absolute constant factor.
\begin{thm}
\label{thm:full-stability}
There exists an absolute constant $C>0$ such that
the following holds. If $\mathcal{F}\subset \mathcal{P}\left(\left[n\right]\right)$ and $\mathcal{L} \subset \mathcal{P}([n])$ is the initial segment of the lexicographic ordering of size $|\mathcal{F}|$, then there exists an automorphism $\sigma$ of $Q_n$ such that
$$|\mathcal{F}\, \Delta\, \sigma(\mathcal{L})| \leq C(|\partial \mathcal{F}| - |\partial \mathcal{L}|).$$
\end{thm}
The proof uses only combinatorial tools, but is much more involved than the proof of Theorem \ref{thm:e} in \cite{Ellis}.
\subsection{Influences of Boolean functions}
An alternative viewpoint on the edge isoperimetric inequality, which we will use throughout the paper, is via \emph{influences} of Boolean functions. For a function $f:\{0,1\}^n \rightarrow \{0,1\}$, the influence of the $i$th coordinate on $f$ is defined by
\[
I_i[f] := \Pr_{x \in \{0,1\}^n}[f(x) \neq f(x \oplus e_i)],
\]
where $x \oplus e_i$ is obtained from $x$ by flipping the $i$th coordinate, and the probability is taken with respect to the uniform measure on $\{0,1\}^n$. The \emph{total influence} of the function is
\[
I[f] := \sum_{i=1}^n I_i[f].
\]
Over the last thirty years, many results have been obtained on the influences of Boolean functions, and have proved extremely useful in such diverse fields as theoretical computer science, social choice theory and statistical physics, as well as in combinatorics (see, e.g., the survey~\cite{Kalai-Safra}).
\medskip
It is easy to see that the total influence of a function $f$ is none other than the size of the edge boundary of the set $A(f)=\{x\in \{0,1\}^n:\ f(x)=1\}$, appropriately normalised: viz., $I[f] = |\partial (A(f))|/2^{n-1}$. Hence, Corollary~\ref{cor:edge-iso} has the following reformulation in terms of Boolean functions and influences:
\begin{prop}[The weak edge isoperimetric inequality for $Q_n$ -- influence version]
\label{cor:edge-iso-inf}
If $f:\{0,1\}^n \rightarrow \{0,1\}$ is a Boolean function then
\begin{equation}\label{Eq:Main-unif}
I[f] \geq 2\mathbb{E}[f]\log_2(1/\mathbb{E}[f]).
\end{equation}
\end{prop}
\noindent Theorem~\ref{thm:e} can be restated similarly.
\subsection{The biased measure on the discrete cube}
For $p\in [0,1]$, the {\em $p$-biased measure on $\mathcal{P}([n])$} is defined by
$$\mu_p^{(n)}(S) = p^{|S|} (1-p)^{n-|S|}\quad \forall S \subset [n].$$
In other words, we choose a random subset of $[n]$ by including each $j \in [n]$ independently with probability $p$. When $n$ is understood, we will omit the superscript $(n)$, writing $\mu_p = \mu_p^{(n)}$.
The definition of influences with respect to the biased measure is, naturally,
\[
I_i^p[f] := \Pr_{x \sim \mu_p}[f(x) \neq f(x \oplus e_i)],
\]
and $I^p[f] := \sum_{i=1}^n I_i^p[f]$. We abuse notation slightly and write $\mu_p(f):=\mathbb{E}_{\mu_p}[f]$. We remark that we may write $I^p[f] = \mu_p(\partial A(f))$, where we define the measure $\mu_p$ on subsets of $E(Q_n)$ by $\mu_p(\{x,x\oplus e_i\}) = p^{\sum_{j \neq i} x_j}(1-p)^{n-1-\sum_{j \neq i}x_j}$. (Note that $\mu_p(E(Q_n)) = n$, so $\mu_p$ is not a probability measure on $E(Q_n)$ unless $n=1$.)
\medskip
Many of the applications of influences (e.g., to the study of percolation \cite{BKS}, threshold phenomena in random graphs \cite{Bourgain99,Friedgut-SAT}, and hardness of approximation \cite{Dinur-Safra}) rely upon the use of the biased measure on the discrete cube. As a result, many of the central results on influences have been generalized to the biased setting (e.g. \cite{Friedgut98,Friedgut-Kalai,Hatami12}), and the edge isoperimetric inequality is no exception. The following `biased' generalization of Proposition~\ref{cor:edge-iso-inf} is considered folklore (see~\cite{KK06}).
\begin{thm}[The weak biased edge isoperimetric inequality for $Q_n$]
\label{thm:edge-iso-biased}
If $f:\{0,1\}^n \rightarrow \{0,1\}$ is a Boolean function, and $0 < p \leq 1/2$, then
\begin{equation}\label{Eq:Main}
pI^p[f] \geq \mu_p(f)\log_p(\mu_p(f)).
\end{equation}
The same statement holds for all $p \in (0,1)$ if $f$ is monotone increasing, i.e. if $(\forall i \in [n],\ x_i \leq y_i) \Rightarrow f(x) \leq f(y)$.
\end{thm}
The proof (presented in~\cite{KK06}) is an easy inductive argument.
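To see that the inequality is tight, consider the monotone subcube $f = 1_{\{x_1 = \cdots = x_d = 1\}}$. Here $\mu_p(f) = p^d$, and $I^p_i[f] = p^{d-1}$ for each $i \in [d]$ (flipping $x_i$ changes the value of $f$ precisely when the other $d-1$ fixed coordinates all equal $1$), so
$$pI^{p}[f] = dp^{d} = \mu_p(f)\log_p(\mu_p(f)),$$
and (\ref{Eq:Main}) holds with equality.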
\subsection{A stability version of the biased edge isoperimetric inequality}
The first main result of this paper is the following stability version of Theorem~\ref{thm:edge-iso-biased}.
\begin{thm}
\label{thm:skewed-iso-stability} There exist absolute constants $c_0,C_1 >0$ such that the following holds. Let $0<p\leq\frac{1}{2}$, and let
$\epsilon\leq c_{0}/\ln(1/p)$. Let $f\colon\left\{ 0,1\right\} ^{n}\to\left\{ 0,1\right\} $
be a Boolean function such that
\[
pI^{p}[f]\leq\mu_{p}(f)\left(\log_{p}(\mu_{p}(f))+\epsilon\right).
\]
Then there exists a subcube $S\subset\{0,1\}^{n}$ such that
\begin{equation}
\mu_{p}(f\Delta1_{S})\leq C_1 \frac{\epsilon'}{\ln\left(1/\epsilon'\right)}\mu_{p}(f),\label{eq:conc-1}
\end{equation}
where $\epsilon':=\epsilon \ln(1/p)$, and $f \Delta 1_S := \{x:f(x) \neq 1_S(x)\}$.
\end{thm}
\begin{remark}
\label{remark:non-mono}
It is straightforward to check that there exists an absolute constant $C>0$ such that if $0 < p \leq 1/2 - C\epsilon$, then the subcube $S$ in the conclusion of Theorem \ref{thm:skewed-iso-stability} must be monotone increasing. On the other hand, there exists an absolute constant $c>0$ such that if $1/2-c\epsilon \leq p \leq 1/2$, then one cannot demand that the subcube $S$ be monotone increasing (as can be seen by taking $f = 1_{\{x_1=0\}}$).
\end{remark}
\noindent If we assume further that $f$ is monotone increasing, then the above theorem can be extended to $p>1/2$.
\begin{thm}
\label{thm:mon-iso-stability} For any $\eta>0$, there exist $C_{1}=C_{1}(\eta)$,
$c_{0}=c_{0}(\eta)>0$ such that the following holds. Let $0<p\leq1-\eta$,
and let $\epsilon\leq c_{0}/\ln(1/p)$. Let $f\colon\left\{ 0,1\right\} ^{n}\to\left\{ 0,1\right\} $
be a monotone increasing Boolean function such that
\[
pI^{p}[f]\leq\mu_{p}[f]\left(\log_{p}(\mu_{p}[f])+\epsilon\right).
\]
Then there exists a monotone increasing subcube $S\subset\{0,1\}^{n}$ such that
\begin{equation}
\mu_{p}(f\Delta1_{S})\leq C_1 \frac{\epsilon'}{\ln\left(1/\epsilon'\right)}\mu_{p}(f),\label{eq:conc}
\end{equation}
where $\epsilon':=\epsilon \ln(1/p)$.
\end{thm}
As we show in Section~\ref{sec:examples}, Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability} are sharp, up to the values of the constants $c_0,C_1$. The proofs use induction on $n$, in a similar way to the proof of Theorem \ref{thm:e} in~\cite{Ellis}, but unlike in previous works, they do not use any Fourier-theoretic tools, relying only upon `elementary' (though intricate) combinatorial and analytic arguments.
\medskip
Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability} are crucial tools in a recent work of the authors~\cite{EKL16+}, which establishes
a general method for leveraging Erd\H{o}s-Ko-Rado type results in extremal combinatorics into strong stability versions,
without going into the proofs of the original results. This method is used in \cite{EKL16+} to obtain sharp (or almost-sharp) stability versions of the Erd\H{o}s-Ko-Rado theorem itself \cite{EKR}, of the seminal `complete intersection theorem' of Ahlswede and Khachatrian \cite{AK}, of Frankl's recent result on the Erd\H{o}s matching conjecture \cite{Frankl13}, of the Ellis-Filmus-Friedgut proof of the Simonovits-S\'{o}s conjecture \cite{EFF12}, and of various Erd\H{o}s-Ko-Rado type results on $r$-wise (cross)-$t$-intersecting families.
Theorem \ref{thm:mon-iso-stability} is also used in \cite{unions} by the first and last authors to obtain sharp upper bounds on the size of the union of several intersecting families of $k$-element subsets of $[n]$, where $k \leq (1/2-o(1))n$, extending results of Frankl and F\"uredi \cite{ff}.
\subsection{A biased version of the `full' edge isoperimetric inequality for monotone increasing families}
While the generalization of the `weak' edge isoperimetric inequality (i.e., Corollary~\ref{cor:edge-iso}) to the biased measure has
been known for a long time, such a generalization of the `full' edge isoperimetric inequality (i.e., Theorem~\ref{thm:edge-iso}) was hitherto unknown. In his talk at the 7th European Congress of Mathematicians \cite{Kalai16}, Kalai asked whether there is a natural generalization of Theorem~\ref{thm:edge-iso} to the measure $\mu_p$ for $p<1/2$.
We answer Kalai's question in the affirmative by showing that the most natural such generalization does not hold for arbitrary families, but does hold (even for $p>1/2$) under the additional assumption that the family is monotone increasing. (We say a family $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$ is {\em monotone increasing} if $(S \in \mathcal{F},\ S \subset T) \Rightarrow T \in \mathcal{F}$.)
In order to present our result, we first define the appropriate generalization of lexicographic families for the biased-measure setting.
For $s\in\left[2^n\right]$, we denote by $\mathcal{L}_{\frac{s}{2^{n}}}$ the lexicographic family with $s$ elements. (Of course,
$\mu_{1/2}(\mathcal{L}_{\frac{s}{2^{n}}})=s/2^n$.) Note that while in the uniform-measure case, for any $\mathcal{F} \subset \mathcal{P}([n])$ there exists a
lexicographic family $\mathcal{L}_{\frac{s}{2^{n}}}$ with the same measure, this does not hold for $p \neq 1/2$. Hence, we define families
$\mathcal{L}_{\lambda}$ for general $\lambda\in\left(0,1\right)$, as {\em limits} of families $\mathcal{L}_{\frac{s}{2^{n}}}$, where $\frac{s}{2^{n}}\to\lambda$. (See Section \ref{sec:KK} for the formal definition.) These `limit' families $\mathcal{L}_{\lambda}$ are measurable subsets of the Cantor space $\{0,1\}^{\mathbb{N}}$; they satisfy
\begin{equation}\label{eq:limit-measures} \mu_{p}\left(\mathcal{L}_{\lambda}\right) = \lim_{n \to \infty} \mu_{p}\left(\mathcal{L}_{\lfloor \lambda 2^n \rfloor/2^n}\right)\end{equation}
and
\begin{equation} \label{eq:limit-influences} I^{p}\left[\mathcal{L}_{\lambda}\right]=\lim_{n \to \infty} I^{p}\left[\mathcal{L}_{\lfloor \lambda 2^n \rfloor/2^{n}}\right].\end{equation}
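For example, $\mathcal{L}_{1/2}$ is the dictatorship $\{S:\ 1 \in S\}$, with $\mu_{p}(\mathcal{L}_{1/2}) = p$ and $I^{p}[\mathcal{L}_{1/2}] = 1$, while $\mathcal{L}_{3/4} = \{S:\ 1 \in S \text{ or } 2 \in S\}$, with $\mu_{p}(\mathcal{L}_{3/4}) = 1-(1-p)^2$ and $I^{p}[\mathcal{L}_{3/4}] = 2(1-p)$.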
In fact, the two properties (\ref{eq:limit-measures}) and (\ref{eq:limit-influences}) are all we need to state and prove our theorem, which is as follows.
\begin{thm}
\label{thm:Monotone}Let $\lambda,p\in\left(0,1\right)$, and let $\mathcal{F}\subset\mathcal{P}\left(\left[n\right]\right)$
be a monotone increasing family with $\mu_{p}\left(\mathcal{F}\right)=\mu_{p}\left(\mathcal{L}_{\lambda}\right)$.
Then $I^{p}\left[\mathcal{F}\right]\ge I^{p}\left[\mathcal{L}_{\lambda}\right]$.
\end{thm}
Our proof uses the Kruskal-Katona theorem~\cite{Katona66,Kruskal63} and the Margulis-Russo Lemma \cite{Margulis,Russo}. In fact, Theorem \ref{thm:edge-iso} (the `full' edge-isoperimetric inequality of Harper, Bernstein, Lindsey and Hart) follows quickly from Theorem \ref{thm:Monotone}, via a monotonization argument, so our proof of Theorem \ref{thm:Monotone} provides a new proof of Theorem \ref{thm:edge-iso}, via the
Kruskal-Katona theorem. This may be of independent interest, and may be somewhat surprising, as the Kruskal-Katona theorem is more immediately connected to the vertex-boundary of an increasing family than to its edge-boundary.
\medskip
We remark that the assertion of Theorem \ref{thm:Monotone}
is false for arbitrary (i.e., non-monotone) functions, for each value of $p \neq 1/2$. Indeed, it is easy to check that for each $p \in (0,1) \setminus \{\tfrac{1}{2}\}$, the `antidictatorship' $\mathcal{A} = \{S \subset [n]:\ 1 \notin S\}$ has $I^p[\mathcal{A}] = 1 < I^p[\mathcal{L}_{\lambda}]$, where $\lambda$ is such that $\mu_p(\mathcal{L}_{\lambda}) = 1-p\ (= \mu_p(\mathcal{A}))$.
\subsection{Organization of the paper}
In Section \ref{sec:prelim}, we outline some notation and present an inductive proof of Theorem \ref{thm:edge-iso-biased}, some of whose ideas and components we will use in the sequel. In Section \ref{sec:main} (the longest part of the paper), we prove Theorems~\ref{thm:skewed-iso-stability} and~\ref{thm:mon-iso-stability}. In Section \ref{sec:examples}, we give examples showing that Theorems \ref{thm:skewed-iso-stability} and~\ref{thm:mon-iso-stability} are sharp (in a certain sense). In Section \ref{sec:KK}, we prove Theorem \ref{thm:Monotone} and show how to use it to deduce Theorem \ref{thm:edge-iso}. We conclude the paper with some open problems in Section~\ref{sec:open}.
\section{An inductive proof of Theorem~\ref{thm:edge-iso-biased}}
\label{sec:prelim}
In this section, we outline some notation and terminology, and present a simple inductive proof of Theorem~\ref{thm:edge-iso-biased}; components and ideas from this proof will be used in the proofs of Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability}.
\subsection{Notation and terminology}
When the `bias' $p$ (of the measure $\mu_p$) is clear from the context (including throughout Sections~\ref{sec:prelim} and~\ref{sec:main}), we will sometimes omit it from our notation, i.e. we will sometimes write $\mu(f) := \mu_p(f)$ and $I[f]:=I^{p}[f]$. Moreover, when the Boolean function $f$ is clear from the context, we will sometimes omit it from our notation, i.e. we will sometimes write $\mu := \mu(f)$, $I := I[f]$ and $I_i := I_i[f]$. If $S \subset \{0,1\}^n$, we write $1_{S}$ for its indicator function, i.e. the Boolean function on $\{0,1\}^n$ taking the value $1$ on $S$ and $0$ outside $S$. A {\em dictatorship} is a Boolean function $f:\{0,1\}^n \to \{0,1\}$ of the form $f = 1_{\{x_j=1\}}$ for some $j \in [n]$; an {\em antidictatorship} is one of the form $f = 1_{\{x_j=0\}}$. Abusing notation slightly, we will sometimes identify a family $\mathcal{F} \subset \mathcal{P}([n])$ with the corresponding indicator function $1_{\{x \in \{0,1\}^n:\ \{i \in [n]:\ x_i=1\} \in \mathcal{F}\}}$. We use the convention $0\log_{p}(0)=0$ (for all $p \in (0,1)$); this turns $x\mapsto x\log_{p}(x)$ into a continuous function on $[0,1]$.
If $f:\{0,1\}^n \to \{0,1\}$ and $i \in [n]$, we define the function $f_{i\to0}:\{0,1\}^{[n]\setminus\{i\}}\to\{0,1\}$ by $f_{i\to0}(y)=f(x)$, where $x_{i}=0$ and $x_{j}=y_{j}$ for all $j\in[n]\setminus\{i\}$. In other words, $f_{i\to0}$ is the restriction of $f$ to the lower half-cube $\{x\in\{0,1\}^{n}:x_{i}=0\}$. We define $f_{i\to1}$ similarly. We write
\begin{align*}
\mu_i^- & = \mu_{i}^{-}(f):=\mu_{p}(f_{i\to0}),\\
\mu_i^+ & = \mu_{i}^{+}(f):=\mu_{p}(f_{i\to1}),\\
I_i^- & = I_{i}^{-}[f]:=I^{p}[f_{i\to0}],\\
I_i^+ & = I_{i}^{+}[f]:=I^{p}[f_{i\to1}].
\end{align*}
Note that
\begin{equation}
p\mu_{i}^{+}(f)+(1-p)\mu_{i}^{-}(f)=\mu(f) \label{eq:basic1}
\end{equation}
and that
\begin{equation}
I\left[f\right]=I_{i}\left[f\right]+pI_{i}^{+}\left[f\right]+\left(1-p\right)I_{i}^{-}\left[f\right].\label{eq:Basic 2}
\end{equation}
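Both identities follow by conditioning on the $i$th coordinate: a $\mu_p$-random $x$ has $x_i=1$ with probability $p$, whence
\[
\mu(f)=p\,\mu_{p}(f_{i\to1})+(1-p)\,\mu_{p}(f_{i\to0})=p\mu_{i}^{+}+(1-p)\mu_{i}^{-},
\]
which is (\ref{eq:basic1}); similarly, $I_{j}[f]=pI_{j}[f_{i\to1}]+(1-p)I_{j}[f_{i\to0}]$ for each $j\neq i$, and summing over $j \neq i$ and adding $I_{i}[f]$ yields (\ref{eq:Basic 2}).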
\subsection{A proof of Theorem~\ref{thm:edge-iso-biased}}
The proof uses induction on $n$ together with equations (\ref{eq:basic1}) and (\ref{eq:Basic 2}), and the following technical lemma.
\begin{lem}
\label{Lem:ind-step}Let $p\in\left(0,1\right)$, and let $F,G,H\colon\left[0,1\right]\times\left[0,1\right]\to\left[0,\infty\right)$
be the functions defined by
\begin{align*}
F\left(x,y\right) & =px\log_{p}x+\left(1-p\right)y\log_{p}y+px-py,\\
G\left(x,y\right) & =\left(px+\left(1-p\right)y\right)\log_{p}\left(\left(px+\left(1-p\right)y\right)\right),\\
H\left(x,y\right) & =px\log_{p}x+\left(1-p\right)y\log_{p}y+py-px.
\end{align*}
\begin{enumerate}
\item If $x\ge y\ge0$, then $F\left(x,y\right)\ge G\left(x,y\right)$.
\item If $y\ge x\ge0$ and $p\le\frac{1}{2}$, then $H\left(x,y\right)\ge G\left(x,y\right)$.
\end{enumerate}
\end{lem}
\begin{proof}[Proof of Lemma \ref{Lem:ind-step}]
Clearly, for all $y \geq 0$ we have $F\left(y,y\right)=G\left(y,y\right)=H\left(y,y\right)$, and for all $x,y\geq0$, we have
\begin{align}
\frac{\partial F}{\partial x} & =p\log_{p}x+\frac{p}{\ln p}+p=p\log_{p}(px)+\frac{p}{\ln p},\nonumber \\
\frac{\partial G}{\partial x} & =p\log_{p}\left(px+\left(1-p\right)y\right)+\frac{p}{\ln p},\label{eq:partial-devs}\\
\frac{\partial H}{\partial y} & =\left(1-p\right)\log_{p}y+p+\frac{1-p}{\ln p}\nonumber\\
& =\left(1-p\right)\log_{p}(\left(1-p\right)y)-\left(1-p\right)\log_{p}\left(1-p\right)+p+\frac{1-p}{\ln p},\nonumber \\
\frac{\partial G}{\partial y} & =\left(1-p\right)\log_{p}\left(px+\left(1-p\right)y\right)+\frac{1-p}{\ln p}.\nonumber
\end{align}
Clearly, we have $\frac{\partial F(x,y)}{\partial x}\geq\frac{\partial G(x,y)}{\partial x}$
for all $x,y\ge0$, and therefore $F\left(x,y\right)\ge G\left(x,y\right)$
for all $x\ge y\ge0$, proving (1). We assert that similarly, $\frac{\partial H(x,y)}{\partial y}\ge\frac{\partial G(x,y)}{\partial y}$
for all $x,y\geq0$, if $p \leq 1/2$. (This will imply (2).) Indeed,
\begin{align*}
\frac{\partial H}{\partial y} & =\left(1-p\right)\log_{p}(\left(1-p\right)y)-\left(1-p\right)\log_{p}\left(1-p\right)+p + \frac{1-p}{\ln p}\\
& \ge\frac{\partial G}{\partial y}+p-\left(1-p\right)\log_{p}\left(1-p\right).
\end{align*}
Hence, it suffices to prove the following.
\begin{claim}
\label{claim on alpha(p)}Define $K:(0,1) \to \mathbb{R};\ K(p)=p-\left(1-p\right)\log_{p}\left(1-p\right)$.
Then $K\left(p\right)\ge0$ for all $p\in\left(0,\frac{1}{2}\right]$. \end{claim}
\begin{proof}
It suffices to show that $\alpha(p):=K\left(p\right)\ln(1/p) = -p\ln p+(1-p)\ln(1-p)$
is non-negative for all $p\in(0,1/2]$. Note that $\alpha$ extends continuously to $[0,1/2]$, with $\alpha(0)=\alpha(1/2)=0$. We have
$$\alpha''(x)=\frac{1}{1-x}-\frac{1}{x} < 0 \quad \forall x \in (0,1/2),$$
so $\alpha$ is strictly concave on $[0,1/2]$. A concave function on a closed interval is bounded below by the minimum of its values at the endpoints; since $\alpha$ vanishes at both endpoints, it follows that $\alpha(x) \geq 0$ for all $x \in [0,1/2]$, proving the claim.
\end{proof}
This completes the proof of Lemma~\ref{Lem:ind-step}.
\end{proof}
We can now prove Theorem \ref{thm:edge-iso-biased}.
\begin{proof}[Proof of Theorem~\ref{thm:edge-iso-biased}.]
It is easy to check that the theorem holds for $n=1$. Let $n \geq 2$, and suppose the statement of the theorem holds when $n$ is replaced by $n-1$. Let $f:\{0,1\}^n \to \{0,1\}$. Choose any $i \in [n]$. We split into two cases.
\begin{caseenv}
\item[Case (a)] $\mu_{i}^{-}\le\mu_{i}^{+}$.
\end{caseenv}
Applying the induction hypothesis to the functions $f_{i\to0}$
and $f_{i\to1}$, and using the fact that $I_{i}[f] \ge\mu_{i}^{+}-\mu_{i}^{-}$, we obtain
\begin{align*}
pI & =(1-p)pI_{i}^{-}[f]+p^{2}I_{i}^{+}[f]+pI_{i}[f]\\
& \ge(1-p)\mu_{i}^{-} \log_{p}(\mu_{i}^{-}) +p\mu_{i}^{+} \log_{p}(\mu_{i}^{+})+p\left(\mu_{i}^{+}-\mu_{i}^{-}\right)\\
& =F\left(\mu_i^{+},\mu_{i}^{-}\right)\ge G\left(\mu_{i}^{+},\mu_{i}^{-}\right)=\mu\log_{p}\left(\mu\right),
\end{align*}
where $F$ and $G$ are as defined in Lemma \ref{Lem:ind-step}.
\begin{caseenv}
\item[Case (b)] $\mu_{i}^{-}\ge\mu_{i}^{+}$.
\end{caseenv}
The proof in this case is similar: applying the induction hypothesis to the functions $f_{i\to0}$
and $f_{i\to1}$, and using the fact that $I_{i}[f] \ge\mu_{i}^{-}-\mu_{i}^{+}$, we obtain
\begin{align*}
pI & =(1-p)pI_{i}^{-}[f]+p^{2}I_{i}^{+}[f]+pI_{i}[f]\\
& \ge(1-p)\mu_{i}^{-}\log_{p}(\mu_{i}^{-})+p\mu_{i}^{+}\log_{p}(\mu_{i}^{+})+p\left(\mu_{i}^{-}-\mu_{i}^{+}\right)\\
& =H\left(\mu_{i}^{+},\mu_{i}^{-}\right)\ge G\left(\mu_{i}^{+},\mu_{i}^{-}\right)=\mu\log_{p}\left(\mu\right),
\end{align*}
using the fact that $p \leq 1/2$.
\end{proof}
We remark that the above proof shows that if $f$ is monotone increasing, then the statement of Theorem~\ref{thm:edge-iso-biased} holds for all $p \in (0,1)$. (Indeed, if $f$ is monotone increasing, then $\mu_{i}^{-}\le\mu_{i}^{+}$ for all $i\in\left[n\right]$, so the assumption $p \leq 1/2$ is not required.)
\section{Proofs of the `biased' isoperimetric stability theorems}
\label{sec:main}
In this section, we prove Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability}. As the proofs of the two theorems follow the same strategy, we present them in parallel.
The proof of Theorem~\ref{thm:skewed-iso-stability} (and similarly, of Theorem~\ref{thm:mon-iso-stability}) consists of five steps. Assume that $f$ satisfies the assumptions of the theorem.
\begin{enumerate}
\item We show that for each $i \in [n]$, either $I_{i}[f]$ is small or else $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\}$
is `somewhat' small. In other words, the influences of $f$ are similar to the influences of a subcube.
\item We show that $\mu$ must be either very close to 1 or `fairly' small, i.e., bounded away from 1 by a constant. (In the proof of Theorem~\ref{thm:mon-iso-stability}, the constant may depend on $\eta$.)
\item We show that unless $\mu$ is very close to 1, there exists $i \in [n]$ such
that $I_{i}[f]$ is large. This implies that $\min\{\mu_{i}^{-},\mu_{i}^{+}\}$ is `somewhat' small.
\item We prove two `bootstrapping' lemmas saying that if $\mu_{i}^{-}$ is `somewhat'
small, then it must be `very' small, and that if $\mu_{i}^{+}$ is `somewhat'
small, then it must be `very' small. This implies that $f$ is `very' close to being contained in a dictatorship or an antidictatorship.
\item Finally, we prove each theorem by induction on $n$.
\end{enumerate}
\medskip
\noindent From now on, we let $f\colon\left\{ 0,1\right\} ^{n}\to\left\{ 0,1\right\}$ be a Boolean function such that $pI^{p}[f]\leq\mu_{p}(f)(\log_{p}(\mu_{p}(f))+\epsilon)$. By reducing $\epsilon$ if necessary, we may assume that $pI[f] = \mu (\log_p(\mu) + \epsilon)$.
\subsection{Relations between the influences of $f$ and the influences of its
restrictions $f_{i\to0},f_{i\to1}$}
\noindent We define $\epsilon_{i}^{-},\epsilon_{i}^{+}$ by
\begin{align*}
pI_{i}^{-}=\mu_{i}^{-}\left(\log_{p}(\mu_{i}^{-})+\epsilon_{i}^{-}\right),\qquad pI_{i}^{+}=\mu_{i}^{+}\left(\log_{p}(\mu_{i}^{+})+\epsilon_{i}^{+}\right).
\end{align*}
Note that Theorem \ref{thm:edge-iso-biased} implies that $\epsilon_{i}^{-},\epsilon_{i}^{+}\geq 0$. We define the functions $F,G,H,K$ as in the proof of Theorem \ref{thm:edge-iso-biased}.
We would now like to express the fact that $I[f]$ is
small in terms of $\epsilon_{i}^{-},\epsilon_{i}^{+},\mu_{i}^{-},\mu_{i}^{+}$.
For each $i\in\left[n\right]$ such that $\mu_{i}^{-}\le\mu_{i}^{+}$, we have
\begin{align}
\begin{split}\mu(\log_{p}(\mu)+\epsilon) & =pI[f]=(1-p)pI_{i}^{-}+p^{2}I_{i}^{+}+pI_{i}[f]\\
& =(1-p)\mu_{i}^{-}\left(\log_{p}(\mu_{i}^{-})+\epsilon_{i}^{-}\right)+p\mu_{i}^{+}\left(\log_{p}(\mu_{i}^{+})+\epsilon_{i}^{+}\right)\\
& +p(\mu_{i}^{+}-\mu_{i}^{-})+p(I_{i}[f]-\mu_{i}^{+}+\mu_{i}^{-});
\end{split}
\label{eq:relation}
\end{align}
rearranging (\ref{eq:relation}) gives
\begin{align}
\epsilon_{i}': & =\mu\epsilon-p\mu_{i}^{+}\epsilon_{i}^{+}-\left(1-p\right)\mu_{i}^{-}\epsilon_{i}^{-}\nonumber \\
& = p\mu_i^+(\epsilon-\epsilon_i^+) + (1-p)\mu_i^-(\epsilon-\epsilon_i^-) \nonumber\\
& = F\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)+p\left(I_{i}[f]-(\mu_{i}^{+}-\mu_{i}^{-})\right).\label{eq:rearranged-1}
\end{align}
Similarly, for each $i\in [n]$ such that $\mu_{i}^{-}\ge\mu_{i}^{+}$, we have
\begin{align}
\epsilon_{i}' & :=\mu\epsilon-p\mu_{i}^{+}\epsilon_{i}^{+}-\left(1-p\right)\mu_{i}^{-}\epsilon_{i}^{-}\nonumber \\
& = p\mu_i^+(\epsilon-\epsilon_i^+) + (1-p)\mu_i^-(\epsilon-\epsilon_i^-) \nonumber\\
& = H\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)+p\left(I_{i}[f]-(\mu_{i}^{-}-\mu_{i}^{+})\right).\label{eq:rearrenged-2}
\end{align}
This allows us to deduce two facts about the structure of $f$.
\begin{itemize}
\item By Lemma \ref{Lem:ind-step}, we have $\epsilon_{i}'\geq0$ for all $i \in [n]$. This implies that
either $\epsilon_{i}^{+}\le\epsilon$ or $\epsilon_{i}^{-}\le\epsilon$.
Together with the induction hypothesis, this will imply (in Section \ref{subsec:ind}) that either $f_{i\to0}$
or $f_{i\to1}$ is structurally close to a subcube.
\item $F\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)$
(resp. $H\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)$)
is small whenever $\mu_{i}^{+}\ge\mu_{i}^{-}$ (resp. $\mu_{i}^{-}\ge\mu_{i}^{+}$).
Note that the proof of Lemma \ref{Lem:ind-step} shows that whenever
$\mu_{i}^{+}\ge\mu_{i}^{-}$ (resp. $\mu_{i}^{-}\ge\mu_{i}^{+}$)
then $F\left(\mu_{i}^{+},\mu_{i}^{-}\right)$ (resp. $H\left(\mu_{i}^{+},\mu_{i}^{-}\right)$)
is equal to $G\left(\mu_{i}^{+},\mu_{i}^{-}\right)$ only if $\mu_{i}^{+}=\mu_{i}^{-}$
or $\mu_{i}^{-}=0$. We will later show (in Claims \ref{claim:2-cases-constant-p}-\ref{claim:2-cases-small-p}) that if $F\left(\mu_{i}^{+},\mu_{i}^{-}\right)$ (resp. $H\left(\mu_{i}^{+},\mu_{i}^{-}\right)$) is approximately
equal to $G\left(\mu_{i}^{+},\mu_{i}^{-}\right)$, then either $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} $
is small or else $I_{i}[f]$ is small.
\end{itemize}
The following lemma will be used to relate $\mu_i^{+}$ and $\mu_i^-$ to $F(\mu_{i}^{+},\mu_{i}^{-})-G(\mu_{i}^{+},\mu_{i}^{-})$ (or to
$H(\mu_{i}^{+},\mu_{i}^{-})-G(\mu_{i}^{+},\mu_{i}^{-})$), in a more convenient
way.
\begin{lem}
\label{lemma:analysis-basic-functions} If $0 < p < 1$ and $x\ge y\ge0$, then
\[
F\left(x,y\right)-G\left(x,y\right) \geq p\left(x-y\right)\log_{p}\left(\frac{px}{px+\left(1-p\right)y}\right).
\]
If $0 < p\le \tfrac{1}{2}$ and $y\ge x\ge0$, then
\[
H\left(x,y\right)-G\left(x,y\right) \geq \left(1-p\right)\left(y-x\right)\log_{p}\left(\frac{\left(1-p\right)y}{px+\left(1-p\right)y}\right).
\]
If $0 < p\le e^{-2}$ and $y\ge x \ge0$, then
\[
H\left(x,y\right)-G\left(x,y\right) \geq \tfrac{1}{2}p \left(y-x\right).
\]
\end{lem}
\begin{proof}
We show that
\begin{align}
\frac{\partial}{\partial t}\left(F\left(t,y\right)-G\left(t,y\right)\right) & \ge p\log_{p}\left(\frac{px}{px+\left(1-p\right)y}\right)\ \forall y \leq t \leq x, \ 0 < p < 1, \label{eq: calc(1)}\\
\frac{\partial}{\partial t}\left(H\left(t,y\right)-G\left(t,y\right)\right) & \ge\left(1-p\right)\log_{p}\left(\frac{\left(1-p\right)y}{px+\left(1-p\right)y}\right) \ \forall x \leq t \leq y,\ 0 < p \leq 1/2, \label{eq:calc(2)}\\
\frac{\partial}{\partial t}\left(H\left(t,y\right)-G\left(t,y\right)\right) & \ge \tfrac{1}{2}p, \ \forall x \leq t \leq y,\ 0 < p \leq e^{-2}.\label{eq:calc(3)}
\end{align}
These inequalities will complete the proof of the lemma,
by the Fundamental Theorem of Calculus.
Using (\ref{eq:partial-devs}), we have
\begin{align*}
\frac{\partial }{\partial t}\left(F\left(t,y\right)-G\left(t,y\right)\right) & =p\log_{p}\left(pt\right)+\frac{p}{\ln\left(p\right)}-p\log_{p}\left(\left(1-p\right)y+pt\right)-\frac{p}{\ln\left(p\right)}\\
& =p\log_{p}\left(\frac{pt}{\left(1-p\right)y+pt}\right)\ge p\log_{p}\left(\frac{px}{(1-p)y+px}\right),
\end{align*}
proving (\ref{eq: calc(1)}). Similarly, if $p \leq 1/2$, then
\begin{align*}
\frac{\partial }{\partial t}\left(H\left(t,y\right)-G\left(t,y\right)\right) & \ge\left(1-p\right)\log_{p}\left(\frac{\left(1-p\right)y}{px+\left(1-p\right)y}\right)+K\left(p\right)\\
& \ge\left(1-p\right)\log_{p}\left(\frac{\left(1-p\right)y}{px+\left(1-p\right)y}\right),
\end{align*}
proving (\ref{eq:calc(2)}). It is easy to check that for all $p\le e^{-2}$, we have $K\left(p\right)\ge\frac{p}{2}$. Hence,
\[
\frac{\partial }{\partial t}\left(H\left(t,y\right)-G\left(t,y\right)\right)\ge K\left(p\right)\ge\tfrac{p}{2},
\]
proving (\ref{eq:calc(3)}).
\end{proof}
\subsection{Either $I_{i}[f]$ is small, or $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} $
is small}
We now show that the influences of $f$ are similar to the influences of a subcube. Note that if $f=1_{S}$ for a subcube $S = \{x \in \{0,1\}^n:\ x_i = a_i\ \forall i \in T\}$, where $T \subset [n]$ and $a_i \in \{0,1\}$ for all $i \in T$, then $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} =0$
for each $i\in T$, and $I_{i}[f]=0$ for each $i\notin T$. We prove that an approximate version of this statement holds, under our hypotheses.
We start with the simplest case, which is $\zeta<p\le\frac{1}{2}$ for some $\zeta >0$.
\begin{claim}
\label{claim:2-cases-constant-p} Let $\zeta >0$. There exists $C_{2}=C_{2}(\zeta)>0$ such that if $\zeta \leq p \leq 1/2$, then for each $i\in[n]$, one of the following holds.
\begin{description}
\item [{Case (1)}] \label{case:small-influence-1-1} We have $I_{i}[f]\leq C_{2}\epsilon_{i}'$,
and $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} \geq\left(1-C_{2}\epsilon\right)\mu$.
\item [{Case (2)}] \label{case:large-influence-1-1} We have $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} \le C_{2}\epsilon_{i}'$,
and $I_{i}[f] \geq\left(1-C_{2}\epsilon\right)\mu$.
\end{description}
\end{claim}
\begin{proof}[Proof of Claim \ref{claim:2-cases-constant-p}.]
By Lemma \ref{lemma:analysis-basic-functions} and (\ref{eq:rearranged-1}), if $\mu_i^-\leq \mu_i^+$ then
\begin{align}
p\left(\mu_{i}^{+}-\mu_{i}^{-}\right)\log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right) & \le F\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right) \nonumber\\
& \leq\epsilon_{i}'-pI_{i}[f]-p\mu_{i}^{-}+p\mu_{i}^{+}.\label{eq:constant 2-cases(1)}
\end{align}
By Lemma \ref{lemma:analysis-basic-functions} and (\ref{eq:rearrenged-2}), if $\mu_i^-\geq \mu_i^+$ then
\begin{align}
\left(1-p\right)\left(\mu_{i}^{-}-\mu_{i}^{+}\right)\log_{p}\left(\frac{\left(1-p\right)\mu_{i}^{-}}{\mu}\right) &\le H\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right) \nonumber\\
& \leq\epsilon_{i}'-pI_{i}[f]-p\mu_{i}^{+}+p\mu_{i}^{-}.\label{eq:constant 2-cases(2)}
\end{align}
Since the right-hand sides of (\ref{eq:constant 2-cases(1)}) and (\ref{eq:constant 2-cases(2)}) are non-negative, we have
\begin{equation}
I_{i}[f] -\left|\mu_{i}^{+}-\mu_{i}^{-}\right|\le\tfrac{1}{p} \epsilon_{i}'\leq \tfrac{1}{\zeta} \epsilon_{i}'.\label{eq:Inf_i is like monotone}
\end{equation}
We now split into two cases.
\begin{caseenv}
\item[Case (a):] $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} \ge\frac{\mu}{2}$.
\end{caseenv}
In this case, we have
$$p\log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right) = \Omega_{\zeta}(1),\quad \left(1-p\right)\log_{p}\left(\frac{\left(1-p\right)\mu_{i}^{-}}{\mu}\right) = \Omega_{\zeta}(1),$$
so
\[
\left|\mu_{i}^{+}-\mu_{i}^{-}\right| = O_{\zeta}\left(\epsilon'_{i}\right),
\]
by (\ref{eq:constant 2-cases(1)}) and (\ref{eq:constant 2-cases(2)}).
Equation (\ref{eq:Inf_i is like monotone}) now implies that $I_{i}[f] = O_{\zeta}\left(\epsilon'_{i}\right)$. Therefore, $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} \ge\mu-I_{i}[f] = \mu-O_{\zeta}(\epsilon_{i}^{'}) = \mu - O_{\zeta}(\epsilon \mu) = \mu(1-O_{\zeta}(\epsilon))$. Hence, Case (1) of the claim occurs.
\begin{caseenv}
\item[Case (b):] $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} \le\frac{\mu}{2}$.
\end{caseenv}
Firstly, suppose in addition that $\mu_i^- \leq \mu_i^+$, so that $\mu_i^- \leq \mu/2$. Then $p(\mu_{i}^{+}-\mu_{i}^{-}) = \Omega_{\zeta}\left(\mu\right)$, so (\ref{eq:constant 2-cases(1)}) implies that
$$\log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right) = O_{\zeta}( \epsilon_{i}^{'} / \mu).$$
Hence, $\ln\left(\frac{\mu}{p\mu_{i}^{+}}\right) = O_{\zeta}(\epsilon_{i}'/\mu)$, and therefore
\[
1+\frac{\left(1-p\right)\mu_{i}^{-}}{p\mu_{i}^{+}}=\frac{\mu}{p\mu_{i}^{+}} = \exp\left(O_{\zeta}(\epsilon_{i}^{'}/\mu)\right)=1+O_{\zeta}(\epsilon_{i}^{'}/\mu).
\]
Therefore, $\mu_{i}^{-} = O_{\zeta}(\epsilon_{i}^{'})\frac{p \mu_{i}^{+}}{(1-p)\mu} = O_{\zeta}(\epsilon_{i}^{'})$.
We now have $I_{i}[f]\ge\mu-\mu_{i}^{-} = \mu - O_{\zeta}(\epsilon_i') = \mu - O_{\zeta}(\epsilon \mu) = (1-O_{\zeta}(\epsilon))\mu$. Hence, Case (2) of the claim occurs.
\medskip
Secondly, suppose in addition that $\mu_i^+ \leq \mu_i^-$, so that $\mu_i^+ \leq \mu/2$. Then we have $(1-p)(\mu_{i}^{-}-\mu_{i}^{+}) = \Omega\left(\mu\right)$, so (\ref{eq:constant 2-cases(2)}) implies that
$$\log_{p}\left(\frac{(1-p)\mu_{i}^{-}}{\mu}\right) = O( \epsilon_{i}^{'} / \mu).$$
Hence, $\ln\left(\frac{\mu}{(1-p)\mu_{i}^{-}}\right) = O_{\zeta}(\epsilon_{i}'/\mu)$, and therefore
\[
1+\frac{p\mu_{i}^{+}}{(1-p)\mu_{i}^{-}}=\frac{\mu}{(1-p)\mu_{i}^{-}} = \exp\left(O_{\zeta}(\epsilon_{i}^{'}/\mu)\right)=1+O_{\zeta}(\epsilon_{i}^{'}/\mu).
\]
Therefore, $\mu_{i}^{+} = O_{\zeta}(\epsilon_{i}^{'})\frac{(1-p) \mu_{i}^{-}}{p \mu} = O_{\zeta}(\epsilon_{i}^{'})$. It follows that $I_{i}[f]\geq (1-O_{\zeta}(\epsilon))\mu$, so again, Case (2) of the claim must occur.
\end{proof}
We now prove a version of Claim \ref{claim:2-cases-constant-p} for monotone increasing $f$ and for all $p$ bounded away from 1. The idea of the proof
is the same, but the details are slightly messier, mainly because $p$ is no longer bounded away from $0$.
\begin{claim}
\label{claim:mon-2-cases} For any $\eta >0$, there exists $C_2 = C_2(\eta)>0$ such that the following holds. Suppose that $f$ is monotone increasing and that $0 < p \leq 1-\eta$. Let $i\in[n]$. Then one of the following
must occur.
\begin{description}
\item [{Case} (1)] We have $pI_{i}[f] \leq C_{2}\epsilon_{i}'\ln(1/p)$,
and $\mu_{i}^{-}\geq\left(1-C_{2}\epsilon\ln(1/p)\right)\mu$.
\item [{Case} (2)] We have $\mu_{i}^{-}\le C_{2}\epsilon_{i}'\ln(1/p)$,
and $pI_{i}[f] \geq\left(1-C_{2}\epsilon\ln(1/p)\right)\mu$.
\end{description}
\end{claim}
\begin{proof}
By Lemma \ref{lemma:analysis-basic-functions} and equation (\ref{eq:rearranged-1}), we have
\[
pI_{i}[f] \log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right) = p(\mu_i^+-\mu_i^-) \log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right)\le F\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)\leq\epsilon_{i}'.
\]
We now split into two cases.
\begin{caseenv}
\item[Case (a):] $\mu_{i}^{+}\leq(1-\tfrac{\eta}{2})\frac{\mu}{p}$.
\end{caseenv}
If $\mu_{i}^{+}\leq(1-\tfrac{\eta}{2})\frac{\mu}{p}$, then Case (1)
of Claim \ref{claim:mon-2-cases} must occur, provided we take $C_{2}$
to be sufficiently large. Indeed, we then have
\[
pI_{i}[f] \log_{p}(1-\tfrac{\eta}{2}) = p(\mu_i^+-\mu_i^-) \log_{p}(1-\tfrac{\eta}{2})\leq p(\mu_i^+-\mu_i^-)\log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right)\leq\epsilon_{i}',
\]
which gives $pI_{i}[f] \leq\frac{1}{\ln\left(\frac{2}{2-\eta}\right)}\epsilon_{i}'\ln(1/p)\leq C_{2}\epsilon_{i}'\ln(1/p)$,
provided we choose $C_{2}\geq1/(\ln(2/(2-\eta)))$. This in turn implies that
\[
\mu_{i}^{-}=\mu-p(\mu_{i}^{+}-\mu_{i}^{-})=\mu-pI_{i}[f]\geq\mu-C_{2}\epsilon_{i}'\ln(1/p)\geq\mu-C_{2}\epsilon\mu\ln(1/p),
\]
so Case (1) occurs, as asserted.
\begin{caseenv}
\item[Case (b):] $\mu_{i}^{+}\ge\left(1-\frac{\eta}{2}\right)\frac{\mu}{p}$.
\end{caseenv}
If $\mu_{i}^{+} \geq (1-\tfrac{\eta}{2})\frac{\mu}{p}$, then Case (2)
of Claim \ref{claim:mon-2-cases} must occur. Indeed, since $\mu_{i}^{-}\leq\mu$,
we have $pI_{i}[f]=p(\mu_{i}^{+}-\mu_{i}^{-})\geq\left(1-\tfrac{\eta}{2}-p\right)\mu\geq\tfrac{1}{2}\eta\mu$.
We now have
\[
\log_{p}\left(\frac{p\mu_{i}^{+}}{\mu}\right)\le\frac{\epsilon_{i}'}{p(\mu_i^+-\mu_i^-)}\leq\frac{2\epsilon_{i}'}{\eta\mu}\leq\frac{2\epsilon_{i}'}{\eta p\mu_{i}^{+}}.
\]
Hence,
\[
\frac{p\mu_{i}^{+}}{\mu}\geq p^{2\epsilon_{i}'/(\eta p\mu_{i}^{+})}=\exp\left(-\frac{2\epsilon_{i}'\ln(1/p)}{\eta p\mu_{i}^{+}}\right).
\]
Using the fact that $1-e^{-x}\leq x$ for all $x\geq0$, we have
\begin{align*}
\left(1-p\right)\frac{\mu_{i}^{-}}{\mu}=1-\frac{p\mu_{i}^{+}}{\mu}\leq1-\exp\left(-\frac{2\epsilon_{i}'\ln(1/p)}{\eta p\mu_{i}^{+}}\right)\leq\frac{2\epsilon_{i}'\ln(1/p)}{\eta p\mu_{i}^{+}}.
\end{align*}
This implies
\begin{align*}
\mu_{i}^{-}\leq\left(\frac{\mu}{\eta p\mu_{i}^{+}}\right) \left(\frac{2}{1-p}\right) \epsilon_{i}'\ln(1/p)\leq\left(\frac{2}{\eta(2-\eta)}\right)\left(\frac{2}{\eta}\right)\epsilon_{i}'\ln(1/p)\leq C_{2}\epsilon_{i}'\ln(1/p),
\end{align*}
provided we choose $C_{2}\geq\frac{4}{\eta^{2}(2-\eta)}$. We now
have
\[
pI_{i}[f]=p(\mu_{i}^{+}-\mu_{i}^{-})=\mu-\mu_{i}^{-}\geq\mu-C_{2}\epsilon_{i}'\ln(1/p)\geq\mu-C_{2}\epsilon\mu\ln(1/p),
\]
so Case (2) occurs, as asserted.
\end{proof}
We now prove a version of Claim \ref{claim:2-cases-constant-p} for small $p$ and a general $f$ (i.e., not necessarily monotone increasing). Here, similarly to the monotone case, we obtain that either $\mu_{i}^{-}$ is small, or else $pI_{i}[f]$ is small.
\begin{claim}
\label{claim:2-cases-small-p} There exists an absolute constant $C_2>0$ such that if $0 < p \leq e^{-2}$, then for each $i \in [n]$, one
of the following holds.
\begin{description}
\item [{Case (1)}] We have $pI_{i}[f]\leq C_2 \epsilon_{i}'\ln(1/p)$,
and $\mu_{i}^{-}\geq \left(1-C_{2}\epsilon\ln(1/p)\right)\mu$.
\item [{Case (2)}] We have $\mu_{i}^{-}\le C_{2}\epsilon_{i}'\ln(1/p)$,
and $pI_{i}[f]\geq\left(1-C_{2}\epsilon\ln(1/p)\right)\mu$.
\end{description}
\end{claim}
\begin{proof}
By (\ref{eq:Inf_i is like monotone}), we have
\begin{equation}\label{eq:inf-ineq} pI_{i}[f]-p\left|\mu_{i}^{+}-\mu_{i}^{-}\right|\le\epsilon_{i}'.\end{equation}
Firstly, suppose that $\mu_i^- \geq \mu_i^+$; then $\mu_i^- \geq \mu$, so clearly we have $\mu_i^- \geq (1-C_2\epsilon \ln(1/p))\mu$ for any $C_2>0$. Moreover, by Lemma \ref{lemma:analysis-basic-functions} and (\ref{eq:rearrenged-2}), we have
\begin{equation} \label{eq:diff-ineq} \left(\mu_{i}^{-}-\mu_{i}^{+}\right)\tfrac{p}{2}\le H\left(\mu_{i}^{+},\mu_{i}^{-}\right)-G\left(\mu_{i}^{+},\mu_{i}^{-}\right)\le\epsilon_{i}^{'}.\end{equation}
Combining (\ref{eq:inf-ineq}) and (\ref{eq:diff-ineq}) yields $p I_i[f] \leq 3\epsilon_i'$, so Case (1) holds.
Secondly, suppose that $\mu_i^+ > \mu_i^-$. The proof of Claim \ref{claim:mon-2-cases} (replacing $I_i[f]$ with $\mu_i^+-\mu_i^-$ in the appropriate places, and then using (\ref{eq:inf-ineq}) to bound $I_i[f]$ in terms of $|\mu_i^+-\mu_i^-|$) now implies that either Case (1) or Case (2) holds, provided we take $C_2 = C_2(\eta)$, where $\eta \leq 1-e^{-2}$.
\end{proof}
\subsection{Either $\mu$ is fairly small, or very close to 1}
Here, we show that there exists a constant $c_{4}>0$ such that either $\mu = 1-O\left(\frac{\epsilon \ln (1/p)}{\ln\left(\frac{1}{\epsilon \ln (1/p)}\right)}\right)$ (i.e., $\mu$ is very close to 1),
or else $\mu < 1-c_{4}$ (i.e., $\mu$ is bounded away from 1). For a general $f$ (and $0 < p \leq 1/2$), we obtain this by applying the $p$-biased isoperimetric inequality to the complement of $f$: $\tilde{f}\left(x\right)=1-f\left(x\right)$. For monotone $f$ (and $0 < p < 1$), we apply the $(1-p)$-biased isoperimetric inequality to the
dual of $f$: $f^{\ast}(x) =1-f\left(\overline{x}\right) = 1-f(1-x)$.
\begin{claim}
\label{claim:expectation-constraint-1}
Let $0 < p\leq1/2$.
Then we either have
\[
\mu\geq1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\]
or else $\mu\leq 1-c_{4}$, where $C_{3},c_{4}>0$ are absolute constants.
\end{claim}
\begin{proof}
Note that $\mu_{p}(\tilde{f})=1-\mu_{p}(f)$ and that $I^{p}[\tilde{f}]=I^{p}[f]$.
By assumption, we have $pI[f] = \mu(\log_{p}\mu+\epsilon)$. On the
other hand, applying Theorem \ref{thm:edge-iso-biased} to $\tilde{f}$, we obtain
\[
pI[f]=pI[\tilde{f}]\geq\left(1-\mu\right)\log_{p}\left(1-\mu\right).
\]
Combining these two facts, we obtain
\[
\mu\left(\log_{p}\mu+\epsilon\right)\geq\left(1-\mu\right)\log_{p}\left(1-\mu\right).
\]
Suppose that $\delta:=1-\mu \le c_{4}$, where $c_4>0$ is to be chosen later. Then
\begin{align*}
\delta\log_{p}\left(\delta\right) & \le\left(1-\delta\right)\left(\log_{p}\left(1-\delta\right)+\epsilon\right)\\
& =\left(1-\delta\right)\left(\frac{\ln\left(\frac{1}{1-\delta}\right)}{\ln\left(\frac{1}{p}\right)}+\epsilon\right)\le\frac{2\delta}{\ln\left(\frac{1}{p}\right)}+\epsilon,
\end{align*}
where the last inequality holds provided $c_{4}$ is sufficiently small.
Hence,
\[
\delta\left(\ln\left(\frac{1}{\delta}\right)-2\right)\le\epsilon\ln\left(\frac{1}{p}\right).
\]
Provided $c_4$ is sufficiently small, this implies that
$$\delta = O \left(\frac{\epsilon \ln(1/p)}{\ln\left(\frac{1}{\epsilon \ln(1/p)}\right)}\right),$$
proving the claim.
\end{proof}
\begin{claim}
\label{claim:expectation-constraint} For any $\eta >0$, there exist $C_3 = C_3(\eta)$ and $c_4 = c_4(\eta)>0$ such that the following holds. Suppose that $0 < p\leq1-\eta$, and suppose that $f$ is monotone increasing. Then we either have
\[
\mu\geq1-\frac{C_{3}\epsilon}{\ln\left(\frac{1}{\epsilon}\right)},
\]
or else $\mu\leq 1-c_{4}$.
\end{claim}
\begin{proof}
Note that $\mu_{1-p}(f^{\ast})=1-\mu_{p}(f)$ and that $I^{1-p}[f^{\ast}]=I^{p}[f]$. By assumption, we have $pI^p[f] = \mu(\log_{p}\mu+\epsilon)$. On the
other hand, applying Theorem \ref{thm:edge-iso-biased} to $f^{\ast}$,
we obtain
\[
(1-p)I^{p}[f]=(1-p)I^{1-p}[f^{\ast}]\geq\left(1-\mu\right)\log_{1-p}\left(1-\mu\right).
\]
Combining these two facts, we obtain
\begin{align*}
\mu\left(\log_{p}\mu+\epsilon\right) & \geq\frac{p}{1-p}\left(1-\mu\right)\log_{1-p}\left(1-\mu\right).
\end{align*}
Suppose that $\delta:=1-\mu\leq c_{4}$, where $c_4 = c_4(\eta)>0$ is to be chosen later. Then
\begin{align}
\frac{p}{1-p}\delta\log_{1-p}\left(\delta\right) & \le\left(1-\delta\right)\left(\log_{p}\left(1-\delta\right)+\epsilon\right)\nonumber \\
& =\left(1-\delta\right)\left(\frac{\ln\left(\frac{1}{1-\delta}\right)}{\ln\left(\frac{1}{p}\right)}+\epsilon\right)\le\frac{2\delta}{\ln\left(\frac{1}{p}\right)}+\epsilon,\label{eq:measure can't be too much close to 1}
\end{align}
where the last inequality holds provided $c_{4}$ is sufficiently small. Observe that $\ln\left(\frac{1}{1-p}\right)=\Theta_{\eta}\left(p\right)$.
Hence,
\begin{equation}
\frac{p}{1-p}\delta\log_{1-p}\left(\delta\right)=\frac{p}{1-p}\delta\frac{\ln\left(\frac{1}{\delta}\right)}{\ln\left(\frac{1}{1-p}\right)}=
\Theta_{\eta}\left(\frac{p}{1-p}\delta
\frac{\ln\left(\frac{1}{\delta}\right)}{p}\right)=\Theta_{\eta}\left(\delta\ln\left(\frac{1}{\delta}\right)\right).
\label{eq:measure can't be too much close to 2}
\end{equation}
Combining (\ref{eq:measure can't be too much close to 1})
and (\ref{eq:measure can't be too much close to 2}), we obtain
\[
\Theta_{\eta} (\delta\ln(1/\delta)) -\frac{2\delta}{\eta} \leq \Theta_{\eta} (\delta\ln(1/\delta)) -\frac{2\delta}{\ln\left(1/p\right)}\le \epsilon,
\]
which implies that
$$\delta = O_{\eta}\left(\frac{\epsilon}{\ln\left(\frac{1}{\epsilon}\right)}\right)$$
provided $c_4$ is sufficiently small depending on $\eta$, proving the claim.
\end{proof}
\subsection{There exists an influential coordinate}
We now show that unless $\mu$ is very close to $1$, there
must exist a coordinate whose influence is large. This coordinate
will be used in the inductive step of the proof of our two stability theorems. First, we deal with the case of small $p$ and general $f$ (i.e., $f$ not necessarily monotone increasing).
\begin{claim}
\label{claim:influential-coordinate} There exists an absolute constant $\zeta \in (0,c_4)$ such that the following holds. Let $0 < p < \zeta$. If $\mu<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$,
then there exists $i\in[n]$ for which Case (2) of Claim \ref{claim:2-cases-small-p} occurs, i.e. $\mu_i^- \leq C_2 \epsilon_i' \ln(1/p)$ and $p I_i[f] \geq (1-C_2 \epsilon \ln(1/p))\mu$.
(Here, $C_2$ is the absolute constant from Claim \ref{claim:2-cases-small-p}, and $C_3,c_4$ are the absolute constants from Claim \ref{claim:expectation-constraint-1}.)
\end{claim}
\begin{proof}
We prove the claim by induction on $n$.
If $n=1$ and $\mu<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$,
then by Claim \ref{claim:expectation-constraint-1}, we have $\mu<1-c_4$, and therefore $f\equiv0$ or $f=1_{\{x_{1}=1\}}$. (If $f = 1_{\{x_1 = 0\}}$ then $\mu = 1-p > 1-c_4$, provided $\zeta < c_4$.) Hence, we have $\mu_{1}^{-}=0$, so Case (2) must occur for the coordinate 1, verifying the base case.
We now do the inductive step. Let $n\geq2$, and assume the claim
holds when $n$ is replaced by $n-1$. Let $f$ be as in the statement of the claim; then by Claim \ref{claim:expectation-constraint-1}, we have $\mu \leq 1-c_4$. Suppose for a contradiction
that $f$ has Case (1) of Claim \ref{claim:2-cases-small-p} occurring for each $i\in[n]$. First, suppose that $\epsilon_{i}^{-}\geq\epsilon$
for each $i\in[n]$. Fix any $i\in[n]$. By (\ref{eq:rearranged-1}),
we have $0\leq\epsilon_{i}'\leq p(\epsilon-\epsilon_{i}^{+})\mu_{i}^{+}$,
so $\epsilon_{i}^{+}\leq\epsilon$ and therefore
\begin{equation}
I_{i}[f]\leq \tfrac{1}{p} C_{2}\epsilon_{i}'\ln(1/p)\leq C_{2}\left(\epsilon-\epsilon_{i}^{+}\right)\mu_{i}^{+}\ln(1/p)\leq C_{2}c_{0}\mu_{i}^{+}.\label{eq:influence-bound-1}
\end{equation}
Hence,
\[
\mu_{i}^{+}-\mu\leq\mu_{i}^{+}-\mu_{i}^{-}\le I_{i}[f]\leq C_{2}c_{0}\mu_{i}^{+},
\]
so
\[
\mu_{i}^{+}\leq\frac{\mu}{1-C_{2}c_{0}} \leq\frac{1-c_{4}}{1-C_{2}c_{0}}<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}\leq1-\frac{C_{3}\epsilon_{i}^{+}\ln(1/p)}{\ln\left(\frac{1}{\epsilon_{i}^{+}\ln(1/p)}\right)},
\]
provided $c_{0}$ is sufficiently small. It follows that $f_{i\to1}$
satisfies the hypothesis of the claim, for each $i\in[n]$. Hence,
by the induction hypothesis, there exists $j\in[n]\setminus\{i\}$
such that $f_{i\to1}$ has Case (2) of Claim \ref{claim:2-cases-small-p}
occurring for the coordinate $j$, so
\[
pI_{j}[f_{i\to1}]\geq\left(1-C_{2}\epsilon_{i}^{+}\ln(1/p)\right)\mu_{i}^{+}.
\]
We now have
\begin{align*}
I_{j}[f]\geq pI_{j}[f_{i\to1}]\geq\left(1-C_{2}\epsilon_{i}^{+}\ln(1/p)\right)\mu_{i}^{+}\geq(1-C_{2}\epsilon\ln(1/p))\mu_{i}^{+}\geq(1-C_{2}c_{0})\mu_{i}^{+},
\end{align*}
but this contradicts the fact that (\ref{eq:influence-bound-1})
holds when $i$ is replaced by $j$ (provided $c_{0}$ is sufficiently
small).
We may assume henceforth that there exists $i\in[n]$ such that $\epsilon_{i}^{-}<\epsilon$.
Fix such a coordinate $i$. Since Case (1) occurs for the coordinate
$i$, we have
\[
\mu_{i}^{-}\geq(1-C_{2}\epsilon\ln(1/p))\mu\geq(1-C_{2}c_{0})\mu.
\]
On the other hand, we have
\[
\mu_{i}^{-}\le\frac{\mu}{1-p}\le\frac{1-c_4}{1-\zeta}<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}<1-\frac{C_{3}\epsilon_{i}^{-}\ln(1/p)}{\ln\left(\frac{1}{\epsilon_{i}^{-}\ln(1/p)}\right)},
\]
provided $\zeta$ is sufficiently small. Hence, $f_{i\to0}$ satisfies
the hypotheses of the claim. Therefore, by the induction hypothesis, there
exists $j\in[n]\setminus\{i\}$ such that $f_{i\to0}$ has Case (2)
of Claim \ref{claim:2-cases-small-p} occurring for the coordinate
$j$, so
\[
pI_{j}[f_{i\to0}]\geq\left(1-C_{2}\epsilon_{i}^{-}\ln(1/p)\right)\mu_{i}^{-}.
\]
We now have
\begin{align*}
pI_{j}[f] & \geq p(1-p)I_{j}[f_{i\to0}]\geq(1-p)\left(1-C_{2}\epsilon_{i}^{-}\ln(1/p)\right)\mu_{i}^{-}\\
& >(1-p)\left(1-C_{2}\epsilon\ln(1/p)\right)(1-C_{2}c_{0})\mu\geq \tfrac{1}{2}(1-C_{2}c_{0})^{2} \mu,
\end{align*}
contradicting the fact that $f$ satisfies Case (1) of Claim \ref{claim:2-cases-small-p} for the coordinate
$j$, provided $c_0$ is sufficiently small. This completes the inductive step, proving the claim.
\end{proof}
Now we deal with the case of constant $p$ ($\leq 1/2$) and general $f$.
\begin{claim}
\label{claim:influential-coordinate-2} For each $\zeta >0$, the following holds provided $c_0$ is sufficiently small depending on $\zeta$. Let $\zeta < p \leq 1/2$. If $\mu<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$,
then there exists $i\in[n]$ for which Case (2) of Claim \ref{claim:2-cases-constant-p} occurs, i.e. $\min\{\mu_i^-,\mu_i^+\} \leq C_2 \epsilon_i'$ and $p I_i[f] \geq (1-C_2 \epsilon)\mu$.
(Here, $C_3$ is the constant from Claim \ref{claim:expectation-constraint-1}, and $C_2 =C_2(\zeta)$ is the constant from Claim \ref{claim:2-cases-constant-p}.)
\end{claim}
\begin{proof}
We prove the claim by induction on $n$.
If $n=1$ and $\mu<1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$,
then we must have either $f\equiv0$, $f=1_{\{x_{1}=1\}}$ or $f =1_{\{x_1 = 0\}}$. Hence, we have $\min\{\mu_{1}^{+},\mu_{1}^{-}\}=0$, so Case (2) of Claim \ref{claim:2-cases-constant-p} must occur for the coordinate 1, verifying the base case.
We now do the inductive step. Let $n\geq2$, and assume the claim holds when $n$ is replaced by $n-1$. Let $f$ be as in the statement of the claim; then by Claim \ref{claim:expectation-constraint-1}, we have $\mu \leq 1-c_4$. Suppose for a contradiction
that $f$ has Case (1) of Claim \ref{claim:2-cases-constant-p} occurring for each $i\in[n]$. First, suppose that $\epsilon_{i}^{-}\geq\epsilon$
for each $i\in[n]$. Then almost exactly the same argument as in the proof of Claim \ref{claim:influential-coordinate} yields a contradiction, provided $c_0$ is sufficiently small depending on $\zeta$. Therefore, we may assume henceforth that there exists $i \in [n]$ such that $\epsilon_i^- < \epsilon$. By assumption, Case 1 of Claim \ref{claim:2-cases-constant-p} occurs for the coordinate $i$, and therefore $\min\{\mu_i^+,\mu_i^-\} \geq (1-C_2 \epsilon)\mu$. It follows that
\begin{align*} \mu_i^- & = \frac{\mu - p\mu_i^+}{1-p} \leq \frac{1 - p(1-C_2\epsilon)}{1-p}\mu = \mu + \frac{pC_2\epsilon \mu}{1-p} \leq 1-c_4 + C_2\epsilon \\
 & <1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)} <1-\frac{C_{3}\epsilon_{i}^{-}\ln(1/p)}{\ln\left(\frac{1}{\epsilon_{i}^{-}\ln(1/p)}\right)},
\end{align*}
provided $c_0$ is sufficiently small depending on $\zeta$. Hence, $f_{i\to0}$ satisfies
the hypotheses of the claim. Therefore, by the induction hypothesis, there
exists $j\in[n]\setminus\{i\}$ such that $f_{i\to0}$ has Case (2)
of Claim \ref{claim:2-cases-constant-p} occurring for the coordinate
$j$, so
\[
pI_{j}[f_{i\to0}]\geq\left(1-C_{2}\epsilon_{i}^{-} \right)\mu_{i}^{-}.
\]
We now have
\begin{align*}
pI_{j}[f] & \geq p(1-p)I_{j}[f_{i\to0}]\geq (1-p)\left(1-C_{2}\epsilon_{i}^{-}\right)\mu_{i}^{-}\\
& >(1-p)\left(1-C_{2}\epsilon \right)^2 \mu\geq \tfrac{1}{2} (1-C_{2}c_{0} / \ln(2) )^{2} \mu,
\end{align*}
contradicting the fact that $f$ satisfies Case (1) of Claim \ref{claim:2-cases-constant-p} for the coordinate
$j$, provided $c_0$ is sufficiently small depending on $C_2$ (i.e., on $\zeta$). This completes the inductive step, proving the claim.
\end{proof}
Finally, we deal with the case of monotone $f$ and all $p$ bounded away from 1.
\begin{claim}
\label{claim:influential-coordinate-3}
For each $\eta>0$, the following holds provided $c_0$ is sufficiently small depending on $\eta$. Let $0 < p \leq 1-\eta$, and suppose $f$ is monotone increasing. If $\mu < 1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$, then there exists $i \in [n]$ for which Case (2) of Claim \ref{claim:mon-2-cases} occurs. (Here, $C_3 = C_3(\eta)$ is the constant from Claim \ref{claim:expectation-constraint}.)
\end{claim}
\begin{proof}
We prove the claim by induction on $n$.
If $n=1$ and $\mu < 1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}$, then since $\mu <1$ we must have either $f \equiv 0$ or $f = 1_{\{x_1 = 1\}}$, so $\mu_{1}^{-}=0$. Hence, Case (2) of Claim \ref{claim:mon-2-cases} occurs for the coordinate 1, verifying the base case.
We now do the inductive step. Let $n \geq 2$, and assume the claim holds when $n$ is replaced by $n-1$. Let $f$ be as in the statement of the claim; then by Claim \ref{claim:expectation-constraint}, we have $\mu \leq 1-c_4$. Suppose for a contradiction that $f$ has Case (1) of Claim \ref{claim:mon-2-cases} occurring for each $i \in [n]$. First, suppose that $\epsilon_{i}^{-} \geq \epsilon$ for each $i \in [n]$. Fix any $i \in [n]$. Then almost exactly the same argument as in the proof of Claim \ref{claim:influential-coordinate} (using Claim \ref{claim:expectation-constraint} in place of Claim \ref{claim:expectation-constraint-1}) yields a contradiction.
We may therefore assume henceforth that there exists $i \in [n]$ such that $\epsilon_i^- < \epsilon$. Since Case (1) of Claim \ref{claim:mon-2-cases} occurs for the coordinate $i$, we have
$$\mu_i^- \geq (1-C_2 \epsilon \ln (1/p))\mu \geq (1-C_2c_0) \mu.$$
On the other hand, we have
$$ \mu_i^- \leq \mu < 1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)} < 1-\frac{C_{3}\epsilon_i^-\ln(1/p)}{\ln\left(\frac{1}{\epsilon_i^-\ln(1/p)}\right)},$$
so $f_{i \to 0}$ satisfies the hypotheses of the claim. Hence, by the induction hypothesis, there exists $j \in [n] \setminus \{i\}$ such that $f_{i \to 0}$ has Case (2) of Claim \ref{claim:mon-2-cases} occurring for the coordinate $j$, so
$$p I_{j}[f_{i\to0}] \geq \left(1-C_{2}\epsilon_i^- \ln(1/p)\right) \mu_{i}^{-}.$$
We now have
\begin{align*} pI_j[f] & \geq p(1-p)I_j[f_{i \to 0}] \geq (1-p) \left(1-C_{2}\epsilon_i^- \ln(1/p)\right) \mu_{i}^{-}\\
& > (1-p) \left(1-C_{2}\epsilon \ln(1/p)\right) (1-C_2c_0)\mu \geq \eta (1-C_2 c_0)^2\mu,
\end{align*}
contradicting the fact that $f$ satisfies Case (1) of Claim \ref{claim:mon-2-cases} for the coordinate $j$, provided $c_0$ is sufficiently small depending on $\eta$. This completes the inductive step, proving the claim.
\end{proof}
\subsection{Bootstrapping}
Our final required ingredient is a `bootstrapping' argument, which
says that if $\min\left\{ \mu_{i}^{-},\mu_{i}^{+}\right\} $ is `somewhat'
small, then it must be `very' small.
\begin{claim}
\label{claim:bootstrapping-1} Let $\zeta\in (0,1/2)$. There exist $C_{5}=C_{5}(\zeta)>0$ and $c_{5}=c_{5}(\zeta)>0$ such that the following holds. Let $\zeta<p\le\frac{1}{2}$. If $\mu_{i}^{-} \le c_{5}\mu$,
then
\[
\mu_i^- \le\frac{C_{5}\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)}{\ln\left(1/\left((\epsilon-\epsilon_{i}^{+})\ln(1/p)\right)\right)}\mu,
\]
and if
$\mu_{i}^{+} \le c_{5}\mu$, then
\[
\mu_i^+\le\frac{C_{5}\left(\epsilon-\epsilon_{i}^{-}\right)\ln(1/p)}{\ln\left(1/\left((\epsilon-\epsilon_{i}^{-})\ln(1/p)\right)\right)}\mu.
\]
\end{claim}
\begin{proof}
First suppose that $\mu_{i}^{-}\le\mu_{i}^{+}$, and define $\delta : = \mu_i^- / \mu$. Using (\ref{eq:rearranged-1}), we have
\begin{align}
\label{eq:upper}
\left(1-p\right)\mu_{i}^{-}\log_{p}\left(\mu_{i}^{-}\right)+ & p\mu_{i}^{+}\log_{p}\mu_{i}^{+}-\mu\log_{p}\mu+pI_{i}\left[f\right]\nonumber \\
& =\left(\epsilon-\epsilon_i^{+}\right)p\mu_{i}^{+}+(\epsilon-\epsilon_{i}^{-})\left(1-p\right)\mu_{i}^{-} \nonumber \\
& \leq \left(\epsilon-\epsilon_i^{+}\right)\mu +\epsilon\left(1-p\right)\mu_{i}^{-}.
\end{align}
Observe that
\begin{align}
\label{eq:lower}
\textrm{LHS} & =\left(1-p\right)\mu_{i}^{-}\log_{p}\left(\mu_{i}^{-}\right)+p\mu_{i}^{+}\log_{p}\left(p\mu_{i}^{+}\right)-\mu\log_{p}\mu+pI_{i}\left[f\right]-p\mu_{i}^{+} \nonumber \\
& \ge\left(1-p\right)\mu_{i}^{-}\log_{p}\left(\mu_{i}^{-}\right)+p\mu_{i}^{+}\log_{p}\left(\mu\right)-\mu\log_{p}\left(\mu\right)-p\mu_{i}^{-} \nonumber \\
& =\left(1-p\right)\mu_{i}^{-}\log_{p}\left(\frac{\mu_{i}^{-}}{\mu}\right)-p\mu_{i}^{-}.
\end{align}
Combining (\ref{eq:upper}) and (\ref{eq:lower}) and rearranging, we obtain
\begin{equation}
\left(\frac{\mu_{i}^{-}}{\mu}\right)\left(\log_{p}\left(\frac{\mu_{i}^{-}}{\mu}\right)-\frac{p}{1-p} -\epsilon\right)\leq\frac{\epsilon-\epsilon_{i}^{+}}{1-p} \leq 2(\epsilon-\epsilon_{i}^+).\label{eq:bootstrap-1}
\end{equation}
It follows that
$$\delta \left(\ln (1/\delta)-\frac{p\ln(1/p)}{1-p} - \epsilon \ln(1/p)\right)\leq 2(\epsilon-\epsilon_{i}^+) \ln(1/p),$$
and therefore
$$\delta \left(\ln (1/\delta)-\frac{2}{e} - c_0\right)\leq 2(\epsilon-\epsilon_{i}^+) \ln(1/p).$$
If $c_{5}$ is a small enough absolute constant, this clearly implies that
$$\delta = O_{\zeta}\left(\frac{(\epsilon-\epsilon_i^+)\ln(1/p)}{\ln\left(\frac{1}{(\epsilon-\epsilon_i^+)\ln(1/p)}\right)}\right),$$
as required.
Now suppose that $\mu_{i}^{+}\le\mu_{i}^{-}$, and define $\delta : = \mu_i^+ / \mu$. Using (\ref{eq:rearranged-1}), we have
\begin{align}
\label{eq:upper2}
\left(1-p\right)\mu_{i}^{-}\log_{p}\left(\mu_{i}^{-}\right)+ & p\mu_{i}^{+}\log_{p}\mu_{i}^{+}-\mu\log_{p}\mu+pI_{i}\left[f\right]\nonumber \\
& =\left(\epsilon-\epsilon_i^{+}\right)p\mu_{i}^{+}+(\epsilon-\epsilon_{i}^{-})\left(1-p\right)\mu_{i}^{-} \nonumber \\
& \leq \epsilon p\mu_i^+ +(\epsilon-\epsilon_i^-)\mu.
\end{align}
Observe that
\begin{align}
\label{eq:lower2}
\textrm{LHS} & =\left(1-p\right)\mu_{i}^{-}\log_{p}\left((1-p)\mu_{i}^{-}\right)+p\mu_{i}^{+}\log_{p}\left(\mu_{i}^{+}\right)-\mu\log_{p}(\mu)+pI_{i}\left[f\right] \nonumber \\
& -(1-p) \mu_{i}^{-} \log_p(1-p) \nonumber \\
& \geq \left(1-p\right)\mu_{i}^{-}\log_{p} (\mu) +p\mu_{i}^{+}\log_{p}\left(\mu_{i}^{+}\right)-\mu\log_{p}(\mu)+p(\mu_i^- - \mu_i^+) \nonumber\\
& -(1-p) \mu_{i}^{-} \log_p(1-p)\nonumber\\
& = p\mu_i^+ \log_p\left(\frac{\mu_{i}^{+}}{\mu}\right)+ \mu_i^-(p - (1-p)\log_p(1-p)) - p\mu_i^+ \nonumber\\
& = p\mu_i^+ \log_p\left(\frac{\mu_{i}^{+}}{\mu}\right)+ K(p) \mu_i^- - p\mu_i^+ \nonumber\\
& \geq p\mu_i^+ \log_p\left(\frac{\mu_{i}^{+}}{\mu}\right) - p\mu_i^+.
\end{align}
Combining (\ref{eq:upper2}) and (\ref{eq:lower2}) and rearranging, we obtain
\begin{equation}
\left(\frac{\mu_{i}^{+}}{\mu}\right)\left(\log_{p}\left(\frac{\mu_{i}^{+}}{\mu}\right)-1-\epsilon\right)\leq\frac{\epsilon-\epsilon_{i}^{-}}{p} \leq \tfrac{1}{\zeta}(\epsilon-\epsilon_{i}^-).\label{eq:bootstrap-2}
\end{equation}
It follows that
$$\delta \left(\ln (1/\delta)-\ln(1/p) - \epsilon \ln(1/p)\right)\leq \tfrac{1}{\zeta} (\epsilon-\epsilon_{i}^-) \ln(1/p),$$
and therefore
$$\delta \left(\ln (1/\delta)-\ln(1/\zeta) - c_0\right)\leq \tfrac{1}{\zeta}(\epsilon-\epsilon_{i}^-) \ln(1/p).$$
If $c_{5}$ is small enough depending on $\zeta$, this clearly implies that
$$\delta = O_{\zeta}\left(\frac{(\epsilon-\epsilon_i^-)\ln(1/p)}{\ln\left(\frac{1}{(\epsilon-\epsilon_i^-)\ln(1/p)}\right)}\right),$$
as required.
\end{proof}
We now prove a bootstrapping claim suitable for use in the cases where $p \leq \zeta$ or where $f$ is monotone and $p \leq 1-\eta$.
\begin{claim}
\label{claim:bootstrapping-mon} Let $\eta >0$. There exist $C_{5}=C_{5}(\eta)>0$ and $c_{5} = c_5(\eta)>0$ such that if $p \leq 1-\eta$ and $\mu_{i}^{-}\le c_{5}\mu$, then
\[
\mu_{i}^{-}\le\frac{C_{5}\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)}{\ln\left(1/\left(\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)\right)\right)}\mu.
\]
\end{claim}
\begin{proof}
As in the proof of Claim \ref{claim:bootstrapping-1}, we have
\begin{equation}
\left(\frac{\mu_{i}^{-}}{\mu}\right)\left(\log_{p}\left(\frac{\mu_{i}^{-}}{\mu}\right)-\frac{p}{1-p} - \epsilon\right)\leq\frac{\epsilon-\epsilon_{i}^{+}}{1-p}\leq\frac{\epsilon-\epsilon_{i}^{+}}{\eta}.\label{eq:bootstrap}
\end{equation}
Writing $\delta:=\frac{\mu_{i}^{-}}{\mu}\le c_{5}$, we obtain
\[
\delta\left(\ln(1/\delta)-\frac{1}{e\eta} - c_0\right) \leq \delta\left(\ln(1/\delta)-\frac{p\ln(1/p)}{1-p} -\epsilon\ln(1/p)\right) \le\ln(1/p)\,O_{\eta}\left(\epsilon-\epsilon_{i}^{+}\right).
\]
Provided $c_{5} = c_5(\eta)>0$ is sufficiently small, this implies that
\[
\delta = O_{\eta}\left(\frac{\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)}{\ln\left(1/\left(\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)\right)\right)}\right),
\]
as required.
\end{proof}
\subsection{Inductive proofs of Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability}}
\label{subsec:ind}
\begin{proof}[Proof of Theorem \ref{thm:skewed-iso-stability}.]
First, we deal with the case of $p < \zeta$, where $\zeta$ is the absolute constant from Claim \ref{claim:influential-coordinate}. In this case, we prove that the conclusion of Theorem \ref{thm:skewed-iso-stability} holds with $S$ a monotone increasing subcube.
We proceed by induction on $n$. If $n=1$, then $f$ is the indicator function of a monotone increasing subcube unless $f = 1_{\{x_1=0\}}$, so we may assume that $f = 1_{\{x_1=0\}}$. Then $\mu_p(f) = 1-p > 1-\zeta > 1-c_4$, so by Claim \ref{claim:expectation-constraint-1}, we have
\[
\mu_{p}(f)\geq1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\]
so the conclusion of the theorem holds with $S=\{0,1\}$.
We now do the inductive step. Let $n\geq2$, and assume that Theorem \ref{thm:skewed-iso-stability}
holds when $n$ is replaced by $n-1$. Let $f:\{0,1\}^n \to \{0,1\}$ satisfy the hypotheses of Theorem \ref{thm:skewed-iso-stability}. We may assume throughout that $\mu_{p}(f)\leq 1-c_4$, otherwise by Claim \ref{claim:expectation-constraint-1}, we have
\[
\mu_{p}(f)\geq1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\]
so the conclusion of the theorem holds with $S=\{0,1\}^{n}$. Since $\mu_p(f) \leq 1-c_4$, by Claim \ref{claim:influential-coordinate}, there exists $i\in[n]$ such that $\mu_{i}^{-}\leq C_{2} \epsilon\mu\ln(1/p)$, so if $c_{0}$
is a sufficiently small absolute constant, we have $\mu_{i}^{-}\leq c_{5}\mu$, i.e. $\mu_{i}^{-}$ satisfies the hypothesis of Claim \ref{claim:bootstrapping-1}. Therefore, by Claim \ref{claim:bootstrapping-1}, we have
\begin{equation}\label{eq:stronger-upper}
\mu_i^- \le\frac{C_{5}\left(\epsilon-\epsilon_{i}^{+}\right)\ln(1/p)}{\ln\left(1/\left((\epsilon-\epsilon_{i}^{+})\ln(1/p)\right)\right)}\mu,
\end{equation}
and so $\epsilon_i^+ \leq \epsilon$. By applying the induction
hypothesis to $f_{i\to1}$, we obtain
\[
\mu_{p}\left(f_{i\to1}\Delta1_{S_{T}}\right)\leq\frac{C_{1}\epsilon_{i}^{+}\ln(1/p)\mu_{i}^{+}}{\ln\left(\frac{1}{\epsilon_{i}^{+}\ln(1/p)}\right)}
\]
for some monotone increasing subcube $S_{T}=\{x\in\{0,1\}^{[n]\setminus\{i\}}:\ x_{j}=1\ \forall j\in T\}$, where $T \subset [n]\setminus\{i\}$. Therefore, writing
\[
S_{T\cup\{i\}}:=\{x\in\{0,1\}^{n}:\ x_{j}=1\ \forall j\in T\cup\{i\}\},
\]
we have
\begin{align*}
\mu_{p}\left(f\Delta1_{S_{T\cup\left\{ i\right\} }}\right) & \leq\left(1-p\right)\mu_{i}^{-}+p\mu_{p}\left(f_{i\to1}\Delta1_{S_{T}}\right)\\
& \leq\left(1-p\right)\left(\frac{C_{5}(\epsilon-\epsilon_{i}^{+})\ln(1/p)\mu}{\ln\left(\frac{1}{\left[\epsilon-\epsilon_{i}^{+}\right]\ln(1/p)}\right)}\right)+\frac{C_{1}\epsilon_{i}^{+}\ln(1/p)p\mu_{i}^{+}}{\ln\left(\frac{1}{\epsilon_{i}^{+}\ln(1/p)}\right)}\\
& \leq\frac{\left(C_{5}\left(\epsilon-\epsilon_{i}^{+}\right)+C_{1}\epsilon_{i}^{+}\right)\mu\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}\leq\frac{C_{1}\epsilon\mu\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\end{align*}
provided $C_{1}\ge C_{5}$, using (\ref{eq:stronger-upper}). Hence, the conclusion of the theorem holds with $S=S_{T\cup\{i\}}$. This completes the inductive step, proving the theorem in the case $p < \zeta$.
Now we prove the theorem in the case $\zeta \leq p \leq 1/2$. (In this case, we do not prove that the subcube $S$ is monotone increasing; see Remark \ref{remark:non-mono}.)
We proceed again by induction on $n$. If $n=1$, then as before the theorem holds trivially. Let $n\geq2$, and assume Theorem \ref{thm:skewed-iso-stability} holds
when $n$ is replaced by $n-1$. Let $f:\{0,1\}^n \to \{0,1\}$ satisfy the hypotheses of Theorem \ref{thm:skewed-iso-stability}. As before, we may assume throughout that $\mu_{p}(f)\leq 1-c_4$, otherwise by Claim \ref{claim:expectation-constraint-1}, we have
\[
\mu_{p}(f)\geq1-\frac{C_{3}\epsilon\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\]
so the conclusion of the theorem holds with $S=\{0,1\}^{n}$. Since $\mu_p(f) \leq 1-c_4$, by Claim \ref{claim:influential-coordinate-2} (applied with the specific choice of $\zeta$ above, i.e. from Claim \ref{claim:influential-coordinate}), there exists $i\in[n]$ such that $\min\{\mu_{i}^{-},\mu_i^+\}\leq C_{2} \epsilon_i' \leq C_2\epsilon\mu$, so if $c_{0}$
is a sufficiently small absolute constant, we have $\min\{\mu_{i}^{-},\mu_i^+\}\leq c_{5}\mu$, i.e. $\mu_{i}^{-}$ or $\mu_i^+$ satisfies the hypothesis of Claim \ref{claim:bootstrapping-1} (again with the above choice of $\zeta$). Suppose that $\mu_{i}^{-} \leq c_5 \mu$ (the other case is very similar). Then, by Claim \ref{claim:bootstrapping-1}, we have $\epsilon_i^+ \leq \epsilon$. By applying the induction
hypothesis to $f_{i\to1}$, we obtain
\[
\mu_{p}\left(f_{i\to1}\Delta1_{S'}\right)\leq\frac{C_{1}\epsilon_{i}^{+}\ln(1/p)\mu_{i}^{+}}{\ln\left(\frac{1}{\epsilon_{i}^{+}\ln(1/p)}\right)}
\]
for some subcube $S'=\{x\in\{0,1\}^{[n]\setminus\{i\}}:\ x_{j}=a_j\ \forall j\in T\}$, where $T \subset [n]\setminus\{i\}$ and $a_j \in \{0,1\}$ for each $j \in T$.
Therefore, writing
\[
S:=\{x\in\{0,1\}^{n}:\ x_{j}=a_j\ \forall j\in T,\ x_i = 1\},
\]
we have
\begin{align*}
\mu_{p}\left(f\Delta1_{S}\right) & \leq\left(1-p\right)\mu_{i}^{-}+p\mu_{p}\left(f_{i\to1}\Delta1_{S'}\right)\\
& \leq\left(1-p\right)\left(\frac{C_{5}(\epsilon-\epsilon_{i}^{+})\mu \ln(1/p)}{\ln\left(\frac{1}{(\epsilon-\epsilon_{i}^{+})\ln(1/p)}\right)}\right)+\frac{C_{1}\epsilon_{i}^{+}\ln(1/p)p\mu_{i}^{+}}{\ln\left(\frac{1}{\epsilon_{i}^{+}\ln(1/p)}\right)}\\
& \leq\frac{\left(C_{5}\left(\epsilon-\epsilon_{i}^{+}\right)+C_{1}\epsilon_{i}^{+}\right)\mu\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)}\leq\frac{C_{1}\epsilon\mu\ln(1/p)}{\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\end{align*}
provided $C_{1}\ge C_{5}$, using Claim \ref{claim:bootstrapping-1}. This completes the inductive step, proving the theorem in the case $\zeta \leq p \leq 1/2$, and completing the proof of Theorem \ref{thm:skewed-iso-stability}.
The inductive proof of Theorem \ref{thm:mon-iso-stability} is very similar indeed; we omit the details.
\end{proof}
\section{Sharpness of Theorems \ref{thm:skewed-iso-stability} and \ref{thm:mon-iso-stability}}
\label{sec:examples}
Theorem \ref{thm:skewed-iso-stability} is best possible up to the
values of the absolute constants $c_{0}$ and $C_{1}$. This can be
seen by taking $f=1_{A}$, where
\begin{align*}
A= & \{x\in\{0,1\}^{n}:\ x_{i}=1\ \forall i\in[t]\}\\
& \cup\{x\in\{0,1\}^{n}:\ x_{i}=1\ \forall i\in[t+s]\setminus\{t\},\ x_{t}=0\}\\
& \setminus\{x\in\{0,1\}^{n}:\ x_{i}=1\ \forall i\in[t+s]\setminus\{t+1\},\ x_{t+1}=0\},
\end{align*}
for $s,t\in\mathbb{N}$ with $s\geq2$. Let $0 <p \leq 1/2$. We have $\mu_{p}(A)=p^{t}$,
and
\[
I_{i}[A]=\begin{cases}
p^{t-1} & \textrm{ if }1\leq i\leq t-1;\\
(1-p^{s-1})p^{t-1} & \textrm{ if }i=t;\\
p^{t+s-2} & \textrm{ if }i=t+1;\\
2(1-p)p^{t+s-2} & \textrm{ if }t+2\leq i\leq t+s;\\
0 & \textrm{ if }i>t+s.
\end{cases}
\]
Hence,
\[
I^{p}[A]=p^{t-1}\left(t+2(s-1)(1-p)p^{s-1}\right).
\]
On the other hand, it is easy to see that
\[
\frac{\mu_{p}(A\Delta S)}{\mu_{p}(A)}=\frac{\mu_{p}(A\Delta S)}{p^{t}}\geq 2(1-p)p^{s-1}
\]
for all subcubes $S$. Hence, if $\epsilon:= 2(s-1)(1-p)p^{s-1}$,
then
\[
pI^{p}[A]=\mu_{p}(A)(\log_{p}(\mu_{p}(A))+\epsilon),
\]
but
\[
\frac{\mu_{p}(A\Delta S)}{\mu_{p}(A)}\geq\frac{\epsilon}{s-1}:=\delta,
\]
for all subcubes $S$. We have $(s-1)p^{s-1}\geq\tfrac{\epsilon}{2}$,
so writing $s-1=x/\ln(1/p)$, we get
\[
xe^{-x}\geq\tfrac{1}{2}\epsilon\ln(1/p),
\]
which implies
\[
x\leq2\ln\left(\frac{1}{\tfrac{1}{2}\epsilon\ln(1/p)}\right),
\]
or equivalently,
\[
s-1\leq\frac{2}{\ln(1/p)}\ln\left(\frac{1}{\tfrac{1}{2}\epsilon\ln(1/p)}\right).
\]
Hence,
\[
\delta\geq\frac{\epsilon\ln(1/p)}{2\ln\left(\frac{1}{\frac{1}{2}\epsilon\ln(1/p)}\right)},
\]
showing that Theorem \ref{thm:skewed-iso-stability} is best possible
up to the value of $C_{1}$. Moreover, we clearly require $\epsilon\ln(1/p)<1$
(i.e., $c_{0}<1$) for the right-hand side of (\ref{eq:conc}) to
be non-negative.
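The closed-form expressions above for $\mu_p(A)$, the influences and the total influence can also be verified exhaustively for small parameters. The following minimal brute-force Python sketch does this (the helpers \texttt{in\_A} and \texttt{mu}, and the parameter choices, are ours):
\begin{verbatim}
import itertools

def in_A(x, t, s):
    # x is a 0/1 tuple; coordinate i of the text is x[i-1]
    C1 = all(x[i] == 1 for i in range(t))
    C2 = x[t-1] == 0 and all(x[i] == 1 for i in range(t+s) if i != t-1)
    C3 = x[t] == 0 and all(x[i] == 1 for i in range(t+s) if i != t)
    return (C1 or C2) and not C3

t, s, p = 2, 3, 0.3
n = t + s
cube = list(itertools.product((0, 1), repeat=n))
mu = lambda S: sum(p**sum(x) * (1-p)**(n-sum(x)) for x in S)

print(mu([x for x in cube if in_A(x, t, s)]), p**t)  # measure = p^t

I = 0.0   # total influence I^p[A]
for i in range(n):
    I += mu([x for x in cube if in_A(x, t, s) !=
             in_A(x[:i] + (1-x[i],) + x[i+1:], t, s)])
eps = 2*(s-1)*(1-p)*p**(s-1)
print(p*I, p**t * (t + eps))  # p I^p[A] = mu_p(A)(log_p mu_p(A) + eps)
\end{verbatim}
Each printed pair should agree up to floating-point error.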
Observe that the above family $A$ is not monotone increasing. To prove sharpness for Theorem \ref{thm:mon-iso-stability}, we may take $f = 1_{B}$, where
$$B=\{x\in\{0,1\}^{n}:\ x_{i}=1\ \forall i\in[t]\} \cup\{x\in\{0,1\}^{n}:\ x_{i}=1\ \forall i\in[t+s]\setminus\{t\},\ x_{t}=0\}$$
for $s,t\in\mathbb{N}$ with $s\geq2$. Let $0 < p < 1$. We have $\mu_{p}(B)=p^{t}(1+ (1-p)p^{s-1})$,
and
\[
I^p_{i}[B]=\begin{cases}
p^{t-1} + (1-p) p^{t+s-2}& \textrm{ if }1\leq i\leq t-1;\\
(1-p^{s})p^{t-1} & \textrm{ if }i=t;\\
(1-p)p^{t+s-2} & \textrm{ if }t+1\leq i\leq t+s;\\
0 & \textrm{ if }i>t+s.
\end{cases}
\]
Hence,
$$I^p[B] = p^{t-1}(t+((t+s)(1-p)-1)p^{s-1}),$$
and we have
$$\epsilon : = \frac{pI^p[B] - \mu_p(B) \log_p(\mu_p(B))}{\mu_p(B)} \leq (s-1)(1-p)p^{s-1}.$$
On the other hand, we have
\[
\frac{\mu_{p}(B\Delta S)}{\mu_{p}(B)}=\frac{\mu_{p}(B\Delta S)}{p^{t}(1+ (1-p)p^{s-1})}\geq \frac{(1-p)p^{t+s-1}}{p^{t}(1+ (1-p)p^{s-1})} \geq \tfrac{1}{2}(1-p)p^{s-1}: = \delta
\]
for all subcubes $S$. Similarly to before, we obtain
\[
\delta\geq\frac{\epsilon\ln(1/p)}{2\ln\left(\frac{1}{\epsilon\ln(1/p)}\right)},
\]
showing that Theorem \ref{thm:mon-iso-stability} is best possible
up to a constant factor depending on $\eta$.
We note that $B$ also demonstrates the sharpness of Theorem \ref{thm:skewed-iso-stability}, but it does not have the nice property that $\log_p(\mu_p(B)) \in \mathbb{N}$, so we think it worthwhile to include both examples.
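A brute-force check analogous to the one given for $A$ confirms the measure and total-influence formulas for $B$ as well; again, the helper names and parameter values below are ours.
\begin{verbatim}
import itertools

def in_B(x, t, s):
    C1 = all(x[i] == 1 for i in range(t))
    C2 = x[t-1] == 0 and all(x[i] == 1 for i in range(t+s) if i != t-1)
    return C1 or C2

t, s, p = 2, 3, 0.3
n = t + s
cube = list(itertools.product((0, 1), repeat=n))
mu = lambda S: sum(p**sum(x) * (1-p)**(n-sum(x)) for x in S)

print(mu([x for x in cube if in_B(x, t, s)]),
      p**t * (1 + (1-p)*p**(s-1)))                       # measure
I = sum(mu([x for x in cube if in_B(x, t, s) !=
            in_B(x[:i] + (1-x[i],) + x[i+1:], t, s)]) for i in range(n))
print(I, p**(t-1) * (t + ((t+s)*(1-p) - 1)*p**(s-1)))    # total influence
\end{verbatim}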
\section{Isoperimetry via Kruskal-Katona -- Proof of Theorem~\ref{thm:Monotone}, and a new proof of the `full' edge isoperimetric inequality}
\label{sec:KK}
In this section, we use the Kruskal-Katona theorem and the Margulis-Russo lemma to give a rather short proof of Theorem~\ref{thm:Monotone}, our biased version of the `full' edge isoperimetric inequality, for monotone increasing sets. We then give the (very short) deduction of Theorem \ref{thm:edge-iso} (the `full' edge isoperimetric inequality) from the $p=1/2$ case of Theorem \ref{thm:Monotone}, hence providing a new proof of the former --- one that relies upon the Kruskal-Katona theorem.
We first give a formal definition of the lexicographic families $\mathcal{L}_{\lambda}$, for $\lambda$ not of the form $s/2^n$. For any $\lambda\in\left(0,1\right)$, we consider its binary expansion
$$\lambda = \sum_{j=1}^{\infty}2^{-i_j},$$
where $1 \leq i_1 < i_2 < \ldots$, and we let
$$\mathcal{L}_{\lambda} = \bigcup_{j=1}^{\infty} \{S \subset \mathbb{N}:\ S \cap [i_j] = [i_j] \setminus \{i_k:\ k < j\}\} \subset \mathcal{P}(\mathbb{N}).$$
We identify $\mathcal{P}(\mathbb{N})$ with the Cantor space $\{0,1\}^{\mathbb{N}}$. We let $\Sigma$ be the $\sigma$-algebra on $\{0,1\}^{\mathbb{N}}$ generated by the $\pi$-system
$$\left\{\prod_{i=1}^{\infty} A_i:\ A_i \subset \{0,1\} \ \forall i \in \mathbb{N},\ A_i = \{0,1\} \text{ for all but finitely many }i \in \mathbb{N}\right\}.$$
It is easily checked that $\mathcal{L}_\lambda \in \Sigma$.
By the Kolmogorov Extension theorem, there exists a unique probability measure $\mu_p^{(\mathbb{N})}$ on $(\{0,1\}^{\mathbb{N}},\Sigma)$ such that
$$\mu_p^{(\mathbb{N})}(A_1 \times A_2 \times \ldots \times A_n \times \{0,1\} \times \{0,1\} \times \ldots) = \mu_p^{(n)}(A_1 \times A_2 \times \ldots \times A_n)$$
for all $n \in \mathbb{N}$ and all $A_1,\ldots,A_n \subset \{0,1\}$. Abusing notation slightly, we write $\mu_p = \mu_p^{(\mathbb{N})}$ when the underlying space $\{0,1\}^{\mathbb{N}}$ is understood.
If $f: \{0,1\}^{\mathbb{N}} \to \{0,1\}$ is $\Sigma$-measurable, we (naturally) define
\[I_i^p[f] := \Pr_{x \sim \mu_p}[f(x) \neq f(x \oplus e_i)]\]
and $I^p[f] := \sum_{i=1}^{\infty} I_i^p[f]$.
As stated in the Introduction, it is easily checked that
\begin{equation}\label{eq:limit-measures-2} \mu_{p}^{(\mathbb{N})}\left(\mathcal{L}_{\lambda}\right) = \lim_{n \to \infty} \mu_{p}^{(n)}\left(\mathcal{L}_{\lfloor \lambda 2^n \rfloor/2^n}\right)\end{equation}
and
\begin{equation} \label{eq:limit-influences-2} I^{p}\left[\mathcal{L}_{\lambda}\right]=\lim_{n \to \infty} I^{p}\left[\mathcal{L}_{\lfloor \lambda 2^n \rfloor/2^{n}}\right],\end{equation}
where the families on the left are regarded as subsets of $\mathcal{P}(\mathbb{N})$, and those on the right are regarded as subsets of $\mathcal{P}([n])$. In fact, (\ref{eq:limit-measures-2}) and (\ref{eq:limit-influences-2}) are the only facts we shall need about the families $\mathcal{L}_\lambda$ in order to prove Theorem \ref{thm:Monotone}.
In our proof of Theorem \ref{thm:Monotone}, we will use two well-known results. The first is the classical Kruskal-Katona theorem \cite{Katona66,Kruskal63}. To state it, we need some more notation. For a family $\mathcal{F} \subset \mathcal{P}([n])$ and $0 \leq k \leq n$, we write $\mathcal{F}^{(k)}: = \mathcal{F} \cap [n]^{(k)}$. If $k < n$ and $\mathcal{A} \subset [n]^{(k)}$, we write $\partial^{+}(\mathcal{A}) := \{B \in [n]^{(k+1)}:\ A \subset B\ \text{for some }A \in \mathcal{A}\}$ for the upper shadow of $\mathcal{A}$, and if $1 \leq i \leq n-k$, we write $\partial^{+(i)}(\mathcal{A}) := \{B \in [n]^{(k+i)}:\ A \subset B \text{ for some }A \in \mathcal{A}\}$ for its $i$th iterate.
\begin{thm}[Kruskal-Katona theorem]
Let $k < n$, and let $\mathcal{F} \subset [n]^{(k)}$ with $|\mathcal{F}| = |\mathcal{L}_\lambda^{(k)}|$. Then $|\partial^{+}(\mathcal{F})| \geq |\partial^+(\mathcal{L}_\lambda^{(k)})|$.
\end{thm}
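As a quick numerical illustration of the theorem (and of Corollary \ref{cor:KK} below), the following Python sketch compares the upper shadow of random $k$-uniform families with that of the initial segment of the lexicographic order of the same size; encoding a set $A$ by the binary value $\sum_{i\in A}2^{-i}$ is our own device for generating that initial segment.
\begin{verbatim}
import itertools, random

def upper_shadow(fam, n):
    return {A | {i} for A in fam for i in range(1, n+1) if i not in A}

n, k, size = 8, 3, 20
ksets = [frozenset(c) for c in itertools.combinations(range(1, n+1), k)]
# initial segment of the lexicographic order: sets preferring small
# elements, i.e. the k-sets of largest binary value
lex = sorted(ksets, key=lambda A: -sum(2.0**-i for i in A))[:size]

random.seed(1)
for _ in range(5):
    F = random.sample(ksets, size)
    print(len(upper_shadow(F, n)), ">=", len(upper_shadow(lex, n)))
\end{verbatim}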
\noindent We need the following straightforward corollary.
\begin{cor}
\label{cor:KK}Let $n>k_{0}>k\geq j\geq 1$ with $n-k_0 \geq j$, suppose that $\mathcal{L}_{\lambda} \subset \mathcal{P}\left(\left[n\right]\right)$ depends only upon the coordinates in $[j]$, and let $\mathcal{F}\subset\mathcal{P}\left(\left[n\right]\right)$ be a monotone increasing family with $|\mathcal{F}^{\left(k_{0}\right)}| \leq |\mathcal{L}_{\lambda}^{\left(k_{0}\right)}|$. Then $|\mathcal{F}^{\left(k\right)}|\le|\mathcal{L}_{\lambda}^{\left(k\right)}|$.
\end{cor}
\begin{proof}
Suppose that $|\mathcal{F}^{\left(k_{0}\right)}| \leq |\mathcal{L}_{\lambda}^{\left(k_{0}\right)}|$, and assume for a contradiction that $|\mathcal{F}^{\left(k\right)}|\geq |\mathcal{L}_{\lambda}^{\left(k\right)}|+1$. Choose $\lambda'$ minimal such that $|\mathcal{F}^{\left(k\right)}|= |\mathcal{L}_{\lambda'}^{\left(k\right)}|$; then $\mathcal{L}_{\lambda'}^{(k)} \setminus \mathcal{L}_{\lambda}^{(k)} \neq \emptyset$. Choose $S \in \mathcal{L}_{\lambda'}^{(k)} \setminus \mathcal{L}_{\lambda}^{(k)}$. Since $k_0 \leq n-j$, there exists $S' \supset S$ such that $|S'| = k_0$ and $(S' \setminus S) \cap [j] = \emptyset$, and therefore $S' \in \partial^{+(k_0-k)}(\mathcal{L}_{\lambda'}^{(k)}) \setminus \mathcal{L}_{\lambda}$. Since $j \leq k_0$, we have $\mathcal{L}_{\lambda}^{(k_0)} \subset \partial^{+(k_0-k)}(\mathcal{L}_{\lambda}^{(k)}) \subset \partial^{+(k_0-k)}(\mathcal{L}_{\lambda'}^{(k)})$. It follows that $|\partial^{+(k_0-k)}(\mathcal{L}_{\lambda'}^{(k)})| > |\mathcal{L}_{\lambda}^{\left(k_0\right)}|$. By repeated application of the Kruskal-Katona theorem, since $|\mathcal{F}^{\left(k\right)}|= |\mathcal{L}_{\lambda'}^{\left(k\right)}|$ and $\mathcal{F}$ is monotone increasing, we have
$$|\mathcal{F}^{(k_0)}|\geq|\partial^{+(k_0-k)}(\mathcal{F}^{(k)})| \geq |\partial^{+(k_0-k)}(\mathcal{L}_{\lambda'}^{(k)})| > |\mathcal{L}_{\lambda}^{\left(k_0\right)}|,$$
a contradiction.
\end{proof}
\noindent This implies the following, by a standard application of the method of Dinur-Safra \cite{Dinur-Safra} / Frankl-Tokushige \cite{FT03}, known as `going to infinity and back'. (We present the proof, for completeness.)
\begin{cor}
\label{thm:KK cor}Let $0 < q < p < 1$, let $0 < \lambda < 1$, and let
$\mathcal{F}\subset\mathcal{P}\left(\left[n\right]\right)$ be a monotone increasing family with $\mu_p\left(\mathcal{F}\right)\leq\mu_{p}\left(\mathcal{L}_{\lambda}\right)$.
Then $\mu_{q}\left(\mathcal{F}\right)\le\mu_{q}\left(\mathcal{L}_{\lambda}\right)$.
\end{cor}
\begin{proof}
Let $\mathcal{F}\subset\mathcal{P}\left(\left[n\right]\right)$ be a monotone increasing family with $\mu_p\left(\mathcal{F}\right)\leq\mu_{p}\left(\mathcal{L}_{\lambda}\right)$, and suppose for a contradiction that $\mu_{q}\left(\mathcal{F}\right) > \mu_{q}\left(\mathcal{L}_{\lambda}\right)$. Choose $\lambda'> \lambda$ such that $\mu_{q}\left(\mathcal{F}\right) >\mu_{q}\left(\mathcal{L}_{\lambda'}\right)$. By (\ref{eq:limit-measures-2}), there exists $m\geq n$ such that
$$\mu_p^{(m)}(\mathcal{L}_{\lambda'}\cap \mathcal{P}([m])) > \mu_{p}\left(\mathcal{L}_{\lambda}\right),\quad \mu_q^{(m)}(\mathcal{L}_{\lambda'}\cap \mathcal{P}([m])) > \mu_{q}\left(\mathcal{L}_{\lambda}\right).$$
Define $\mathcal{L}' = \mathcal{L}_{\lambda'}\cap \mathcal{P}([m]) \subset \mathcal{P}([m])$; then
$$\mu_p(\mathcal{L}') > \mu_{p}\left(\mathcal{L}_{\lambda}\right),\quad \mu_q(\mathcal{L}') > \mu_{q}\left(\mathcal{L}_{\lambda}\right).$$
Now, for any family $\mathcal{G}\subset \mathcal{P}\left(\left[n\right]\right)$
and $N \geq n$, we define
$$\mathcal{G}_{N} : = \{A \subset [N]:\ A \cap [n] \in \mathcal{G}\}.$$
It is easily checked that for any $\mathcal{G} \subset \mathcal{P}([n])$ and any $p \in (0,1)$, we have
$$\mu_p(\mathcal{G}) = \lim_{N \to \infty} \frac{|(\mathcal{G}_{N})^{(\lfloor pN \rfloor)}|}{{N \choose \lfloor pN \rfloor}}.$$
In particular, we have
$$\mu_q(\mathcal{F}) = \lim_{N \to \infty} \frac{|(\mathcal{F}_{N})^{(\lfloor qN \rfloor)}|}{{N \choose \lfloor qN \rfloor}}$$
and
$$\mu_q(\mathcal{L}') = \lim_{N \to \infty} \frac{|(\mathcal{L}'_{N})^{(\lfloor qN \rfloor)}|}{{N \choose \lfloor qN \rfloor}}.$$
Since $\mu_q(\mathcal{F}) > \mu_q(\mathcal{L}_{\lambda'}) \geq \mu_q(\mathcal{L}')$, for all $N$ sufficiently large (depending on $q$ and $m$), we have
$$|(\mathcal{F}_{N})^{(\lfloor qN \rfloor)}| > |(\mathcal{L}'_{N})^{(\lfloor qN \rfloor)}|.$$
Since $\mathcal{L}'_{N}$ depends only upon the coordinates in $[m]$, and is a lexicographic family, it follows from Corollary \ref{cor:KK} that if $N$ is sufficiently large depending on $p,q$ and $m$, then
$$|(\mathcal{F}_{N})^{(\lfloor pN \rfloor)}| > |(\mathcal{L}'_{N})^{(\lfloor pN \rfloor)}|.$$
Since
$$\mu_p(\mathcal{F}) = \lim_{N \to \infty} \frac{|(\mathcal{F}_{N})^{(\lfloor pN \rfloor)}|}{{N \choose \lfloor pN \rfloor}}$$
and
$$\mu_p(\mathcal{L}') = \lim_{N \to \infty} \frac{|(\mathcal{L}'_{N})^{(\lfloor pN \rfloor)}|}{{N \choose \lfloor pN \rfloor}},$$
it follows that $\mu_p(\mathcal{F}) \geq \mu_p(\mathcal{L}') > \mu_p(\mathcal{L}_{\lambda})$, a contradiction.
\end{proof}
We also need the well-known lemma of Margulis \cite{Margulis} and Russo \cite{Russo}.
\begin{lem}[Margulis, Russo]
\label{lem:MR}Let $\mathcal{F}\subset\mathcal{P}\left(\left[n\right]\right)$ be a monotone increasing family. Then $\frac{d\mu_{p}\left(\mathcal{F}\right)}{dp}=I^{p}\left[\mathcal{F}\right]$.
\end{lem}
The Margulis-Russo lemma implies that $\frac{d}{dp}\mu_{p}\left(\mathcal{L}_{\lambda}\right)=I^{p}\left[\mathcal{L}_{\lambda}\right]$
for all $\lambda$ of the form $\frac{i}{2^{j}}$. It follows from (\ref{eq:limit-measures-2}) and (\ref{eq:limit-influences-2}) that this formula holds for all $\lambda\in\left(0,1\right)$.
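The identity $\frac{d\mu_{p}}{dp}=I^{p}$ is easy to confirm numerically on small examples; the following Python sketch (the monotone example function and the finite-difference step are ours) compares a central difference of $\mu_p$ with a direct computation of $I^p$:
\begin{verbatim}
import itertools

n = 3
cube = list(itertools.product((0, 1), repeat=n))
f = lambda x: 1 if (x[0] == 1 or x[1] + x[2] == 2) else 0  # monotone

def mu(p):
    return sum(p**sum(x) * (1-p)**(n-sum(x)) for x in cube if f(x))

def I(p):
    return sum(p**sum(x) * (1-p)**(n-sum(x))
               for i in range(n) for x in cube
               if f(x) != f(x[:i] + (1-x[i],) + x[i+1:]))

p, h = 0.37, 1e-6
print((mu(p+h) - mu(p-h)) / (2*h), I(p))  # the two values should agree
\end{verbatim}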
\medskip
\noindent Now we are ready to prove Theorem~\ref{thm:Monotone}.
\begin{proof}[Proof of Theorem~\ref{thm:Monotone}]
Let $\mathcal{F}$ be a family that satisfies the assumptions of the theorem. Note that by Lemma \ref{lem:MR}, for any $p_0 \in (0,1)$, we have $\frac{d}{dp}\mu_{p}\left(\mathcal{L}_{\lambda}\right)|_{p=p_0}=I^{p_0}\left[\mathcal{L}_{\lambda}\right]$ and
$\frac{d}{dp}\mu_{p}(\mathcal{F})|_{p=p_0}=I^{p_0}[\mathcal{F}]$. By Corollary \ref{thm:KK cor}, $\mu_{q}\left(\mathcal{F}\right)\le\mu_{q}\left(\mathcal{L}_{\lambda}\right)$
for any $q\le p$. Therefore,
\[
I^{p}\left[\mathcal{F}\right]=\lim_{q\to p}\frac{\mu_{p}\left(\mathcal{F}\right)-\mu_{q}\left(\mathcal{F}\right)}{p-q}\ge\lim_{q\to p}\frac{\mu_{p}\left(\mathcal{L}_{\lambda}\right)-\mu_{q}\left(\mathcal{L}_{\lambda}\right)}{p-q}=I^{p}\left[\mathcal{L}_{\lambda}\right],
\]
as desired.
\end{proof}
\subsection*{The deduction of Theorem~\ref{thm:edge-iso} from Theorem \ref{thm:Monotone}}
This is a standard (and short) `monotonization' argument. We include it for completeness.
For $i \in [n]$, the {\em $i$th monotonization operator} $\mathcal{M}_i:\mathcal{P}\left(\left[n\right]\right) \to \mathcal{P}\left(\left[n\right]\right)$ is defined as follows. (See e.g. \cite{KKL}.) If $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$, then for each $S \in \mathcal{F}$ we define
$$\mathcal{M}_i(S) = \begin{cases} S \cup \{i\} & \text{if } S \in \mathcal{F},\ i \notin S \text{ and } S \cup \{i\} \notin \mathcal{F},\\
S & \text{otherwise},\end{cases}$$
and we define $\mathcal{M}_i(\mathcal{F}) = \{\mathcal{M}_i(S):\ S \in \mathcal{F}\}$. It is well-known, and easy to check, that for any $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$, we have $|\mathcal{M}_i(\mathcal{F})| = |\mathcal{F}|$ and
$$I^{1/2}_{j}\left[\mathcal{M}_{i}\left(\mathcal{F}\right)\right]\le I^{1/2}_{j}\left[\mathcal{F}\right]\quad \forall j \in [n];$$
summing over all $j$ we obtain
$$I^{1/2}\left[\mathcal{M}_{i}\left(\mathcal{F}\right)\right]\le I^{1/2}\left[\mathcal{F}\right].$$
Observe that the $\mathcal{M}_{i}$'s transform a family to a monotone increasing one, in the sense that for any $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$, the family $\mathcal{G}:=\mathcal{M}_{1}\circ\cdots\circ\mathcal{M}_{n}\left(\mathcal{F}\right)$ is monotone increasing; note also that $|\mathcal{G}| = |\mathcal{F}|$ and $I^{1/2}[\mathcal{G}]\le I^{1/2}[\mathcal{F}]$.
Now let $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$, and let $\mathcal{L}_{\lambda} \subset \mathcal{P}\left(\left[n\right]\right)$ be a lexicographic family with $|\mathcal{L}_{\lambda}|=|\mathcal{F}|$. Let $\mathcal{G} = \mathcal{M}_{1}\circ\cdots\circ\mathcal{M}_{n}\left(\mathcal{F}\right)$; then $|\mathcal{G}|=|\mathcal{F}|$, $I^{1/2}[\mathcal{G}] \leq I^{1/2}[\mathcal{F}]$, and $\mathcal{G}$ is monotone increasing. By Theorem~\ref{thm:Monotone}, we have $I^{1/2}[\mathcal{G}] \geq I^{1/2}[\mathcal{L}_{\lambda}]$, and therefore $I^{1/2}[\mathcal{F}] \geq I^{1/2}[\mathcal{G}] \geq I^{1/2}[\mathcal{L}_{\lambda}]$, proving Theorem \ref{thm:edge-iso}.
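The three facts used above --- that $|\mathcal{M}_i(\mathcal{F})|=|\mathcal{F}|$, that the total influence does not increase, and that one pass of $\mathcal{M}_{1},\ldots,\mathcal{M}_{n}$ yields a monotone increasing family --- can be checked directly on random families. A minimal Python sketch, assuming the uniform measure (the helper names are ours):
\begin{verbatim}
import itertools, random

n = 4
cube = list(itertools.product((0, 1), repeat=n))

def I_half(S):
    # total influence at p = 1/2
    return sum(1 for x in cube for i in range(n)
               if (x in S) != ((x[:i] + (1-x[i],) + x[i+1:]) in S)) / 2**n

def monotonize(S, i):   # the operator M_i, acting on indicator sets
    return {x[:i] + (1,) + x[i+1:]
            if x[i] == 0 and x[:i] + (1,) + x[i+1:] not in S
            else x for x in S}

def is_monotone(S):
    return all(x[:i] + (1,) + x[i+1:] in S
               for x in S for i in range(n) if x[i] == 0)

random.seed(0)
F = set(random.sample(cube, 7))
G = F
for i in range(n):      # one pass of M_1, ..., M_n suffices
    G = monotonize(G, i)
print(len(F) == len(G), is_monotone(G), I_half(G) <= I_half(F))
\end{verbatim}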
\begin{comment}
\begin{proof}[Second proof of Theorem~\ref{thm:edge-iso}]
This proof is by induction on $n$. We will need the following lemma.
\begin{lem}
\label{lem:lem}Let $\mu^{-},\mu^{+}\le1$, and write $\mu=\frac{1}{2}\mu^{+}+\frac{1}{2}\mu^{-}$.
Then:
\[
I\left(\mathcal{L}_{\mu}\right)\le\frac{1}{2}I\left(\mathcal{L}_{\mu^{-}}\right)+\frac{1}{2}I\left(\mathcal{L}_{\mu^{+}}\right)+\left|\mu^{+}-\mu^{-}\right|.
\]
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:lem}]
Suppose w.l.o.g that $\mu^{-}\le\mu^{+}$ and write $\mu^{-}=\frac{i_{0}}{2^{n-1}},\mu^{+}=\frac{i_{1}}{2^{n-1}}$.
We let $\mathcal{L}_{\mu^{-},\mu^{+}}^{n}\subset\mathcal{P}\left(\left[n\right]\right)$ be the unique family
whose slice $\left(\mathcal{L}_{\mu^{-},\mu^{+}}^{n}\right)_{\left\{ 1\right\} }^{\left\{ 1\right\} }$ (respectively $\left(\mathcal{L}_{\mu^{-},\mu^{+}}^{n}\right)_{\left\{ 1\right\} }^{\varnothing}$)
consists of the largest $i_{1}$ (respectively $i_{0}$) sets in the lexicographic ordering
on $\mathcal{P}\left(\left[2,\ldots,n\right]\right)$. Since $\mathcal{L}_{\mu^{-},\mu^{+}}^{n}$
is monotone increasing and has measure $\mu$, Theorem \ref{thm:Monotone} implies that
\[
I\left[\mathcal{L}_{\mu^{-},\mu^{+}}^{n}\right]=\frac{1}{2}I\left(\mathcal{L}_{\mu^{-}}\right)+\frac{1}{2}I\left(\mathcal{L}_{\mu^{+}}\right)+\mu^{+}-\mu^{-}\ge I\left[\mathcal{L}_{\mu}\right],
\]
as asserted.
\end{proof}
Theorem~\ref{thm:edge-iso} follows from Lemma \ref{lem:lem} by a standard inductive argument, as follows.
The case $n=0$ is trivial. Write $\mu^{+}=\mu\left(\mathcal{F}_{\left\{ 1\right\} }^{\left\{ 1\right\} }\right),\mu^{-}=\mu\left(\mathcal{F}_{\left\{ 1\right\} }^{\varnothing}\right)$, and $\mu=\frac{1}{2}\mu^{+}+\frac{1}{2}\mu^{-}$. Also write $I^{+}=I\left[\mathcal{F}_{\left\{ 1\right\} }^{\left\{ 1\right\} }\right]$,
and $I^{-}=I\left[\mathcal{F}_{\left\{ 1\right\} }^{\varnothing}\right]$.
Then by Lemma \ref{lem:lem} and by the induction hypothesis, we have
\begin{align*}
I\left(\mathcal{F}\right) & \ge\left|\mu^{+}-\mu^{-}\right|+\frac{1}{2}I^{+}+\frac{1}{2}I^{-} \ge\left|\mu^{+}-\mu^{-}\right|+\frac{1}{2}I\left[\mathcal{L}_{\mu^{+}}\right]+\frac{1}{2}I\left[\mathcal{L}_{\mu^{-}}\right] \ge I\left[\mathcal{L}_{\mu}\right].
\end{align*}
This completes the proof.
\end{proof}
\end{comment}
\section{Open Problems}
\label{sec:open}
A natural open problem is to obtain a $p$-biased edge-isoperimetric inequality for arbitrary (i.e., not necessarily monotone increasing) families, which is sharp for all values of the $p$-biased measure. This is likely to be difficult, as there is no nested sequence of extremal families. Indeed, it is easily checked that if $p < 1/2$, the unique families $\mathcal{F} \subset \mathcal{P}\left(\left[n\right]\right)$ with $\mu_p(\mathcal{F}) = p$ and minimal $I_p[\mathcal{F}]$ are the dictatorships, whereas the unique families $\mathcal{G} \subset \mathcal{P}\left(\left[n\right]\right)$ with $\mu_p(\mathcal{G}) = 1-p$ and minimal $I_p[\mathcal{G}]$ are the antidictatorships; clearly, none of the former are contained in any of the latter.
Another natural problem is to obtain a sharp stability version of our `full' biased edge isoperimetric inequality for monotone increasing families (i.e.,
Theorem~\ref{thm:Monotone}). This would generalise (the monotone case of) Theorem \ref{thm:full-stability}, our sharp stability version of the `full' edge isoperimetric inequality. It seems likely that the proof in \cite{LOL} can be extended to the biased case using the methods of the current paper, but the resulting proof is expected to be rather long and complex.
Finally, it is highly likely that the values of the absolute constants in Theorem \ref{thm:skewed-iso-stability}, and of the constants depending upon $\eta$ in Theorem \ref{thm:mon-iso-stability}, could be substantially improved. Note for example that Theorem \ref{thm:skewed-iso-stability} applies only to Boolean functions whose total influence is very close to the minimum possible, namely, for $pI^{p}[f]\leq\mu_{p}[f]\left(\log_{p}(\mu_{p}[f])+\epsilon\right)$, where $\epsilon\leq c_{0}/\ln(1/p)$ and $c_0$ is very small. It is likely that the conclusion holds under the weaker assumption $\epsilon < 1/\ln(1/p)$. Such an extension is not known even for the uniform measure. (See, for example, the conjectures in \cite{Ellis}.)
\section{Introduction}
\IEEEPARstart{T}{he} quest of controlling electromagnetic radiation pushes modern science and engineering into uncharted territories, spanning wavelengths from a few millimeters down to hundreds of nanometers. These novel technological achievements exploit concepts and phenomena that occur either through the functionalities of a single scatterer~\cite{Lannebere2015a,Alu2009a} or emerge as collective effects induced by the building blocks' properties~\cite{PhysRevX.5.031005,Jahani2016,Mazor2017}. In every case, the study of the fundamental properties of individual scatterers plays a crucial role in the overall design process.
One of the most studied canonical problems facilitating this purpose is the electromagnetic scattering by a spherical scatterer; a long-studied benchmarking platform providing insights into the nature of scattering phenomena~\cite{Schebarchov2013,Shore2015,Fan2014,Liberal2014,Lukyanchuk2010}. In this article, we cast light on the special case of a small core-shell spherical scatterer, describing its scattering attributes and resonant peculiarities.
Let us begin by assuming a monochromatic, linearly polarized plane wave\footnote{the time-harmonic convention $e^{-i\omega t}$ is used} impinging on a spherical, composite obstacle, as can be seen in Fig.~\ref{fig:structure}. Briefly, the field can be decomposed into a sum of spherical TE and TM field harmonics in every domain (also known as H- and E-waves)~\cite{stratton2007electromagnetic}. By applying the appropriate boundary conditions and solving the formulated problem, a set of field components is obtained, described by the Mie coefficients~\cite{stratton2007electromagnetic}. These coefficients quantify the amplitude and behavior of each harmonic. Our efforts will be mainly concentrated on studying the characteristics of the external scattering coefficients, denoted as $a_n$ and $b_n$.
\begin{figure}[!]
\centering
\includegraphics[width=0.45\textwidth]{2layered-sphere2.pdf}~
\caption{Problem setup: a core-shell sphere immersed in a host medium with internal and external radius $a$ and $b$, under plane wave illumination. Region 1 corresponds to the core while region 2 to the shell region.}
\label{fig:structure}
\end{figure}
The $a_n$ coefficients account for the electric-type multipole contributions, while the $b_n$ account for the magnetic type. Both are expressed as ratios of spherical Bessel and Riccati--Bessel functions, found readily in several classical textbooks~\cite{kerker1971scattering,bohren2008absorption} and articles~\cite{Shore2015}. For the magnetically inert case ($\mu_\text{host}=\mu_1=\mu_2=1$) the coefficients are affected by the (relative) permittivities of the host medium, $\varepsilon_{\text{host}}$, the shell material, $\varepsilon_{2}$, and the core material, $\varepsilon_{1}$, as well as by the core-to-shell radius ratio $\eta=\frac{a}{b}$ and the internal and external size parameters $x=ka$ and $y=kb$, with $k$ being the host-medium wavenumber (see Fig.~\ref{fig:structure}).
Both $a_n$ and $b_n$ exhibit maxima and minima for a variety of material and morphological cases~\cite{bohren2008absorption}. A special type of resonance, occurring for negative permittivity values of the shell, is the localized surface plasmonic resonance (LSPR, or plasmonic resonance). LSPRs occur naturally for metals in the visible--near-infrared regime due to the collective oscillations of the free electrons~\cite{kreibig1995optical}. These electric charge oscillations are characterized as electric multipole resonances, visible in the spectrum of the $a_n$ coefficients. Our main focus will be to study the qualitative behavior of these resonances by studying the electric Mie coefficients.
{
This work is presented in the following way. First, a simple Taylor expansion of the $a_n$ coefficients is given in Section~\ref{sec:static}, revealing the electrostatic aspects of the enabled resonances. In this classical perspective we present two limiting cases, thin and thick shells, introducing their effects on the scattering process. In Section~\ref{sec:core} we expand the discussion on how the core material affects the electrostatic response. Section~\ref{sec:Dynamic} presents the newly introduced Mie--Pad\'e expansion for extracting intuitive results on the impact of the size-dependent dynamical mechanisms.
}
{
The depolarization and radiative reaction effects will be presented, with emphasis on the core material effects. The methodology described in the aforementioned sections is general and can be readily used for studying the behavior of other types of resonances, such as all-dielectric resonances~\cite{Kuznetsov2016}, or other canonical shapes, e.g., core-shell cylinders.}
\section{Resonant properties of core-shell plasmonic structures}\label{sec:static}
To begin with, the MacLaurin (Taylor at $y=0$) series expansion of the first electric Mie coefficient ($a_1$) with respect to the size parameter $y=kb$ reads
\begin{equation} \label{eq:dpolar}
\begin{split}
&a_1^T=-i\frac{2}{3} \times \\
&\frac{(\varepsilon_2-1)(2\varepsilon_2+\varepsilon_1)-\eta^3(\varepsilon_2-\varepsilon_1)(2\varepsilon_2+1)}{(\varepsilon_2+2)(2\varepsilon_2+\varepsilon_1)-2\eta^3(\varepsilon_2-\varepsilon_1)(\varepsilon_2-1)}y^3+\mathcal{O}\left(y^5\right)
\end{split}
\end{equation}
In Fig.~\ref{fig:sweep} we can see the extinction efficiency
{
($Q_\text{ext}=\frac{2}{y^2}\sum^\infty_{n=1}\left(2n+1\right)\Re\{a_n+b_n\}$, where $a_n$ and $b_n$ are the electric and magnetic multipole coefficients, respectively; see \cite{bohren2008absorption})} spectrum for the case of a small ($y=0.01$), hollow ($\varepsilon_1=1$) core-shell sphere as a function of the shell permittivity and the radius ratio $\eta$, where at least two resonances are visible.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{sweep.png}
\caption{The extinction efficiency spectrum as a function of the (lossless) shell permittivity and the radius ratio $\eta$ for a small, hollow ($\varepsilon_\text{core}=1$) sphere ($y=0.01$). For the plasmonic case ($b_n\approx0$) the extinction efficiency is $Q_{\text{ext}}\propto\sum_n^\infty \Re\{a_n\}$, with $n=1,2,...$~. The bright lines correspond to the symmetric (left line) and antisymmetric (right line) dipole plasmonic resonances, while higher-order multipoles are not visible. The color scale is logarithmic and normalized for better visualization of the resonances; the color bar is omitted.}
\label{fig:sweep}
\end{figure}
It is clear that the Mie coefficient, in its Rayleigh limit (Eq.~(\ref{eq:dpolar})), is proportional to the volume-normalized static dipole polarizability, i.e., $a_{\text{Mie}}=-i\frac{2}{9}y^3\alpha_{\text{static}}$ with $\alpha_\text{static}=3\frac{(\varepsilon_2-1)(2\varepsilon_2+\varepsilon_1)-\eta^3(\varepsilon_2-\varepsilon_1)(2\varepsilon_2+1)}{(\varepsilon_2+2)(2\varepsilon_2+\varepsilon_1)-2\eta^3(\varepsilon_2-\varepsilon_1)(\varepsilon_2-1)}$~\cite{sihvola1999electromagnetic}, indicating that the full electrodynamic Mie scattering model collapses to the electrostatic (Rayleigh) scattering description for vanishingly small spheres. Therefore, many physically intuitive observations are accessible through the analysis of the electrostatic model. From a mathematical perspective, Eq.~(\ref{eq:dpolar}) is nothing but a rational function exhibiting zeros and poles. Physically, this function describes a system whose critical points correspond to scattering minima and maxima. The objective of this study is to extract information about the positions and the peculiarities of these points.
We will start by examining the simplest case of a hollow ($\varepsilon_1=1$), dielectric shell. The expanded Mie coefficient reads
\begin{equation}\label{eq:static_hollow}
a_1^T\approx i\frac{2}{3}\frac{(\varepsilon_2-1)(2\varepsilon_2+1)(\eta^3-1)}{(\varepsilon_2+2)(2\varepsilon_2+1)-2\eta^3(\varepsilon_2-1)^2}y^3+\mathcal{O}\left(y^5\right)
\end{equation}
with pole condition (denominator zeros) described by
\begin{equation}\label{eq:static}
\varepsilon^{\pm}=\frac{5+4\eta^3\mp3\sqrt{1+8\eta^3}}{4\left(\eta^3-1\right)}
\end{equation}
The above equation expresses the necessary condition for a pole to occur, namely the first electric dipole-like plasmonic resonances of a nanoshell~\cite{Averitt1999}. The notation $\varepsilon^{\pm}$ follows the plasmon hybridization model~\cite{Prodan2003}: $\varepsilon^-$ gives the permittivity value of the symmetric (bonding) resonance and $\varepsilon^+$ that of the antisymmetric (antibonding) one. The symmetric resonance exhibits a symmetric surface charge distribution between the inner and outer interfaces, with an almost constant electric field in the inner regions, the field residing mainly on the outer surfaces. This resonance occurs at lower permittivity values (energies) than the antisymmetric one, for which the field is mostly confined to the core region and the inner surfaces~\cite{Prodan2003}.
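These pole locations can also be probed numerically straight from the Rayleigh-limit polarizability. The following Python sketch (illustrative only; the small imaginary part added to $\varepsilon_2$ is an artificial regularizing loss, and the grid is arbitrary) scans $|\alpha_\text{static}|$ for a hollow core and a fixed radius ratio, and compares the located peaks with the exact pole condition of Eq.~(\ref{eq:static}):
\begin{verbatim}
import numpy as np

def alpha_static(e2, e1, eta):
    num = (e2 - 1)*(2*e2 + e1) - eta**3*(e2 - e1)*(2*e2 + 1)
    den = (e2 + 2)*(2*e2 + e1) - 2*eta**3*(e2 - e1)*(e2 - 1)
    return 3*num/den

eta = 0.5
e2 = np.linspace(-4, 0, 40001) + 1e-3j   # small artificial loss
mag = np.abs(alpha_static(e2, 1.0, eta))
idx = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
disc = np.sqrt(1 + 8*eta**3)
print(e2.real[idx])                                 # located peaks
print((5 + 4*eta**3 - 3*disc)/(4*(eta**3 - 1)),     # eps^+ (antisym.)
      (5 + 4*eta**3 + 3*disc)/(4*(eta**3 - 1)))     # eps^- (symmetric)
\end{verbatim}
The two located peaks reproduce the two bright resonance lines of Fig.~\ref{fig:sweep} at the chosen $\eta$.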
Equation~(\ref{eq:static}) reveals that the required permittivity value is a function of the radius ratio. An even more intuitive picture can be extracted by expanding Eq.~(\ref{eq:static}) further and examining its behavior in two limiting cases: thick and thin shells. For the case of a thick shell, i.e., $\eta\rightarrow0$, we obtain two expressions, viz.,
\begin{equation}\label{eq:static_thick1}
\varepsilon^+=-\frac{1}{2}+\frac{3}{2}\eta^3+\mathcal{O}\left( \eta^6 \right)
\end{equation}
\begin{equation}\label{eq:static_thick2}
\varepsilon^-=-2-6\eta^3+\mathcal{O}\left( \eta^9 \right)
\end{equation}
These expressions describe a volume-dependent ($\eta^3$) behavior for the thick-shell resonances. {Assuming a thick-shell expansion of Eq.~(\ref{eq:static_hollow}), we obtain the following expression
\begin{equation}
a_1^T\approx-i\frac{2y^3}{3}+i\frac{2y^3}{\varepsilon_2+2}\left(1+\eta^3-\frac{6\eta^3}{\varepsilon_2+2}\right)+i\frac{2y^3\eta^3}{2\varepsilon_2+1}
\end{equation}
One notices that the first term is affected only by the size parameter; the second term carries the symmetric resonance condition $\varepsilon_2=-2$, part of which is independent of the thickness $\eta$; and the third term carries the antisymmetric condition $\varepsilon_2=-\frac{1}{2}$ with a thickness dependence. It is easy to see that in the limiting case $\eta\rightarrow0$ the antisymmetric term vanishes~\cite{sihvola2006character}.
}
The same expansion can be used for very thin shells ($\eta\rightarrow1$). For simplicity we define a new parameter, the complementary ratio $\eta_c=1-\eta \in (0,1)$; vanishingly thin shells are naturally described by $\eta_c\rightarrow0$. We are now ready to express the pole condition of Eq.~(\ref{eq:static}) in terms of the complementary ratio as
\begin{equation}\label{eq:static_thin1}
\varepsilon^+=-\frac{2}{3}\eta_c+\mathcal{O}\left( \eta_c^2 \right)
\end{equation}
\begin{equation}\label{eq:static_thin2}
\varepsilon^-=\frac{1}{2}-\frac{3}{2\eta_c}-\frac{1}{3}\eta_c+\mathcal{O}\left( \eta_c^2 \right)
\end{equation}
For $\eta_c\rightarrow0^+$ these resonant values approach $0$ and $-\infty$ respectively, which is an expected result. This linear behavior with respect to $\eta_c$ is qualitatively different from the thick-shell case where a volume dependence is observed.
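The accuracy of these limiting expressions is easily probed; the Python sketch below (the parameter values are arbitrary) evaluates the exact pole condition of Eq.~(\ref{eq:static}) against the thick-shell expansions of Eqs.~(\ref{eq:static_thick1})--(\ref{eq:static_thick2}) and the thin-shell expansions of Eqs.~(\ref{eq:static_thin1})--(\ref{eq:static_thin2}):
\begin{verbatim}
import numpy as np

def poles(eta):   # exact pole condition, hollow core
    d = np.sqrt(1 + 8*eta**3)
    return ((5 + 4*eta**3 - 3*d)/(4*(eta**3 - 1)),    # eps^+
            (5 + 4*eta**3 + 3*d)/(4*(eta**3 - 1)))    # eps^-

for eta in (0.05, 0.1, 0.9, 0.95):
    ep, em = poles(eta)
    ec = 1 - eta
    print("eta=%.2f exact (%+.4f, %+.4f)" % (eta, ep, em),
          "thick (%+.4f, %+.4f)" % (-0.5 + 1.5*eta**3, -2 - 6*eta**3),
          "thin (%+.4f, %+.4f)" % (-2*ec/3, 0.5 - 1.5/ec - ec/3))
\end{verbatim}
As expected, the thick-shell (thin-shell) expansions match the exact values closely for small $\eta$ (small $\eta_c$).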
The expansion stratagem demonstrated above can also be used for higher-order multipole resonances. Since the small-size expansions of the Mie coefficients boil down to the electrostatic polarizabilities, we generalize the above results following the static multipolarizabilities (or $n$-polarizabilities) extracted from, e.g.,~\cite{Sihvola1988}. {A rather physical connection between the electrostatic limit and the plasmon hybridization is exposed in Appendix~\ref{sec:AppA}, especially for the thin- and thick-core limiting cases.}
As a rule, the $a_n$ Mie coefficients in the static limit can be written as functions of the pole order $n$ through the following expression
\begin{widetext}
\begin{equation}
a_n= c_n \frac{((n+1) \varepsilon_2+n \varepsilon_1) (\varepsilon_2-1) -\eta ^{2 n+1} ((n+1) \varepsilon_2-n) (\varepsilon_2-\varepsilon_1)}{(n \varepsilon_2+n+1) ((n+1) \varepsilon_2+n \varepsilon_1)-n(n+1)\eta ^{2 n+1} (\varepsilon_2-1) (\varepsilon_2-\varepsilon_1)}
\end{equation}
\end{widetext}
where $$c_n=-i\frac{n}{\left(2n-1\right)!!\left(2n+1\right)!!}y^{2n+1}$$ for $n=1,2,\dots$, and the higher-order terms ($\mathcal{O}\left(y^{2n+3}\right)$) are truncated. From this relation, thick shells ($\eta\rightarrow0$) exhibit the following distribution (hollow core, $\varepsilon_1=1$)
\begin{equation}\label{eq:static2}
\varepsilon^+=-\frac{n}{n+1}+\frac{n(2n+1)}{n+1}\eta^{2n+1}
\end{equation}
\begin{equation}\label{eq:static1}
\varepsilon^-=-\frac{n+1}{n}-\frac{(n+1)(2n+1)}{n}\eta^{2n+1}
\end{equation}
The first term of Eq.~(\ref{eq:static1}) corresponds to the known plasmonic distribution of a solid sphere, while the first term of Eq.~(\ref{eq:static2}) follows the static resonances of the complementary problem, i.e., a hollow spherical cavity. Both resonances follow an $\eta^{2n+1}$ distribution with respect to the thickness. For $n\rightarrow\infty$ both resonances converge to $\varepsilon=-1$, implying a tendency of the higher-order multipoles to accumulate at this value. However, higher multipoles exhibit very sharp linewidths, proportional to $y^{2n+1}$, and their effects are difficult to visualize~\cite{Tzarouchis2016c}.
The thin-shell analysis ($\eta_c\rightarrow0$) exposes a linear (and inverse linear) distribution for all multipoles, viz.,
\begin{equation}
\varepsilon^+=-\frac{n\left(n+1\right)}{2n+1}\eta_c
\end{equation}
\begin{equation}
\varepsilon^-=\frac{1}{n+1}-\frac{2n+1}{n\left(n+1\right)\eta_c}-\frac{n^2+n+1}{3\left(2n+1\right)}\eta_c
\end{equation}
suggesting a character different from that of the thick-shell case. This result can be readily used when designing thick versus thin shells. Note that the symmetric resonances are more sensitive ($\propto\frac{1}{\eta_c}$) to small thickness changes than the antisymmetric ones ($\propto\eta_c$).
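As a numerical illustration of the accumulation towards $\varepsilon=-1$, the short sketch below (our own transcription of Eqs.~(\ref{eq:static2}) and~(\ref{eq:static1})) evaluates the thick-shell $n$-pole resonances for increasing order:

```python
def multipole_thick(n, eta):
    """Thick-shell n-pole resonances, Eqs. (static2)-(static1)."""
    eps_p = -n / (n + 1) + n * (2 * n + 1) / (n + 1) * eta ** (2 * n + 1)
    eps_m = -(n + 1) / n - (n + 1) * (2 * n + 1) / n * eta ** (2 * n + 1)
    return eps_p, eps_m

for n in (1, 2, 5, 20):
    eps_p, eps_m = multipole_thick(n, eta=0.2)
    # both branches squeeze towards eps = -1 as n grows
    print(f"n={n:2d}: eps+={eps_p:+.4f}, eps-={eps_m:+.4f}")
```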
\section{Core material effects}\label{sec:core}
So far the radius-ratio dependencies were examined assuming a hollow shell, i.e., with no material contrast between the host and core medium. In this section the dependencies caused by different core material contrasts are discussed by allowing a general material description for the core (region 1), hereinafter denoted as $\varepsilon_1$.
For thick shells we distinguish two particular cases appearing in Eq.~(\ref{eq:dpolar}). This branching occurs at the core value $\varepsilon_1=4$, a critical value where both resonances degenerate to a single resonance for minuscule core-shell ratios, as in Fig.~\ref{fig:core}(c). Note that for the lossless case the scattering spectrum forms a Fano-like resonant profile with a scattering minimum between the symmetric and antisymmetric resonant maxima (Fig.~\ref{fig:core}(e)). This can also be seen as a manifestation of the scattering equivalent of Foster's reactance theorem~\cite{Foster1924,Monticone2013a}. Furthermore, at this core value the two resonances deviate rapidly from each other (Fig.~\ref{fig:core}(c)) for increasing radius ratio.
{
It is interesting to notice that a general rule describing the core permittivity branching can be found, viz.,
\begin{equation}\label{eq:br}
\varepsilon^{\eta}_\text{br}=\left(\frac{n+1}{n}\right)^2
\end{equation}
for every $n$-multipole. Here the superscript denotes the thick-shell case $\eta\rightarrow0$, implying that a different branching condition holds for the thin-shell case. This branching condition reveals that even a small core might have a large impact on the position of the main resonances. Although the necessary core permittivity values can be found empirically with any Mie code implementation, the condition of Eq.~(\ref{eq:br}) gives a rather straightforward rule regarding these degeneracy points.}
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{Asset2.pdf}
\caption{The scattering efficiency spectra for the case of (a) $\varepsilon_1=1$, (b) $\varepsilon_1=2.25$, (c) $\varepsilon_1=4$, and (d) $\varepsilon_1=5$. Note the behavior of the antisymmetric resonance, which exhibits a strong redshift for increasing core values. The extinction efficiency is in dB and the color range has been adjusted accordingly to depict the resonant phenomena. Inset figure (e) depicts the extinction efficiency (in dB) for $\eta=0.1$, where one minimum and two maxima are visible, exhibiting a Fano-like interference lineshape.}
\label{fig:core}
\end{figure*}
Going into the details, we obtain the two distinctive cases derived from Eq.~(\ref{eq:dpolar}), viz.,
\begin{alignat}{2}
\begin{rcases}
\varepsilon^+=-\frac{\varepsilon_1}{2}-\frac{3}{2}\varepsilon_1\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3
\\
\varepsilon^-=-2+6\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3
\end{rcases}
~\text{for }~\varepsilon_1<4
\end{alignat}
and
\begin{alignat}{2}
\begin{rcases}
\varepsilon^+=-2+6\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3
\\
\varepsilon^-=-\frac{\varepsilon_1}{2}-\frac{3}{2}\varepsilon_1\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3
\end{rcases}
~\text{for }~\varepsilon_1>4
\end{alignat}
Both symmetric and antisymmetric resonances mutually flip their attribution beyond the critical value. At the branching value both resonances exhibit a similar behavior, i.e.,
\begin{equation}
\varepsilon^{\pm}=-2\mp3\sqrt{2}\eta^{3/2}-\frac{9}{2}\eta^3
\end{equation}
where for $\eta\rightarrow0$ the two resonances degenerate to the known plasmonic resonant condition of a solid sphere, explaining the origin of this peculiarity. In addition, an interesting phenomenon occurs: the width of the antisymmetric resonance widens progressively as the core permittivity increases up to the value of $4$, as can be seen in Fig.~\ref{fig:core}(a), (b), and (c).
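The branch flip can be verified numerically with the following sketch (a direct transcription of the two thick-shell cases above and of the degenerate $\varepsilon_1=4$ expansion; the function name and test loop are ours):

```python
import math

def thick_core_poles(eps1, eta):
    """Thick-shell dipole poles with core permittivity eps1; the two
    branches flip their attribution across the critical value eps1 = 4."""
    if eps1 == 4:                                   # degenerate branching value
        base = -2.0 - 4.5 * eta**3
        split = 3.0 * math.sqrt(2.0) * eta**1.5
        return base - split, base + split           # (eps+, eps-)
    a = -eps1 / 2 - 1.5 * eps1 * (eps1 + 2) / (eps1 - 4) * eta**3
    b = -2.0 + 6.0 * (eps1 + 2) / (eps1 - 4) * eta**3
    return (a, b) if eps1 < 4 else (b, a)

for eps1 in (2.25, 4, 5):
    print(eps1, thick_core_poles(eps1, eta=0.1))
```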
{For the thin-layer sphere a similar branching occurs at $\varepsilon_1=-2$, resulting in the following resonant conditions
\begin{alignat}{2}\label{eq:test}
\begin{rcases}
\varepsilon^+=-2\frac{\varepsilon_1}{\varepsilon_1+2}\eta_c
\\
\varepsilon^-=\frac{\varepsilon_1}{2}-\frac{\varepsilon_1+2}{2\eta_c}
\end{rcases}
~\text{for }~\varepsilon_1>-2
\end{alignat}
while for $\varepsilon_1<-2$ the behavior changes mutually, similarly to the thick-shell case. In this case the branching condition follows the rule
\begin{equation}
\varepsilon^{\eta_c}_\text{br}=-\frac{n+1}{n}
\end{equation}
which is exactly the position of the plasmonic $n$-multipole resonances of a solid sphere~\cite{sihvola2006character}. In the case of a core material with $\varepsilon_1=-2$, the core-shell sphere degenerates to a solid one: the main symmetric resonance is present while the antisymmetric one vanishes. Small deviations from this value cause the development of antisymmetric resonances, as described by Eq.~(\ref{eq:test}).}
{
Another point of interest is how the core material affects the distribution of the scattering minima (system zeros). For the case of a thick shell the zero condition is $\varepsilon_\text{zero}=-\varepsilon_1\frac{n}{n+1}$ (with a trivial zero at $\varepsilon_\text{zero}=1$), while for the $\eta_c\rightarrow0$ case two zeros are observed, i.e., $\varepsilon_\text{zero}=\frac{\varepsilon_1}{n+1}-\frac{\varepsilon_1-1}{(n+1)\eta_c}$ and $\varepsilon_\text{zero}=\frac{n \varepsilon_1}{\varepsilon_1-1}\eta_c$, respectively.}
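For completeness, a companion sketch (again with our own helper names, transcribing Eq.~(\ref{eq:test}) and the dipole zero conditions above):

```python
def thin_core_poles(eps1, eta_c):
    """Thin-shell dipole poles, Eq. (test); branching at eps1 = -2,
    below which the two branches exchange roles."""
    asym = -2.0 * eps1 / (eps1 + 2) * eta_c           # eps+
    sym = eps1 / 2 - (eps1 + 2) / (2 * eta_c)         # eps-
    return (asym, sym) if eps1 > -2 else (sym, asym)

def dipole_zeros(eps1, eta_c, n=1):
    """Scattering minima: one zero in the thick limit, two in the thin
    limit (eps1 = 1, the trivial hollow case, is excluded here)."""
    thick = -eps1 * n / (n + 1)
    thin = (eps1 / (n + 1) - (eps1 - 1) / ((n + 1) * eta_c),
            n * eps1 / (eps1 - 1) * eta_c)
    return thick, thin

print(thin_core_poles(2.25, 0.05))
print(dipole_zeros(2.25, 0.05))
```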
\section{Dynamic effects: Pad\'e approximants of Mie coefficients}\label{sec:Dynamic}
{
Sections~\ref{sec:static} and~\ref{sec:core} demonstrated the electrostatic properties of the plasmonic resonances, extended to account for core effects. These results are mainly based on the Taylor series expansions of the Mie coefficients, i.e., purely electrostatic expansions that capture several scattering phenomena but fail to predict all size-induced dynamic effects. In this section we investigate how these dynamic effects contribute to the overall scattering response of the composite core-shell sphere.
}
{
The discussion regarding size-induced dynamic effects has a long history, covered by classical textbooks and review papers, e.g.,~\cite{jackson1975electrodynamics,deVries1998}. These dynamic effects are implemented as corrections to the originally obtained electrostatic model, see for instance~\cite{Sipe1974,Meier1983,deVries1998,LeRu2013}. Briefly, two types of dynamical effects are present in the small-size regime: the dynamic depolarization effect~\cite{Meier1983,deVries1998} and the radiation damping (reaction)~\cite{jackson1975electrodynamics,Sipe1974}. The first accounts for the fact that in the small-size limit the incident wave exhibits a constant phase across the scatterer, and hence the field can be considered uniform. This no longer holds for increasing sizes, where small changes in the polarization are introduced, hence the name depolarization correction. The second type of correction is imposed by conservation of energy, i.e., the resonance of a lossless passive system cannot exhibit infinite amplitude values. This energy-conservation requirement is satisfied by inserting a term of $y^3$ order, independent of the material and geometrical characteristics of the scatterer~\cite{Kelly2003,Schebarchov2013,Tretyakov2014}.}
{For the case of a spherical scatterer, these corrections can be extracted in a simple and straightforward manner by expanding the Mie coefficients using Pad\'e approximants. This idea has recently been introduced for the study of the dynamic scattering effects of small spheres~\cite{Tzarouchis2016c,Tzarouchis2017}. The Pad\'e approximants are a special type of rational approximation in which the expanded function is approximated as a ratio of two polynomials $P(x)$ and $Q(x)$ of order $L$ and $M$ (labeled $[L/M]$ for short), respectively. This type of approximation is suitable for functions/systems containing resonant poles and zeros in their parametric space, and it is widely used for approximating a plethora of physical systems~\cite{bender2013advanced}.}
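As a generic illustration of why a rational $[L/M]$ form suits resonant responses, consider the toy example below (entirely ours and unrelated to the core-shell coefficients; it assumes NumPy and SciPy, whose \texttt{scipy.interpolate.pade} routine builds the approximant from Taylor coefficients). A truncated Taylor series fails near a pole, while the Pad\'e approximant built from the very same coefficients tracks it:

```python
import math
import numpy as np
from scipy.interpolate import pade

# Toy resonant function with a pole at x = 1: f(x) = exp(x) / (1 - x).
# Its Taylor coefficients about x = 0 are the partial sums of 1/k!.
taylor = np.cumsum([1.0 / math.factorial(k) for k in range(7)])
p, q = pade(taylor, 3)                    # [3/3] Pade approximant

x = 0.9                                   # close to the pole
exact = math.exp(x) / (1.0 - x)
taylor_val = np.polyval(taylor[::-1], x)  # truncated series underestimates
print(exact, taylor_val, p(x) / q(x))     # the rational form recovers the pole
```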
{
The main findings of this Mie--Pad\'e expansion scheme can be illustrated by the following example, where the $[3/3]$ Pad\'e approximant for the first electric dipole Mie coefficient $a_1$ is given in the following form,
}
\begin{equation}\label{eq:Pade33}
a^{[3/3]}_1=\frac{a^T_1}{1+[y^2]+a^T_1}
\end{equation}
where $a^T_1$ is the static expansion of the Mie coefficient (Eq.~(\ref{eq:dpolar}))~\cite{Tzarouchis2016c} and $[y^2]$ is a second-order term presented in detail in the next section. {
This type of representation captures both dynamic depolarization and radiative damping effects, exhibiting similarities with other proposed models, see for example~\cite{Meier1983,deVries1998,Kelly2003,Carminati2006,LeRu2013}. However, in our analysis these dynamic corrections emerge directly from the Mie--Pad\'e coefficients, a manifestation of the intrinsic character of these mechanisms in the scattering process. The same Pad\'e approximant scheme can be practically used for analyzing the scattering amplitudes of arbitrarily shaped scatterers possessing a resonant spectrum.}
\subsection{Size-induced dynamic depolarization effects}
{
Before analyzing the depolarization effects for a core-shell sphere, it is important to make some connecting remarks between the Mie--Taylor results obtained above and the proposed Mie--Pad\'e expansion. The lowest non-zero Pad\'e expansion that can be found for the first electric dipole term $a_1$ of a core-shell sphere is of order $[3/0]$, and it is identical to the Maclaurin expansion of Eq.~(\ref{eq:dpolar}). The next term with a different expression is the $[3/2]$ Pad\'e approximant, viz.,
\begin{equation}\label{eq:pade32}
a^{[3/2]}_1=\frac{a^T_1}{1+[y^2]}
\end{equation}
where the $[y^2]$ term reads
\begin{equation}\label{eq:y2}
[y^2]=\frac{3}{5}\frac{c_1\eta^6+c_2\eta^5+c_3\eta^3+c_4}{c_5\eta^6+c_6\eta^3+c_7}y^2
\end{equation}
where the exact values of coefficients $c_1$ to $c_7$ can be found in Appendix B.
}
{
Since Eq.~(\ref{eq:y2}) is a complicated expression involving both the thickness and the core permittivity, some physical intuition can be gained by applying the same methodology as above, i.e., expanding the $[y^2]$ term for very thick and very thin shells. For the first case the depolarization term reads
\begin{equation}\label{eq:y2thick}
[y^2]=-\frac{3}{5}\frac{\varepsilon_2-2}{\varepsilon_2+2}y^2
\end{equation}
which is identical to the dynamic depolarization term of the solid sphere~\cite{Tzarouchis2016c}. This result is no surprise, since the thick-shell limit naturally approaches the solid-sphere case. Note that this term is independent of the core material $\varepsilon_1$, implying that for thick shells the depolarization terms are mainly affected by the shell material.
}
{
Interestingly, for thin shells we distinguish two different cases. First, a hollow core ($\varepsilon_1=1$) gives
\begin{equation}
[y^2]=\frac{1}{5}y^2\frac{4\varepsilon_2+1}{2\varepsilon_2+1}
\end{equation}
while for a general core permittivity the $[y^2]$ term is
\begin{equation}
[y^2]=-\frac{3}{5}\frac{\varepsilon_1-2}{\varepsilon_1+2}y^2
\end{equation}
when the core permittivity is $\varepsilon_1$. In the first, hollow case the results indicate that the depolarization effects become negligible for shell permittivity values close to $\varepsilon_2=-1/4$, while a resonance is exhibited for values close to $\varepsilon_2=-1/2$. On the other hand, the presence of a core material affects the dynamic depolarization in a manner similar to Eq.~(\ref{eq:y2thick}), exhibiting zero depolarization for $\varepsilon_1=2$.
}
{
In a similar manner, by analyzing Eq.~(\ref{eq:pade32}) we obtain the following pole conditions for thick shells ($\eta\rightarrow0$)
\begin{equation}\label{eq:pole_depol1}
\varepsilon^+=-\frac{1}{2}-\frac{3}{10}y^2\left(1+\frac{7}{10}\eta^3\right)+\frac{3}{2}\eta^3+...
\end{equation}
\begin{equation}
\varepsilon^-=-2-\frac{12}{5}y^2-6\eta^3
\end{equation}
and for thin shells ($\eta_c\rightarrow0$)
\begin{equation}
\varepsilon^+=-\frac{2}{3}\eta_c-\frac{4}{15}y^2\eta_c
\end{equation}
\begin{equation}\label{eq:pole_depol4}
\varepsilon^-=\frac{1}{2}-\frac{3}{2\eta_c}\left(1+\frac{7}{10}y^2\right)-\frac{1}{10}y^2
\end{equation}
indicating that in both cases the symmetric resonance $\varepsilon^-$ exhibits a stronger red-shift than the antisymmetric one.
}
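A small numerical check of this red-shift statement (a sketch of the four pole conditions above; the helper names are ours):

```python
def depol_shift_thick(eta, y):
    """Shift of the thick-shell poles caused by the [y^2] term:
    Delta(eps) = eps(y) - eps(0), from Eqs. (pole_depol1)-(pole_depol4)."""
    return -0.3 * y**2 * (1 + 0.7 * eta**3), -2.4 * y**2

def depol_shift_thin(eta_c, y):
    """Thin-shell counterparts of the same shift."""
    return -(4.0 / 15.0) * y**2 * eta_c, -1.05 * y**2 / eta_c - 0.1 * y**2

# the symmetric branch (eps-) red-shifts far more strongly in both limits
print(depol_shift_thick(0.1, 0.3))
print(depol_shift_thin(0.1, 0.3))
```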
\subsection{Radiative damping}
{
The next interesting Pad\'e expansion is the $[3/3]$, as can be seen in Eq.~(\ref{eq:Pade33}), where an extra term of $y^3$ order appears, having the same value as the static polarizability shown in Eq.~(\ref{eq:dpolar}). The radiative damping process is an intrinsic mechanism, required by the conservation of energy, affecting the plasmonic (or any other type of) radiative resonances~\cite{jackson1975electrodynamics} of a scatterer/antenna.
}
{
Any scatterer can be seen as an open resonator possessing conditions under which the scattered fields resonate. The scattering poles of such resonators are generally complex, often called \emph{natural frequencies}~\cite{stratton2007electromagnetic}; passivity requires the resonant frequency to be complex. The resonances in our case are studied from the shell permittivity perspective; therefore the appearance of this extra term, i.e., the radiation reaction, results in an imaginary part of the permittivity condition, even for the lossless case. Note that this term dictates the width of the resonance and the amount of losses required for obtaining maximum absorption in the case of a lossy material~\cite{Tretyakov2014,Tzarouchis2016c,osipov2017modern}.}
{
Turning to the pole condition analysis, by neglecting the $[y^2]$ terms of the first $[3/3]$ Pad\'e approximant we obtain the following pole conditions for a thick-shell, hollow ($\varepsilon_1=1$) case:
\begin{equation}\label{eq:rad1}
\varepsilon^+=-\frac{1}{2}+\frac{3}{2}\eta^3-iy^3\eta^3
\end{equation}
\begin{equation}\label{eq:rad2}
\varepsilon^-=-2-6\eta^3-2iy^3\left(1+\eta^3\right)
\end{equation}
The complete expression can be restored by including the dynamic depolarization terms, extracted in Eqs.~(\ref{eq:pole_depol1})--(\ref{eq:pole_depol4}).
}
The dynamic correction term of the symmetric resonance is identical to the solid-sphere case~\cite{Tzarouchis2016c}, where the radiative damping term exhibits a $y^3$ dependence, here with an additional volume ($\eta^3$) thickness contribution. The antisymmetric resonance exhibits a similar volume dependence, albeit of slightly different form, vanishing for very small ratio values.
Similarly, for the thin-shell case we have
\begin{equation}\label{eq:rad3}
\varepsilon^+=-\frac{2}{3}\eta_c-\frac{2}{9}i\eta_cy^3
\end{equation}
\begin{equation}\label{eq:rad4}
\varepsilon^-=\frac{1}{2}-\frac{3}{2\eta_c}-i\frac{y^3}{\eta_c}
\end{equation}
where the imaginary terms depend linearly ($\varepsilon^+$) and inverse-linearly ($\varepsilon^-$) on the complementary ratio.
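The complex-valued pole conditions are straightforward to evaluate; the sketch below (our transcription of Eqs.~(\ref{eq:rad1})--(\ref{eq:rad4})) returns the poles as complex numbers, with the imaginary parts playing the role of the linewidth terms:

```python
def radiative_poles_thick(eta, y):
    """Hollow thick-shell dipole poles, Eqs. (rad1)-(rad2)."""
    eps_p = -0.5 + 1.5 * eta**3 - 1j * y**3 * eta**3
    eps_m = -2.0 - 6.0 * eta**3 - 2j * y**3 * (1 + eta**3)
    return eps_p, eps_m

def radiative_poles_thin(eta_c, y):
    """Thin-shell counterparts, Eqs. (rad3)-(rad4)."""
    eps_p = -(2.0 / 3.0) * eta_c - (2.0 / 9.0) * 1j * eta_c * y**3
    eps_m = 0.5 - 1.5 / eta_c - 1j * y**3 / eta_c
    return eps_p, eps_m

for eps in (*radiative_poles_thick(0.2, 0.3), *radiative_poles_thin(0.2, 0.3)):
    print(f"{eps.real:+.4f}{eps.imag:+.5f}j")
```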
All the imaginary terms in Eqs.~(\ref{eq:rad1}),~(\ref{eq:rad2}),~(\ref{eq:rad3}), and~(\ref{eq:rad4}) reveal a very interesting result regarding the radiative damping process: a hollow core-shell structure exhibits an extra degree of freedom for engineering the scattering behavior. This can be used for reverse engineering purposes, e.g., searching for the most suitable material or determining whether core-shell structures offer better absorption performance than solid ones for a given material. Simply put, the radiative damping term quantifies the amount of losses required for maximizing the absorption~\cite{Soric2014,Tretyakov2014}, and regulates the maximum linewidth of the scattering process~\cite{Carminati2006,Tzarouchis2017}.
Finally, we briefly present how the core material influences the radiative damping process. As in the static case of Section~\ref{sec:core}, the same type of branching is observed here, i.e., at $\varepsilon_1=4$ for thick and at $\varepsilon_1=-2$ for thin shells, respectively. For the thick-shell case ($\eta\rightarrow0$) we obtain
\begin{alignat}{2}
\begin{rcases}\label{eq:pole_d}
\varepsilon^+=-\frac{\varepsilon_1}{2}-\frac{3}{2}\varepsilon_1\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3-iy^3g^+(\varepsilon_1,\eta)
\\
\varepsilon^-=-2+6\frac{\varepsilon_1+2}{\varepsilon_1-4}\eta^3-2iy^3g^-(\varepsilon_1,\eta)
\end{rcases}
~\text{for }~\varepsilon_1<4
\end{alignat}
with $g^\pm$ being a function of $\varepsilon_1$ and $\eta$, viz.,
\begin{equation}\label{eq:g+}
g^+(\varepsilon_1,\eta)=9\frac{\varepsilon_1^2}{(\varepsilon_1-4)^2}\eta^3
\end{equation}
\begin{equation}\label{eq:g-}
g^-(\varepsilon_1,\eta)=1-3\frac{\varepsilon_1(\varepsilon_1+4)-8}{(\varepsilon_1-4)^2}\eta^3
\end{equation}
For a fixed $\eta$ value, Eqs.~(\ref{eq:g+}) and~(\ref{eq:g-}) exhibit their maximum and minimum values for $\varepsilon_1\rightarrow0$. As a reminder, the symmetric damping function of Eq.~(\ref{eq:g-}) mutually flips with the antisymmetric function of Eq.~(\ref{eq:g+}) when $\varepsilon_1>4$.
In a similar manner the thin-shell case ($\eta_c\rightarrow0$) gives the following pole conditions,
\begin{alignat}{2}
\begin{rcases}\label{eq:pole_dc}
\varepsilon^+=-2\frac{\varepsilon_1}{\varepsilon_1+2}\eta_c-2iy^3g^+_c(\varepsilon_1,\eta_c)
\\
\varepsilon^-=-\frac{\varepsilon_1}{2}+\frac{\varepsilon_1+2}{2\eta_c}-iy^3g^-_c(\varepsilon_1,\eta_c)
\end{rcases}
~\text{for }~\varepsilon_1>-2
\end{alignat}
with
\begin{equation}\label{eq:gc+}
g^+_c(\varepsilon_1,\eta_c)=\frac{\varepsilon_1^2}{(\varepsilon_1+2)^2}\eta_c
\end{equation}
\begin{equation}\label{eq:gc-}
g^-_c(\varepsilon_1,\eta_c)=\frac{1}{\eta_c}-\frac{4}{3}\eta_c\frac{\varepsilon_1(\varepsilon_1-2)-2}{(\varepsilon_1+2)^2}
\end{equation}
As before, Eq.~(\ref{eq:gc+}) has a zero at $\varepsilon_1=0$, while Eq.~(\ref{eq:gc-}) exhibits a minimum value at this point; the symmetric and antisymmetric resonances mutually exchange their behavior when $\varepsilon_1<-2$.
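The damping weights themselves are simple rational functions of the core permittivity; a direct transcription (helper names ours) of Eqs.~(\ref{eq:g+}), (\ref{eq:g-}), (\ref{eq:gc+}), and~(\ref{eq:gc-}) reads:

```python
def g_thick(eps1, eta):
    """Thick-shell damping weights, Eqs. (g+) and (g-); eps1 != 4."""
    g_p = 9.0 * eps1**2 / (eps1 - 4) ** 2 * eta**3
    g_m = 1.0 - 3.0 * (eps1 * (eps1 + 4) - 8) / (eps1 - 4) ** 2 * eta**3
    return g_p, g_m

def g_thin(eps1, eta_c):
    """Thin-shell damping weights, Eqs. (gc+) and (gc-); eps1 != -2."""
    g_p = eps1**2 / (eps1 + 2) ** 2 * eta_c
    g_m = (1.0 / eta_c
           - (4.0 / 3.0) * eta_c * (eps1 * (eps1 - 2) - 2) / (eps1 + 2) ** 2)
    return g_p, g_m

# g+ vanishes at eps1 = 0 in both limits, as noted in the text
print(g_thick(0.0, 0.2), g_thin(0.0, 0.2))
```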
{
The extracted $g^\pm(\varepsilon_1,\eta)$ functions demonstrate a couple of very interesting peculiarities. First, by adjusting the core material and the radius ratio we can practically adjust the linewidth and the resonant maximum absorption of both symmetric and antisymmetric resonances. This is a particularly attractive feature that enhances the importance of core-shell versus solid scatterers. Moreover, these changes can take place even for very thick shells, having a large impact on the overall performance of the scatterer. Therefore, by combining Eqs.~(\ref{eq:pole_d})--(\ref{eq:gc-}) one can extract physically intuitive information for the proper design and implementation of small core-shell scatterers.
}
{
A second point of interest can be derived by comparing the radiative damping and dynamic depolarization terms. The first is affected by the core material even for vanishingly small core sizes, in stark contrast with the dynamic depolarization mechanism, which for thick shells shows no core material effects. This fact can potentially be used for adjusting the radiative characteristics (linewidth, absorption) while preserving the amount of depolarization effects of a scatterer.
}
\section{Conclusions and Summary}
In this article we presented and discussed the general features regarding the electromagnetic scattering by a small core-shell sphere. Thickness effects for the plasmonic resonances were presented throughout the analysis in the electrostatic (Rayleigh) limit, revealing a series of special characteristics, such as the resonant trends of the symmetric and antisymmetric dipole resonances. These trends are quantified in terms of resonant conditions with respect to the shell permittivity of the scatterer.
A generalization towards higher-order multipoles was given, following the same static-limit approximation of the Mie coefficients for small hollow scatterers. {As a rule of thumb, thick shells exhibit an $\eta^{2n+1}$ dependence for all $n$-multipoles, while thin shells exhibit an $\eta_c$ (or $1/\eta_c$ for the symmetric case) dependence for all higher multipoles.}
Consequently, we studied the effects of the core material on the overall scattering, revealing several aspects of this resonant phenomenon and its intrinsic mechanisms. The existence of a critical permittivity value demonstrated a core-induced peculiarity regarding the character of the resonances in the small-ratio limit. This can expand the potential usage of plasmonic core-shell structures and their functionalities.
{
The analysis was then expanded to include dynamic effects by utilizing the recently introduced Mie--Pad\'e expansion, where the Mie coefficients are expanded as Pad\'e approximants. This perspective delivered physically intuitive information regarding the dynamic effects of small scatterers, such as the dynamic depolarization and radiation reaction mechanisms. In this way the electrostatic (Mie--Taylor) model has been expanded, revealing promising results about the scattering process, such as the core effects on the dynamic depolarization and the radiative damping dependencies on both radius ratio and core permittivity.
}
{
We foresee that both the described analysis and the Mie--Pad\'e expansion will inspire further studies regarding the dynamical dependencies of small scatterers. The results can be readily extended to the study of other canonical or non-canonical scatterers through the Pad\'e analysis of their scattering amplitudes. In this way new routes can be carved for the precise engineering of their dynamic mechanisms, towards applications that span radio science, applied chemistry, nanotechnology, and beyond.
}
\section*{Appendix A: Connecting the electrostatic results with the plasmon hybridization model}\label{sec:AppA}
{
The mathematical analysis presented in the above sections emphasized the thick- and thin-shell dependencies of the permittivity parameter space in the electrostatic limit. A general connection between electrodynamic scattering and the plasmon hybridization model has recently been presented in~\cite{Thiessen2016}, where the Mie coefficients were reorganized so as to reflect the plasmon hybridization~\cite{Prodan2003}.
Here we provide a simple, tutorial-like connection between the aforementioned statically derived results and the plasmon hybridization model~\cite{Prodan2003}, demonstrating the trends for the thin and thick cases. The main emphasis is on the simplicity and the physical intuition provided by this perspective.
}
{
Assuming that the free (conduction) electrons constitute an incompressible, irrotational, and charged fluid (plasma)~\cite{Prodan2004}, plasmonic resonances emerge as oscillatory solutions analogous to the modes of a harmonic oscillator~\cite{Prodan2004,Mukhopadhyay1975}, described by the Laplace equation, whose solutions form a set of harmonic functions. The term hybridization implies that the problem is formulated with a Lagrangian method, including both kinetic and potential energy (hybridization) in the system description~\cite{Prodan2004}.
For the case of a spherical hollow core-shell structure, after a significant amount of calculation, these resonant frequencies are described by the following condition~\cite{Mukhopadhyay1975}
\begin{equation}\label{eq:hyb}
\omega_{n\pm}^2=\frac{\omega^2_B}{2}\left[1\pm\frac{1}{2n+1}\sqrt{1+4n\left(n+1\right)\eta^{2n+1}}\right]
\end{equation}
where $\omega_{n+}^2$ and $\omega_{n-}^2$ are the antisymmetric and symmetric resonances, respectively, for a given background (plasma) frequency $\omega_B$ and a given multipole, i.e., dipole ($n=1$), quadrupole ($n=2$), and so on~\cite{Roman-Velazquez2011}.}
{
Let us now consider a lossless Drude material dispersion model, i.e., $\varepsilon= 1-\frac{\omega^2_B}{\omega^2}$. By inserting Eq.~(\ref{eq:hyb}) into the material dispersion model we obtain two discrete resonances
\begin{equation}\label{eq:hyb_epsilon1}
\varepsilon^{\pm}_{n}=1-\frac{2}{1\pm \frac{\sqrt{4 n (n+1) \eta ^{2 n+1}+1}}{2n+1}}
\end{equation}
described as a function of the radius ratio $\eta$. For the dipole case the following expression is obtained
\begin{equation}\label{eq:hyb_epsilon2}
\varepsilon^{\pm}_{1}=1-\frac{2}{1\pm \frac{1}{3} \sqrt{1+ 8 \eta ^3}}
\end{equation}
For $\eta\rightarrow0$ the resonances go to $\varepsilon^+_1\rightarrow-\frac{1}{2}$ and $\varepsilon^-_1\rightarrow-2$, respectively, while for $\eta\rightarrow1$ we obtain $\varepsilon^+_1\rightarrow0^-$ and $\varepsilon^-_1\rightarrow-\infty$.
}
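A compact numerical sketch of this mapping (assuming NumPy; the closed forms are those of Eqs.~(\ref{eq:hyb}) and~(\ref{eq:hyb_epsilon1}); the function name is ours):

```python
import numpy as np

def hybrid_modes(n, eta, omega_B=1.0):
    """Mode frequencies of Eq. (hyb) and their Drude-mapped resonant
    permittivities, Eq. (hyb_epsilon1)."""
    s = np.sqrt(1.0 + 4.0 * n * (n + 1) * eta ** (2 * n + 1)) / (2 * n + 1)
    w_plus = omega_B * np.sqrt(0.5 * (1.0 + s))    # antisymmetric branch
    w_minus = omega_B * np.sqrt(0.5 * (1.0 - s))   # symmetric branch
    eps_plus = 1.0 - 2.0 / (1.0 + s)
    eps_minus = 1.0 - 2.0 / (1.0 - s)
    return (w_plus, eps_plus), (w_minus, eps_minus)

# dipole limits: eta -> 0 recovers eps+ = -1/2 and eps- = -2
print(hybrid_modes(1, 1e-4))
print(hybrid_modes(1, 0.5))
```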
{
Clearly these trends are also visible in the static model approach described in Section~\ref{sec:static}. A straightforward connection with the static results is obtained by expanding Eq.~(\ref{eq:hyb_epsilon2}) with respect to the radius ratio; for thick shells we obtain
\begin{equation}
\varepsilon^+_1=-\frac{1}{2}+\frac{3 \eta ^3}{2}
\end{equation}
\begin{equation}
\varepsilon^-_1=-2-6 \eta ^3
\end{equation}
while for thin shells ($\eta_c$) we have
\begin{equation}
\varepsilon^+_1=-\frac{2}{3} \eta_c
\end{equation}
\begin{equation}
\varepsilon^-_1=\frac{1}{2}-\frac{3}{2\eta_c}-\frac{1}{3}\eta_c
\end{equation}
corresponding exactly to the static results of Eqs.~(\ref{eq:static_thick1}), (\ref{eq:static_thick2}), (\ref{eq:static_thin1}), and (\ref{eq:static_thin2}), respectively. This can also be seen by comparing Eq.~(\ref{eq:static}) and Eq.~(\ref{eq:hyb_epsilon1}), which are mathematically equivalent. Note that all higher-order terms were truncated.
}
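This correspondence can also be verified symbolically; a short check (ours, assuming SymPy is available) expands Eq.~(\ref{eq:hyb_epsilon2}) in both limits:

```python
import sympy as sp

eta, eta_c = sp.symbols('eta eta_c', positive=True)
s = sp.sqrt(1 + 8 * eta**3) / 3
eps_p = 1 - 2 / (1 + s)     # Eq. (hyb_epsilon2), '+' branch
eps_m = 1 - 2 / (1 - s)     # Eq. (hyb_epsilon2), '-' branch

# thick-shell limit: expect -1/2 + 3*eta**3/2 and -2 - 6*eta**3
print(sp.series(eps_p, eta, 0, 4))
print(sp.series(eps_m, eta, 0, 4))

# thin-shell limit: substitute eta = 1 - eta_c and expand in eta_c
print(sp.series(eps_p.subs(eta, 1 - eta_c), eta_c, 0, 2))
print(sp.series(eps_m.subs(eta, 1 - eta_c), eta_c, 0, 2))
```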
{
It is evident that the electrostatic analysis captures the essence of the hybridization model, since both models exhibit spherical harmonic solutions (Laplace equation)~\cite{Prodan2003,Sihvola1988}. As a consequence, the electrodynamic perspective (Mie theory) is practically equivalent to both models in the small-size limit. For completeness, we derive the frequency (energy) equivalent relations for both the thick and thin cases, viz.,
\begin{alignat}{2}
\begin{rcases}
\omega^2_{1+}=\frac{2}{3}\omega^2_B(1+\eta^3)
\\
\omega^2_{1-}=\frac{1}{3}\omega^2_B(1-2\eta^3)
\end{rcases} ~\text{for }~\eta\rightarrow0
\end{alignat}
and
\begin{alignat}{2}
\begin{rcases}
\omega^2_{1+}=\omega^2_B(1-\frac{2}{3}\eta_c)
\\
\omega^2_{1-}=\omega^2_B\frac{2}{3}\eta_c
\end{rcases}
~\text{for }~\eta_c\rightarrow0
\end{alignat}
}
\section*{Appendix B: Dynamic depolarization terms}\label{sec:AppB}
{
In this section we present the coefficients $c_1$ to $c_7$ used in Eq.~(\ref{eq:y2}). These coefficients are functions of both the core and shell permittivities $\varepsilon_1$ and $\varepsilon_2$,
\begin{equation}
c_1=2 d_1d_2^2 \left(4\varepsilon_2+1\right)
\end{equation}
\begin{equation}
c_2=-9d_2\varepsilon_2^2\left(\varepsilon_1-2\varepsilon_2\right)
\end{equation}
\begin{equation}
c_3=-d_1d_2d_3\left(7\varepsilon_2+4\right)
\end{equation}
\begin{equation}
c_4=-d_1d_3^2 \left(\varepsilon_2-2\right)
\end{equation}
\begin{equation}
c_5=2 d_1d_2^2 \left(2\varepsilon_2+1\right)
\end{equation}
\begin{equation}
c_6=d_2d_3\left(4\varepsilon_2^2+\varepsilon_2+4\right)
\end{equation}
\begin{equation}
c_7=d_1d_3^2 \left(\varepsilon_2+2\right)
\end{equation}
with
\begin{equation}
d_1=\varepsilon_2-1\text{, }d_2=\varepsilon_1-\varepsilon_2\text{, and } d_3=\varepsilon_1+2\varepsilon_2
\end{equation}
}
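For convenience, the full $[y^2]$ term of Eq.~(\ref{eq:y2}) can be assembled directly from these coefficients; the sketch below (our own transcription, valid for $0<\eta<1$) also reproduces the thick-shell limit of Eq.~(\ref{eq:y2thick}) as $\eta\rightarrow0$:

```python
def y2_term(eps1, eps2, eta, y):
    """Dynamic depolarization term [y^2] of Eq. (y2), assembled from
    the coefficients c1..c7 above (0 < eta < 1)."""
    d1, d2, d3 = eps2 - 1, eps1 - eps2, eps1 + 2 * eps2
    c1 = 2 * d1 * d2**2 * (4 * eps2 + 1)
    c2 = -9 * d2 * eps2**2 * (eps1 - 2 * eps2)
    c3 = -d1 * d2 * d3 * (7 * eps2 + 4)
    c4 = -d1 * d3**2 * (eps2 - 2)
    c5 = 2 * d1 * d2**2 * (2 * eps2 + 1)
    c6 = d2 * d3 * (4 * eps2**2 + eps2 + 4)
    c7 = d1 * d3**2 * (eps2 + 2)
    num = c1 * eta**6 + c2 * eta**5 + c3 * eta**3 + c4
    den = c5 * eta**6 + c6 * eta**3 + c7
    return 0.6 * num / den * y**2

# thick-shell check against Eq. (y2thick): -(3/5)(eps2 - 2)/(eps2 + 2) y^2
eps2, y = -2.5, 0.3
print(y2_term(1.0, eps2, 1e-3, y), -0.6 * (eps2 - 2) / (eps2 + 2) * y**2)
```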
\balance
\bibliographystyle{IEEEtran}
\section{The ALICE Collaboration}
\begingroup
\small
\begin{flushleft}
D.~Adamov\'{a}$^\textrm{\scriptsize 87}$,
M.M.~Aggarwal$^\textrm{\scriptsize 91}$,
G.~Aglieri Rinella$^\textrm{\scriptsize 34}$,
M.~Agnello$^\textrm{\scriptsize 30}$\textsuperscript{,}$^\textrm{\scriptsize 113}$,
N.~Agrawal$^\textrm{\scriptsize 47}$,
Z.~Ahammed$^\textrm{\scriptsize 139}$,
S.~Ahmad$^\textrm{\scriptsize 17}$,
S.U.~Ahn$^\textrm{\scriptsize 69}$,
S.~Aiola$^\textrm{\scriptsize 143}$,
A.~Akindinov$^\textrm{\scriptsize 54}$,
S.N.~Alam$^\textrm{\scriptsize 139}$,
D.S.D.~Albuquerque$^\textrm{\scriptsize 124}$,
D.~Aleksandrov$^\textrm{\scriptsize 83}$,
B.~Alessandro$^\textrm{\scriptsize 113}$,
D.~Alexandre$^\textrm{\scriptsize 104}$,
R.~Alfaro Molina$^\textrm{\scriptsize 64}$,
A.~Alici$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 107}$,
A.~Alkin$^\textrm{\scriptsize 3}$,
J.~Alme$^\textrm{\scriptsize 21}$\textsuperscript{,}$^\textrm{\scriptsize 36}$,
T.~Alt$^\textrm{\scriptsize 41}$,
S.~Altinpinar$^\textrm{\scriptsize 21}$,
I.~Altsybeev$^\textrm{\scriptsize 138}$,
C.~Alves Garcia Prado$^\textrm{\scriptsize 123}$,
M.~An$^\textrm{\scriptsize 7}$,
C.~Andrei$^\textrm{\scriptsize 80}$,
H.A.~Andrews$^\textrm{\scriptsize 104}$,
A.~Andronic$^\textrm{\scriptsize 100}$,
V.~Anguelov$^\textrm{\scriptsize 96}$,
C.~Anson$^\textrm{\scriptsize 90}$,
T.~Anti\v{c}i\'{c}$^\textrm{\scriptsize 101}$,
F.~Antinori$^\textrm{\scriptsize 110}$,
P.~Antonioli$^\textrm{\scriptsize 107}$,
R.~Anwar$^\textrm{\scriptsize 126}$,
L.~Aphecetche$^\textrm{\scriptsize 116}$,
H.~Appelsh\"{a}user$^\textrm{\scriptsize 60}$,
S.~Arcelli$^\textrm{\scriptsize 26}$,
R.~Arnaldi$^\textrm{\scriptsize 113}$,
O.W.~Arnold$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 35}$,
I.C.~Arsene$^\textrm{\scriptsize 20}$,
M.~Arslandok$^\textrm{\scriptsize 60}$,
B.~Audurier$^\textrm{\scriptsize 116}$,
A.~Augustinus$^\textrm{\scriptsize 34}$,
R.~Averbeck$^\textrm{\scriptsize 100}$,
M.D.~Azmi$^\textrm{\scriptsize 17}$,
A.~Badal\`{a}$^\textrm{\scriptsize 109}$,
Y.W.~Baek$^\textrm{\scriptsize 68}$,
S.~Bagnasco$^\textrm{\scriptsize 113}$,
R.~Bailhache$^\textrm{\scriptsize 60}$,
R.~Bala$^\textrm{\scriptsize 93}$,
A.~Baldisseri$^\textrm{\scriptsize 65}$,
M.~Ball$^\textrm{\scriptsize 44}$,
R.C.~Baral$^\textrm{\scriptsize 57}$,
A.M.~Barbano$^\textrm{\scriptsize 25}$,
R.~Barbera$^\textrm{\scriptsize 27}$,
F.~Barile$^\textrm{\scriptsize 32}$,
L.~Barioglio$^\textrm{\scriptsize 25}$,
G.G.~Barnaf\"{o}ldi$^\textrm{\scriptsize 142}$,
L.S.~Barnby$^\textrm{\scriptsize 104}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
V.~Barret$^\textrm{\scriptsize 71}$,
P.~Bartalini$^\textrm{\scriptsize 7}$,
K.~Barth$^\textrm{\scriptsize 34}$,
J.~Bartke$^\textrm{\scriptsize 120}$\Aref{0},
E.~Bartsch$^\textrm{\scriptsize 60}$,
M.~Basile$^\textrm{\scriptsize 26}$,
N.~Bastid$^\textrm{\scriptsize 71}$,
S.~Basu$^\textrm{\scriptsize 139}$,
B.~Bathen$^\textrm{\scriptsize 61}$,
G.~Batigne$^\textrm{\scriptsize 116}$,
A.~Batista Camejo$^\textrm{\scriptsize 71}$,
B.~Batyunya$^\textrm{\scriptsize 67}$,
P.C.~Batzing$^\textrm{\scriptsize 20}$,
I.G.~Bearden$^\textrm{\scriptsize 84}$,
H.~Beck$^\textrm{\scriptsize 96}$,
C.~Bedda$^\textrm{\scriptsize 30}$,
N.K.~Behera$^\textrm{\scriptsize 50}$,
I.~Belikov$^\textrm{\scriptsize 135}$,
F.~Bellini$^\textrm{\scriptsize 26}$,
H.~Bello Martinez$^\textrm{\scriptsize 2}$,
R.~Bellwied$^\textrm{\scriptsize 126}$,
L.G.E.~Beltran$^\textrm{\scriptsize 122}$,
V.~Belyaev$^\textrm{\scriptsize 76}$,
G.~Bencedi$^\textrm{\scriptsize 142}$,
S.~Beole$^\textrm{\scriptsize 25}$,
A.~Bercuci$^\textrm{\scriptsize 80}$,
Y.~Berdnikov$^\textrm{\scriptsize 89}$,
D.~Berenyi$^\textrm{\scriptsize 142}$,
R.A.~Bertens$^\textrm{\scriptsize 53}$\textsuperscript{,}$^\textrm{\scriptsize 129}$,
D.~Berzano$^\textrm{\scriptsize 34}$,
L.~Betev$^\textrm{\scriptsize 34}$,
A.~Bhasin$^\textrm{\scriptsize 93}$,
I.R.~Bhat$^\textrm{\scriptsize 93}$,
A.K.~Bhati$^\textrm{\scriptsize 91}$,
B.~Bhattacharjee$^\textrm{\scriptsize 43}$,
J.~Bhom$^\textrm{\scriptsize 120}$,
L.~Bianchi$^\textrm{\scriptsize 126}$,
N.~Bianchi$^\textrm{\scriptsize 73}$,
C.~Bianchin$^\textrm{\scriptsize 141}$,
J.~Biel\v{c}\'{\i}k$^\textrm{\scriptsize 38}$,
J.~Biel\v{c}\'{\i}kov\'{a}$^\textrm{\scriptsize 87}$,
A.~Bilandzic$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 97}$,
G.~Biro$^\textrm{\scriptsize 142}$,
R.~Biswas$^\textrm{\scriptsize 4}$,
S.~Biswas$^\textrm{\scriptsize 4}$,
J.T.~Blair$^\textrm{\scriptsize 121}$,
D.~Blau$^\textrm{\scriptsize 83}$,
C.~Blume$^\textrm{\scriptsize 60}$,
G.~Boca$^\textrm{\scriptsize 136}$,
F.~Bock$^\textrm{\scriptsize 75}$\textsuperscript{,}$^\textrm{\scriptsize 96}$,
A.~Bogdanov$^\textrm{\scriptsize 76}$,
L.~Boldizs\'{a}r$^\textrm{\scriptsize 142}$,
M.~Bombara$^\textrm{\scriptsize 39}$,
G.~Bonomi$^\textrm{\scriptsize 137}$,
M.~Bonora$^\textrm{\scriptsize 34}$,
J.~Book$^\textrm{\scriptsize 60}$,
H.~Borel$^\textrm{\scriptsize 65}$,
A.~Borissov$^\textrm{\scriptsize 99}$,
M.~Borri$^\textrm{\scriptsize 128}$,
E.~Botta$^\textrm{\scriptsize 25}$,
C.~Bourjau$^\textrm{\scriptsize 84}$,
P.~Braun-Munzinger$^\textrm{\scriptsize 100}$,
M.~Bregant$^\textrm{\scriptsize 123}$,
T.A.~Broker$^\textrm{\scriptsize 60}$,
T.A.~Browning$^\textrm{\scriptsize 98}$,
M.~Broz$^\textrm{\scriptsize 38}$,
E.J.~Brucken$^\textrm{\scriptsize 45}$,
E.~Bruna$^\textrm{\scriptsize 113}$,
G.E.~Bruno$^\textrm{\scriptsize 32}$,
D.~Budnikov$^\textrm{\scriptsize 102}$,
H.~Buesching$^\textrm{\scriptsize 60}$,
S.~Bufalino$^\textrm{\scriptsize 30}$\textsuperscript{,}$^\textrm{\scriptsize 25}$,
P.~Buhler$^\textrm{\scriptsize 115}$,
S.A.I.~Buitron$^\textrm{\scriptsize 62}$,
P.~Buncic$^\textrm{\scriptsize 34}$,
O.~Busch$^\textrm{\scriptsize 132}$,
Z.~Buthelezi$^\textrm{\scriptsize 66}$,
J.B.~Butt$^\textrm{\scriptsize 15}$,
J.T.~Buxton$^\textrm{\scriptsize 18}$,
J.~Cabala$^\textrm{\scriptsize 118}$,
D.~Caffarri$^\textrm{\scriptsize 34}$,
H.~Caines$^\textrm{\scriptsize 143}$,
A.~Caliva$^\textrm{\scriptsize 53}$,
E.~Calvo Villar$^\textrm{\scriptsize 105}$,
P.~Camerini$^\textrm{\scriptsize 24}$,
A.A.~Capon$^\textrm{\scriptsize 115}$,
F.~Carena$^\textrm{\scriptsize 34}$,
W.~Carena$^\textrm{\scriptsize 34}$,
F.~Carnesecchi$^\textrm{\scriptsize 26}$\textsuperscript{,}$^\textrm{\scriptsize 12}$,
J.~Castillo Castellanos$^\textrm{\scriptsize 65}$,
A.J.~Castro$^\textrm{\scriptsize 129}$,
E.A.R.~Casula$^\textrm{\scriptsize 23}$\textsuperscript{,}$^\textrm{\scriptsize 108}$,
C.~Ceballos Sanchez$^\textrm{\scriptsize 9}$,
P.~Cerello$^\textrm{\scriptsize 113}$,
B.~Chang$^\textrm{\scriptsize 127}$,
S.~Chapeland$^\textrm{\scriptsize 34}$,
M.~Chartier$^\textrm{\scriptsize 128}$,
J.L.~Charvet$^\textrm{\scriptsize 65}$,
S.~Chattopadhyay$^\textrm{\scriptsize 139}$,
S.~Chattopadhyay$^\textrm{\scriptsize 103}$,
A.~Chauvin$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 35}$,
M.~Cherney$^\textrm{\scriptsize 90}$,
C.~Cheshkov$^\textrm{\scriptsize 134}$,
B.~Cheynis$^\textrm{\scriptsize 134}$,
V.~Chibante Barroso$^\textrm{\scriptsize 34}$,
D.D.~Chinellato$^\textrm{\scriptsize 124}$,
S.~Cho$^\textrm{\scriptsize 50}$,
P.~Chochula$^\textrm{\scriptsize 34}$,
K.~Choi$^\textrm{\scriptsize 99}$,
M.~Chojnacki$^\textrm{\scriptsize 84}$,
S.~Choudhury$^\textrm{\scriptsize 139}$,
P.~Christakoglou$^\textrm{\scriptsize 85}$,
C.H.~Christensen$^\textrm{\scriptsize 84}$,
P.~Christiansen$^\textrm{\scriptsize 33}$,
T.~Chujo$^\textrm{\scriptsize 132}$,
S.U.~Chung$^\textrm{\scriptsize 99}$,
C.~Cicalo$^\textrm{\scriptsize 108}$,
L.~Cifarelli$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 26}$,
F.~Cindolo$^\textrm{\scriptsize 107}$,
J.~Cleymans$^\textrm{\scriptsize 92}$,
F.~Colamaria$^\textrm{\scriptsize 32}$,
D.~Colella$^\textrm{\scriptsize 55}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
A.~Collu$^\textrm{\scriptsize 75}$,
M.~Colocci$^\textrm{\scriptsize 26}$,
G.~Conesa Balbastre$^\textrm{\scriptsize 72}$,
Z.~Conesa del Valle$^\textrm{\scriptsize 51}$,
M.E.~Connors$^\textrm{\scriptsize 143}$\Aref{idp1764384},
J.G.~Contreras$^\textrm{\scriptsize 38}$,
T.M.~Cormier$^\textrm{\scriptsize 88}$,
Y.~Corrales Morales$^\textrm{\scriptsize 113}$,
I.~Cort\'{e}s Maldonado$^\textrm{\scriptsize 2}$,
P.~Cortese$^\textrm{\scriptsize 31}$,
M.R.~Cosentino$^\textrm{\scriptsize 125}$,
F.~Costa$^\textrm{\scriptsize 34}$,
S.~Costanza$^\textrm{\scriptsize 136}$,
J.~Crkovsk\'{a}$^\textrm{\scriptsize 51}$,
P.~Crochet$^\textrm{\scriptsize 71}$,
E.~Cuautle$^\textrm{\scriptsize 62}$,
L.~Cunqueiro$^\textrm{\scriptsize 61}$,
T.~Dahms$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 97}$,
A.~Dainese$^\textrm{\scriptsize 110}$,
M.C.~Danisch$^\textrm{\scriptsize 96}$,
A.~Danu$^\textrm{\scriptsize 58}$,
D.~Das$^\textrm{\scriptsize 103}$,
I.~Das$^\textrm{\scriptsize 103}$,
S.~Das$^\textrm{\scriptsize 4}$,
A.~Dash$^\textrm{\scriptsize 81}$,
S.~Dash$^\textrm{\scriptsize 47}$,
S.~De$^\textrm{\scriptsize 48}$\textsuperscript{,}$^\textrm{\scriptsize 123}$,
A.~De Caro$^\textrm{\scriptsize 29}$,
G.~de Cataldo$^\textrm{\scriptsize 106}$,
C.~de Conti$^\textrm{\scriptsize 123}$,
J.~de Cuveland$^\textrm{\scriptsize 41}$,
A.~De Falco$^\textrm{\scriptsize 23}$,
D.~De Gruttola$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 29}$,
N.~De Marco$^\textrm{\scriptsize 113}$,
S.~De Pasquale$^\textrm{\scriptsize 29}$,
R.D.~De Souza$^\textrm{\scriptsize 124}$,
H.F.~Degenhardt$^\textrm{\scriptsize 123}$,
A.~Deisting$^\textrm{\scriptsize 100}$\textsuperscript{,}$^\textrm{\scriptsize 96}$,
A.~Deloff$^\textrm{\scriptsize 79}$,
C.~Deplano$^\textrm{\scriptsize 85}$,
P.~Dhankher$^\textrm{\scriptsize 47}$,
D.~Di Bari$^\textrm{\scriptsize 32}$,
A.~Di Mauro$^\textrm{\scriptsize 34}$,
P.~Di Nezza$^\textrm{\scriptsize 73}$,
B.~Di Ruzza$^\textrm{\scriptsize 110}$,
M.A.~Diaz Corchero$^\textrm{\scriptsize 10}$,
T.~Dietel$^\textrm{\scriptsize 92}$,
P.~Dillenseger$^\textrm{\scriptsize 60}$,
R.~Divi\`{a}$^\textrm{\scriptsize 34}$,
{\O}.~Djuvsland$^\textrm{\scriptsize 21}$,
A.~Dobrin$^\textrm{\scriptsize 58}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
D.~Domenicis Gimenez$^\textrm{\scriptsize 123}$,
B.~D\"{o}nigus$^\textrm{\scriptsize 60}$,
O.~Dordic$^\textrm{\scriptsize 20}$,
T.~Drozhzhova$^\textrm{\scriptsize 60}$,
A.K.~Dubey$^\textrm{\scriptsize 139}$,
A.~Dubla$^\textrm{\scriptsize 100}$,
L.~Ducroux$^\textrm{\scriptsize 134}$,
A.K.~Duggal$^\textrm{\scriptsize 91}$,
P.~Dupieux$^\textrm{\scriptsize 71}$,
R.J.~Ehlers$^\textrm{\scriptsize 143}$,
D.~Elia$^\textrm{\scriptsize 106}$,
E.~Endress$^\textrm{\scriptsize 105}$,
H.~Engel$^\textrm{\scriptsize 59}$,
E.~Epple$^\textrm{\scriptsize 143}$,
B.~Erazmus$^\textrm{\scriptsize 116}$,
F.~Erhardt$^\textrm{\scriptsize 133}$,
B.~Espagnon$^\textrm{\scriptsize 51}$,
S.~Esumi$^\textrm{\scriptsize 132}$,
G.~Eulisse$^\textrm{\scriptsize 34}$,
J.~Eum$^\textrm{\scriptsize 99}$,
D.~Evans$^\textrm{\scriptsize 104}$,
S.~Evdokimov$^\textrm{\scriptsize 114}$,
L.~Fabbietti$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 97}$,
D.~Fabris$^\textrm{\scriptsize 110}$,
J.~Faivre$^\textrm{\scriptsize 72}$,
A.~Fantoni$^\textrm{\scriptsize 73}$,
M.~Fasel$^\textrm{\scriptsize 88}$\textsuperscript{,}$^\textrm{\scriptsize 75}$,
L.~Feldkamp$^\textrm{\scriptsize 61}$,
A.~Feliciello$^\textrm{\scriptsize 113}$,
G.~Feofilov$^\textrm{\scriptsize 138}$,
J.~Ferencei$^\textrm{\scriptsize 87}$,
A.~Fern\'{a}ndez T\'{e}llez$^\textrm{\scriptsize 2}$,
E.G.~Ferreiro$^\textrm{\scriptsize 16}$,
A.~Ferretti$^\textrm{\scriptsize 25}$,
A.~Festanti$^\textrm{\scriptsize 28}$,
V.J.G.~Feuillard$^\textrm{\scriptsize 71}$\textsuperscript{,}$^\textrm{\scriptsize 65}$,
J.~Figiel$^\textrm{\scriptsize 120}$,
M.A.S.~Figueredo$^\textrm{\scriptsize 123}$,
S.~Filchagin$^\textrm{\scriptsize 102}$,
D.~Finogeev$^\textrm{\scriptsize 52}$,
F.M.~Fionda$^\textrm{\scriptsize 23}$,
E.M.~Fiore$^\textrm{\scriptsize 32}$,
M.~Floris$^\textrm{\scriptsize 34}$,
S.~Foertsch$^\textrm{\scriptsize 66}$,
P.~Foka$^\textrm{\scriptsize 100}$,
S.~Fokin$^\textrm{\scriptsize 83}$,
E.~Fragiacomo$^\textrm{\scriptsize 112}$,
A.~Francescon$^\textrm{\scriptsize 34}$,
A.~Francisco$^\textrm{\scriptsize 116}$,
U.~Frankenfeld$^\textrm{\scriptsize 100}$,
G.G.~Fronze$^\textrm{\scriptsize 25}$,
U.~Fuchs$^\textrm{\scriptsize 34}$,
C.~Furget$^\textrm{\scriptsize 72}$,
A.~Furs$^\textrm{\scriptsize 52}$,
M.~Fusco Girard$^\textrm{\scriptsize 29}$,
J.J.~Gaardh{\o}je$^\textrm{\scriptsize 84}$,
M.~Gagliardi$^\textrm{\scriptsize 25}$,
A.M.~Gago$^\textrm{\scriptsize 105}$,
K.~Gajdosova$^\textrm{\scriptsize 84}$,
M.~Gallio$^\textrm{\scriptsize 25}$,
C.D.~Galvan$^\textrm{\scriptsize 122}$,
D.R.~Gangadharan$^\textrm{\scriptsize 75}$,
P.~Ganoti$^\textrm{\scriptsize 78}$,
C.~Gao$^\textrm{\scriptsize 7}$,
C.~Garabatos$^\textrm{\scriptsize 100}$,
E.~Garcia-Solis$^\textrm{\scriptsize 13}$,
K.~Garg$^\textrm{\scriptsize 27}$,
P.~Garg$^\textrm{\scriptsize 48}$,
C.~Gargiulo$^\textrm{\scriptsize 34}$,
P.~Gasik$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 97}$,
E.F.~Gauger$^\textrm{\scriptsize 121}$,
M.B.~Gay Ducati$^\textrm{\scriptsize 63}$,
M.~Germain$^\textrm{\scriptsize 116}$,
P.~Ghosh$^\textrm{\scriptsize 139}$,
S.K.~Ghosh$^\textrm{\scriptsize 4}$,
P.~Gianotti$^\textrm{\scriptsize 73}$,
P.~Giubellino$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 113}$,
P.~Giubilato$^\textrm{\scriptsize 28}$,
E.~Gladysz-Dziadus$^\textrm{\scriptsize 120}$,
P.~Gl\"{a}ssel$^\textrm{\scriptsize 96}$,
D.M.~Gom\'{e}z Coral$^\textrm{\scriptsize 64}$,
A.~Gomez Ramirez$^\textrm{\scriptsize 59}$,
A.S.~Gonzalez$^\textrm{\scriptsize 34}$,
V.~Gonzalez$^\textrm{\scriptsize 10}$,
P.~Gonz\'{a}lez-Zamora$^\textrm{\scriptsize 10}$,
S.~Gorbunov$^\textrm{\scriptsize 41}$,
L.~G\"{o}rlich$^\textrm{\scriptsize 120}$,
S.~Gotovac$^\textrm{\scriptsize 119}$,
V.~Grabski$^\textrm{\scriptsize 64}$,
L.K.~Graczykowski$^\textrm{\scriptsize 140}$,
K.L.~Graham$^\textrm{\scriptsize 104}$,
J.L.~Gramling$^\textrm{\scriptsize 96}$,
L.~Greiner$^\textrm{\scriptsize 75}$,
A.~Grelli$^\textrm{\scriptsize 53}$,
C.~Grigoras$^\textrm{\scriptsize 34}$,
V.~Grigoriev$^\textrm{\scriptsize 76}$,
A.~Grigoryan$^\textrm{\scriptsize 1}$,
S.~Grigoryan$^\textrm{\scriptsize 67}$,
N.~Grion$^\textrm{\scriptsize 112}$,
J.M.~Gronefeld$^\textrm{\scriptsize 100}$,
F.~Grosa$^\textrm{\scriptsize 30}$,
J.F.~Grosse-Oetringhaus$^\textrm{\scriptsize 34}$,
R.~Grosso$^\textrm{\scriptsize 100}$,
L.~Gruber$^\textrm{\scriptsize 115}$,
F.R.~Grull$^\textrm{\scriptsize 59}$,
F.~Guber$^\textrm{\scriptsize 52}$,
R.~Guernane$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 72}$,
B.~Guerzoni$^\textrm{\scriptsize 26}$,
K.~Gulbrandsen$^\textrm{\scriptsize 84}$,
T.~Gunji$^\textrm{\scriptsize 131}$,
A.~Gupta$^\textrm{\scriptsize 93}$,
R.~Gupta$^\textrm{\scriptsize 93}$,
I.B.~Guzman$^\textrm{\scriptsize 2}$,
R.~Haake$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 61}$,
C.~Hadjidakis$^\textrm{\scriptsize 51}$,
H.~Hamagaki$^\textrm{\scriptsize 77}$\textsuperscript{,}$^\textrm{\scriptsize 131}$,
G.~Hamar$^\textrm{\scriptsize 142}$,
J.C.~Hamon$^\textrm{\scriptsize 135}$,
J.W.~Harris$^\textrm{\scriptsize 143}$,
A.~Harton$^\textrm{\scriptsize 13}$,
D.~Hatzifotiadou$^\textrm{\scriptsize 107}$,
S.~Hayashi$^\textrm{\scriptsize 131}$,
S.T.~Heckel$^\textrm{\scriptsize 60}$,
E.~Hellb\"{a}r$^\textrm{\scriptsize 60}$,
H.~Helstrup$^\textrm{\scriptsize 36}$,
A.~Herghelegiu$^\textrm{\scriptsize 80}$,
G.~Herrera Corral$^\textrm{\scriptsize 11}$,
F.~Herrmann$^\textrm{\scriptsize 61}$,
B.A.~Hess$^\textrm{\scriptsize 95}$,
K.F.~Hetland$^\textrm{\scriptsize 36}$,
H.~Hillemanns$^\textrm{\scriptsize 34}$,
B.~Hippolyte$^\textrm{\scriptsize 135}$,
J.~Hladky$^\textrm{\scriptsize 56}$,
D.~Horak$^\textrm{\scriptsize 38}$,
R.~Hosokawa$^\textrm{\scriptsize 132}$,
P.~Hristov$^\textrm{\scriptsize 34}$,
C.~Hughes$^\textrm{\scriptsize 129}$,
T.J.~Humanic$^\textrm{\scriptsize 18}$,
N.~Hussain$^\textrm{\scriptsize 43}$,
T.~Hussain$^\textrm{\scriptsize 17}$,
D.~Hutter$^\textrm{\scriptsize 41}$,
D.S.~Hwang$^\textrm{\scriptsize 19}$,
R.~Ilkaev$^\textrm{\scriptsize 102}$,
M.~Inaba$^\textrm{\scriptsize 132}$,
M.~Ippolitov$^\textrm{\scriptsize 83}$\textsuperscript{,}$^\textrm{\scriptsize 76}$,
M.~Irfan$^\textrm{\scriptsize 17}$,
V.~Isakov$^\textrm{\scriptsize 52}$,
M.S.~Islam$^\textrm{\scriptsize 48}$,
M.~Ivanov$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 100}$,
V.~Ivanov$^\textrm{\scriptsize 89}$,
V.~Izucheev$^\textrm{\scriptsize 114}$,
B.~Jacak$^\textrm{\scriptsize 75}$,
N.~Jacazio$^\textrm{\scriptsize 26}$,
P.M.~Jacobs$^\textrm{\scriptsize 75}$,
M.B.~Jadhav$^\textrm{\scriptsize 47}$,
S.~Jadlovska$^\textrm{\scriptsize 118}$,
J.~Jadlovsky$^\textrm{\scriptsize 118}$,
C.~Jahnke$^\textrm{\scriptsize 35}$,
M.J.~Jakubowska$^\textrm{\scriptsize 140}$,
M.A.~Janik$^\textrm{\scriptsize 140}$,
P.H.S.Y.~Jayarathna$^\textrm{\scriptsize 126}$,
C.~Jena$^\textrm{\scriptsize 81}$,
S.~Jena$^\textrm{\scriptsize 126}$,
M.~Jercic$^\textrm{\scriptsize 133}$,
R.T.~Jimenez Bustamante$^\textrm{\scriptsize 100}$,
P.G.~Jones$^\textrm{\scriptsize 104}$,
A.~Jusko$^\textrm{\scriptsize 104}$,
P.~Kalinak$^\textrm{\scriptsize 55}$,
A.~Kalweit$^\textrm{\scriptsize 34}$,
J.H.~Kang$^\textrm{\scriptsize 144}$,
V.~Kaplin$^\textrm{\scriptsize 76}$,
S.~Kar$^\textrm{\scriptsize 139}$,
A.~Karasu Uysal$^\textrm{\scriptsize 70}$,
O.~Karavichev$^\textrm{\scriptsize 52}$,
T.~Karavicheva$^\textrm{\scriptsize 52}$,
L.~Karayan$^\textrm{\scriptsize 100}$\textsuperscript{,}$^\textrm{\scriptsize 96}$,
E.~Karpechev$^\textrm{\scriptsize 52}$,
U.~Kebschull$^\textrm{\scriptsize 59}$,
R.~Keidel$^\textrm{\scriptsize 145}$,
D.L.D.~Keijdener$^\textrm{\scriptsize 53}$,
M.~Keil$^\textrm{\scriptsize 34}$,
B.~Ketzer$^\textrm{\scriptsize 44}$,
M. Mohisin~Khan$^\textrm{\scriptsize 17}$\Aref{idp3207120},
P.~Khan$^\textrm{\scriptsize 103}$,
S.A.~Khan$^\textrm{\scriptsize 139}$,
A.~Khanzadeev$^\textrm{\scriptsize 89}$,
Y.~Kharlov$^\textrm{\scriptsize 114}$,
A.~Khatun$^\textrm{\scriptsize 17}$,
A.~Khuntia$^\textrm{\scriptsize 48}$,
M.M.~Kielbowicz$^\textrm{\scriptsize 120}$,
B.~Kileng$^\textrm{\scriptsize 36}$,
D.W.~Kim$^\textrm{\scriptsize 42}$,
D.J.~Kim$^\textrm{\scriptsize 127}$,
D.~Kim$^\textrm{\scriptsize 144}$,
H.~Kim$^\textrm{\scriptsize 144}$,
J.S.~Kim$^\textrm{\scriptsize 42}$,
J.~Kim$^\textrm{\scriptsize 96}$,
M.~Kim$^\textrm{\scriptsize 50}$,
M.~Kim$^\textrm{\scriptsize 144}$,
S.~Kim$^\textrm{\scriptsize 19}$,
T.~Kim$^\textrm{\scriptsize 144}$,
S.~Kirsch$^\textrm{\scriptsize 41}$,
I.~Kisel$^\textrm{\scriptsize 41}$,
S.~Kiselev$^\textrm{\scriptsize 54}$,
A.~Kisiel$^\textrm{\scriptsize 140}$,
G.~Kiss$^\textrm{\scriptsize 142}$,
J.L.~Klay$^\textrm{\scriptsize 6}$,
C.~Klein$^\textrm{\scriptsize 60}$,
J.~Klein$^\textrm{\scriptsize 34}$,
C.~Klein-B\"{o}sing$^\textrm{\scriptsize 61}$,
S.~Klewin$^\textrm{\scriptsize 96}$,
A.~Kluge$^\textrm{\scriptsize 34}$,
M.L.~Knichel$^\textrm{\scriptsize 96}$,
A.G.~Knospe$^\textrm{\scriptsize 126}$,
C.~Kobdaj$^\textrm{\scriptsize 117}$,
M.~Kofarago$^\textrm{\scriptsize 34}$,
T.~Kollegger$^\textrm{\scriptsize 100}$,
A.~Kolojvari$^\textrm{\scriptsize 138}$,
V.~Kondratiev$^\textrm{\scriptsize 138}$,
N.~Kondratyeva$^\textrm{\scriptsize 76}$,
E.~Kondratyuk$^\textrm{\scriptsize 114}$,
A.~Konevskikh$^\textrm{\scriptsize 52}$,
M.~Kopcik$^\textrm{\scriptsize 118}$,
M.~Kour$^\textrm{\scriptsize 93}$,
C.~Kouzinopoulos$^\textrm{\scriptsize 34}$,
O.~Kovalenko$^\textrm{\scriptsize 79}$,
V.~Kovalenko$^\textrm{\scriptsize 138}$,
M.~Kowalski$^\textrm{\scriptsize 120}$,
G.~Koyithatta Meethaleveedu$^\textrm{\scriptsize 47}$,
I.~Kr\'{a}lik$^\textrm{\scriptsize 55}$,
A.~Krav\v{c}\'{a}kov\'{a}$^\textrm{\scriptsize 39}$,
M.~Krivda$^\textrm{\scriptsize 55}$\textsuperscript{,}$^\textrm{\scriptsize 104}$,
F.~Krizek$^\textrm{\scriptsize 87}$,
E.~Kryshen$^\textrm{\scriptsize 89}$,
M.~Krzewicki$^\textrm{\scriptsize 41}$,
A.M.~Kubera$^\textrm{\scriptsize 18}$,
V.~Ku\v{c}era$^\textrm{\scriptsize 87}$,
C.~Kuhn$^\textrm{\scriptsize 135}$,
P.G.~Kuijer$^\textrm{\scriptsize 85}$,
A.~Kumar$^\textrm{\scriptsize 93}$,
J.~Kumar$^\textrm{\scriptsize 47}$,
L.~Kumar$^\textrm{\scriptsize 91}$,
S.~Kumar$^\textrm{\scriptsize 47}$,
S.~Kundu$^\textrm{\scriptsize 81}$,
P.~Kurashvili$^\textrm{\scriptsize 79}$,
A.~Kurepin$^\textrm{\scriptsize 52}$,
A.B.~Kurepin$^\textrm{\scriptsize 52}$,
A.~Kuryakin$^\textrm{\scriptsize 102}$,
S.~Kushpil$^\textrm{\scriptsize 87}$,
M.J.~Kweon$^\textrm{\scriptsize 50}$,
Y.~Kwon$^\textrm{\scriptsize 144}$,
S.L.~La Pointe$^\textrm{\scriptsize 41}$,
P.~La Rocca$^\textrm{\scriptsize 27}$,
C.~Lagana Fernandes$^\textrm{\scriptsize 123}$,
I.~Lakomov$^\textrm{\scriptsize 34}$,
R.~Langoy$^\textrm{\scriptsize 40}$,
K.~Lapidus$^\textrm{\scriptsize 143}$,
C.~Lara$^\textrm{\scriptsize 59}$,
A.~Lardeux$^\textrm{\scriptsize 20}$\textsuperscript{,}$^\textrm{\scriptsize 65}$,
A.~Lattuca$^\textrm{\scriptsize 25}$,
E.~Laudi$^\textrm{\scriptsize 34}$,
R.~Lavicka$^\textrm{\scriptsize 38}$,
L.~Lazaridis$^\textrm{\scriptsize 34}$,
R.~Lea$^\textrm{\scriptsize 24}$,
L.~Leardini$^\textrm{\scriptsize 96}$,
S.~Lee$^\textrm{\scriptsize 144}$,
F.~Lehas$^\textrm{\scriptsize 85}$,
S.~Lehner$^\textrm{\scriptsize 115}$,
J.~Lehrbach$^\textrm{\scriptsize 41}$,
R.C.~Lemmon$^\textrm{\scriptsize 86}$,
V.~Lenti$^\textrm{\scriptsize 106}$,
E.~Leogrande$^\textrm{\scriptsize 53}$,
I.~Le\'{o}n Monz\'{o}n$^\textrm{\scriptsize 122}$,
P.~L\'{e}vai$^\textrm{\scriptsize 142}$,
S.~Li$^\textrm{\scriptsize 7}$,
X.~Li$^\textrm{\scriptsize 14}$,
J.~Lien$^\textrm{\scriptsize 40}$,
R.~Lietava$^\textrm{\scriptsize 104}$,
S.~Lindal$^\textrm{\scriptsize 20}$,
V.~Lindenstruth$^\textrm{\scriptsize 41}$,
C.~Lippmann$^\textrm{\scriptsize 100}$,
M.A.~Lisa$^\textrm{\scriptsize 18}$,
V.~Litichevskyi$^\textrm{\scriptsize 45}$,
H.M.~Ljunggren$^\textrm{\scriptsize 33}$,
W.J.~Llope$^\textrm{\scriptsize 141}$,
D.F.~Lodato$^\textrm{\scriptsize 53}$,
V.R.~Loggins$^\textrm{\scriptsize 141}$,
P.I.~Loenne$^\textrm{\scriptsize 21}$,
V.~Loginov$^\textrm{\scriptsize 76}$,
C.~Loizides$^\textrm{\scriptsize 75}$,
P.~Loncar$^\textrm{\scriptsize 119}$,
X.~Lopez$^\textrm{\scriptsize 71}$,
E.~L\'{o}pez Torres$^\textrm{\scriptsize 9}$,
A.~Lowe$^\textrm{\scriptsize 142}$,
P.~Luettig$^\textrm{\scriptsize 60}$,
M.~Lunardon$^\textrm{\scriptsize 28}$,
G.~Luparello$^\textrm{\scriptsize 24}$,
M.~Lupi$^\textrm{\scriptsize 34}$,
T.H.~Lutz$^\textrm{\scriptsize 143}$,
A.~Maevskaya$^\textrm{\scriptsize 52}$,
M.~Mager$^\textrm{\scriptsize 34}$,
S.~Mahajan$^\textrm{\scriptsize 93}$,
S.M.~Mahmood$^\textrm{\scriptsize 20}$,
A.~Maire$^\textrm{\scriptsize 135}$,
R.D.~Majka$^\textrm{\scriptsize 143}$,
M.~Malaev$^\textrm{\scriptsize 89}$,
I.~Maldonado Cervantes$^\textrm{\scriptsize 62}$,
L.~Malinina$^\textrm{\scriptsize 67}$\Aref{idp3984928},
D.~Mal'Kevich$^\textrm{\scriptsize 54}$,
P.~Malzacher$^\textrm{\scriptsize 100}$,
A.~Mamonov$^\textrm{\scriptsize 102}$,
V.~Manko$^\textrm{\scriptsize 83}$,
F.~Manso$^\textrm{\scriptsize 71}$,
V.~Manzari$^\textrm{\scriptsize 106}$,
Y.~Mao$^\textrm{\scriptsize 7}$,
M.~Marchisone$^\textrm{\scriptsize 66}$\textsuperscript{,}$^\textrm{\scriptsize 130}$,
J.~Mare\v{s}$^\textrm{\scriptsize 56}$,
G.V.~Margagliotti$^\textrm{\scriptsize 24}$,
A.~Margotti$^\textrm{\scriptsize 107}$,
J.~Margutti$^\textrm{\scriptsize 53}$,
A.~Mar\'{\i}n$^\textrm{\scriptsize 100}$,
C.~Markert$^\textrm{\scriptsize 121}$,
M.~Marquard$^\textrm{\scriptsize 60}$,
N.A.~Martin$^\textrm{\scriptsize 100}$,
P.~Martinengo$^\textrm{\scriptsize 34}$,
J.A.L.~Martinez$^\textrm{\scriptsize 59}$,
M.I.~Mart\'{\i}nez$^\textrm{\scriptsize 2}$,
G.~Mart\'{\i}nez Garc\'{\i}a$^\textrm{\scriptsize 116}$,
M.~Martinez Pedreira$^\textrm{\scriptsize 34}$,
A.~Mas$^\textrm{\scriptsize 123}$,
S.~Masciocchi$^\textrm{\scriptsize 100}$,
M.~Masera$^\textrm{\scriptsize 25}$,
A.~Masoni$^\textrm{\scriptsize 108}$,
A.~Mastroserio$^\textrm{\scriptsize 32}$,
A.M.~Mathis$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 35}$,
A.~Matyja$^\textrm{\scriptsize 120}$\textsuperscript{,}$^\textrm{\scriptsize 129}$,
C.~Mayer$^\textrm{\scriptsize 120}$,
J.~Mazer$^\textrm{\scriptsize 129}$,
M.~Mazzilli$^\textrm{\scriptsize 32}$,
M.A.~Mazzoni$^\textrm{\scriptsize 111}$,
F.~Meddi$^\textrm{\scriptsize 22}$,
Y.~Melikyan$^\textrm{\scriptsize 76}$,
A.~Menchaca-Rocha$^\textrm{\scriptsize 64}$,
E.~Meninno$^\textrm{\scriptsize 29}$,
J.~Mercado P\'erez$^\textrm{\scriptsize 96}$,
M.~Meres$^\textrm{\scriptsize 37}$,
S.~Mhlanga$^\textrm{\scriptsize 92}$,
Y.~Miake$^\textrm{\scriptsize 132}$,
M.M.~Mieskolainen$^\textrm{\scriptsize 45}$,
D.~Mihaylov$^\textrm{\scriptsize 97}$,
K.~Mikhaylov$^\textrm{\scriptsize 67}$\textsuperscript{,}$^\textrm{\scriptsize 54}$,
L.~Milano$^\textrm{\scriptsize 75}$,
J.~Milosevic$^\textrm{\scriptsize 20}$,
A.~Mischke$^\textrm{\scriptsize 53}$,
A.N.~Mishra$^\textrm{\scriptsize 48}$,
D.~Mi\'{s}kowiec$^\textrm{\scriptsize 100}$,
J.~Mitra$^\textrm{\scriptsize 139}$,
C.M.~Mitu$^\textrm{\scriptsize 58}$,
N.~Mohammadi$^\textrm{\scriptsize 53}$,
B.~Mohanty$^\textrm{\scriptsize 81}$,
E.~Montes$^\textrm{\scriptsize 10}$,
D.A.~Moreira De Godoy$^\textrm{\scriptsize 61}$,
L.A.P.~Moreno$^\textrm{\scriptsize 2}$,
S.~Moretto$^\textrm{\scriptsize 28}$,
A.~Morreale$^\textrm{\scriptsize 116}$,
A.~Morsch$^\textrm{\scriptsize 34}$,
V.~Muccifora$^\textrm{\scriptsize 73}$,
E.~Mudnic$^\textrm{\scriptsize 119}$,
D.~M{\"u}hlheim$^\textrm{\scriptsize 61}$,
S.~Muhuri$^\textrm{\scriptsize 139}$,
M.~Mukherjee$^\textrm{\scriptsize 139}$,
J.D.~Mulligan$^\textrm{\scriptsize 143}$,
M.G.~Munhoz$^\textrm{\scriptsize 123}$,
K.~M\"{u}nning$^\textrm{\scriptsize 44}$,
R.H.~Munzer$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 60}$,
H.~Murakami$^\textrm{\scriptsize 131}$,
S.~Murray$^\textrm{\scriptsize 66}$,
L.~Musa$^\textrm{\scriptsize 34}$,
J.~Musinsky$^\textrm{\scriptsize 55}$,
C.J.~Myers$^\textrm{\scriptsize 126}$,
B.~Naik$^\textrm{\scriptsize 47}$,
R.~Nair$^\textrm{\scriptsize 79}$,
B.K.~Nandi$^\textrm{\scriptsize 47}$,
R.~Nania$^\textrm{\scriptsize 107}$,
E.~Nappi$^\textrm{\scriptsize 106}$,
M.U.~Naru$^\textrm{\scriptsize 15}$,
H.~Natal da Luz$^\textrm{\scriptsize 123}$,
C.~Nattrass$^\textrm{\scriptsize 129}$,
S.R.~Navarro$^\textrm{\scriptsize 2}$,
K.~Nayak$^\textrm{\scriptsize 81}$,
R.~Nayak$^\textrm{\scriptsize 47}$,
T.K.~Nayak$^\textrm{\scriptsize 139}$,
S.~Nazarenko$^\textrm{\scriptsize 102}$,
A.~Nedosekin$^\textrm{\scriptsize 54}$,
R.A.~Negrao De Oliveira$^\textrm{\scriptsize 34}$,
L.~Nellen$^\textrm{\scriptsize 62}$,
S.V.~Nesbo$^\textrm{\scriptsize 36}$,
F.~Ng$^\textrm{\scriptsize 126}$,
M.~Nicassio$^\textrm{\scriptsize 100}$,
M.~Niculescu$^\textrm{\scriptsize 58}$,
J.~Niedziela$^\textrm{\scriptsize 34}$,
B.S.~Nielsen$^\textrm{\scriptsize 84}$,
S.~Nikolaev$^\textrm{\scriptsize 83}$,
S.~Nikulin$^\textrm{\scriptsize 83}$,
V.~Nikulin$^\textrm{\scriptsize 89}$,
F.~Noferini$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 12}$,
P.~Nomokonov$^\textrm{\scriptsize 67}$,
G.~Nooren$^\textrm{\scriptsize 53}$,
J.C.C.~Noris$^\textrm{\scriptsize 2}$,
J.~Norman$^\textrm{\scriptsize 128}$,
A.~Nyanin$^\textrm{\scriptsize 83}$,
J.~Nystrand$^\textrm{\scriptsize 21}$,
H.~Oeschler$^\textrm{\scriptsize 96}$,
S.~Oh$^\textrm{\scriptsize 143}$,
A.~Ohlson$^\textrm{\scriptsize 96}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
T.~Okubo$^\textrm{\scriptsize 46}$,
L.~Olah$^\textrm{\scriptsize 142}$,
J.~Oleniacz$^\textrm{\scriptsize 140}$,
A.C.~Oliveira Da Silva$^\textrm{\scriptsize 123}$,
M.H.~Oliver$^\textrm{\scriptsize 143}$,
J.~Onderwaater$^\textrm{\scriptsize 100}$,
C.~Oppedisano$^\textrm{\scriptsize 113}$,
R.~Orava$^\textrm{\scriptsize 45}$,
M.~Oravec$^\textrm{\scriptsize 118}$,
A.~Ortiz Velasquez$^\textrm{\scriptsize 62}$,
A.~Oskarsson$^\textrm{\scriptsize 33}$,
J.~Otwinowski$^\textrm{\scriptsize 120}$,
K.~Oyama$^\textrm{\scriptsize 77}$,
M.~Ozdemir$^\textrm{\scriptsize 60}$,
Y.~Pachmayer$^\textrm{\scriptsize 96}$,
V.~Pacik$^\textrm{\scriptsize 84}$,
D.~Pagano$^\textrm{\scriptsize 137}$,
P.~Pagano$^\textrm{\scriptsize 29}$,
G.~Pai\'{c}$^\textrm{\scriptsize 62}$,
S.K.~Pal$^\textrm{\scriptsize 139}$,
P.~Palni$^\textrm{\scriptsize 7}$,
J.~Pan$^\textrm{\scriptsize 141}$,
A.K.~Pandey$^\textrm{\scriptsize 47}$,
S.~Panebianco$^\textrm{\scriptsize 65}$,
V.~Papikyan$^\textrm{\scriptsize 1}$,
G.S.~Pappalardo$^\textrm{\scriptsize 109}$,
P.~Pareek$^\textrm{\scriptsize 48}$,
J.~Park$^\textrm{\scriptsize 50}$,
W.J.~Park$^\textrm{\scriptsize 100}$,
S.~Parmar$^\textrm{\scriptsize 91}$,
A.~Passfeld$^\textrm{\scriptsize 61}$,
S.P.~Pathak$^\textrm{\scriptsize 126}$,
V.~Paticchio$^\textrm{\scriptsize 106}$,
R.N.~Patra$^\textrm{\scriptsize 139}$,
B.~Paul$^\textrm{\scriptsize 113}$,
H.~Pei$^\textrm{\scriptsize 7}$,
T.~Peitzmann$^\textrm{\scriptsize 53}$,
X.~Peng$^\textrm{\scriptsize 7}$,
L.G.~Pereira$^\textrm{\scriptsize 63}$,
H.~Pereira Da Costa$^\textrm{\scriptsize 65}$,
D.~Peresunko$^\textrm{\scriptsize 83}$\textsuperscript{,}$^\textrm{\scriptsize 76}$,
E.~Perez Lezama$^\textrm{\scriptsize 60}$,
V.~Peskov$^\textrm{\scriptsize 60}$,
Y.~Pestov$^\textrm{\scriptsize 5}$,
V.~Petr\'{a}\v{c}ek$^\textrm{\scriptsize 38}$,
V.~Petrov$^\textrm{\scriptsize 114}$,
M.~Petrovici$^\textrm{\scriptsize 80}$,
C.~Petta$^\textrm{\scriptsize 27}$,
R.P.~Pezzi$^\textrm{\scriptsize 63}$,
S.~Piano$^\textrm{\scriptsize 112}$,
M.~Pikna$^\textrm{\scriptsize 37}$,
P.~Pillot$^\textrm{\scriptsize 116}$,
L.O.D.L.~Pimentel$^\textrm{\scriptsize 84}$,
O.~Pinazza$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
L.~Pinsky$^\textrm{\scriptsize 126}$,
D.B.~Piyarathna$^\textrm{\scriptsize 126}$,
M.~P\l osko\'{n}$^\textrm{\scriptsize 75}$,
M.~Planinic$^\textrm{\scriptsize 133}$,
J.~Pluta$^\textrm{\scriptsize 140}$,
S.~Pochybova$^\textrm{\scriptsize 142}$,
P.L.M.~Podesta-Lerma$^\textrm{\scriptsize 122}$,
M.G.~Poghosyan$^\textrm{\scriptsize 88}$,
B.~Polichtchouk$^\textrm{\scriptsize 114}$,
N.~Poljak$^\textrm{\scriptsize 133}$,
W.~Poonsawat$^\textrm{\scriptsize 117}$,
A.~Pop$^\textrm{\scriptsize 80}$,
H.~Poppenborg$^\textrm{\scriptsize 61}$,
S.~Porteboeuf-Houssais$^\textrm{\scriptsize 71}$,
J.~Porter$^\textrm{\scriptsize 75}$,
J.~Pospisil$^\textrm{\scriptsize 87}$,
V.~Pozdniakov$^\textrm{\scriptsize 67}$,
S.K.~Prasad$^\textrm{\scriptsize 4}$,
R.~Preghenella$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 107}$,
F.~Prino$^\textrm{\scriptsize 113}$,
C.A.~Pruneau$^\textrm{\scriptsize 141}$,
I.~Pshenichnov$^\textrm{\scriptsize 52}$,
M.~Puccio$^\textrm{\scriptsize 25}$,
G.~Puddu$^\textrm{\scriptsize 23}$,
P.~Pujahari$^\textrm{\scriptsize 141}$,
V.~Punin$^\textrm{\scriptsize 102}$,
J.~Putschke$^\textrm{\scriptsize 141}$,
H.~Qvigstad$^\textrm{\scriptsize 20}$,
A.~Rachevski$^\textrm{\scriptsize 112}$,
S.~Raha$^\textrm{\scriptsize 4}$,
S.~Rajput$^\textrm{\scriptsize 93}$,
J.~Rak$^\textrm{\scriptsize 127}$,
A.~Rakotozafindrabe$^\textrm{\scriptsize 65}$,
L.~Ramello$^\textrm{\scriptsize 31}$,
F.~Rami$^\textrm{\scriptsize 135}$,
D.B.~Rana$^\textrm{\scriptsize 126}$,
R.~Raniwala$^\textrm{\scriptsize 94}$,
S.~Raniwala$^\textrm{\scriptsize 94}$,
S.S.~R\"{a}s\"{a}nen$^\textrm{\scriptsize 45}$,
B.T.~Rascanu$^\textrm{\scriptsize 60}$,
D.~Rathee$^\textrm{\scriptsize 91}$,
V.~Ratza$^\textrm{\scriptsize 44}$,
I.~Ravasenga$^\textrm{\scriptsize 30}$,
K.F.~Read$^\textrm{\scriptsize 88}$\textsuperscript{,}$^\textrm{\scriptsize 129}$,
K.~Redlich$^\textrm{\scriptsize 79}$,
A.~Rehman$^\textrm{\scriptsize 21}$,
P.~Reichelt$^\textrm{\scriptsize 60}$,
F.~Reidt$^\textrm{\scriptsize 34}$,
X.~Ren$^\textrm{\scriptsize 7}$,
R.~Renfordt$^\textrm{\scriptsize 60}$,
A.R.~Reolon$^\textrm{\scriptsize 73}$,
A.~Reshetin$^\textrm{\scriptsize 52}$,
K.~Reygers$^\textrm{\scriptsize 96}$,
V.~Riabov$^\textrm{\scriptsize 89}$,
R.A.~Ricci$^\textrm{\scriptsize 74}$,
T.~Richert$^\textrm{\scriptsize 53}$\textsuperscript{,}$^\textrm{\scriptsize 33}$,
M.~Richter$^\textrm{\scriptsize 20}$,
P.~Riedler$^\textrm{\scriptsize 34}$,
W.~Riegler$^\textrm{\scriptsize 34}$,
F.~Riggi$^\textrm{\scriptsize 27}$,
C.~Ristea$^\textrm{\scriptsize 58}$,
M.~Rodr\'{i}guez Cahuantzi$^\textrm{\scriptsize 2}$,
K.~R{\o}ed$^\textrm{\scriptsize 20}$,
E.~Rogochaya$^\textrm{\scriptsize 67}$,
D.~Rohr$^\textrm{\scriptsize 41}$,
D.~R\"ohrich$^\textrm{\scriptsize 21}$,
P.S.~Rokita$^\textrm{\scriptsize 140}$,
F.~Ronchetti$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 73}$,
L.~Ronflette$^\textrm{\scriptsize 116}$,
P.~Rosnet$^\textrm{\scriptsize 71}$,
A.~Rossi$^\textrm{\scriptsize 28}$,
A.~Rotondi$^\textrm{\scriptsize 136}$,
F.~Roukoutakis$^\textrm{\scriptsize 78}$,
A.~Roy$^\textrm{\scriptsize 48}$,
C.~Roy$^\textrm{\scriptsize 135}$,
P.~Roy$^\textrm{\scriptsize 103}$,
A.J.~Rubio Montero$^\textrm{\scriptsize 10}$,
R.~Rui$^\textrm{\scriptsize 24}$,
R.~Russo$^\textrm{\scriptsize 25}$,
A.~Rustamov$^\textrm{\scriptsize 82}$,
E.~Ryabinkin$^\textrm{\scriptsize 83}$,
Y.~Ryabov$^\textrm{\scriptsize 89}$,
A.~Rybicki$^\textrm{\scriptsize 120}$,
S.~Saarinen$^\textrm{\scriptsize 45}$,
S.~Sadhu$^\textrm{\scriptsize 139}$,
S.~Sadovsky$^\textrm{\scriptsize 114}$,
K.~\v{S}afa\v{r}\'{\i}k$^\textrm{\scriptsize 34}$,
S.K.~Saha$^\textrm{\scriptsize 139}$,
B.~Sahlmuller$^\textrm{\scriptsize 60}$,
B.~Sahoo$^\textrm{\scriptsize 47}$,
P.~Sahoo$^\textrm{\scriptsize 48}$,
R.~Sahoo$^\textrm{\scriptsize 48}$,
S.~Sahoo$^\textrm{\scriptsize 57}$,
P.K.~Sahu$^\textrm{\scriptsize 57}$,
J.~Saini$^\textrm{\scriptsize 139}$,
S.~Sakai$^\textrm{\scriptsize 73}$\textsuperscript{,}$^\textrm{\scriptsize 132}$,
M.A.~Saleh$^\textrm{\scriptsize 141}$,
J.~Salzwedel$^\textrm{\scriptsize 18}$,
S.~Sambyal$^\textrm{\scriptsize 93}$,
V.~Samsonov$^\textrm{\scriptsize 76}$\textsuperscript{,}$^\textrm{\scriptsize 89}$,
A.~Sandoval$^\textrm{\scriptsize 64}$,
D.~Sarkar$^\textrm{\scriptsize 139}$,
N.~Sarkar$^\textrm{\scriptsize 139}$,
P.~Sarma$^\textrm{\scriptsize 43}$,
M.H.P.~Sas$^\textrm{\scriptsize 53}$,
E.~Scapparone$^\textrm{\scriptsize 107}$,
F.~Scarlassara$^\textrm{\scriptsize 28}$,
R.P.~Scharenberg$^\textrm{\scriptsize 98}$,
H.S.~Scheid$^\textrm{\scriptsize 60}$,
C.~Schiaua$^\textrm{\scriptsize 80}$,
R.~Schicker$^\textrm{\scriptsize 96}$,
C.~Schmidt$^\textrm{\scriptsize 100}$,
H.R.~Schmidt$^\textrm{\scriptsize 95}$,
M.O.~Schmidt$^\textrm{\scriptsize 96}$,
M.~Schmidt$^\textrm{\scriptsize 95}$,
J.~Schukraft$^\textrm{\scriptsize 34}$,
Y.~Schutz$^\textrm{\scriptsize 116}$\textsuperscript{,}$^\textrm{\scriptsize 135}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
K.~Schwarz$^\textrm{\scriptsize 100}$,
K.~Schweda$^\textrm{\scriptsize 100}$,
G.~Scioli$^\textrm{\scriptsize 26}$,
E.~Scomparin$^\textrm{\scriptsize 113}$,
R.~Scott$^\textrm{\scriptsize 129}$,
M.~\v{S}ef\v{c}\'ik$^\textrm{\scriptsize 39}$,
J.E.~Seger$^\textrm{\scriptsize 90}$,
Y.~Sekiguchi$^\textrm{\scriptsize 131}$,
D.~Sekihata$^\textrm{\scriptsize 46}$,
I.~Selyuzhenkov$^\textrm{\scriptsize 76}$\textsuperscript{,}$^\textrm{\scriptsize 100}$,
K.~Senosi$^\textrm{\scriptsize 66}$,
S.~Senyukov$^\textrm{\scriptsize 3}$\textsuperscript{,}$^\textrm{\scriptsize 135}$\textsuperscript{,}$^\textrm{\scriptsize 34}$,
E.~Serradilla$^\textrm{\scriptsize 64}$\textsuperscript{,}$^\textrm{\scriptsize 10}$,
P.~Sett$^\textrm{\scriptsize 47}$,
A.~Sevcenco$^\textrm{\scriptsize 58}$,
A.~Shabanov$^\textrm{\scriptsize 52}$,
A.~Shabetai$^\textrm{\scriptsize 116}$,
O.~Shadura$^\textrm{\scriptsize 3}$,
R.~Shahoyan$^\textrm{\scriptsize 34}$,
A.~Shangaraev$^\textrm{\scriptsize 114}$,
A.~Sharma$^\textrm{\scriptsize 93}$,
A.~Sharma$^\textrm{\scriptsize 91}$,
M.~Sharma$^\textrm{\scriptsize 93}$,
M.~Sharma$^\textrm{\scriptsize 93}$,
N.~Sharma$^\textrm{\scriptsize 129}$\textsuperscript{,}$^\textrm{\scriptsize 91}$,
A.I.~Sheikh$^\textrm{\scriptsize 139}$,
K.~Shigaki$^\textrm{\scriptsize 46}$,
Q.~Shou$^\textrm{\scriptsize 7}$,
K.~Shtejer$^\textrm{\scriptsize 25}$\textsuperscript{,}$^\textrm{\scriptsize 9}$,
Y.~Sibiriak$^\textrm{\scriptsize 83}$,
S.~Siddhanta$^\textrm{\scriptsize 108}$,
K.M.~Sielewicz$^\textrm{\scriptsize 34}$,
T.~Siemiarczuk$^\textrm{\scriptsize 79}$,
D.~Silvermyr$^\textrm{\scriptsize 33}$,
C.~Silvestre$^\textrm{\scriptsize 72}$,
G.~Simatovic$^\textrm{\scriptsize 133}$,
G.~Simonetti$^\textrm{\scriptsize 34}$,
R.~Singaraju$^\textrm{\scriptsize 139}$,
R.~Singh$^\textrm{\scriptsize 81}$,
V.~Singhal$^\textrm{\scriptsize 139}$,
T.~Sinha$^\textrm{\scriptsize 103}$,
B.~Sitar$^\textrm{\scriptsize 37}$,
M.~Sitta$^\textrm{\scriptsize 31}$,
T.B.~Skaali$^\textrm{\scriptsize 20}$,
M.~Slupecki$^\textrm{\scriptsize 127}$,
N.~Smirnov$^\textrm{\scriptsize 143}$,
R.J.M.~Snellings$^\textrm{\scriptsize 53}$,
T.W.~Snellman$^\textrm{\scriptsize 127}$,
J.~Song$^\textrm{\scriptsize 99}$,
M.~Song$^\textrm{\scriptsize 144}$,
F.~Soramel$^\textrm{\scriptsize 28}$,
S.~Sorensen$^\textrm{\scriptsize 129}$,
F.~Sozzi$^\textrm{\scriptsize 100}$,
E.~Spiriti$^\textrm{\scriptsize 73}$,
I.~Sputowska$^\textrm{\scriptsize 120}$,
B.K.~Srivastava$^\textrm{\scriptsize 98}$,
J.~Stachel$^\textrm{\scriptsize 96}$,
I.~Stan$^\textrm{\scriptsize 58}$,
P.~Stankus$^\textrm{\scriptsize 88}$,
E.~Stenlund$^\textrm{\scriptsize 33}$,
J.H.~Stiller$^\textrm{\scriptsize 96}$,
D.~Stocco$^\textrm{\scriptsize 116}$,
P.~Strmen$^\textrm{\scriptsize 37}$,
A.A.P.~Suaide$^\textrm{\scriptsize 123}$,
T.~Sugitate$^\textrm{\scriptsize 46}$,
C.~Suire$^\textrm{\scriptsize 51}$,
M.~Suleymanov$^\textrm{\scriptsize 15}$,
M.~Suljic$^\textrm{\scriptsize 24}$,
R.~Sultanov$^\textrm{\scriptsize 54}$,
M.~\v{S}umbera$^\textrm{\scriptsize 87}$,
S.~Sumowidagdo$^\textrm{\scriptsize 49}$,
K.~Suzuki$^\textrm{\scriptsize 115}$,
S.~Swain$^\textrm{\scriptsize 57}$,
A.~Szabo$^\textrm{\scriptsize 37}$,
I.~Szarka$^\textrm{\scriptsize 37}$,
A.~Szczepankiewicz$^\textrm{\scriptsize 140}$,
M.~Szymanski$^\textrm{\scriptsize 140}$,
U.~Tabassam$^\textrm{\scriptsize 15}$,
J.~Takahashi$^\textrm{\scriptsize 124}$,
G.J.~Tambave$^\textrm{\scriptsize 21}$,
N.~Tanaka$^\textrm{\scriptsize 132}$,
M.~Tarhini$^\textrm{\scriptsize 51}$,
M.~Tariq$^\textrm{\scriptsize 17}$,
M.G.~Tarzila$^\textrm{\scriptsize 80}$,
A.~Tauro$^\textrm{\scriptsize 34}$,
G.~Tejeda Mu\~{n}oz$^\textrm{\scriptsize 2}$,
A.~Telesca$^\textrm{\scriptsize 34}$,
K.~Terasaki$^\textrm{\scriptsize 131}$,
C.~Terrevoli$^\textrm{\scriptsize 28}$,
B.~Teyssier$^\textrm{\scriptsize 134}$,
D.~Thakur$^\textrm{\scriptsize 48}$,
S.~Thakur$^\textrm{\scriptsize 139}$,
D.~Thomas$^\textrm{\scriptsize 121}$,
R.~Tieulent$^\textrm{\scriptsize 134}$,
A.~Tikhonov$^\textrm{\scriptsize 52}$,
A.R.~Timmins$^\textrm{\scriptsize 126}$,
A.~Toia$^\textrm{\scriptsize 60}$,
S.~Tripathy$^\textrm{\scriptsize 48}$,
S.~Trogolo$^\textrm{\scriptsize 25}$,
G.~Trombetta$^\textrm{\scriptsize 32}$,
V.~Trubnikov$^\textrm{\scriptsize 3}$,
W.H.~Trzaska$^\textrm{\scriptsize 127}$,
B.A.~Trzeciak$^\textrm{\scriptsize 53}$,
T.~Tsuji$^\textrm{\scriptsize 131}$,
A.~Tumkin$^\textrm{\scriptsize 102}$,
R.~Turrisi$^\textrm{\scriptsize 110}$,
T.S.~Tveter$^\textrm{\scriptsize 20}$,
K.~Ullaland$^\textrm{\scriptsize 21}$,
E.N.~Umaka$^\textrm{\scriptsize 126}$,
A.~Uras$^\textrm{\scriptsize 134}$,
G.L.~Usai$^\textrm{\scriptsize 23}$,
A.~Utrobicic$^\textrm{\scriptsize 133}$,
M.~Vala$^\textrm{\scriptsize 118}$\textsuperscript{,}$^\textrm{\scriptsize 55}$,
J.~Van Der Maarel$^\textrm{\scriptsize 53}$,
J.W.~Van Hoorne$^\textrm{\scriptsize 34}$,
M.~van Leeuwen$^\textrm{\scriptsize 53}$,
T.~Vanat$^\textrm{\scriptsize 87}$,
P.~Vande Vyvre$^\textrm{\scriptsize 34}$,
D.~Varga$^\textrm{\scriptsize 142}$,
A.~Vargas$^\textrm{\scriptsize 2}$,
M.~Vargyas$^\textrm{\scriptsize 127}$,
R.~Varma$^\textrm{\scriptsize 47}$,
M.~Vasileiou$^\textrm{\scriptsize 78}$,
A.~Vasiliev$^\textrm{\scriptsize 83}$,
A.~Vauthier$^\textrm{\scriptsize 72}$,
O.~V\'azquez Doce$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 35}$,
V.~Vechernin$^\textrm{\scriptsize 138}$,
A.M.~Veen$^\textrm{\scriptsize 53}$,
A.~Velure$^\textrm{\scriptsize 21}$,
E.~Vercellin$^\textrm{\scriptsize 25}$,
S.~Vergara Lim\'on$^\textrm{\scriptsize 2}$,
R.~Vernet$^\textrm{\scriptsize 8}$,
R.~V\'ertesi$^\textrm{\scriptsize 142}$,
L.~Vickovic$^\textrm{\scriptsize 119}$,
S.~Vigolo$^\textrm{\scriptsize 53}$,
J.~Viinikainen$^\textrm{\scriptsize 127}$,
Z.~Vilakazi$^\textrm{\scriptsize 130}$,
O.~Villalobos Baillie$^\textrm{\scriptsize 104}$,
A.~Villatoro Tello$^\textrm{\scriptsize 2}$,
A.~Vinogradov$^\textrm{\scriptsize 83}$,
L.~Vinogradov$^\textrm{\scriptsize 138}$,
T.~Virgili$^\textrm{\scriptsize 29}$,
V.~Vislavicius$^\textrm{\scriptsize 33}$,
A.~Vodopyanov$^\textrm{\scriptsize 67}$,
M.A.~V\"{o}lkl$^\textrm{\scriptsize 96}$,
K.~Voloshin$^\textrm{\scriptsize 54}$,
S.A.~Voloshin$^\textrm{\scriptsize 141}$,
G.~Volpe$^\textrm{\scriptsize 32}$,
B.~von Haller$^\textrm{\scriptsize 34}$,
I.~Vorobyev$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 35}$,
D.~Voscek$^\textrm{\scriptsize 118}$,
D.~Vranic$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 100}$,
J.~Vrl\'{a}kov\'{a}$^\textrm{\scriptsize 39}$,
B.~Wagner$^\textrm{\scriptsize 21}$,
J.~Wagner$^\textrm{\scriptsize 100}$,
H.~Wang$^\textrm{\scriptsize 53}$,
M.~Wang$^\textrm{\scriptsize 7}$,
D.~Watanabe$^\textrm{\scriptsize 132}$,
Y.~Watanabe$^\textrm{\scriptsize 131}$,
M.~Weber$^\textrm{\scriptsize 115}$,
S.G.~Weber$^\textrm{\scriptsize 100}$,
D.F.~Weiser$^\textrm{\scriptsize 96}$,
J.P.~Wessels$^\textrm{\scriptsize 61}$,
U.~Westerhoff$^\textrm{\scriptsize 61}$,
A.M.~Whitehead$^\textrm{\scriptsize 92}$,
J.~Wiechula$^\textrm{\scriptsize 60}$,
J.~Wikne$^\textrm{\scriptsize 20}$,
G.~Wilk$^\textrm{\scriptsize 79}$,
J.~Wilkinson$^\textrm{\scriptsize 96}$,
G.A.~Willems$^\textrm{\scriptsize 61}$,
M.C.S.~Williams$^\textrm{\scriptsize 107}$,
B.~Windelband$^\textrm{\scriptsize 96}$,
W.E.~Witt$^\textrm{\scriptsize 129}$,
S.~Yalcin$^\textrm{\scriptsize 70}$,
P.~Yang$^\textrm{\scriptsize 7}$,
S.~Yano$^\textrm{\scriptsize 46}$,
Z.~Yin$^\textrm{\scriptsize 7}$,
H.~Yokoyama$^\textrm{\scriptsize 132}$\textsuperscript{,}$^\textrm{\scriptsize 72}$,
I.-K.~Yoo$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 99}$,
J.H.~Yoon$^\textrm{\scriptsize 50}$,
V.~Yurchenko$^\textrm{\scriptsize 3}$,
V.~Zaccolo$^\textrm{\scriptsize 84}$\textsuperscript{,}$^\textrm{\scriptsize 113}$,
A.~Zaman$^\textrm{\scriptsize 15}$,
C.~Zampolli$^\textrm{\scriptsize 34}$,
H.J.C.~Zanoli$^\textrm{\scriptsize 123}$,
S.~Zaporozhets$^\textrm{\scriptsize 67}$,
N.~Zardoshti$^\textrm{\scriptsize 104}$,
A.~Zarochentsev$^\textrm{\scriptsize 138}$,
P.~Z\'{a}vada$^\textrm{\scriptsize 56}$,
N.~Zaviyalov$^\textrm{\scriptsize 102}$,
H.~Zbroszczyk$^\textrm{\scriptsize 140}$,
M.~Zhalov$^\textrm{\scriptsize 89}$,
H.~Zhang$^\textrm{\scriptsize 21}$\textsuperscript{,}$^\textrm{\scriptsize 7}$,
X.~Zhang$^\textrm{\scriptsize 7}$\textsuperscript{,}$^\textrm{\scriptsize 75}$,
Y.~Zhang$^\textrm{\scriptsize 7}$,
C.~Zhang$^\textrm{\scriptsize 53}$,
Z.~Zhang$^\textrm{\scriptsize 7}$,
C.~Zhao$^\textrm{\scriptsize 20}$,
N.~Zhigareva$^\textrm{\scriptsize 54}$,
D.~Zhou$^\textrm{\scriptsize 7}$,
Y.~Zhou$^\textrm{\scriptsize 84}$,
Z.~Zhou$^\textrm{\scriptsize 21}$,
H.~Zhu$^\textrm{\scriptsize 21}$\textsuperscript{,}$^\textrm{\scriptsize 7}$,
J.~Zhu$^\textrm{\scriptsize 7}$\textsuperscript{,}$^\textrm{\scriptsize 116}$,
X.~Zhu$^\textrm{\scriptsize 7}$,
A.~Zichichi$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 26}$,
A.~Zimmermann$^\textrm{\scriptsize 96}$,
M.B.~Zimmermann$^\textrm{\scriptsize 34}$\textsuperscript{,}$^\textrm{\scriptsize 61}$,
S.~Zimmermann$^\textrm{\scriptsize 115}$,
G.~Zinovjev$^\textrm{\scriptsize 3}$,
J.~Zmeskal$^\textrm{\scriptsize 115}$
\renewcommand\labelenumi{\textsuperscript{\theenumi}~}
\section*{Affiliation notes}
\renewcommand\theenumi{\roman{enumi}}
\begin{Authlist}
\item \Adef{0}Deceased
\item \Adef{idp1764384}{Also at: Georgia State University, Atlanta, Georgia, United States}
\item \Adef{idp3207120}{Also at: Department of Applied Physics, Aligarh Muslim University, Aligarh, India}
\item \Adef{idp3984928}{Also at: M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia}
\end{Authlist}
\section*{Collaboration Institutes}
\renewcommand\theenumi{\arabic{enumi}~}
$^{1}$A.I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia
\\
$^{2}$Benem\'{e}rita Universidad Aut\'{o}noma de Puebla, Puebla, Mexico
\\
$^{3}$Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine
\\
$^{4}$Bose Institute, Department of Physics
and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India
\\
$^{5}$Budker Institute for Nuclear Physics, Novosibirsk, Russia
\\
$^{6}$California Polytechnic State University, San Luis Obispo, California, United States
\\
$^{7}$Central China Normal University, Wuhan, China
\\
$^{8}$Centre de Calcul de l'IN2P3, Villeurbanne, Lyon, France
\\
$^{9}$Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba
\\
$^{10}$Centro de Investigaciones Energ\'{e}ticas Medioambientales y Tecnol\'{o}gicas (CIEMAT), Madrid, Spain
\\
$^{11}$Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico
\\
$^{12}$Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche ``Enrico Fermi'', Rome, Italy
\\
$^{13}$Chicago State University, Chicago, Illinois, United States
\\
$^{14}$China Institute of Atomic Energy, Beijing, China
\\
$^{15}$COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan
\\
$^{16}$Departamento de F\'{\i}sica de Part\'{\i}culas and IGFAE, Universidad de Santiago de Compostela, Santiago de Compostela, Spain
\\
$^{17}$Department of Physics, Aligarh Muslim University, Aligarh, India
\\
$^{18}$Department of Physics, Ohio State University, Columbus, Ohio, United States
\\
$^{19}$Department of Physics, Sejong University, Seoul, South Korea
\\
$^{20}$Department of Physics, University of Oslo, Oslo, Norway
\\
$^{21}$Department of Physics and Technology, University of Bergen, Bergen, Norway
\\
$^{22}$Dipartimento di Fisica dell'Universit\`{a} 'La Sapienza'
and Sezione INFN, Rome, Italy
\\
$^{23}$Dipartimento di Fisica dell'Universit\`{a}
and Sezione INFN, Cagliari, Italy
\\
$^{24}$Dipartimento di Fisica dell'Universit\`{a}
and Sezione INFN, Trieste, Italy
\\
$^{25}$Dipartimento di Fisica dell'Universit\`{a}
and Sezione INFN, Turin, Italy
\\
$^{26}$Dipartimento di Fisica e Astronomia dell'Universit\`{a}
and Sezione INFN, Bologna, Italy
\\
$^{27}$Dipartimento di Fisica e Astronomia dell'Universit\`{a}
and Sezione INFN, Catania, Italy
\\
$^{28}$Dipartimento di Fisica e Astronomia dell'Universit\`{a}
and Sezione INFN, Padova, Italy
\\
$^{29}$Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a}
and Gruppo Collegato INFN, Salerno, Italy
\\
$^{30}$Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy
\\
$^{31}$Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and INFN Sezione di Torino, Alessandria, Italy
\\
$^{32}$Dipartimento Interateneo di Fisica `M.~Merlin'
and Sezione INFN, Bari, Italy
\\
$^{33}$Division of Experimental High Energy Physics, University of Lund, Lund, Sweden
\\
$^{34}$European Organization for Nuclear Research (CERN), Geneva, Switzerland
\\
$^{35}$Excellence Cluster Universe, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany
\\
$^{36}$Faculty of Engineering, Bergen University College, Bergen, Norway
\\
$^{37}$Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia
\\
$^{38}$Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic
\\
$^{39}$Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia
\\
$^{40}$Faculty of Technology, Buskerud and Vestfold University College, Tonsberg, Norway
\\
$^{41}$Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany
\\
$^{42}$Gangneung-Wonju National University, Gangneung, South Korea
\\
$^{43}$Gauhati University, Department of Physics, Guwahati, India
\\
$^{44}$Helmholtz-Institut f\"{u}r Strahlen- und Kernphysik, Rheinische Friedrich-Wilhelms-Universit\"{a}t Bonn, Bonn, Germany
\\
$^{45}$Helsinki Institute of Physics (HIP), Helsinki, Finland
\\
$^{46}$Hiroshima University, Hiroshima, Japan
\\
$^{47}$Indian Institute of Technology Bombay (IIT), Mumbai, India
\\
$^{48}$Indian Institute of Technology Indore, Indore, India
\\
$^{49}$Indonesian Institute of Sciences, Jakarta, Indonesia
\\
$^{50}$Inha University, Incheon, South Korea
\\
$^{51}$Institut de Physique Nucl\'eaire d'Orsay (IPNO), Universit\'e Paris-Sud, CNRS-IN2P3, Orsay, France
\\
$^{52}$Institute for Nuclear Research, Academy of Sciences, Moscow, Russia
\\
$^{53}$Institute for Subatomic Physics of Utrecht University, Utrecht, Netherlands
\\
$^{54}$Institute for Theoretical and Experimental Physics, Moscow, Russia
\\
$^{55}$Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia
\\
$^{56}$Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic
\\
$^{57}$Institute of Physics, Bhubaneswar, India
\\
$^{58}$Institute of Space Science (ISS), Bucharest, Romania
\\
$^{59}$Institut f\"{u}r Informatik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany
\\
$^{60}$Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany
\\
$^{61}$Institut f\"{u}r Kernphysik, Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, M\"{u}nster, Germany
\\
$^{62}$Instituto de Ciencias Nucleares, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico
\\
$^{63}$Instituto de F\'{i}sica, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil
\\
$^{64}$Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico
\\
$^{65}$IRFU, CEA, Universit\'{e} Paris-Saclay, F-91191 Gif-sur-Yvette, France
\\
$^{66}$iThemba LABS, National Research Foundation, Somerset West, South Africa
\\
$^{67}$Joint Institute for Nuclear Research (JINR), Dubna, Russia
\\
$^{68}$Konkuk University, Seoul, South Korea
\\
$^{69}$Korea Institute of Science and Technology Information, Daejeon, South Korea
\\
$^{70}$KTO Karatay University, Konya, Turkey
\\
$^{71}$Laboratoire de Physique Corpusculaire (LPC), Clermont Universit\'{e}, Universit\'{e} Blaise Pascal, CNRS--IN2P3, Clermont-Ferrand, France
\\
$^{72}$Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France
\\
$^{73}$Laboratori Nazionali di Frascati, INFN, Frascati, Italy
\\
$^{74}$Laboratori Nazionali di Legnaro, INFN, Legnaro, Italy
\\
$^{75}$Lawrence Berkeley National Laboratory, Berkeley, California, United States
\\
$^{76}$Moscow Engineering Physics Institute, Moscow, Russia
\\
$^{77}$Nagasaki Institute of Applied Science, Nagasaki, Japan
\\
$^{78}$National and Kapodistrian University of Athens, Physics Department, Athens, Greece
\\
$^{79}$National Centre for Nuclear Studies, Warsaw, Poland
\\
$^{80}$National Institute for Physics and Nuclear Engineering, Bucharest, Romania
\\
$^{81}$National Institute of Science Education and Research, Bhubaneswar, India
\\
$^{82}$National Nuclear Research Center, Baku, Azerbaijan
\\
$^{83}$National Research Centre Kurchatov Institute, Moscow, Russia
\\
$^{84}$Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
\\
$^{85}$Nikhef, Nationaal instituut voor subatomaire fysica, Amsterdam, Netherlands
\\
$^{86}$Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom
\\
$^{87}$Nuclear Physics Institute, Academy of Sciences of the Czech Republic, \v{R}e\v{z} u Prahy, Czech Republic
\\
$^{88}$Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
\\
$^{89}$Petersburg Nuclear Physics Institute, Gatchina, Russia
\\
$^{90}$Physics Department, Creighton University, Omaha, Nebraska, United States
\\
$^{91}$Physics Department, Panjab University, Chandigarh, India
\\
$^{92}$Physics Department, University of Cape Town, Cape Town, South Africa
\\
$^{93}$Physics Department, University of Jammu, Jammu, India
\\
$^{94}$Physics Department, University of Rajasthan, Jaipur, India
\\
$^{95}$Physikalisches Institut, Eberhard Karls Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany
\\
$^{96}$Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany
\\
$^{97}$Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany
\\
$^{98}$Purdue University, West Lafayette, Indiana, United States
\\
$^{99}$Pusan National University, Pusan, South Korea
\\
$^{100}$Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung GmbH, Darmstadt, Germany
\\
$^{101}$Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia
\\
$^{102}$Russian Federal Nuclear Center (VNIIEF), Sarov, Russia
\\
$^{103}$Saha Institute of Nuclear Physics, Kolkata, India
\\
$^{104}$School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom
\\
$^{105}$Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru
\\
$^{106}$Sezione INFN, Bari, Italy
\\
$^{107}$Sezione INFN, Bologna, Italy
\\
$^{108}$Sezione INFN, Cagliari, Italy
\\
$^{109}$Sezione INFN, Catania, Italy
\\
$^{110}$Sezione INFN, Padova, Italy
\\
$^{111}$Sezione INFN, Rome, Italy
\\
$^{112}$Sezione INFN, Trieste, Italy
\\
$^{113}$Sezione INFN, Turin, Italy
\\
$^{114}$SSC IHEP of NRC Kurchatov institute, Protvino, Russia
\\
$^{115}$Stefan Meyer Institut f\"{u}r Subatomare Physik (SMI), Vienna, Austria
\\
$^{116}$SUBATECH, Ecole des Mines de Nantes, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France
\\
$^{117}$Suranaree University of Technology, Nakhon Ratchasima, Thailand
\\
$^{118}$Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia
\\
$^{119}$Technical University of Split FESB, Split, Croatia
\\
$^{120}$The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland
\\
$^{121}$The University of Texas at Austin, Physics Department, Austin, Texas, United States
\\
$^{122}$Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico
\\
$^{123}$Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil
\\
$^{124}$Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil
\\
$^{125}$Universidade Federal do ABC, Santo Andre, Brazil
\\
$^{126}$University of Houston, Houston, Texas, United States
\\
$^{127}$University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland
\\
$^{128}$University of Liverpool, Liverpool, United Kingdom
\\
$^{129}$University of Tennessee, Knoxville, Tennessee, United States
\\
$^{130}$University of the Witwatersrand, Johannesburg, South Africa
\\
$^{131}$University of Tokyo, Tokyo, Japan
\\
$^{132}$University of Tsukuba, Tsukuba, Japan
\\
$^{133}$University of Zagreb, Zagreb, Croatia
\\
$^{134}$Universit\'{e} de Lyon, Universit\'{e} Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, Lyon, France
\\
$^{135}$Universit\'{e} de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, France
\\
$^{136}$Universit\`{a} degli Studi di Pavia, Pavia, Italy
\\
$^{137}$Universit\`{a} di Brescia, Brescia, Italy
\\
$^{138}$V.~Fock Institute for Physics, St. Petersburg State University, St. Petersburg, Russia
\\
$^{139}$Variable Energy Cyclotron Centre, Kolkata, India
\\
$^{140}$Warsaw University of Technology, Warsaw, Poland
\\
$^{141}$Wayne State University, Detroit, Michigan, United States
\\
$^{142}$Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary
\\
$^{143}$Yale University, New Haven, Connecticut, United States
\\
$^{144}$Yonsei University, Seoul, South Korea
\\
$^{145}$Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Fachhochschule Worms, Worms, Germany
\endgroup
\section{Experiment and data analysis}
The data were recorded in 2011 during the second~{Pb--Pb}\ running period
of the LHC. Approximately 2 million minimum bias
events, 29.2 million central trigger events, and 34.1 million
semi-central trigger events were used in this analysis. A detailed description of the ALICE
detector can be found in~\cite{ALICE_exp_1,Abelev:2014ffa}.
The Time Projection Chamber (TPC) has full azimuthal coverage and allows
charged-particle track reconstruction in the pseudorapidity
range $|\eta|<0.8$, as well as particle identification
via the specific ionization energy loss $\mathrm{d}E/\mathrm{d}x$ associated with each track. In addition to the TPC, the
Time-Of-Flight (TOF) detector was used for identification of particles
with transverse momentum~$p_{\rm{T}}$~$>$~0.5~{\rm GeV/{\it c}}.
The minimum bias, semi-central, and central triggers used in this analysis all require a signal in both V0
detectors~\cite{Abbas:2013taa}. The V0 is a small angle detector
of scintillator arrays covering pseudorapidity ranges $2.8<\eta<5.1$
and $-3.7<\eta<-1.7$ for a collision vertex occurring at the center of
the ALICE detector. The V0 detector was also used for the
centrality determination~\cite{Aamodt:2010cz}. The results of this analysis
are reported for collision centrality classes expressed as ranges of
the fraction of the inelastic {Pb--Pb}~cross-section: 0--5\%, 5--10\%,
10--20\%, 20--30\%, 30--40\%, and 40--50\%. The position of the
primary event vertex along the beam direction $V_{z}$ was determined
for each event. Events with $|V_{z}|<8$~cm were used in this analysis
to ensure a uniform pseudorapidity acceptance.
The TPC has 18 sectors covering full azimuth with 159 pad rows
radially placed in each sector. Tracks with at least 80 space points
in the TPC have been used in this analysis. Tracks compatible with a decay in flight (kink topology) were rejected.
The track quality was determined
by the $\chi^{2}$ of the Kalman filter fit to the reconstructed TPC
clusters. The $\chi^2$ per degree of freedom was
required to be less than 4. For primary track selection, only
trajectories passing within 3.2~cm from the primary vertex in the
longitudinal direction and 2.4~cm in the transverse direction were
used. Based on the specific ionization energy loss in the TPC gas
compared with the corresponding Bethe-Bloch curve, and the time of
flight in TOF, a probability for each track to be a pion, kaon,
proton, or electron was determined. Particles for which the pion
probability was the largest were used in this analysis. Pions were
selected in the pseudorapidity range $|\eta|<0.8$ and 0.15 $<$ $p_{\rm{T}}$~$<$
1.5~{\rm GeV/{\it c}}.
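For orientation, the selection just described can be condensed into the
following sketch (Python with \texttt{numpy}); the container layout and all
field names are our own illustrative assumptions rather than the actual
ALICE framework interface.
\begin{verbatim}
import numpy as np

def select_pions(tracks):
    """Track-quality, primary-track, and PID cuts described in the text.

    `tracks` is assumed to be a dict of equal-length numpy arrays,
    a hypothetical stand-in for the real track containers.
    """
    quality = (
        (tracks["n_tpc_points"] >= 80)       # at least 80 TPC space points
        & (tracks["chi2_ndf"] < 4.0)         # Kalman-filter fit quality
        & (~tracks["kink"])                  # reject decay-in-flight topology
        & (np.abs(tracks["dca_z"]) < 3.2)    # cm, longitudinal DCA to vertex
        & (np.abs(tracks["dca_xy"]) < 2.4)   # cm, transverse DCA to vertex
    )
    # PID: keep tracks whose pion probability (TPC dE/dx + TOF) is largest
    probs = np.stack([tracks["p_" + s]
                      for s in ("pion", "kaon", "proton", "electron")])
    is_pion = probs.argmax(axis=0) == 0
    kin = (np.abs(tracks["eta"]) < 0.8) \
        & (tracks["pt"] > 0.15) & (tracks["pt"] < 1.5)
    return quality & is_pion & kin
\end{verbatim}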
The correlation function $C({\bq})$ was calculated as
\begin{eqnarray}
C({\bq})=\frac{ A({\bq})} { B({\bq})},
\end{eqnarray}
where $\bq=\bp_1-\bp_2$ is the relative momentum of two pions,
$A(\bq)$ is the same-event distribution of particle pairs, and
$B(\bq)$ is the background distribution of uncorrelated particle
pairs. Both the $A({\bq})$ and $B({\bq})$ distributions were measured
differentially with respect to the second harmonic event-plane angle
$\Psi_{\mathrm{EP},2}$. The second harmonic event-plane angle $\Psi_{\mathrm{EP},2}$ was determined using TPC tracks. To avoid
self-correlation, each event was split into two subevents
($-0.8<\eta<0$ and $0<\eta<0.8$). Pairs were chosen from one subevent
and the second harmonic event-plane angle $\Psi_{\mathrm{EP},2}$ was determined using
the other subevent particles, and vice-versa, with the event plane
resolution determined from the correlations between the event planes
determined in different subevents~\cite{Poskanzer:1998yz}.
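As a simplified illustration of this bookkeeping (our own Python sketch,
not the analysis code), pairs from one pseudorapidity hemisphere are
histogrammed against the event-plane angle obtained from the other; for
brevity the pair momentum difference is binned in $|\bq|$ only.
\begin{verbatim}
import numpy as np

def psi_ep2(phi):
    """Second-harmonic event-plane angle from azimuthal angles."""
    return 0.5 * np.arctan2(np.sin(2 * phi).sum(), np.cos(2 * phi).sum())

def same_event_pairs(p, eta, phi, q_edges, dphi_edges):
    """Fill A(|q|, dphi) for one event; p is an (N, 3) momentum array."""
    A = np.zeros((len(q_edges) - 1, len(dphi_edges) - 1))
    for pair_side in (eta < 0, eta >= 0):
        psi = psi_ep2(phi[~pair_side])       # subevent event plane
        idx = np.flatnonzero(pair_side)
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                q = np.linalg.norm(p[i] - p[j])
                phi_pair = np.arctan2(p[i, 1] + p[j, 1],
                                      p[i, 0] + p[j, 0])
                dphi = (phi_pair - psi) % np.pi   # folded into (0, pi)
                iq = np.searchsorted(q_edges, q) - 1
                ip = np.searchsorted(dphi_edges, dphi) - 1
                if 0 <= iq < A.shape[0] and 0 <= ip < A.shape[1]:
                    A[iq, ip] += 1
    return A   # C = A/B, with B filled analogously from mixed events
\end{verbatim}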
The background distribution is built by using the mixed-event
technique~\cite{Kopylov:1974th} in which pairs are made out of
particles from two different events with similar centrality (less than
2\% difference), event-plane angle (less than 10$^\circ$ difference),
and event vertex position along the beam direction (less than 4~cm
difference). Requiring a minimum value in the
two-track separation parameters $\Delta \varphi^*$ and $\Delta\eta$
controls two-track reconstruction effects such as track splitting or
track merging. The quantity $\varphi^{*}$ is defined in this analysis
as the azimuthal angle of the track in the laboratory frame at the
radial position of 1.6~m inside the TPC. Splitting is the effect when
one track is reconstructed as two tracks, and merging is the effect of
two tracks being reconstructed as one. Also, to reduce the splitting
effect, pairs that share more than 5\% of the TPC clusters were
removed from the analysis. It is observed that at large relative
momentum the correlation function is a constant, and the background
pair distribution is normalized such that this constant is unity. The
analysis was performed for different collision centralities in several
ranges of $k_\rT$, the magnitude of the pion-pair transverse momentum
$\bk_\rT=(\bp_{\rT,1}+\bp_{\rT,2})/2$, and in bins of $\Delta\varphi=
\mathrm{\varphi_{pair}}-\Psi_{\mathrm{EP},2}$, defined in the range (0,~$\pi$) where
$\mathrm{\varphi_{pair}}$ is the pair azimuthal angle. The
Bertsch-Pratt~\cite{Pratt:1986cc,Bertsch:1988db} out--side--long
coordinate system was used with the {\it long} direction pointing
along the beam axis, {\it out} along the transverse pair momentum, and
{\it side} being perpendicular to the other two. The
three-dimensional correlation function was analyzed in the
Longitudinally Co-Moving System (LCMS), in which the total
longitudinal momentum of the pair is zero,
${p_{1,\mathrm{L}}}=-{p_{2,\mathrm{L}}}$.
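To make the kinematics concrete, the out--side--long decomposition in the
LCMS can be written as a short helper (a minimal sketch of our own, with
the pion mass in GeV/$c^{2}$ and the convention $\bq=\bp_1-\bp_2$ used
above; transverse momentum components are invariant under the
longitudinal boost).
\begin{verbatim}
import numpy as np

M_PION = 0.13957  # GeV/c^2

def q_osl_lcms(p1, p2):
    """(q_out, q_side, q_long) in the LCMS for lab-frame 3-momenta."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    E1 = np.sqrt(M_PION**2 + p1 @ p1)
    E2 = np.sqrt(M_PION**2 + p2 @ p2)
    # longitudinal boost to the frame where p1z + p2z = 0
    beta = (p1[2] + p2[2]) / (E1 + E2)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    q_long = gamma * ((p1[2] - p2[2]) - beta * (E1 - E2))
    kT = 0.5 * (p1[:2] + p2[:2])             # "out" axis along pair kT
    out_hat = kT / np.linalg.norm(kT)
    side_hat = np.array([-out_hat[1], out_hat[0]])
    qT = p1[:2] - p2[:2]
    return qT @ out_hat, qT @ side_hat, q_long
\end{verbatim}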
To isolate the Bose-Einstein contribution in the correlation function, effects due to final-state Coulomb
repulsion must be taken into account. For that, the Bowler-Sinyukov fitting
procedure~\cite{Bowler:1991pi,Sinyukov:1998fc} was used in which the
Coulomb weight is only applied to the fraction of pairs ($\lambda$)
that participate in the Bose-Einstein correlation. In this approach,
the correlation function is fitted to
\begin{eqnarray}
C({\bq},\Delta\varphi)=N[(1-\lambda)+\lambda K({\bq})(1+G({\bq},\Delta\varphi))],
\end{eqnarray}
where $N$ is the normalization factor. The function $G({\bq},\Delta\varphi)$
describes the Bose-Einstein correlations and $K({\bq})$ is the Coulomb
part of the two-pion wave function integrated over a source function
corresponding to $G({\bq})$. In this analysis the Gaussian form of
$G({\bq},\Delta\varphi)$ was used~\cite{guassain}:
\begin{eqnarray}
G(\bq,\Delta\varphi)=\exp
\left[
-q_{\rm out}^{2} R_{\rm out}^{2}(\Delta\varphi)-q_{\rm side}^{2} R_{\rm side}^{2}(\Delta\varphi) \right.
\nonumber
\\
\left.
-q_{\rm long}^{2} R_{\rm long}^{2}(\Delta\varphi)-2q_{\rm out} q_{\rm side} R_{\rm os}^{2}(\Delta\varphi) \right.
\nonumber
\\
\left.
-2q_{\rm side} q_{\rm long} R_{\rm sl}^{2}(\Delta\varphi)-2q_{\rm out} q_{\rm long} R_{\rm ol}^{2}(\Delta\varphi)
\right],
\end{eqnarray}
where the parameters $R_{\rm out}$, $R_{\rm side}$, and $R_{\rm long}$ are traditionally
called HBT radii in the {\it out}, {\it side}, and {\it long}
directions. The cross-terms $R_{\rm os}^{2}$, $R_{\rm sl}^{2}$, and $R_{\rm ol}^{2}$
describe the correlation in the {\it out}-{\it side}, {\it side}-{\it
long}, and {\it out}-{\it long}~directions, respectively.
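In fit form, this model can be coded directly (a sketch under our own
conventions: the Coulomb factor $K$ is treated as an externally supplied
function, evaluated here at $|\bq|$ for simplicity, and $q$ and $R^2$
must be given in conjugate units, e.g.\ $q$ in GeV/$c$ and $R^2$ in
$(\mathrm{GeV}/c)^{-2}$).
\begin{verbatim}
import numpy as np

def gaussian_G(q, R2):
    """Gaussian Bose-Einstein term; R2 = (Ro2, Rs2, Rl2, Ros2, Rsl2, Rol2)."""
    qo, qs, ql = q
    Ro2, Rs2, Rl2, Ros2, Rsl2, Rol2 = R2
    return np.exp(-(Ro2 * qo**2 + Rs2 * qs**2 + Rl2 * ql**2
                    + 2 * Ros2 * qo * qs + 2 * Rsl2 * qs * ql
                    + 2 * Rol2 * qo * ql))

def bowler_sinyukov(q, N, lam, R2, coulomb_K):
    """C(q) = N [ (1 - lambda) + lambda K(q) (1 + G(q)) ]."""
    qinv = np.sqrt(sum(c**2 for c in q))   # simplification for K's argument
    return N * ((1.0 - lam)
                + lam * coulomb_K(qinv) * (1.0 + gaussian_G(q, R2)))
\end{verbatim}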
The systematic errors on the extracted radii vary within 3--9\%
depending on $k_{\rm{T}}$ and centrality. They include uncertainties related
to the tracking efficiency and track quality, momentum resolution
~\cite{Adam:2015vna}, different pair cuts ($\Delta \varphi^*$ and
$\Delta\eta$), and correlation function fit ranges. Positive and
negative pion pairs as well as
data obtained with two opposite magnetic field polarities of the ALICE
L3 magnet
have been analyzed separately and a small
difference in the results (less than 3\%) has also been accounted for
in the systematic error. The total systematic errors were obtained
from adding the above systematic errors in quadrature.
Other than being differential in the event plane, this analysis is
similar in most aspects to the analysis reported
in~\cite{Adam:2015vna}, and further details can be found there.
The results reported below were obtained with the second harmonic
event plane~\cite{Poskanzer:1998yz} determined with the TPC tracks.
It was checked that they are consistent with the results obtained with
the event-plane angle determined with the V0 detector.
Figure~\ref{fig:ktdependence} presents the dependence of $R_{\rm out}^{2}$,
$R_{\rm side}^{2}$, $R_{\rm long}^{2}$, $R_{\rm os}^{2}$, and $\lambda$ on the pion
emission angle relative to the second harmonic event plane. The
results are shown for the centrality classes 20--30\% in four ranges of
$k_{\rm{T}}$: 0.2--0.3, 0.3--0.4, 0.4--0.5, and 0.5--0.7~{\rm GeV/{\it c}}. $R_{\rm out}^2$
and $R_{\rm side}^2$ exhibit clear out-of-phase oscillations. No
oscillations for $R_{\rm long}^2$ and $\lambda$ are observed within the
uncertainties of the measurement. The parameter $R_{\rm os}^2$ shows very
similar oscillations for all $k_{\rm{T}}$ bins. $R_{\rm ol}^2$ and $R_{\rm sl}^2$ (not
shown) are found to be consistent with zero, as expected due to
symmetry, and are not further investigated in this analysis. A
possible correlation between $\lambda$ and the extracted radii was
checked by fixing $\lambda$. No change in the radii has been
observed. The curves represent the fits to the data using the
functions~\cite{Voloshin:1995mc,Voloshin:1996ch}:
\begin{eqnarray} \label{eq:radii_osc}
R^{2}_{\mu}(\Delta\varphi)=R^{2}_{\mu,0}+2R^{2}_{\mu,2}\cos(2\Delta\varphi)~(\mu={\rm out,side,long,sl,ol}),
\nonumber
\\
R_{\rm os}^{2}(\Delta\varphi)=R^{2}_{{\rm os},0}+2 R^{2}_{{\rm os},2}\sin(2\Delta\varphi).
\end{eqnarray}
Fitting the radii's azimuthal dependence with the functional form of
Eq.~\ref{eq:radii_osc} allows us to extract the average radii and the
amplitudes of oscillations. The latter have to be corrected for the
finite event plane resolution. There exist several methods for such a
correction~\cite{Lisa:2005dd}, which produce very similar
results~\cite{Adamczyk:2014mxp} well within errors of this
analysis. The results shown below have been obtained with the simplest
method first used by the E895 Collaboration~\cite{Lisa:2000xj}, in
which the amplitude of oscillation is divided by the event plane
resolution factor. The correction is about 5--15\%, depending
on centrality.
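A minimal sketch of this extraction step is given below (our own
illustration with invented numbers; for $R^2_{\rm os}$ the cosine is
replaced by a sine, cf.\ Eq.~\ref{eq:radii_osc}).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def cos2(dphi, A0, A2):
    return A0 + 2.0 * A2 * np.cos(2.0 * dphi)

def average_and_amplitude(dphi, R2, dR2, ep_resolution):
    """Fit R^2(dphi) and divide only the oscillation amplitude by the
    event-plane resolution factor (E895-style correction)."""
    (A0, A2), _ = curve_fit(cos2, dphi, R2, sigma=dR2, absolute_sigma=True)
    return A0, A2 / ep_resolution

# toy input: average 25 fm^2, amplitude 1.5 fm^2, resolution 0.9
rng = np.random.default_rng(1)
dphi = (np.arange(8) + 0.5) * np.pi / 8
R2 = cos2(dphi, 25.0, 1.5) + rng.normal(0.0, 0.3, dphi.size)
print(average_and_amplitude(dphi, R2, np.full(dphi.size, 0.3), 0.9))
\end{verbatim}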
\begin{figure}[!ht]
\centering
\hspace*{-5mm}
\includegraphics[width=11cm]{RModulation}
\hspace*{-5mm}
\caption{The azimuthal dependence of $R_{\rm out}^{2} $, $R_{\rm side}^{2}$,
$R_{\rm long}^{2}$, $R_{\rm os}^{2}$, and
$\lambda$ as a function of
$\Delta\varphi=\mathrm{\varphi_{pair}}-\Psi_{\mathrm{EP},2}$ for the centrality 20--30\% and $k_\rT$
ranges 0.2--0.3, 0.3--0.4, 0.4--0.5, and 0.5--0.7 GeV/$c$. Bands
indicate the systematic errors. The results are not corrected for the
event plane resolution of about 85--95\%.}
\label{fig:ktdependence}
\end{figure}
\begin{figure}[!ht]
\centering
\hspace*{-2mm}
\includegraphics[width=13cm]
{Raverage}
\caption{ The average radii $R_{{\rm out},0}^{2} $, $R_{{\rm side},0}^{2}$,
$R_{{\rm long},0}^{2}$, and $R_{{\rm os},0}^{2}$ as a function of centrality for
different $k_{\rm{T}}$ ranges compared to hydrodynamical
calculations~\cite{Bozek:2014hwa}. Square brackets indicate the
systematic errors. }
\label{fig:avg_radii}
\end{figure}
Figure~\ref{fig:avg_radii} shows the average radii for different $k_{\rm{T}}$
values as a function of centrality. The average radii obtained in
this analysis are consistent with the results reported
in~\cite{Adam:2015vna}. As expected, the radii are larger in more
central collisions and at smaller $k_{\rm{T}}$ values, the latter reflecting
the effect of radial flow~\cite{Lisa:2005dd,Retiere:2003kf}. The
cross-term $R_{{\rm os},0}^2$ is consistent with zero, as expected due to the
symmetry of the system. Figure~\ref{fig:avg_radii} also shows the
average radii calculated for charged pions in the pseudorapidity range
$|\eta|<2$ from 3+1D hydrodynamic
calculations~\cite{Bozek:2014hwa}, assuming freeze-out temperature
$T_{f}$~=~150 MeV and a constant shear viscosity to entropy density
ratio $\eta/s$~= 0.08. The 3+1D hydrodynamic calculations,
while correctly describing the qualitative features of the average
radii dependence on centrality and $k_{\rm{T}}$, fail to describe our results
quantitatively.
\begin{figure}[!ht]
\hspace*{14mm}
\includegraphics[width=13cm,keepaspectratio]{Rratio}
\caption{ Amplitudes of the relative radius oscillations~$R^{2}_{\rm
out,2}/R^{2}_{\rm side,0}$,~$R^{2}_{\rm side,2}/R^{2}_{\rm
side,0}$,~$R^{2}_{\rm long,2}/R^{2}_{\rm long,0}$,
and~$R^{2}_{\rm os,2}/R^{2}_{\rm side,0}$ versus centrality for
the $k_{\rm{T}}$ ranges 0.2--0.3, 0.3--0.4, 0.4--0.5, and 0.5--0.7~{\rm GeV/{\it c}}. The error
bars indicate the statistical uncertainties and the square
brackets show the systematic errors. The STAR data points, for 0--5\%, 5--10\%, 10--20\%, 20--30\% and 30--80\% Au--Au collisions, are slightly shifted for clarity.
}
\label{fig:relative_Radii}
\end{figure}
Figure~\ref{fig:relative_Radii} shows the relative amplitudes of the
radius oscillations $R^{2}_{{\rm out},2}/R^{2}_{{\rm
side},0}$,~$R^{2}_{{\rm side},2}/R^{2}_{{\rm side},0}$,
$R^{2}_{{\rm long},2}/R^{2}_{{\rm long},0}$, and $R^{2}_{{\rm
os},2}/R^{2}_{{\rm side},0}$. When comparing our results to the
ones obtained by the STAR experiment, we observe similar relative
oscillations, however STAR results~\cite{Adams:2003ra,Adams:2004yc}
show on average larger oscillations for $R_{\rm side}^{2}$. Our relative
amplitudes for $R^{2}_{\rm out,2}/R^{2}_{\rm side,0}$, $R^{2}_{\rm
side,2}/R^{2}_{\rm side,0}$, and $R^{2}_{{\rm os},2}/R^{2}_{{\rm
side},0}$ show a clear centrality dependence, whereas the $R^{2}_{\rm long,2}/R^{2}_{\rm long,0}$
is very close to zero for all centralities, similarly to the results
from RHIC~\cite{Adams:2003ra,Adamczyk:2014mxp,Adare:2015bcj}.
The source eccentricity is usually defined as
$\eps=(R^2_y-R^2_x)/(R^2_y+R^2_x)$, where $R_{x}$ is the in-plane
radius of the (assumed) elliptical source and $R_y$ is the
out-of-plane radius. As shown in~\cite{Retiere:2003kf} the relative
amplitudes of side radii oscillations are mostly determined by the
spatial source anisotropy and are less affected by dynamical effects
such as velocity gradients. The source eccentricity at freeze-out
$\eps_{\rm final}$ can be estimated from $R^{2}_{\rm side}$
oscillations at small pion momenta with an accuracy within 20--30\% as
$\eps_{\rm final}\approx
2R_{{\rm side},2}^2/R_{{\rm side},0}^2$~\cite{Retiere:2003kf}.
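Numerically the estimate is a one-line helper (inputs invented for
illustration):
\begin{verbatim}
def eps_final(R2_side_0, R2_side_2):
    """Freeze-out eccentricity estimate, accurate to within 20-30%."""
    return 2.0 * R2_side_2 / R2_side_0

print(eps_final(25.0, 1.5))   # -> 0.12
\end{verbatim}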
Figure~\ref{fig:finaleccentricity} presents $2R_{{\rm side},2}^2/R_{{\rm side},0}^2$
for different $k_{\rm{T}}$ ranges as a function of the initial-state
eccentricity for six different centralities and four $k_{\rm{T}}$ bins. For
the initial eccentricity we have used the nucleon participant
eccentricity from the Monte Carlo Glauber model for both
Au--Au~collisions at \mbox{$\sqrt{s_{_{\rm NN}}}$}~=~200~GeV~\cite{Adare:2014vax} and
{Pb--Pb}~collisions at \mbox{$\sqrt{s_{_{\rm NN}}}$}~=~2.76~TeV~\cite{Ghosh:2016npo}. Our results for
all $k_{\rm{T}}$ bins are significantly below the values of the initial
eccentricity indicating a more intense expansion in the in-plane
direction. Due to relatively large uncertainties of the RHIC results
for narrow $k_{\rm{T}}$ bins, we compare our results only to the average STAR
data~\cite{Adams:2003ra} in $0.15<k_{\rm{T}} <0.6$~{\rm GeV/{\it c}}~ and to PHENIX
results~\cite{Adare:2014vax} corresponding to $0.2<k_{\rm{T}} <2.0$~{\rm GeV/{\it c}}~
($\mean{k_{\rm{T}}}=0.53$~{\rm GeV/{\it c}}). We find a smaller final-state anisotropy in
the LHC regime compared to RHIC energies. This trend is qualitatively
consistent with expectations from hydrodynamic and transport
models~\cite{Shen:2012us,Lisa:2011na}. The final-state eccentricity
remains positive also at the LHC, evidence of an out-of-plane
elongated source at freeze-out. In Fig.~\ref{fig:finaleccentricity},
we also compare our results to the 3+1D hydrodynamic
calculations~\cite{Bozek:2014hwa}, which were performed for similar
centralities and $k_{\rm{T}}$ ranges as in the experiment. This model
slightly underestimates the final source eccentricity.
\begin{figure}[!ht]
\centering
\includegraphics[width=11cm,keepaspectratio]
{Eccen_Bozek_STAR_PHENIX}
\caption{An estimate of freeze-out eccentricity $2
R^{2}_{\rm side,2}/R^{2}_{\rm side,0}$ for different $k_{\rm{T}}$ ranges vs.
initial state eccentricity from Monte Carlo Glauber
model~\cite{Ghosh:2016npo} for six centrality ranges, 0--5\%,
5--10\%, 10--20\%, 20--30 \%, 30--40 \%, and 40--50\%. The dashed line
indicates $\eps_{\rm final}=\eps_{\rm init}$. Square brackets
indicate systematic errors. }
\label{fig:finaleccentricity}
\end{figure}
In conclusion, we have performed a measurement of two-pion azimuthally
differential femtoscopy relative to the second harmonic flow plane in
{Pb--Pb}~collisions at \mbox{$\sqrt{s_{_{\rm NN}}}$}~= 2.76~TeV. The out, side, and out-side radii
exhibit clear oscillations while the long radius is consistent with a
constant. The relative amplitudes of oscillations only weakly depend
on $k_{\rm{T}}$, with the side-radii oscillation slightly increasing with
$k_{\rm{T}}$. The final-state source eccentricity, estimated via side-radius
oscillations, is noticeably smaller than at lower collision energies,
but still exhibits an out-of-plane elongated source at freeze-out even
after a stronger in-plane expansion. The final eccentricity is
slightly larger than that predicted by existing hydrodynamic
calculations.
\newenvironment{acknowledgement}{\relax}{\relax}
\begin{acknowledgement}
\section*{Acknowledgements}
\input{fa_2016-12-30.tex}
\end{acknowledgement}
\section{Introduction}
\subsection{Renyi Divergences in Anomaly Detection}
A popular statistical approach to detect anomalies in real-time data is to compare
the empirical distribution of certain features (updated on the fly) against
a stored ``profile'' (learned from past observations or computed off-line) used as a reference distribution.
Significant deviations of the observed distribution from the assumed profile trigger an alarm~\cite{Gu:2005:DAN:1251086.1251118}.
This technique, among many other applications, is often used
to detect DDoS attacks in network traffic~\cite{DBLP:journals/eswa/GulisanoCFJPP15,DBLP:conf/isncc/PukkawannaKY15}.
To quantify the deviation between the actual data and the reference distribution,
one needs to employ a suitable dissimilarity metric. In this context, based on empirical studies, Renyi divergences were suggested as good
dissimilarity measures~\cite{DBLP:journals/iet-com/LiZY09,DBLP:journals/tifs/XiangLZ11,DBLP:journals/prl/BhuyanBK15,DBLP:journals/eswa/GulisanoCFJPP15,DBLP:conf/isncc/PukkawannaKY15}.
While the divergence can be evaluated based on theoretical models\footnote{
For example, one uses fractional Brownian motions to simulate real network traffic
and Poisson distributions to model DDoS traffic\cite{DBLP:journals/tifs/XiangLZ11}.},
much more important (especially for real-time detection) is the estimation on the basis of samples.
The related literature is focused mainly on tuning the performance
of specific implementations, by choosing appropriate parameters (such as the suitable definition or the sampling frequency)
based on empirical evidence.
On the other hand, not much is known about the theoretical performance of estimating Renyi divergences for general discrete distributions
(continuous distributions need extra smoothness assumptions~\cite{DBLP:conf/icml/SinghP14}).
A limited case is estimating Renyi entropy~\cite{DBLP:conf/soda/AcharyaOST15} which corresponds
to the uniform reference distribution.
In this paper, we attempt to fill the gap by providing
better estimators for the Renyi divergence, together with theoretical guarantees on the performance. In our approach, motivated
by mentioned applications to anomaly detection, we assume that the reference distribution $q$ is explicitly known and
the other distribution $p$ can only be observed from i.i.d. samples.
\subsection{Our Contribution and Related Works}
\paragraph{Better estimators for a-priori known reference distributions}
In the literature Renyi divergences are typically estimated by straightforward \emph{plug-in} estimators
(see \cite{DBLP:journals/iet-com/LiZY09,DBLP:journals/tifs/XiangLZ11,DBLP:journals/prl/BhuyanBK15,DBLP:journals/eswa/GulisanoCFJPP15,DBLP:conf/isncc/PukkawannaKY15}).
In this approach, one puts the empirical distribution (estimated from samples)
into the divergence formula, in place of the true distribution. Unfortunately, such plug-in estimators have poor statistical properties, e.g.
they are heavily biased. This affects the number of samples required to get a reliable estimate.
To obtain reliable estimates from a possibly small number of samples, we extend the techniques
from \cite{DBLP:conf/soda/AcharyaOST15}. The key idea is to use \emph{falling powers} to estimate
power sums of a distribution (this trick is in fact a bias correction method).
The estimator is illustrated in \Cref{alg:estimator} below.
\begin{algorithm}[t]
\DontPrintSemicolon
\KwIn{divergence parameter $\alpha > 1$, \newline
alphabet $\mathcal{A} = \{a_1,\ldots,a_k\}$, \newline
reference distribution $q$ over $\mathcal{A}$, \newline
samples $x_1,\ldots,x_n$ from unknown $p$ on $\mathcal{A}$ \\ }
\KwOut{a number $D$ approximating the $\alpha$-divergence}
initialize\footnotemark $n_i=0$ for all $i=1,\ldots,k$ \\
\For{$j=1,\ldots,n$}{
let $i$ be such that $x_j = a_i$ \\
$n_i \gets n_i+1$
\tcc*{compute empirical frequencies}
}
$M\gets \sum_{i} q_i^{1-\alpha}\cdot \frac{ n_i^{\underline{\alpha}} }{n^{\alpha}}$ \tcc*{bias-corrected power sum estimation, $z^{\underline{\alpha}}$ stands for
the falling $\alpha$-power.}
$D\gets \frac{1}{\alpha-1}\log M$ \tcc*{divergences from power sums}
\Return{$D$}
\caption{Estimation of Renyi Divergence (to a reference distribution known a-priori)}\label{alg:estimator}
\end{algorithm}
\footnotetext{Storing and updating empirical frequencies can be implemented more efficiently when $n \ll k$, which
matters for almost uniform distributions $q$ (sublinear time and memory complexity), but not for the general case.}
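A direct Python transcription of \Cref{alg:estimator} may be helpful.
We assume an integer divergence parameter $\alpha\geq 2$, so that the
falling power $n^{\underline{\alpha}}=n(n-1)\cdots(n-\alpha+1)$ is well
defined; all function names below are our own.
\begin{verbatim}
import math
import random
from collections import Counter

def renyi_divergence_estimate(alpha, q, samples):
    """Bias-corrected estimate of D_alpha(p || q); q is a dict mapping
    symbols to reference probabilities, samples are i.i.d. from p."""
    counts = Counter(samples)               # empirical frequencies
    n = sum(counts.values())
    def falling(m, a):                      # falling power m(m-1)...(m-a+1)
        out = 1
        for j in range(a):
            out *= m - j
        return out
    # bias-corrected power-sum estimate, as in Algorithm 1
    M = sum(q[s] ** (1 - alpha) * falling(c, alpha)
            for s, c in counts.items()) / n ** alpha
    return math.log2(M) / (alpha - 1)       # logarithms to base 2

# sanity check: D_2(p||q) = log2(1.36) ~ 0.44 for p=(0.8,0.2), q=(0.5,0.5)
random.seed(0)
xs = random.choices(["a", "b"], weights=[0.8, 0.2], k=100000)
print(renyi_divergence_estimate(2, {"a": 0.5, "b": 0.5}, xs))
\end{verbatim}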
For certain cases (where the reference distribution is close to uniform) we estimate the divergence
with the number of samples \emph{sublinear in the alphabet size}, whereas plug-in estimators need a
superlinear number of samples. In particular for the uniform reference distribution $q$, we recover the same upper bounds for
\emph{estimating Renyi entropy} as in \cite{DBLP:conf/soda/AcharyaOST15}.
\paragraph{Upper and lower bounds on the sample complexity}
We show that the sample complexity of estimating divergence of unknown $p$ observed from samples
to an explicitly known $q$ is dependent on the reference distribution $q$ itself.
When the probabilities of $q$ are not too small, non-trivial estimation is possible, with sample complexity even \emph{sublinear} in the alphabet size for any $p$.
However, when $q$ takes arbitrarily small values, the complexity depends on inverse powers of the probability masses of $p$ and
is \emph{unbounded} (for a fixed alphabet) without extra assumptions on $p$. We stress that these lower bounds are no-go results independent of the estimation technique.
For a more quantitative comparison, see \Cref{table:comparison}.
\begin{table}[h]
\captionsetup{skip=5pt,font=small}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{|l|l|l|l|}
\hline
Assumption & Complexity & Comment & Reference \\
\hline
$\min_i q_i = \Theta(k^{-1}) = \max_{i}q_i$ & $O\left( k^{1-\frac{1}{\alpha}}\right)$ & \parbox{3.5cm}{almost uniform $q$, \\ complexity sublinear} & \Cref{cor:sublinear_complexity}\\
\hline
no assumptions & $\Omega\left(k^{\frac{1}{2}}\right)$ & \parbox{3.5cm}{complexity at least square root} & \Cref{cor:lower_bounds_gen}\\
\hline
$\min_i q_i = k^{-\omega(1)}$ & $k^{\omega(1)}$ & \parbox{3.5cm}{negligible masses in $q$, \\ super-polynomial complexity} & \Cref{cor:sample_blowup} \\
\hline
$\min_i q_i = k^{-O(1)}$ & $k^{O(1)}$ & \parbox{3.5cm}{non-negligible mass in $q$, \\ polynomial complexity} &
\Cref{cor:poly_complexity}
\\
\hline
\end{tabular}
}
\caption{{A brief summary of our results, for the problem of estimating the Renyi divergence $D_{\alpha}(p\parallel q)$ (where
the divergence parameter $\alpha>1$ is a fixed constant)
between the known baseline distribution $q$ and a distribution $p$ learned from samples, both over an alphabet of size $k$.
The complexity is the number of samples needed to estimate the divergence up to a constant error
and with success probability at least $\frac{2}{3}$}.
}
\label{table:comparison}
\end{table}
\paragraph{Complexity instability vs numerical instability}
Our results provide theoretical insights about heuristic ``patches'' to the Renyi divergence formula
suggested in the applied literature. Since the formula is numerically unstable when
one of the probability masses $q_i$ becomes arbitrarily small (see \Cref{dif:renyi_div}), authors suggested
to omit or round up very small probabilities of $q$ (see for example \cite{DBLP:journals/iet-com/LiZY09,DBLP:conf/isncc/PukkawannaKY15}).
In accordance with this, as shown in \Cref{table:comparison}, the sample complexity is also unstable when very unlikely events occur in the reference distribution $q$.
Moreover, this is the case even if the distribution $q$ is perfectly known.
We therefore conclude that small probabilities in $q$ are delicate to handle not only because
of numerical instability but, more importantly, because they make the sample complexity unstable.
\subsection{Our techniques}
For upper bounds we merely borrow and extend techniques from \cite{DBLP:conf/soda/AcharyaOST15}.
For lower bounds our approach is however different.
We find a pair of distributions which are close in total variation yet have very different divergences to $q$,
by a variational approach (writing down an explicit optimization program).
As a result, we obtain lower bounds valid for any accuracy.
In turn, the argument in \cite{DBLP:conf/soda/AcharyaOST15}, even if it can be extended to the Renyi divergence,
has inherent limitations that make it work only for sufficiently small accuracies.
Thus we can say that our lower bound technique, in comparison to \cite{DBLP:conf/soda/AcharyaOST15}, offers
lower bounds valid in all regimes of the accuracy parameter, in particular for constant values used in the applied literature.
In fact, our technique strictly improves known lower bounds on estimating collision entropy.
Taking the special case when $q$ is uniform, we obtain that the sample complexity for estimating collision entropy
is $\Omega(k^{\frac{1}{2}})$ even for constant accuracy, while the results in \cite{DBLP:conf/soda/AcharyaOST15} guarantee
this only for very small $\delta$ (no exact threshold is given, and hidden constants may be dependent on $\delta$), which is captured by the notation $\tilde{\tilde{\Omega}}(k^{\frac{1}{2}})$.
\subsection{Organization}
In \Cref{sec:prelim} we introduce necessary notions and notations.
Upper bounds on the sample complexity are discussed in \Cref{sec:upper_bounds} and lower bounds
in \Cref{sec:lower_bounds}. We conclude our work in \Cref{sec:conclusion}.
\section{Preliminaries}\label{sec:prelim}
For a distribution $p$ over an alphabet $\mathcal{A} = \{a_1,\ldots,a_k\}$ we denote $p_i = p(a_i)$. All logarithms are to base $2$.
\begin{definition}[Total variation]
The total variation of two distributions $p,p'$ over the same finite alphabet equals
$d_{TV}(p,p') = \frac{1}{2}\sum_{i}|p_i-p'_i|$.
\end{definition}
Below we recall the definition of Renyi divergence (we refer the reader to \cite{DBLP:journals/tit/ErvenH14} for a survey of its properties).
\begin{definition}[Renyi divergence]\label{dif:renyi_div}
The Renyi divergence of order $\alpha$ (in short: Renyi $\alpha$-divergence) of two distributions $p,q$ having the same support is defined by
\begin{align}\label{eq:renyi_div}
D_{\alpha}(p\parallel q)= \frac{1}{\alpha-1}\log\sum_{i}\frac{p_i^{\alpha}}{q_i^{\alpha-1}}
\end{align}
\end{definition}
By setting uniform $q$ we get the relation to Renyi entropy.
\begin{remark}[Renyi entropy vs Renyi divergence]\label{rem:entropy_from_divergence}
For any $p$ over $\mathcal{A}$ the Renyi entropy of order $\alpha$ equals
\begin{align*}
-\frac{1}{\alpha-1}\log\sum_{i}{p_i^{\alpha}} = -D_{\alpha}(p\parallel q_{\mathcal{A}}) + \log |\mathcal{A}|,
\end{align*}
where $q_{\mathcal{A}}$ is the uniform distribution over $\mathcal{A}$.
\end{remark}
\begin{definition}[Renyi's divergence estimation]\label{def:estimator}
Fix an alphabet $\mathcal{A}$ of size $k$, and two distributions $p$ and $q$ over $\mathcal{A}$.
Let $\mathsf{Est}^{q}:\mathcal{A}^n\rightarrow \mathbb{R}$ be an algorithm which receives $n$ independent samples of $p$ on its input.
We say that $\mathsf{Est}^{q}$ provides an additive $(\delta,\epsilon)$-approximation to the Renyi $\alpha$-divergence of $p$ from $q$ if
\begin{align}\label{eq:estimator}
\Pr_{x_i\gets p}\left[ | \mathsf{Est}^{q}(x_1,\ldots,x_n) - D_{\alpha}(p\parallel q)| > \delta \right] < \epsilon.
\end{align}
\end{definition}
\begin{definition}[Renyi's divergence estimation complexity]\label{def:complexity}
The sample complexity of estimating the Renyi divergence given $q$ with probability error $\epsilon$ and
additive accuracy $\delta$ is the minimal number $n$ for which there exists an algorithm satisfying \Cref{eq:estimator}
for all $p$.
\end{definition}
It turns out that it is very convenient not to work directly with estimators for Renyi divergence,
but rather with estimators for weighted power sums.
\begin{definition}[Divergence power sums]
The power sum corresponding to the Renyi $\alpha$-divergence of $p$ and $q$ is defined as
\begin{align}\label{eq:renyi_moments}
M_{\alpha}(p,q) \overset{def}{=} 2^{(\alpha-1)D_{\alpha}(p\parallel q)} = \sum_{i} \frac{p_i^{\alpha}}{q_i^{\alpha-1}}
\end{align}
\end{definition}
The following lemma shows that estimating divergences (\Cref{eq:renyi_div}) up to an additive error of $O(\delta)$ and
estimating the corresponding power sums (\Cref{eq:renyi_moments}) up to a relative error of $O(\delta\cdot(\alpha-1))$ are equivalent tasks.
\begin{lemma}[Equivalence of Additive and Multiplicative Estimations]\label{lemma:div_to_moments}
Suppose that $m$ is a number such that $M_{\alpha}(p,q) = m\cdot (1+\delta)$, where $|\delta| < \frac{1}{2}$.
Then $d = \frac{1}{\alpha-1}\log m$ satisfies
$D_{\alpha}(p\parallel q) = d+O(1/(\alpha-1))\cdot \delta$.
In the other direction, if $d$ is such that $D_{\alpha}(p\parallel q) = d+\delta$, where $|\delta| < \frac{1}{2}$, then
$m = 2^{(\alpha-1)d}$ satisfies $M_{\alpha}(p,q) = m\cdot (1+ O(\alpha-1)\cdot \delta )$.
\end{lemma}
The proof is a straightforward consequence of the first-order Taylor approximation and will appear in the full version.
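In a nutshell, for the first direction, since $d = \frac{1}{\alpha-1}\log m$ and $\log(1+\delta) = O(\delta)$ for $|\delta|<\frac{1}{2}$,
\begin{align*}
\frac{1}{\alpha-1}\log\big(m(1+\delta)\big) = d + \frac{\log(1+\delta)}{\alpha-1} = d + O(1/(\alpha-1))\cdot\delta,
\end{align*}
and the other direction follows analogously from $2^{(\alpha-1)\delta} = 1 + O(\alpha-1)\cdot\delta$ for constant $\alpha$ and $|\delta|<\frac{1}{2}$.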
\section{Upper Bounds on the Sample Complexity}\label{sec:upper_bounds}
Below we state our upper bounds on the sample complexity. The result is very similar to the formula in \cite{DBLP:conf/soda/AcharyaOST15} before simplifications,
except that our statement contains additional weights coming from a possibly non-uniform $q$,
which prevents further simplification.
\begin{theorem}[Generalizing \cite{DBLP:conf/soda/AcharyaOST15}]\label{thm:upper_bounds}
For any distributions $p,q$ over an alphabet of size $k$, if the number $n$ satisfies
\begin{align*}
\sum_{r=0}^{\alpha-1}\binom{\alpha}{r}\frac{1}{n^{\alpha-r}} \frac{\sum_{i}\frac{p_i^{\alpha+r}}{q_i^{2\alpha-2}}}{\left(\sum_{i}\frac{p_i^{\alpha}}{q_i^{\alpha-1}}\right)^2}
\ll \epsilon\delta^2,
\end{align*}
then the complexity of estimating the Renyi $\alpha$-divergence of $p$ from the given $q$
is at most $n$.
\end{theorem}
The proof is deferred to the appendix, below we discuss corollaries.
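To give a flavour of the estimator behind \Cref{thm:upper_bounds}, the sketch below (our own illustration, assuming an integer $\alpha\geqslant 2$; all names are hypothetical) implements the bias-corrected empirical power sum in the spirit of \cite{DBLP:conf/soda/AcharyaOST15}: for multinomial counts $n_i$ one has $\mathbb{E}[n_i(n_i-1)\cdots(n_i-\alpha+1)] = n(n-1)\cdots(n-\alpha+1)\cdot p_i^{\alpha}$, so the weighted sum below is an unbiased estimate of $M_{\alpha}(p,q)$.
\begin{verbatim}
import numpy as np

def estimate_divergence(samples, q, alpha, k):
    """Estimate D_alpha(p || q) from i.i.d. samples of p (integer alpha >= 2)."""
    n = len(samples)
    counts = np.bincount(samples, minlength=k).astype(float)
    ff_counts, ff_n = np.ones(k), 1.0
    for j in range(alpha):            # falling factorials n_i^(alpha), n^(alpha)
        ff_counts *= counts - j
        ff_n *= n - j
    m_hat = np.sum((ff_counts / ff_n) * q ** (1.0 - alpha))  # unbiased for M_alpha
    return np.log2(m_hat) / (alpha - 1)

rng = np.random.default_rng(0)
k = 1000
p = q = np.full(k, 1.0 / k)
x = rng.choice(k, size=20000, p=p)
print(estimate_divergence(x, q, alpha=2, k=k))    # close to D_2(p || q) = 0
\end{verbatim}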
The first corollary shows that the complexity is sublinear when the reference distribution is close to uniform.
\begin{corollary}[Sublinear complexity for almost uniform reference probabilities, extending \cite{DBLP:conf/soda/AcharyaOST15}]\label{cor:sublinear_complexity}
Let $p,q$ be distributions over an alphabet of size $k$, and $\alpha>1$ be a constant. Suppose
that $\max_{i} q_i = O(k^{-1})$ and $\min_{i}q_i =\Omega(k^{-1})$.
Then the complexity of estimating the Renyi $\alpha$-divergence with respect to $q$, up to constant accuracy and probability error at most $\frac{1}{3}$,
is $O\left(k^{\frac{\alpha-1}{\alpha}}\right)$.
\end{corollary}
As shown in the next corollary, the complexity is polynomial whenever the reference probabilities are non-negligible.
\begin{corollary}[Polynomial complexity for non-negligible reference probabilities]\label{cor:poly_complexity}
Let $p,q$ be distributions over an alphabet of size $k$.
Suppose that $\min_{i}{q_i} = k^{-O(1)}$, and let $\alpha > 1$ be a constant. Then the complexity of estimating the Renyi $\alpha$-divergence with respect to $q$, up to a constant accuracy and probability error
at most $0.3$ (in the sense of \Cref{def:complexity}) is $k^{O(1)}$.
\end{corollary}
\begin{proof}[Proof of \Cref{cor:poly_complexity}]
Under our assumptions $\sum_{i}\frac{p_i^{\alpha+r}}{q_i^{2\alpha-2}} = k^{O(1)}\cdot \sum_{i}p_i^{\alpha+r}$.
Since $q_i \leqslant 1$, we get $\sum_{i}\frac{p_i^{\alpha}}{q_i^{\alpha-1}} \geqslant \sum_{i}p_i^{\alpha}$. By
\Cref{thm:upper_bounds}, it therefore suffices to bound
\begin{align*}
\frac{\sum_{i}\frac{p_i^{\alpha+r}}{q_i^{2\alpha-2}}}{\left(\sum_{i}\frac{p_i^{\alpha}}{q_i^{\alpha-1}}\right)^2} \leqslant
k^{O(1)}\cdot \frac{\sum_{i}p_i^{\alpha+r}}{\left(\sum_{i}p_i^{\alpha}\right)^2}.
\end{align*}
Therefore, we need to choose $n$ such that
\begin{align*}
k^{O(1)}\cdot \sum_{r=0}^{\alpha-1}\binom{\alpha}{r}\frac{1}{n^{\alpha-r}}\frac{\sum_{i}p_i^{\alpha+r}}{\left(\sum_{i}p_i^{\alpha}\right)^2} <
0.3.
\end{align*}
By the discussion in
\cite{DBLP:conf/soda/AcharyaOST15} we know that for $r=0,\ldots,\alpha-1$ we have
$\frac{\sum_{i}p_i^{\alpha+r}}{\left(\sum_{i}p_i^{\alpha}\right)^2} \leqslant k^{(\alpha-1)\cdot \frac{\alpha-r}{\alpha}}$.
Thus we need to find $n$ that satisfies
\begin{align*}
k^{O(1)}\cdot \sum_{r=0}^{\alpha-1}\binom{\alpha}{r} \left(\frac{n}{k^{\frac{\alpha-1}{\alpha}}}\right)^{r-\alpha}
< 0.3.
\end{align*}
By the inequality $\sum_{j\geqslant 0}\binom{\beta}{j}u^{j} \leqslant (1+u)^{\beta}$
(which follows from the Taylor expansion for any positive real number $\beta$)
and the symmetry of binomial coefficients, we need
\begin{align*}
k^{O(1)}\cdot \left(\left(1+\frac{k^{\frac{\alpha-1}{\alpha}}}{n} \right)^{\alpha}-1\right) < 0.3.
\end{align*}
By the Taylor expansion $(1+u)^{\alpha} = 1+O(\alpha u)$, valid for $u\leqslant \frac{1}{\alpha}$, it suffices that
\begin{align*}
k^{O(1)}\cdot \alpha \cdot \frac{k^{\frac{\alpha-1}{\alpha}}}{n} < 0.3,
\end{align*}
which finishes the proof.
\end{proof}
\begin{proof}[Proof of \Cref{cor:sublinear_complexity}]
The corollary can be concluded by inspecting the proof of \Cref{cor:poly_complexity}.
The bounds are the same except that the factor $k^{O(1)}$ is replaced by $\Theta(1)^{\alpha}$.
For constant $\alpha$, the final condition reduces to $n \geqslant \Omega\left(k^{\frac{\alpha-1}{\alpha}}\right)$.
\end{proof}
\section{Sample Complexity Lower Bounds}\label{sec:lower_bounds}
The following theorem provides lower bounds on the sample complexity for any distributions
$p$ and $q$. Since the statement is somewhat technical,
we discuss only its corollaries and refer to the appendix for a proof.
\begin{theorem}[Sample Complexity Lower Bounds]\label{thm:lower_bounds}
Let $p,q$ be two fixed distributions, $\delta \in (0,0.5)$ and numbers $C_1,C_2 \geqslant 0$ be given by
\begin{align*}
C_1 & = \alpha\frac{\sum_i \delta_i p_i^{\alpha}q_i^{1-\alpha}}{\sum_i p_i^{\alpha}q_i^{1-\alpha}} \cdot \delta^{-1} \\
C_2 & = \frac{\alpha(\alpha-1)}{4}\frac{\sum_i \delta_i^2 p_i^{\alpha}q_i^{1-\alpha}}{\sum_i p_i^{\alpha}q_i^{1-\alpha}} \cdot \delta^{-2}
\end{align*}
for some $\delta_i$ satisfying $\delta_i \geqslant -\frac{1}{2}$, $\sum_i \delta_i p_i = 0$, and
$\sum_{i}p_i|\delta_i| =\delta$.
Then for any fixed $\alpha>1$, estimating the Renyi divergence to $q$ (in the sense of \Cref{def:estimator}) with error probability $\frac{1}{3}$ and up to a constant accuracy
requires at least
\begin{align*}
n = \Omega\left( \max(\sqrt{C_2},C_1)\right)
\end{align*}
samples from $p$.
\end{theorem}
By choosing appropriate numbers in \Cref{thm:lower_bounds} we can obtain bounds for different settings.
\begin{corollary}[Lower bounds for general case]\label{cor:lower_bounds_gen}
Estimating the Renyi divergence always requires $\Omega\left( k^{\frac{1}{2}} \right)$ samples.
\end{corollary}
\begin{proof}[Proof of \Cref{cor:lower_bounds_gen}]
In \Cref{thm:lower_bounds} we choose $p$ uniform and the perturbations $\delta_i$ such that
$\delta_i = \frac{k}{4}$ for the index $i=i_0$ minimizing $q_i$, and $\delta_i = -\frac{k}{4(k-1)}$ elsewhere. This gives
us $C_1\geqslant 0$ and $C_2 \geqslant \Omega(k^2)\cdot \frac{q_{i_0}^{1-\alpha}}{\sum_{i}q_i^{1-\alpha}}$ (with the constant depending on $\alpha$), which is at least $\Omega(k)$
because $\frac{q_{i_0}^{1-\alpha}}{\sum_{i}q_i^{1-\alpha}} \geqslant k^{-1}$ by our choice of $i_0$.
\end{proof}
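The construction is easy to visualize numerically. The snippet below (our own illustration) builds the pair $p$ and $p'_i = p_i(1+\delta_i)$ from the proof and shows that, although the two distributions stay at total variation $\frac{1}{4}$ regardless of $k$, their divergences to a reference $q$ with one very small mass differ substantially.
\begin{verbatim}
import numpy as np

def renyi_div(p, q, alpha):
    return np.log2(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

k, alpha = 1000, 2
q = np.full(k, 1.0); q[0] = 1e-9; q /= q.sum()   # one tiny reference mass
p = np.full(k, 1.0 / k)                           # uniform p
delta = np.full(k, -k / (4.0 * (k - 1)))
delta[np.argmin(q)] = k / 4.0                     # boost the unlikely symbol
p2 = p * (1.0 + delta)                            # sum(p * delta) = 0, so p2 sums to 1
print(0.5 * np.abs(p - p2).sum())                 # total variation = 1/4
print(renyi_div(p, q, alpha), renyi_div(p2, q, alpha))  # ~20 vs ~36 bits
\end{verbatim}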
\begin{corollary}[Polynomial complexity requires non-negligible probability masses]\label{cor:sample_blowup}
For sufficiently large $k$, if $\min_i q_i = k^{-\omega(1)}$ then there exists a distribution $p$, dependent on $k$, for which
the sample complexity of estimation is at least $k^{\omega(1)}$.
\end{corollary}
\begin{proof}[Proof of \Cref{cor:sample_blowup}]
Fix one alphabet symbol $a_{i_0}$ and real positive numbers $c,d$.
Let $q$ put the probability $\frac{1}{k^{c}}$ on $a_{i_0}$ and be uniform elsewhere. Also
let $p$ put the probability $\frac{1}{k^{d}}$ on $a_{i_0}$ and be uniform elsewhere. We have
\begin{align*}
\frac{p_i^{\alpha}}{q_i^{\alpha-1}} = \left\{
\begin{array}{cc}
O(k^{-1}) & i\not=i_0 \\
k^{c(\alpha-1)-d\alpha} & i = i_0
\end{array}
\right.
\end{align*}
and
\begin{align*}
\max_{i}\frac{p_i^{\alpha-2}}{q^{\alpha-1}_i} &=\max\left( k^{c(\alpha-1)-d(\alpha-2)},1\right)
\end{align*}
Choose $d$ so that it satisfies
\begin{align*}
c(\alpha-1)-d\alpha & > -1 \\
c(\alpha-1)-d(\alpha-2) & > 0
\end{align*}
for example
$d = \frac{\alpha-1}{\alpha}\cdot c$, which works both for $\alpha \geqslant 2$ and for $1< \alpha < 2$. From \Cref{thm:lower_bounds}
(where we take $\delta_i = \frac{1}{2}$ for $i=i_0$ and constant $\delta_i$ elsewhere;
our conditions on $d$ ensure that $C_1\geqslant 0$ and $C_2 \geqslant \Omega(k^{2d})$, respectively)
we obtain
that for sufficiently large $k$ the minimal number of samples is
\begin{align*}
n = \Omega\left( k^{d} \right).
\end{align*}
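For concreteness, with $d = \frac{\alpha-1}{\alpha}\cdot c$ both displayed conditions can be verified directly:
\begin{align*}
c(\alpha-1)-d\alpha = 0 > -1,
\qquad
c(\alpha-1)-d(\alpha-2) = \frac{2c(\alpha-1)}{\alpha} > 0.
\end{align*}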
Note that if $c=\omega(1)$ our choice of $d$ implies that also $d=\omega(1)$, and thus the corollary follows.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
We extended the techniques recently used to analyze the complexity of entropy estimation
to the problem of estimating Renyi divergence. We showed
that in general there are no uniform bounds on the sample complexity,
and that the complexity is polynomial in the alphabet size if and only if the reference distribution
does not assign negligible probability masses (a behaviour mirrored by the numerical properties of the divergence formula).
\printbibliography
\subsubsection*{Proof of Theorem \ref{thm:decentralized}} \label{proof:thm:decentralized}
To solve the optimization problem \eqref{eq2:reg problem} over affine tariffs of the form $T(d)=A+\pi^{{\mbox{{\tiny $\top$}}}}d$, we obtain expressions for $\overline{\textup{\textsf{cs}}}(T)$ and $\overline{\textup{\textsf{rs}}}(T)$ in terms of the parameters $\pi$ and $A$, considering the customer-integrated DERs.
On one hand, from the separability implications of the linearity of $T$ on the customers' problem established in Sec. \ref{sec:decentralizedIntegration}, we have that customers with DERs obtain an expected surplus
\begin{align*}
\overline{\textup{\textsf{cs}}}^i(T) &= \mathbb{E}[S^i(D^i(\pi,\omega^i))-T(d^i(\pi,\omega^i))] \\
&= \overline{\textup{\textsf{cs}}}^i_0(T) + \mathbb{E}[\pi^{{\mbox{{\tiny $\top$}}}} (s^*(\pi,\theta^i) + r^i(\omega^i))],
\end{align*}
where $\overline{\textup{\textsf{cs}}}^i_0(T)$, the expected consumer surplus without DERs, is computed according to \eqref{eq2:cs}, \mbox{\textit{i.e.},\ \/}
\begin{align} \label{eq2:cs without DERs}
\overline{\textup{\textsf{cs}}}^i_0(T) = \mathbb{E} \big[ S^i(D^i(\pi,\omega^i),\omega^i) - \pi^{{\mbox{{\tiny $\top$}}}}D^i(\pi,\omega^i) \big] - A.
\end{align}
On the other hand, since this case does not consider retailer-integrated DERs, the retailer derives an expected surplus
\begin{align*}
\overline{\textup{\textsf{rs}}}(T) &= \sum_{i=1}^M \mathbb{E}[ T(d^i(\pi,\omega^i)) - \lambda^{{\mbox{{\tiny $\top$}}}}d^i(\pi,\omega^i) ] \\
&= \overline{\textup{\textsf{rs}}}_0(T) - \sum_{i=1}^M \mathbb{E}[ (\pi-\lambda)^{{\mbox{{\tiny $\top$}}}} ( s^*(\pi,\theta^i) + r^i(\omega^i) ) ]
\end{align*}
where $\overline{\textup{\textsf{rs}}}_0(T)$, the expected retailer surplus without DERs, is computed according to \eqref{eq2:rs}, \mbox{\textit{i.e.},\ \/}
$$
\overline{\textup{\textsf{rs}}}_0(T) = A\cdot M + \sum_{i=1}^M \mathbb{E} \left[ (\pi - \lambda)^{{\mbox{{\tiny $\top$}}}}D^i(\pi,\omega^i) \right].
$$
We can now solve the constraint $\overline{\textup{\textsf{rs}}}(T)=F$ for $A$,
\begin{align} \label{eq2:connection charge decentralized}
A=\mbox{$\frac{1}{M}$} \left( F - \mathbb{E} \left[ \mbox{$\sum_{i=1}^M$} (\pi-\lambda)^{{\mbox{{\tiny $\top$}}}} d^i(\pi,\omega^i) \right] \right)
\end{align}
and replace it in the objective function of problem \eqref{eq2:reg problem}, $\overline{\textup{\textsf{cs}}}(T)$, which yields
\begin{align} \label{eq2:cs decentralized}
\overline{\textup{\textsf{cs}}}(T) &= \overline{\textup{\textsf{sw}}}_0(T) - F + \sum_{i=1}^M \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} ( s^*(\pi,\theta^i) + r^i(\omega^i) ) ]
\end{align}
where $\overline{\textup{\textsf{sw}}}_0(T) = \overline{\textup{\textsf{cs}}}_0(T) + \overline{\textup{\textsf{rs}}}_0(T)$, the expected total surplus that $T$ would induce in the absence of DERs, is given by
\begin{align} \label{eq2:sw no DERs}
\overline{\textup{\textsf{sw}}}_0(T) &= \sum_{i=1}^{M} \mathbb{E}\left[ S^i(D^{i}(\pi,\omega^i), \omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}} D^{i}(\pi,\omega^i) \right].
\end{align}
One can then show that $\overline{\textup{\textsf{sw}}}_0(T)$ is maximized over $\pi$ at $\pi_{\textsc{dec}} = \overline{\lambda}$ (see proof of Theorem 1 in \cite{MunozTong16partIarxiv}), when $\nabla_{\pi}D(\pi,\omega)$ and $\lambda$ are uncorrelated (see Cor. 1 in \cite{MunozTong16partIarxiv}).
Moreover, each term $\mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} s^*(\pi,\theta^i)]$ is also maximized over $\pi$ at $\pi_{\textsc{dec}} = \overline{\lambda}$ since
$$
\mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} s^*(\pi,\theta^i)] = \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} s^*(\pi,\theta^i) \leq \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} s^*(\overline{\lambda},\theta^i) = V(\overline{\lambda},\theta^i)
$$
for all $\pi \in \mathbb{R}^N$.
Consequently, the expression $\overline{\textup{\textsf{cs}}}(T)$ in \eqref{eq2:cs decentralized} is maximized over $\pi$ at $\pi^*_{\textsc{dec}} = \overline{\lambda}$.
The strict concavity of $\overline{\textup{\textsf{sw}}}_0(T)$ in $\pi$ (Prop. 4 in \cite{MunozTong16partIarxiv}) guarantees the uniqueness and optimality of $\pi^*_{\textsc{dec}}$ and $A^*_{\textsc{dec}}$.
Replacing $\pi$ and $d^i(\pi,\omega^i)$ in \eqref{eq2:connection charge decentralized} for $\overline{\lambda}$ and $d^i(\overline{\lambda},\omega^i)$ according to \eqref{eq2:demand with DERs}, respectively, yields the expression for $A^*_{\textsc{dec}}$ in \eqref{eq2:A decentralized}, where
$$
A^* = \mbox{$\frac{1}{M}$} \left( F + \tr \left( \cov \left( \lambda,D \left( \overline{\lambda},\omega \right) \right) \right) \right)
$$
is the optimal connection charge without DERs for uncorrelated $\nabla_{\pi}D(\pi,\omega)$ and $\lambda$ (Corollary 1 in \cite{MunozTong16partIarxiv}).
\hfill $\blacksquare$
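As a computational companion to this proof, the following sketch (our own illustration; the gamma price model and all numbers are assumptions, not data from the paper) evaluates the two components of the optimal decentralized tariff by Monte Carlo in the special case where demand shocks are independent of wholesale prices, so that the covariance correction in $A^*$ vanishes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M, F = 24, 2_200_000, 5.83e6     # hours, customers, daily fixed costs (assumed)
lam = rng.gamma(2.0, 20.0, size=(10_000, N))  # wholesale price scenarios (assumed)
pi_star = lam.mean(axis=0)          # pi*_dec = E[lambda]: one price per hour
# With demand shocks independent of lambda, tr(cov(lambda, D)) = 0 and the
# connection charge is the per-customer share of the fixed costs:
A_star = F / M                      # = 2.65 $/day with the numbers above
print(pi_star[:3], A_star)
\end{verbatim}
With these (assumed) figures the connection charge comes out at $2.65$ \$/day, in line with the optimal charge reported in the case study below.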
\subsubsection*{Proof of Corollary \ref{cor:decentralized}} \label{proof:cor:decentralized}
Theorem \ref{thm:decentralized} implies that if $\nabla_{\pi}D(\pi,\omega)$ and $\lambda$ are uncorrelated then $\pi^*_{\textsc{dec}}=\overline{\lambda}$, which does not depend on the parameter $F$.
Hence, it is clear from \eqref{eq2:sw no DERs} that
$
\overline{\textup{\textsf{sw}}}^*_0 \equiv \overline{\textup{\textsf{sw}}}_0(T^*),
$
where $T^*(q)=A^*+\pi^{*{\mbox{{\tiny $\top$}}}}q$,
is also independent from the parameter $F$.
It follows from \eqref{eq2:cs decentralized} that
$$
\overline{\textup{\textsf{cs}}}(T^*_{\textsc{dec}})= \overline{\textup{\textsf{sw}}}^*_0 - F + \mbox{$\sum_{i=1}^{M}$} \left( V(\overline{\lambda},\theta^i) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i)] \right)
$$
and, since $\overline{\textup{\textsf{rs}}}(T^*_{\textsc{dec}})=F$ at optimality,
\begin{align*}
\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}}) = \overline{\textup{\textsf{sw}}}^*_0 + \mbox{$\sum_{i=1}^{M}$} \left( V(\overline{\lambda},\theta^i) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i)] \right).
\end{align*}
Thus, $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}})$ is also independent of the parameter $F$.
\hfill $\blacksquare$
\subsubsection*{Proof of Theorem \ref{thm:optimalityDERs}} \label{proof:thm:optimalityDERs}
We show this result using arguments analogous to those in the proof of Theorem 3 in \cite{MunozTong16partIarxiv}.
That is, we show that the optimal two-part tariff $T^*_{\textsc{dec}}$ attains an upper bound for the performance of all \emph{ex-ante} tariffs derived from the social planner's problem.
To obtain a tight upper bound for ex-ante tariffs only (rather than a looser bound for all possibly ex-post tariffs), the social planner makes customers' decisions relying only on the information observable by each of them (\mbox{\textit{i.e.},\ \/} $\omega^i$) as opposed to based on global information (\mbox{\textit{e.g.}, \/} $\xi=(\lambda,\omega^1,\ldots,\omega^M)$).
Consider the social planner's problem
\begin{subequations} \label{eq2:social planner with DERs}
\begin{align}
\max_{\{q^i(\cdot),s^i(\cdot)\}_{i=1}^M} &\ \overline{\textup{\textsf{sw}}} \\
\text{s.t.} \ \ &\ s^i(\omega^i) \in \mathcal{U}(\theta^i), \qquad i=1,\ldots,M.
\end{align}
\end{subequations}
with
$$
\overline{\textup{\textsf{sw}}} = \mathbb{E}_{\xi} \left[ \sum_{i=1}^M S^i(q^i(\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}}d^i(\omega^i) \right]
$$
which is related to the regulator's problem \eqref{eq2:reg problem} with customer-integrated DERs.
The notations $q^i(\omega^i)$, $s^i(\omega^i)$, and $d^i(\omega^i)$ indicate the restriction of the social planner to make (causal) decisions \emph{contingent only} on the local state of each customer $\omega^i$.
Recall from \eqref{eq2:cs with DERs} that the expected consumer surplus for a given ex-ante tariff is given by $\overline{\textup{\textsf{cs}}}(T) = \sum_{i=1}^M \overline{\textup{\textsf{cs}}}^i(T)$, where $\overline{\textup{\textsf{cs}}}^i(T)$ can be written as
\ifdefined\IEEEPARstart
in \eqref{eq2:cs with DERs long},
\else
\begin{align}
\overline{\textup{\textsf{cs}}}^i(T) &= \max_{q^i(\cdot),s^i(\cdot)} \bigg\{\mathbb{E} \left[ S^i(q^i(\omega^i),\omega^i) - T\Big(q^i(\omega^i)-r^i(\omega^i)-s^i(\omega^i)\Big) \right] \Big| s^i(\omega^i) \in \mathcal{U}(\theta^i) \bigg\} \nonumber \\
&= \mathbb{E} \bigg[ S^i(q^{i*}(T,\omega^i),\omega^i) - T\Big( \underbrace{q^{i*}(T,\omega^i)-r^{i}(\omega^i)-s^{i*}(T,\omega^i)}_{d^{i*}(T,\omega^i)} \Big) \bigg],
\label{eq2:cs with DERs long}
\end{align}
\fi
and note that the corresponding expected retailer surplus is given by
\begin{align*}
\overline{\textup{\textsf{rs}}}(T) = \mathbb{E} \left[ \sum_{i=1}^M T(d^{i*}(T,\omega^i)) - \lambda^{{\mbox{{\tiny $\top$}}}} d^{i*}(T,\omega^i) \right],
\end{align*}
and the expected total surplus by
\begin{align*}
\overline{\textup{\textsf{sw}}}(T) &= \mathbb{E} \bigg[ \sum_{i=1}^M S^i(q^{i*}(T,\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}} d^{i*}(T,\omega^i) \bigg] \\
&= \mathbb{E} \bigg[ \sum_{i=1}^M S^i(q^{i*}(T,\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}} q^{i*}(T,\omega^i)\bigg] \\
& \qquad\qquad + \mathbb{E} \bigg[ \sum_{i=1}^M \lambda^{{\mbox{{\tiny $\top$}}}} (r^i(\omega^i) + s^{i*}(T,\omega^i))\bigg].
\end{align*}
The following sequence of equalities/inequalities shows that problem \eqref{eq2:social planner with DERs} provides an upper bound to problem \eqref{eq2:reg problem}.
\begin{align}
& \max_{T(\cdot)} \{ \overline{\textup{\textsf{cs}}}(T) \ | \ \overline{\textup{\textsf{rs}}}(T) = F \} + F \nonumber\\
&= \max_{T(\cdot)} \{ \overline{\textup{\textsf{cs}}}(T) + \overline{\textup{\textsf{rs}}}(T) \ | \ \overline{\textup{\textsf{rs}}}(T) = F \} \nonumber\\
&= \max_{T(\cdot)} \{ \overline{\textup{\textsf{sw}}}(T) \ | \ \overline{\textup{\textsf{rs}}}(T) = F \} \nonumber\\
&\leq \max_{T(\cdot)} \ \overline{\textup{\textsf{sw}}}(T) \label{eq2:reg problem UB 1} \\
&\leq \max_{\{q^i(\cdot)\}_{i=1}^M} \mathbb{E}\left[ \sum_{i=1}^MS^i(q^i(\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}}q^i(\omega^i) \right] \nonumber\\
&~~~~ +
\sum_{i=1}^M \max_{s^i(\cdot)} \Big\{ \left. \mathbb{E}\left[ \lambda^{{\mbox{{\tiny $\top$}}}}(r^i(\omega^i) + s^i(\omega^i)) \right] \ \right| \ s^i(\omega^i) \in \mathcal{U}(\theta^i) \Big\} \label{eq2:reg problem UB 2} \\
&= \max_{\{q^i(\cdot),s^i(\cdot)\}_{i=1}^{M}} \ \Big\{ \ \overline{\textup{\textsf{sw}}} \ \big| \ s^i(\omega^i) \in \mathcal{U}(\theta^i), \ i=1,\ldots,M \Big\}. \label{eq2:reg problem UB 3}
\end{align}
In particular, the inequality in \eqref{eq2:reg problem UB 2} holds because $\overline{\textup{\textsf{sw}}}(T)$ depends on $T$ only through $q^{i*}(T,\omega^i)$ and $s^{i*}(T,\omega^i)$.
This implies that maximizing $\overline{\textup{\textsf{sw}}}(T)$ directly over $\{q^i(\cdot),s^i(\cdot)\}_{i=1}^M$ rather than indirectly over $T(\cdot)$ is a relaxation of the optimization in \eqref{eq2:reg problem UB 1}.
Clearly, the problem in \eqref{eq2:reg problem UB 2} corresponds to the social planner's problem in \eqref{eq2:reg problem UB 3} and \eqref{eq2:social planner with DERs}.
It suffices to show now that $T^*_{\textsc{dec}}$ attains the upper bound in \eqref{eq2:reg problem UB 3}.
To that end, we use the independence sufficient condition $\omega \perp \lambda$.
We show that, under said condition, the expected total surplus $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}})$ matches the upper bound.
First note that the condition $\omega \perp \lambda$ allows us to rewrite the upper bound in \eqref{eq2:reg problem UB 2} and \eqref{eq2:reg problem UB 3} as follows.
\begin{align*}
& \max_{\{q^i(\cdot),s^i(\cdot)\}_{i=1}^{M}} \ \Big\{ \ \overline{\textup{\textsf{sw}}} \ \big| \ s^i(\omega^i) \in \mathcal{U}(\theta^i), \ i=1,\ldots,M \Big\} \nonumber\\
&= \sum_{i=1}^M \max_{q^i(\cdot)} \ \mathbb{E}_{\omega^i} \left[ S^i(q^i(\omega^i),\omega^i) - \mathbb{E}_{\lambda|\omega^i}\left[\lambda | \omega^i\right]^{{\mbox{{\tiny $\top$}}}} q^i(\omega^i) \right] \nonumber\\
&\qquad\qquad\qquad + \mathbb{E}_{\xi}\left[ \lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i) \right] \nonumber\\
&\qquad\qquad\qquad +
\max_{s^i(\cdot)} \Big\{ \left. \mathbb{E}_{\xi}\left[ \lambda^{{\mbox{{\tiny $\top$}}}} s^i(\omega^i) \right] \ \right| \ s^i(\omega^i) \in \mathcal{U}(\theta^i) \Big\} \nonumber \displaybreak[0] \\
&= \sum_{i=1}^M \max_{q^i(\cdot)} \ \mathbb{E}_{\omega^i} \left[ S^i(q^i(\omega^i),\omega^i) - \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} q^i(\omega^i) \right] \quad + \quad \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} \overline{r}^i \\
&\qquad\qquad \qquad +
\max_{s^i(\cdot)} \Big\{ \left. \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} \overline{s}^i \ \right| \ s^i(\omega^i) \in \mathcal{U}(\theta^i) \Big\} \displaybreak[0] \\
&= \sum_{i=1}^M \mathbb{E}_{\omega^i} \left[ S^i(D^i(\overline{\lambda},\omega^i),\omega^i) - \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} D^i(\overline{\lambda},\omega^i) \right] \\
&\qquad\qquad\qquad + \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} \overline{r}^i \quad + \quad \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} s^{*}(\overline{\lambda},\theta^i),
\end{align*}
where the last equality follows from the definition of the demand function $D^i(\pi,\omega^i)$ and the simplification of storage operation policy under deterministic prices.
The result follows since the tariff $T^*_{\textsc{dec}}$ induces the same expected total surplus if $\omega \perp \lambda$, \mbox{\textit{i.e.},\ \/}
\begin{align*}
\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}})
& = \mathbb{E}_{\xi} \bigg[ \sum_{i=1}^M S^i(D^{i}(\pi^*_{\textsc{dec}},\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}} D^{i}(\pi^*_{\textsc{dec}},\omega^i)\bigg] \\
& \qquad\qquad\qquad + \mathbb{E} \bigg[ \sum_{i=1}^M \lambda^{{\mbox{{\tiny $\top$}}}} (r^i(\omega^i) + s^{*}(\pi^*_{\textsc{dec}},\theta^i))\bigg] \\
& = \sum_{i=1}^M \mathbb{E}_{\omega^i} \bigg[ S^i(D^{i}(\overline{\lambda},\omega^i),\omega^i) - \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} D^{i}(\overline{\lambda},\omega^i)\bigg] \\
& \qquad\qquad\qquad\qquad\qquad + \overline{\lambda}^{{\mbox{{\tiny $\top$}}}} (\overline{r}^i + s^{*}(\overline{\lambda},\theta^i)).
\tag*{$\blacksquare$}
\end{align*}
\subsubsection*{Proof of Theorem \ref{thm:centralized}} \label{proof:thm:centralized}
To solve problem \eqref{eq2:reg problem} over affine tariffs of the form $T(d)=A+\pi^{{\mbox{{\tiny $\top$}}}}d$, we first need expressions for $\overline{\textup{\textsf{cs}}}(T)$ and $\overline{\textup{\textsf{rs}}}(T)$ in terms of $(\pi, A)$, considering the retailer-integrated DERs.
On the one hand, since this case does not consider customer-integrated DERs, the customers derive an expected surplus that remains unchanged, \mbox{\textit{i.e.},\ \/} $\overline{\textup{\textsf{cs}}}^i(T) = \overline{\textup{\textsf{cs}}}^i_0(T)$.
On the other hand, \eqref{eq2:retailer separation} characterizes the expected retailer surplus induced by $T$ considering the retailer-integrated DERs.
According to \eqref{eq2:retailer separation}, $\overline{\textup{\textsf{rs}}}(T)$ depends on the decision variables $(\pi,A)$ only through $\overline{\textup{\textsf{rs}}}_0(T)$.
Hence, the regulator's problem \eqref{eq2:reg problem} at hand, with parameter $F$, is equivalent to that of the regulator without any DERs and parameter $\tilde{F} := F-V(\overline{\lambda},\theta^o) - \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)]$, \mbox{\textit{i.e.},\ \/}
\begin{align} \label{eq2:reg problem centralized}
\max_{T(\cdot)} \ \overline{\textup{\textsf{cs}}}_0(T) \quad \text{s.t.} \quad \overline{\textup{\textsf{rs}}}_0(T)=\tilde{F}.
\end{align}
Theorem 1 in \cite{MunozTong16partIarxiv} for the case without DERs characterizes the optimal two-part tariff for problem \eqref{eq2:reg problem centralized}.
Applying this result to problem \eqref{eq2:reg problem centralized} yields the desired result, \mbox{\textit{i.e.},\ \/}
$$
\pi^*_{\textsc{cen}} = \overline{\lambda} + \mathbb{E}[\nabla_{\pi}D(\pi^*_{\textsc{cen}},\omega)]^{-1} \mathbb{E}[\nabla_{\pi}D(\pi^*_{\textsc{cen}},\omega) (\lambda-\overline{\lambda})]
$$
and
\begin{align*}
A^*_{\textsc{cen}} &= \mbox{$\frac{1}{M}$} \left( \tilde{F} - \mathbb{E} \left[(\pi^*_{\textsc{cen}} - \lambda)^{{\mbox{{\tiny $\top$}}}} D(\pi^*_{\textsc{cen}},\omega) \right] \right) \\
&= \mbox{$\frac{1}{M}$} \left( F - \mathbb{E} \left[(\pi^*_{\textsc{cen}} - \lambda)^{{\mbox{{\tiny $\top$}}}} D(\pi^*_{\textsc{cen}},\omega) \right] \right) \\
& \qquad\qquad\qquad\qquad\qquad - \mbox{$\frac{1}{M}$} \left( V(\overline{\lambda},\theta^o) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)] \right) \\
&= A^* - \mbox{$\frac{1}{M}$} \left( V(\overline{\lambda},\theta^o) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)] \right).
\tag*{$\blacksquare$}
\end{align*}
\subsubsection*{Proof of Corollary \ref{cor:centralized}} \label{proof:cor:centralized}
Applying Cor. 2 in \cite{MunozTong16partIarxiv} to problem \eqref{eq2:reg problem centralized} implies that
\begin{align*}
\overline{\textup{\textsf{cs}}}(T^*_{\textsc{cen}}) &= \overline{\textup{\textsf{sw}}}^*_0 - \tilde{F} \\
&= \overline{\textup{\textsf{sw}}}^*_0 - F + V(\overline{\lambda},\theta^o) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)]
\end{align*}
where, according to \eqref{eq2:sw0 star}, $\overline{\textup{\textsf{sw}}}^*_0 \equiv \overline{\textup{\textsf{sw}}}_0(T^*)$ does not depend on $F$.
The result follows since $\overline{\textup{\textsf{rs}}}(T^*_{\textsc{cen}})=F$ further implies that
\begin{align*}
\overline{\textup{\textsf{sw}}}(T^*_{\textsc{cen}}) &= \overline{\textup{\textsf{cs}}}(T^*_{\textsc{cen}}) + \overline{\textup{\textsf{rs}}}(T^*_{\textsc{cen}}) \\
&= \overline{\textup{\textsf{sw}}}^*_0 + V(\overline{\lambda},\theta^o) + \mathbb{E}[ \lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)].
\tag*{$\blacksquare$}
\end{align*}
\section{Conclusions} \label{sec:conclusions2}
We leverage the analytical framework developed in \cite{MunozTong16partIarxiv} to study how retail electricity tariff structure can distort the net benefits brought by DERs integrated by customers and their retailer.
This work is an application of Ramsey pricing with extensions to accommodate the integration of DERs.
Our analysis offers several conclusions.
First, while net metering tariffs that rely on flat and higher prices to maintain revenue adequacy provide increasingly stronger incentives for customers to integrate renewables%
, they induce increasingly larger cross-subsidies and consumption inefficiencies that can outweigh renewables' benefits.
These significant inefficiencies have drawn little attention in the literature compared to the cross-subsidies.
Second, net metering tariffs can achieve revenue adequacy without compromising efficiency by using marginal-cost-based dynamic prices and higher connection charges.
These tariffs, however, provide little incentive to integrate renewables.
Third, retailer-integrated DERs bring customers net benefits that are less dependent on tariff structure, and they cause no tariff feedback loops.
As such, this alternative to behind-the-meter DERs seems worth exploring.
This study represents an initial point of analysis, for it has various limitations.
First, policy objectives beyond efficiency and revenue sufficiency ---often considered in practice--- are ignored here.
Practical criteria such as bill stability can make ``desirable'' tariffs hard to ever attain.
Second, customer disconnection is assumed not to be a plausible customer choice.
This assumption becomes increasingly less realistic as DER costs decline.
Lastly, retailer non-energy costs are assumed to be fixed and independent of the coincident (net) peak load.
Relaxing this assumption leads to peak-load pricing formulations \cite{CrewEtAl95}.
We discuss one such relaxation in \cite{Munoz16}, where capacity costs are recovered with a \emph{demand charge} applied to the net demand coincident with the peak period.
\section{Introduction} \label{sec:intro2}
\ifdefined\IEEEPARstart
\IEEEPARstart{T}{his}
\else
This
\fi
two-part paper studies the design of dynamic retail electricity tariffs for distribution systems with distributed renewable and storage resources.
We consider a regulated monopolistic retailer who, on the one hand, serves residential customers with stochastic demands, and on the other hand, interfaces with an exogenous wholesale market with stochastic prices.
In this framework, we analyze both customer-integrated and retailer-integrated distributed energy resources (DERs).
Our goal is to shed light on the widely adopted net metering compensation mechanism and the efficiency loss implied by some of the prevailing retail tariffs when an increasing amount of DERs is integrated into the distribution system.
While Part I \cite{MunozTong16partIarxiv} establishes a framework to analyze the efficiency of revenue adequate tariffs with connection charges, Part II
extends it to address the integration of DERs.
The main contribution of Part II is twofold.
First, we characterize analytically the optimal revenue adequate ex-ante two-part tariff for a distribution system with renewables and storage integrated by customers or the retailer.
We characterize the consumer (and social) welfare achieved by the optimal two-part tariff under both integration models.
This analysis is an application of the classical Ramsey pricing theory \cite{BrownSibley86} with extensions to accommodate the multi-period integration of stochastic DERs.
Second, we analyze a numerical case study based on empirical data that estimates the increasingly larger inefficiencies and interclass cross-subsidies caused by DERs when net metering tariffs with price markups are used to maintain revenue adequacy.
In this context, the derived optimal two-part tariffs and a centralized DER integration model offer two alternatives to mitigate these undesirable effects.
The main results of Part II are as follows.
We leverage the retail tariff design framework established in \cite{MunozTong16partIarxiv} to accommodate the integration of DERs by customers (in a net-metering setting) and by the retailer in Section \ref{sec:retailTariffDesignDER}.
The extended framework considers heterogeneous customers with arbitrary behind-the-meter renewables and storage.
Therein, we derive the optimal ex-ante two-part tariff under both DER integration models and the combined effect of this tariff and DERs on consumer and social welfare.
We find that under the optimal two-part tariff, DERs integrated under either model bring the same gains in social and consumer welfare.
This is in contrast to prevailing volumetric tariffs under which the integration of DERs can increase or decrease social and consumer welfare depending critically on the integration model and the retailer's fixed costs.
Indeed, we demonstrate that the two-part tariff structure is optimal in the sense that no other tariff structure ---however complex--- can achieve a strictly higher social welfare.
This means that the two-part \emph{net metering} tariff of the decentralized model is optimal as a DER compensation mechanism.
These welfare effects are explained by the structure of the optimal ex-ante two-part tariff.
We show that under both integration models the derived tariff consists of an identical time-varying price and a distinct connection charge.
In particular, the time-varying price reflects the wholesale prices and their statistical correlation with the elasticity of the random demand.
The optimal connection charge allocates uniformly among customers the retailer's fixed costs and additional costs and savings caused by risks and the integrated DERs.
Indeed, while savings from retailer-integrated DERs reduce the connection charge, customer-integrated DERs induce slight increments or reductions caused by risks introduced by renewables.
The theoretical analysis of DER integration is complemented in Section \ref{sec:caseStudy2} with an empirical study based on publicly available data from NYISO and the largest utility company in New York City.
The performance of the optimal ex-ante two-part tariffs is compared with several other ex-ante tariffs for different levels of DER penetration, under both integration models.
Tariffs used as benchmarks include the optimal linear tariff and two-part flat tariffs used extensively in practice by utilities.
In particular, relative to a base case with a nominal two-part flat tariff and no DERs, we estimate the efficiency gains or losses brought by tariff changes in Section \ref{sec:caseStudy2:baseCase}.
Subsequently, in Section \ref{sec:caseStudy2:efficiency}, we estimate the efficiency gains or losses brought by the integration of DERs under both integration models and the various ex-ante tariffs considered.
Most notably, our results estimate that the efficiency gains brought by switching from flat to hourly pricing, which are below $1\%$ (of the utility's gross revenue) for most relevant cases, can be more than tripled by a $\$10$ increase in the monthly connection charge.
Moreover, for the case with customer-integrated DERs, we estimate in Section \ref{sec:caseStudy2:xsubsidies} the indirect cross-subsidies that customers without DERs give to DER-owning customers due to net metering tariffs with marked-up retail prices.
All our estimations in this case study assume a stylized model for thermostatically controlled loads.
Concluding remarks and proof sketches of the main results are included in Section \ref{sec:conclusions2} and the Appendix, respectively.
Detailed proofs of all results can be found in \cite{MunozTong16partIIarxiv}.
\
\vspace{-10pt}
\subsection{Related Work}
The literature on retail electricity tariff design is extensive \cite{Munoz16}, and there is an increasing interest in addressing the integration of DERs.
We briefly discuss works that are relevant to our paper.
Based on their main focus, we group these works into two categories: $(i)$ tariff design for fixed cost recovery with DERs, and $(ii)$ optimal demand response with DERs.
\subsubsection{Tariff design for fixed cost recovery with DERs}
The general principles used in retail tariff design are briefly reviewed in \ifThesis{\cite{RodriguezEtal08, RenesesRodriguez14}}{\cite{RenesesRodriguez14}} and more extensively in \ifThesis{\cite{BraithwaitEtal07, LBNL16b}}{\cite{LBNL16b}}, and the additional challenges brought by DERs are discussed in \cite{Costello15}.
In the light of such challenges, current tariff design practices and broader regulatory issues are being revised in comprehensive studies to address the adoption of DERs%
\ifThesis{
\cite{NREL13,MIT15,DPS15_Track2WhitePaper,NREL15,LBNL16a,LBNL16b,NARUC16}, to estimate the impact of different tariff structures on the bills of residential customers with solar PV \cite{NREL15}, and to investigate pricing issues related to the interaction between distribution utilities and the owners of DERs \cite{LBNL16a}.
}{\cite{NREL13,MIT15,NARUC16}}.
Research efforts to study more specific issues of DER integration such as \cite{EidEtal14,JargstorfBelmans15,DarghouthEtal16,Sioshansi16} have also emerged.
For instance, in \cite{JargstorfBelmans15}, the trade-off between multiple tariff design criteria is studied in a multi-objective optimization framework.
An analytical approach leverages a generation capacity investment model in \cite{Sioshansi16} to characterize sufficient conditions for RTP and flat tariffs to be revenue adequate.
More empirical approaches are conducted in \cite{EidEtal14}, where interclass cross-subsidies and revenue shortfalls caused by net metering tariffs are estimated, and in \cite{DarghouthEtal16}, which estimates the impact of tariff structure and net metering on the deployment of distributed solar PV.
Finally, there is an increasing volume of literature studying the ``death spiral'' of DER adoption \cite{ChewEtal12, CaiEtal13, Kind13, MIT13, CostelloHemphill14, RMI14, RMI15}, which is presented as a threat on the financial viability of utilities.
This threat refers to a self-reinforcing feedback loop of DER adoption involving a decline in energy sales
and the persistent attempt to recover utilities' fixed costs by increasing volumetric charges.
The empirical analysis in \cite{CaiEtal13}, for example, models the effect that price feedback loops may have on the adoption of solar PV and concludes that it may not be significant within the next decade.
In \cite{MIT13}, an extensive list of factors that affect the system dynamics of DER adoption is presented.
It concludes that while the feedback loop is possible, it is not predetermined and can be avoided.
A stylized demand model is used in \cite{CostelloHemphill14} to argue that a minimum of price elasticity is required for the threat to be an actual problem.
The work in \cite{RMI15} provides an estimate of the evolution of the lowest-cost configuration (namely grid only, grid+solar, or grid+solar+battery) for residential and commercial customers to satisfy their load in the long-term for a few U.S. cities.
There are still important gaps in this subject.
For example, none of the works above studies the efficiency loss and, with the exception of \cite{EidEtal14}, the interclass cross-subsidies entailed by the adoption of DERs under net metering tariffs.
This is precisely a focus of our work.
\subsubsection{Optimal demand response with DERs}
Many works focus on deriving optimal retail pricing schemes to induce desired electricity consumption behavior on customers with DERs such as \cite{ChenEtal12, TangEtal14, JiaTong16b, HanifEtal16}.
For example, in \cite{JiaTong16b}, the authors consider customer and retailer integrated renewables and storage separately in a setting similar to ours.
They derive dynamic linear tariffs that maximize an objective that balances the retailer profit and customers' welfare.
Unlike our work, however, none of these works explicitly considers a revenue adequacy constraint or the use of connection charges.
\section{An Empirical Case Study} \label{sec:caseStudy2}
In this section, we analyze a case study of a hypothetical distribution utility that faces New York city's wholesale prices and residential demand for an average summer day.
We compare the performance of several day-ahead tariffs with hourly prices
at different levels of solar and storage capacity.
Besides the optimal two-part tariff, we study other tariff structures with two pricing alternatives (flat pricing or hourly dynamic pricing) and with daily connection charges fixed at various levels: zero, a nominal value reflecting Con Edison's connection charge, the nominal value plus $0.33$ \$/day ($10$ \$/month), and the nominal value plus $1.66$ \$/day ($50$ \$/month).
Similar tariff reforms are being proposed in practice to solve utilities' fixed cost recovery problem \cite{LBNL16b}.
Given a tariff structure, we optimize the non-fixed parameters to maximize the expected consumer surplus subject to revenue adequacy.
This case study uses the same demand model as in Part I \cite{MunozTong16partIarxiv}, which comprises a linear demand function and a quadratic utility function for each customer.
We use publicly available energy sales data and rates from Con Edison for the 2015 Summer to fit the demand model.
Con Edison's default tariff for its $2.2$ million residential customers is essentially a two-part tariff $T^{\textsc{ce}}$ with a \emph{flat} price of $\pi^{\textsc{ce}}=17.2$ \cent/kWh and a connection charge of $15.76$ \$/month ($A^{\textsc{ce}}=0.53$ \$/day).
We use day-ahead wholesale prices for NYC from NYISO.
\subsection{Base case} \label{sec:caseStudy2:baseCase}
This is the case without DERs and nominal tariff $T^{\textsc{ce}}$.
Throughout the case study, we assume an average price elasticity of the total daily demand of $\overline{\varepsilon}(\pi^{\textsc{ce}})=-0.3$ at $\pi^{\textsc{ce}}$, which is a reasonable estimate of the short-term own-price elasticity of electricity demand \ifThesis{\cite{Borenstein05, Lijesen07, EPRI08}}{\cite{EPRI08}}.
Moreover, we consider a total of $M=2.2$ million residential customers and use the tariff $T^{\textsc{ce}}$ to compute the utility's average daily revenue from the residential segment and the portion that contributes towards fixed costs, which amount respectively to $\overline{\textsf{rev}}(T^{\textsc{ce}})= \$7.19$ million (M) and
$$
F^{\textsc{ce}} := \overline{\textup{\textsf{rs}}}_{0}(T^{\textsc{ce}}) = \overline{\textsf{rev}}(T^{\textsc{ce}}) - \mathbb{E} \left[ \lambda^{{\mbox{{\tiny $\top$}}}} D(\b{1}\pi^{\textsc{ce}},\omega) \right]=\$5.83 \text{M.}
$$
For the sake of brevity, the details of these computations already described in Part I \cite{MunozTong16partIarxiv} are not reproduced here.
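The arithmetic behind these two figures is a one-line computation; the sketch below reconstructs it from the numbers quoted above (the wholesale energy cost is implied by their difference rather than reported in the text).
\begin{verbatim}
daily_revenue = 7.19e6      # rev(T^ce), $/day, from the text
F_ce = 5.83e6               # rs_0(T^ce), $/day, from the text
wholesale_cost = daily_revenue - F_ce
print(wholesale_cost)       # implied E[lam^T D] = $1.36M/day
\end{verbatim}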
\setlength\fboxsep{0pt}
\setlength\fboxrule{0pt}
We illustrate in Fig. \ref{fig:example:ParetoFrontBaseCase} the expected retailer surplus ($\overline{\textup{\textsf{rs}}}$) and expected consumer surplus ($\overline{\textup{\textsf{cs}}}$) induced by the revenue adequate tariff that maximizes the expected consumer surplus within each tariff structure for different values of $F$.
For each tariff structure, the resulting parametric curve is a Pareto front that quantifies the compromise between $\overline{\textup{\textsf{cs}}}$ and the $\overline{\textup{\textsf{rs}}}$ target, $F$.
We plot these curves as (possibly negative) surplus \emph{gains} relative to the values induced by $T^{\textsc{ce}}$, $\overline{\textup{\textsf{rs}}}_0(T^{\textsc{ce}})$ and $\overline{\textup{\textsf{cs}}}_0(T^{\textsc{ce}})=\$9.54$M, normalized by $\overline{\textsf{rev}}(T^{\textsc{ce}})$.
We make some observations from Fig. \ref{fig:example:ParetoFrontBaseCase}.
First, the $-1$ slope of the Pareto front associated to the optimal two-part tariff $T^*$ corroborates that the induced efficiency $\overline{\textup{\textsf{sw}}}(T^*)$ does not depend on $F$.
Conversely, the larger the $F$, the more inefficient the suboptimal two-part tariffs considered become.
This can be seen from the non-unitary slopes exhibited by all tariffs except $T^*$.
Second, at the nominal $\overline{\textup{\textsf{rs}}}$ target $F^{\textsc{ce}}$ (\mbox{\textit{i.e.},\ \/} the horizontal axis),
significant differences in the induced $\overline{\textup{\textsf{cs}}}$ gains are observed among the tariffs.
In particular, moving from flat prices to hourly prices improves $\overline{\textup{\textsf{cs}}}$ by approximately $1\%$ ($\$72$k/day).
A more significant $\overline{\textup{\textsf{cs}}}$ gain ($8.1\%$) is brought by also increasing the connection charge to the optimal level (which amounts to $A^{*}=2.65$ \$/day or $79.5$ \$/month).
Conversely, decreasing the connection charge to zero reduces $\overline{\textup{\textsf{cs}}}$ by $4.8\%$.
These empirical computations suggest that additional fixed costs can be recovered more efficiently by increasing connection charges than by pricing more dynamically.
\subsection{Tariff structure and net benefits of DERs} \label{sec:caseStudy2:efficiency}
We now analyze the combined impact of tariff structure and DER integration on consumers' surplus.
We measure changes in $\overline{\textup{\textsf{cs}}}$ relative to $\overline{\textup{\textsf{cs}}}_0(T^{\textsc{ce}})$ and normalized by $\overline{\textsf{rev}}(T^{\textsc{ce}})$.
\subsubsection{Customer-integrated DERs} \label{sec:caseStudy2:decentralized}
We start by estimating changes in $\overline{\textup{\textsf{cs}}}$ as a function of the solar and battery storage aggregate capacity integrated by customers.
The tariffs considered here are applied to the hourly net-metered demand, so they differ from existing net metering tariffs with rolling credit.
Moreover, we model the integration of renewable resources using hourly solar PV generation data from a simulated 5kW-DC-capacity rooftop system located in NYC\footnote{``Typical year'' solar power data for the same months as temperature is taken from NREL's PVWatts Calculator available in \url{http://pvwatts.nrel.gov}.}.
Similarly, we consider the basic specifications of a $6.4$ kWh Tesla Powerwall battery\footnote{More precisely, a $3.3$ kW charging/discharging rate and a 96\% charging/discharging efficiency are used.}.
We integrate as many of these systems as necessary to reach the specified level of capacity.
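To illustrate how a single battery with these specifications would be operated under deterministic (mean) prices (the storage subproblem $s^{*}(\overline{\lambda},\theta)$ appearing throughout the proofs), the following sketch (our own illustration; the price profile is an assumption) solves the corresponding arbitrage linear program with \texttt{scipy}.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

T, cap, rate, eta = 24, 6.4, 3.3, 0.96   # hours, kWh, kW, efficiency (from text)
lam = 0.05 + 0.10 * np.sin(np.linspace(0, np.pi, T)) ** 2   # $/kWh, assumed
# Variables x = [charge c_t; discharge g_t], each in [0, rate].
# Maximize lam^T (g - c)  <=>  minimize lam^T c - lam^T g.
cost = np.concatenate([lam, -lam])
L = np.tril(np.ones((T, T)))             # running-sum operator
# State of charge after hour t: eta * sum(c) - sum(g) / eta, kept in [0, cap].
A_ub = np.block([[eta * L, -L / eta], [-eta * L, L / eta]])
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, rate)] * (2 * T))
s_star = res.x[T:] - res.x[:T]           # net injection profile s*
print(-res.fun, s_star.round(2))         # arbitrage value and schedule
\end{verbatim}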
\ifdefined\IEEEPARstart
\begin{figure*}[t]
\centering
\subcaptionbox{Pareto front for base case (zoom-in).\label{fig:example:ParetoFrontBaseCase}}
{\includegraphics[width=0.32\linewidth]{ParetoFrontZoomIn_square}}
\subcaptionbox{Pareto fronts with decentralized DERs.\label{fig:example:PFdecentralized}}
{\includegraphics[width=0.32\linewidth]{ParetoFrontDecentralized2_square}}
\subcaptionbox{Pareto fronts with centralized DERs.\label{fig:example:PFcentralized}}
{\includegraphics[width=0.32\linewidth]{ParetoFrontCentralized2_square}}
\caption{\protect\subref{fig:example:ParetoFrontBaseCase} Normalized retailer surplus target \textit{v.s.} induced consumer surplus gain (Pareto front) for various tariffs in base case (\mbox{\textit{i.e.},\ \/} no DERs). In \protect\subref{fig:example:PFdecentralized} and \protect\subref{fig:example:PFcentralized}, Pareto fronts for base case and two cases with different DER integration levels ($10.12\%$ and $20.23\%$) are compared.}
\label{fig:example:paretoFront}
\end{figure*}
\else
\begin{figure*}[t]
\centering
\subfloat[][Zoom into neighborhood of $\left(\overline{\textup{\textsf{cs}}}(T^{\textsc{ce}}),\overline{\textup{\textsf{rs}}}(T^{\textsc{ce}}) \right)$.]{
\includegraphics[width=1\linewidth]{ParetoFrontZoomIn}\label{fig:example:ParetoFrontBaseCase}}
\caption{Normalized retailer surplus target \textit{v.s.} induced consumer surplus gain (Pareto front) for various tariffs in base case (\mbox{\textit{i.e.},\ \/} no DERs).}
\label{fig:ParetoFronts:partII}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[][Pareto fronts with decentralized DERs.]{\includegraphics[width=1\linewidth]{ParetoFrontDecentralized2}\label{fig:example:PFdecentralized}}\\
\subfloat[][Pareto fronts with centralized DERs.]{\includegraphics[width=1\linewidth]{ParetoFrontCentralized2}\label{fig:example:PFcentralized}}
\caption{\protect\subref{fig:example:ParetoFrontBaseCase} Normalized retailer surplus target \textit{v.s.} induced consumer surplus gain (Pareto front) for various tariffs in base case (\mbox{\textit{i.e.},\ \/} no DERs) and two cases with different DER integration levels ($10.12\%$ and $20.23\%$) are compared.}
\label{fig:example:paretoFront}
\end{figure*}
\fi
In Fig. \ref{fig:example:PFdecentralized} we plot the Pareto fronts associated to three types of tariffs
and three decentralized solar PV integration levels.
This figure is a zoomed-out version of Fig. \ref{fig:example:ParetoFrontBaseCase} computed for three DER integration levels.
As such, it gives a rough intuition of how decentralized DERs transform the Pareto fronts of different tariff structures and, in turn, affect $\overline{\textup{\textsf{cs}}}$.
In general, horizontal differences between the Pareto fronts represent changes in $\overline{\textup{\textsf{cs}}}$ due to tariff structure and/or to different levels of decentralized DER integration, for certain $F$.
Evidently, for any $F$, decentralized DERs bring $\overline{\textup{\textsf{cs}}}$ gains if the flat tariff structure is replaced by the optimal two-part tariff $T^*_{\textsc{dec}}$.
Conversely, $\overline{\textup{\textsf{cs}}}$ losses are brought by the DERs for $F=F^{\textsc{ce}}$ if the adjusted flat tariff structure is kept, or if it is replaced by the ``dynamic, nominal A'' structure.
We quantify these changes in $\overline{\textup{\textsf{cs}}}$ for $F=F^{\textsc{ce}}$ explicitly with the following parametric analysis over the decentralized DER integration level.
In Fig. \ref{fig:example:CSgains:a}, for several tariff structures, we plot the normalized $\overline{\textup{\textsf{cs}}}$ gains (or, equivalently, the $\overline{\textup{\textsf{sw}}}$ gains) caused by increments in the PV capacity integrated by customers and the corresponding updates to the tariff required to maintain revenue adequacy, \mbox{\textit{i.e.},\ \/} $\overline{\textup{\textsf{rs}}}(T)=F^{\textsc{ce}}$.
This case assumes that the storage capacity integrated is half the PV capacity.
In particular, Fig. \ref{fig:example:CSgains:a} shows how integrating decentralized DERs can trigger both efficiency gains and losses depending on the tariff structure.
For example, the curve for the adjusted flat tariff $T^{\textsc{ce}}$ suggests that maintaining revenue adequacy with flat rate increments would cause DERs to bring no significant net gains or losses in $\overline{\textup{\textsf{cs}}}$ and $\overline{\textup{\textsf{sw}}}$ for small levels of integration.
However, DER integration levels beyond $500$ MW would bring increasingly larger losses in $\overline{\textup{\textsf{cs}}}$ and $\overline{\textup{\textsf{sw}}}$ of $1.3\%$ at $1.1$ GW and $15\%$ at $2$ GW.
A similar performance is shown by the optimal linear tariff (dynamic pricing) with nominal connection charge $A^{\textsc{ce}}$.
The $1\%$ gain it exhibits with no DERs vanishes at $1.1$ GW of PV and becomes a net efficiency loss at higher levels of DER integration.
The optimal linear tariffs with higher connection charges ---$10$ and $50$ \$/month above the nominal--- bring initial efficiency gains of $2.8\%$ and $7.7\%$, respectively.
While the gains of the former increase and then decrease after reaching a maximum of $3.5\%$, the gains of the latter increase monotonically, reaching $13.6\%$ at the maximum PV capacity considered, $2.2$ GW.
Another example is the flat tariff with no connection charge, which starts with efficiency losses of $6.8\%$ that increase sharply up to $20.4\%$ at $1.1$ GW.
Lastly, the optimal two-part tariff starts with an efficiency gain of $8.2\%$, and it lets customer-integrated DERs generate their full value, which amounts to an additional $6.6\%$ of efficiency at $2.2$ GW, or $3\%$ per GW.
In other words, the efficiency gains forgone by using the adjusted flat tariff $T^{\textsc{ce}}$ rather than the optimal two-part tariff $T^*$ increase linearly with the level of DER integration and reach $29.3\%$ (or $\$2.11$ M/day) at $2$ GW.
In summary, connection charges embody a method for fixed cost recovery that appears even more effective than dynamic pricing, in the sense that they can harness at least $90\%$ of the efficiency gains attained by the optimal two-part tariff at all the integration levels considered, whereas dynamic pricing alone harnesses at most $12.5\%$ and generates efficiency losses at higher integration levels.
A word of caution on tariffs with high connection charges and lower flat prices, however, is that they induce customers to consume more on peak, thus precipitating the need for network upgrades that increase the retailer's fixed costs in a way not captured by our model.
This problem can be tackled using dynamic prices and high connection charges to recover fixed costs, such as the optimal two-part tariff, and more forcefully by considering endogenous fixed costs dependent on the coincident peak net-load.
\setlength\fboxsep{0pt}
\setlength\fboxrule{0pt}
\ifdefined\IEEEPARstart
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{CSoverallGains2}\\[-15pt]
\subcaptionbox{Decentralized DER integration.\label{fig:example:CSgains:a}}
{\rule{0.3\linewidth}{0pt}}
\subcaptionbox{Centralized DER integration.\label{fig:example:CSgains:b}}
{\rule{0.3\linewidth}{0pt}}
\subcaptionbox{Cross-subsidies.\label{fig:example:cross-subsidies}}
{\rule{0.3\linewidth}{0pt}}
\caption{Expected gains in consumer and social surplus induced by \protect\subref{fig:example:CSgains:a} behind-the-meter solar-plus-battery capacity and \protect\subref{fig:example:CSgains:b} retailer-integrated solar-plus-battery capacity under different types of tariffs.
Gains are measured relative to base case with tariff $T^{\textsc{ce}}$ and no DERs.
In \protect\subref{fig:example:cross-subsidies}, cross-subsidies from customers without solar to customers with solar \textit{v.s.} level of behind-the-meter solar integration.}
\label{fig:example:CSgains}
\end{figure*}
\else
\begin{figure}[h]
\centering
\subfloat[][Decentralized DER integration.]{
\includegraphics[width=0.99\linewidth]{CSoverallGainsDec2}\label{fig:example:CSgains:a}}\\
\subfloat[][Centralized DER integration.]{
\includegraphics[width=0.99\linewidth]{CSoverallGainsCen}\label{fig:example:CSgains:b}}
\caption{Expected gains in consumer and social surplus induced by \protect\subref{fig:example:CSgains:a} behind-the-meter solar-plus-battery capacity and \protect\subref{fig:example:CSgains:b} retailer-integrated solar-plus-battery capacity under different types of tariffs.
Gains are measured relative to base case with tariff $T^{\textsc{ce}}$ and no DERs.}
\label{fig:example:CSgains}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{crossSubsidiesTriangular}
\caption{Cross-subsidy from customers without solar to customers with solar \textit{v.s.} level of behind-the-meter solar integration.}
\label{fig:example:cross-subsidies}
\end{figure}
\fi
\subsubsection{Retailer-integrated DERs} \label{sec:caseStudy2:centralized}
We now estimate changes in surplus as a function of the solar and battery storage capacity integrated by the retailer.
For the sake of a fair comparison, DERs with the same characteristics as before are used.
In Fig. \ref{fig:example:CSgains:b}, for several tariff structures, we plot the normalized gains in $\overline{\textup{\textsf{cs}}}$ and $\overline{\textup{\textsf{sw}}}$ caused by increments in the PV and storage capacity integrated (in a 2:1 ratio) by the retailer and the corresponding tariff updates required to maintain revenue adequacy.
The figure reveals that the DERs bring monotonic surplus gains under all the tariffs considered.
This is because the benefits brought by centralized DERs are not offset by the consumption inefficiencies induced by decentralized DERs.
In fact, the inefficiencies induced by suboptimal tariffs without DERs are slightly mitigated by centralized DERs because these DERs help the retailer recover a small portion of $F$.
Hence, Fig. \ref{fig:example:CSgains:b} suggests that the changes in $\overline{\textup{\textsf{cs}}}$ and $\overline{\textup{\textsf{sw}}}$ brought by retailer-integrated DERs are virtually unbiased by tariff structure.
This is unlike customer-integrated DERs, whose effect on $\overline{\textup{\textsf{cs}}}$ and $\overline{\textup{\textsf{sw}}}$ is significantly biased by tariff structure changes, and especially by the reliance on retail price markups for fixed cost recovery, as is evident in Fig. \ref{fig:example:CSgains:a}.
In other words, under tariffs with significant retail markups, while centralized DERs generally bring surplus gains, decentralized DERs tend to mitigate surplus gains or bring surplus losses.
It is also clear from Fig. \ref{fig:example:CSgains} that dynamic pricing and higher connection charges help consistently (\mbox{\textit{i.e.},\ \/} regardless of the level of DER integration) to mitigate existing inefficiencies.
Notably, connection charges seem to offer a much more effective measure to mitigate such inefficiencies than dynamic pricing.
\subsection{Cross-subsidies induced by net metering} \label{sec:caseStudy2:xsubsidies}
Considering the inequity concerns raised by using net metering as a mechanism to compensate DERs \ifThesis{\cite{EidEtal14,BorlickWood14}\cite[Sec. 9.5]{MIT15}}{\cite{EidEtal14}\cite[Sec. 9.5]{MIT15}}, it is instructive to quantify the cross-subsidies induced by the tariffs in the previous section.
To that end, we compute the cross-subsidies between PV owners and non-PV owners for different levels of customer-integrated solar PV capacity.
For a given tariff structure and level of (decentralized) solar integration, the cross-subsidy is computed by first obtaining the contribution that PV owners make towards the fixed costs $F$.
Similarly, the contribution that PV owners would make under a version of the given tariff that settles consumption and generation separately is also computed.
This version, which is also optimized subject to revenue adequacy, settles all consumption at the rates $\pi$ and all generation at the prices $\overline{\lambda}$\footnote{Customer-integrated storage is not considered in this analysis because nonlinear tariffs make the customers' problem fundamentally more complicated.
}.
Clearly, PV owners contribute less towards $F$ under the net metering tariff than under its counterpart if the associated prices are marked up as a means to recover the fixed costs $F$.
We compute said cross-subsidy as the difference between these two values, normalized by $F$.
Intuitively, cross-subsidies are the difference between the costs that each group \emph{should} pay for and those they \emph{actually} pay for, due to net metering.
The computation of cross-subsidies requires specifying individual demand functions and the distribution of solar capacity between customers.
We consider a simple illustrative case.
Customers have identical demand functions except for a scaling parameter $\sigma_i$ satisfying $\sigma_i=i \cdot \sigma$ for some $\sigma > 0$.
The 5-kW solar installations are allocated to the largest consumers, who have the greatest incentive to invest in solar generation.
The resulting inter-class cross-subsidies are depicted in Fig. \ref{fig:example:cross-subsidies}.
Evidently, all tariffs induce non-trivial cross-subsidies except for the optimal two-part tariff which yields virtually no cross-subsidy (in spite of the discussion in Sec. \ref{sec:performance with DER}).
This is not entirely surprising since pricing according to $\pi^*_{\textsc{dec}}=\overline{\lambda}$ is efficient and consistent with cost causality.
Hence, cross-subsidies with such a tariff happen only through the second term in \eqref{eq2:A decentralized}, which is rather small compared to $A^*$.
The cross-subsidies of all the other tariffs increase with PV capacity, and they do so at an increasing rate under flat tariffs.
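To make the computation above concrete, the following Python sketch reproduces the cross-subsidy calculation in a deliberately simplified setting; the inelastic demands, the size of the PV allocation, and the flat volumetric markup are illustrative assumptions rather than features of our model.
\begin{verbatim}
import numpy as np

# Toy cross-subsidy computation (hypothetical inelastic demands).
M, F = 10, 100.0
q = np.arange(1, M + 1) * 1.0                  # sigma_i = i * sigma, sigma = 1
r = np.where(np.arange(M) >= M - 3, 6.0, 0.0)  # PV allocated to largest users
d = q - r                                      # net-metered demand

mu_nm  = F / d.sum()   # revenue-adequate flat markup under net metering
mu_sep = F / q.sum()   # markup when generation is settled separately
pv = r > 0
cross_subsidy = (mu_sep * q[pv].sum() - mu_nm * d[pv].sum()) / F
print(cross_subsidy)   # fraction of F shifted onto non-PV customers (~0.25)
\end{verbatim}
With these numbers, roughly a quarter of the fixed costs is shifted from PV owners onto the remaining customers, qualitatively matching the behavior of the marked-up tariffs in Fig. \ref{fig:example:cross-subsidies}.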
\section{Retail Tariff Design with DERs} \label{sec:retailTariffDesignDER}
\subsection{Multi-period Ramsey Pricing under Uncertainty}
Consider a regulator who sets a retail electricity tariff $T$ in advance (ex-ante) to maximize the welfare of $M$ customers over a billing cycle of $N$ time periods, subject to a net revenue sufficiency constraint for the monopolistic retailer serving the load.
Expectations are used to deal with the uncertainties that naturally arise when fixing a tariff in advance of actual usage.
To quantify the customers' welfare we use the notion of consumers' surplus, which measures the difference between the gross benefit derived from consumption and what the customer pays for it.
Formally, we assume that given a tariff $T$, customer $i$ consumes a profile $q^i(T,\omega^i) \in \mathbb{R}^N$ within the $N$-period billing cycle contingent on the random evolution of the local state $\omega^i = (\omega^i_1,\ldots,\omega^i_N) \in \mathbb{R}^N$, provided that $q^i$ is purchased from the retailer.
Accordingly, customer $i$ derives an expected surplus
\begin{align} \label{eq2:cs}
\overline{\textup{\textsf{cs}}}^i(T) = \mathbb{E} \big[ S^i(q^i(T,\omega^i),\omega^i) - T(q^i(T,\omega^i)) \big],
\end{align}
where $T:\mathbb{R}^N \rightarrow \mathbb{R}$ and $S^i(q^i(T,\omega^i),\omega^i)$ is the derived gross benefit.
Collectively, customers derive an expected consumer surplus $\overline{\textup{\textsf{cs}}}(T) = \mathbb{E}[ \sum_{i=1}^M \textup{\textsf{cs}}^i(T) ]$, where the expectation is taken with respect to the $M$-tuple $\omega = (\omega^1,\ldots,\omega^M)$.
Similarly, the expected retailer surplus or net revenue is
\begin{align} \label{eq2:rs}
\overline{\textup{\textsf{rs}}}(T) = \mathbb{E} \big[ \mbox{$\sum_{i=1}^M$} T(q^i(T,\omega^i)) - \lambda^{{\mbox{{\tiny $\top$}}}} q(T,\omega) \big],
\end{align}
where $\lambda \in \mathbb{R}^N$ is the profile of random real-time wholesale prices, $q(T,\omega)$ is the aggregated demand profile, $\lambda^{{\mbox{{\tiny $\top$}}}} q(T,\omega)$ is the energy cost faced by the retailer, and the expectation is over the uncertain evolution of the global state $\xi=(\lambda,\omega)$.
Adding the consumer and retailer surplus together yields the (expected) social surplus
$
\overline{\textup{\textsf{sw}}}(T) = \overline{\textup{\textsf{cs}}}(T) + \overline{\textup{\textsf{rs}}}(T)
$
which quantifies the social welfare induced by a tariff $T$.
We can now formulate the regulator's tariff design problem as the optimization problem
\begin{align} \label{eq2:reg problem}
\max_{T(\cdot)} & \ \overline{\textup{\textsf{cs}}}(T) \quad \text{s.t.} \quad \overline{\textup{\textsf{rs}}}(T) = F,
\end{align}
where $F$ is a constant representing the non-energy costs faced by the retailer that need to be passed on to its customers\footnote{$F$ may include delivery, metering, and customer service costs, and it may also recognize that a regulated firm should be allowed to earn some profit.}.
As such, \eqref{eq2:reg problem} is a version of the Ramsey pricing problem\footnote{Ramsey pricing is pricing efficiently subject to a breakeven constraint \cite{BrownSibley86}.
With \eqref{eq2:reg problem}, we seek to apply Ramsey pricing to a single service with time-varying, random marginal costs and temporally dependent stochastic demands.
}.
In particular, we consider ex-ante two-part tariffs\footnote{This restriction may involve no loss of generality (see Thm. \ref{thm:optimalityDERs} below).
} $T(q)=A+\pi^{{\mbox{{\tiny $\top$}}}}q$ with connection charge $A \in \mathbb{R}$ and time-varying price $\pi \in\mathbb{R}^N$.
These tariffs induce an individual consumption profile $q^i(T,\omega^i) = D^i(\pi,\omega^i)$, where $D^i(\cdot,\omega^i)$ is a demand function assumed to be nonnegative, continuously differentiable in $\pi$, and with a negative definite Jacobian $\nabla_{\pi} D^i(\pi,\omega^i) \in \mathbb{R}^{N \times N}$ that satisfies the following assumption\footnote{A detailed discussion on the implications of this assumption and special cases when it is satisfied can be found in Part I \cite{MunozTong16partIarxiv}.}.
\assumptionAlt{ \label{Assumption 1:part2}
$g(\pi)=\mathbb{E}[\nabla_{\pi} D(\pi,\omega) (\pi-\lambda)]$ is such that the Jacobian matrix $\nabla g(\pi)$ is negative definite (nd).
}
In the following sections we accommodate the integration of DERs into the tariff design framework above.
To that end, we assume that either customers or the retailer have access to distributed renewable and storage resources.
We model an agent's access to renewables as the ability to use a state-contingent energy profile $r\in\mathbb{R}_+^N$ at no cost.
Similarly, we model access to a storage with capacity $\theta \in \mathbb{R}_+$ as the ability to offset energy \emph{needs} with any vector of storage discharges $s \in \mathbb{R}^N$ in the operation constraint set\footnote{The lossless storage model defined by $\mathcal{U}(\theta)$, which assumes no initial charge nor charging/discharging rate limits, involves no loss of generality since more complex storage models can be accommodated by redefining $\mathcal{U}(\theta)$.}
$$
\mathcal{U}(\theta) = \left\{ s \in \mathbb{R}^N \ \Big\vert \ 0 \leq - \mbox{$\sum_{t=1}^k$} s_t \leq \theta, \ k=1,\ldots,N \right\}.
$$
We define the (arbitrage) \emph{value} of the storage given a deterministic price vector $\pi \in\mathbb{R}^N$ as
\begin{align} \label{eq2:storage operation}
V^{\textsc{s}}(\pi, \theta) = \max_{s\in\mathbb{R}^N} \left\{ \pi^{{\mbox{{\tiny $\top$}}}}s \ \Big\vert \ s \in \mathcal{U}(\theta) \right\},
\end{align}
and let $s^{*}(\pi,\theta)$ denote an optimal solution of \eqref{eq2:storage operation}.
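Since \eqref{eq2:storage operation} is a linear program over the polytope $\mathcal{U}(\theta)$, it can be solved directly. The following Python sketch (a minimal implementation; the price profile in the usage line is chosen purely for illustration) computes $V^{\textsc{s}}(\pi,\theta)$ and an optimal discharge profile.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def storage_value(pi, theta):
    """V^S(pi, theta): maximize pi^T s subject to
    -theta <= sum_{t<=k} s_t <= 0 for all k (lossless storage)."""
    N = len(pi)
    L = np.tril(np.ones((N, N)))      # cumulative-sum operator
    A_ub = np.vstack([L, -L])         # L s <= 0  and  -L s <= theta
    b_ub = np.concatenate([np.zeros(N), theta * np.ones(N)])
    res = linprog(-np.asarray(pi), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N)
    return -res.fun, res.x            # (value, optimal discharge profile)

pi = np.array([30.0, 60.0, 45.0, 80.0])   # illustrative price profile
V, s = storage_value(pi, theta=1.0)
print(V, s)  # charges (s_t < 0) when prices are low, discharges when high
\end{verbatim}
With these prices the optimizer charges before each price peak and discharges at the peaks, yielding $V^{\textsc{s}}=65$.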
In what follows, we focus on characterizing solutions to problem \eqref{eq2:reg problem} considering DERs integrated either behind the meter by customers in a net-metering setting or by the retailer.
\subsection{Decentralized (behind-the-meter) DER Integration} \label{sec:decentralizedIntegration}
Suppose that customers install renewables and a battery behind the meter.
Let $r^i(\omega^i) \in \mathbb{R}^N$ and $s^i \in \mathbb{R}^N$ denote the energy customer $i$ obtains from renewable resources in state $\omega^i$ and from the battery, respectively, and let $\theta^i \in \mathbb{R}_+$ represent its storage capacity.
We operate in a net-metering setting where tariffs depend only on
$
d^i = q^i - r^i - s^i,
$
which we use to represent customer $i$'s net-metered demand.
Hence, given a tariff $T(d)=A+\pi^{{\mbox{{\tiny $\top$}}}}d$, customer $i$ chooses consumption $q^i_k$ and storage operation $s^i_k$ at each time $k$ contingent on $\omega^i_1,\ldots,\omega^i_k$
to solve the multistage stochastic program
\begin{subequations} \label{eq2:cs with DERs}
\begin{align}
\overline{\textup{\textsf{cs}}}^i(T) = \max_{q^i(\cdot),s^i(\cdot)} &\ \mathbb{E}[S^i(q^i(\omega^i),\omega^i)-T(d^i(\omega^i))], \label{eq2:cs with DERs:objective} \\
\text{s.t} \ \ &\ s^i(\omega^i) \in \mathcal{U}(\theta^i). \label{eq2:cs with DERs:cons}
\end{align}
\end{subequations}
A key observation is that the linearity of two-part tariffs implies that customer $i$'s problem \eqref{eq2:cs with DERs} can be separated into two sub-problems: choosing $q^i(\cdot)$ to maximize $\mathbb{E}[S^i(q^i(\omega^i),\omega^i)-\pi^{{\mbox{{\tiny $\top$}}}} q^i(\omega^i)]-A$ and choosing $s^i(\cdot)$ to maximize $\mathbb{E}[\pi^{{\mbox{{\tiny $\top$}}}} s^i(\omega^i)]$ subject to \eqref{eq2:cs with DERs:cons}.
The former problem is equivalent to that of customers without DERs analyzed in \cite{MunozTong16partIarxiv}, whose solution characterizes the demand function $D^i(\pi,\omega^i)$.
As for the second sub-problem, it is clear from \eqref{eq2:storage operation} that $s^{*}(\pi,\theta^i)$ is an optimal solution.
These solutions constitute an optimal solution to \eqref{eq2:cs with DERs} and thus a net demand function
\begin{align} \label{eq2:demand with DERs}
d^i(\pi,\omega^i) = D^i(\pi,\omega^i)-r^i(\omega^i)-s^{*}(\pi,\theta^i).
\end{align}
This fundamental separation of the customer's problem yields the following result, where we use $r(\omega) = \sum_{i=1}^M r^i(\omega^i)$ and $s = \sum_{i=1}^M s^i$ for notational convenience.
\theoremAlt{\label{thm:decentralized}
Suppose that customers have access to renewables and storage as characterized in \eqref{eq2:cs with DERs} and \eqref{eq2:demand with DERs}.
If $\nabla_{\pi} D(\pi,\omega)$ and $\lambda$ are uncorrelated\footnote{The absence of decentralized storage makes this condition unnecessary.}, then the two-part tariff $T^*_{\textsc{dec}}$ that solves problem \eqref{eq2:reg problem} is given by $\pi^*_{\textsc{dec}} = \overline{\lambda}$ and
\begin{align} \label{eq2:A decentralized}
A^*_{\textsc{dec}} &= A^* - \mbox{$\frac{1}{M}$} \tr\left( \cov \left( \lambda, r(\omega) \right) \right),
\end{align}
where $A^*$, the connection charge in the absence of DER, would be given by
$
A^* = \mbox{$\frac{1}{M}$} \left( F + \tr(\cov(\lambda,D(\overline{\lambda},\omega))) \right).
$
}
Before discussing some implications of Theorem \ref{thm:decentralized}, we examine the condition that $\nabla_{\pi} D(\pi,\omega)$ and $\lambda$ are uncorrelated.
This condition holds in many situations.
In particular, it holds for demands that are not much affected by consumers' local randomness, such as the charging of electric vehicles and typical household appliances.
It even holds for smart HVAC loads that are affected by random temperature fluctuations since their demand takes the form $D(\pi,\omega)=D(\pi) + b(\omega)$, \mbox{\textit{i.e.},\ \/} a demand with additive disturbances \cite{JiaTong16a}.
The tariff $T^*_{\textsc{dec}}$ in Thm. \ref{thm:decentralized} reveals the following.
Letting retail prices reflect an unbiased estimate of the marginal costs of electricity ($\lambda$) maximizes social and consumer welfare.
Under net metering, this implies that the retailer should buy customers' energy surplus (from DERs) at the same price at which it buys energy in the wholesale market (in expectation).
The expression for $A^*_{\textsc{dec}}$ in \eqref{eq2:A decentralized} has an intuitive interpretation.
It indicates that the integration of behind-the-meter DERs would require adjustments to the connection charge.
These adjustments could be positive if the integrated renewables tend to cause wholesale prices to drop (\mbox{\textit{i.e.},\ \/} negative correlation), but they could be negative otherwise.
Consequently, these adjustments can increase or decrease the consumer surplus of customers without DERs, because they are perceived by \emph{all} customers as changes in their electricity bills.
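The covariance corrections in Theorem \ref{thm:decentralized} are straightforward to estimate by Monte Carlo. The sketch below does so under a hypothetical joint distribution of prices, demand, and renewables (the common weather shock and all numerical parameters are assumptions made purely for illustration); since the renewables are negatively correlated with prices here, $A^*_{\textsc{dec}}$ comes out above $A^*$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
N, M, F, draws = 24, 1000, 5000.0, 100_000

weather = rng.standard_normal((draws, N))                        # common shock
lam = 40 + 8 * weather + 5 * rng.standard_normal((draws, N))     # prices
D   = 500 + 30 * weather + 10 * rng.standard_normal((draws, N))  # demand at pi*
r   = np.maximum(0.0, 150 - 20 * weather)                        # renewables

def tr_cov(a, b):   # sum_t Cov(a_t, b_t), estimated from samples
    return np.sum(np.mean((a - a.mean(0)) * (b - b.mean(0)), axis=0))

A_star = (F + tr_cov(lam, D)) / M
A_dec  = A_star - tr_cov(lam, r) / M
print(A_star, A_dec)   # Cov(lam, r) < 0 here, so A_dec > A_star
\end{verbatim}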
The welfare gains brought by decentralized DERs depend critically on retail tariffs.
To assess the performance of two-part tariffs in this regard we first need a point of comparison.
In the absence of DERs, $T^*_{\textsc{dec}}$ reduces to the optimal ex-ante two-part tariff $T^*(q)=A^*+\pi^{*{\mbox{{\tiny $\top$}}}}q$ derived in \cite{MunozTong16partIarxiv}, where $\pi^*=\overline{\lambda}$ under the assumption in Theorem \ref{thm:decentralized}.
As a point of comparison, consider that in the absence of DERs and under tariff $T^*$, customers derive an expected surplus $\overline{\textup{\textsf{cs}}}_0(T^*)=\overline{\textup{\textsf{sw}}}^{*}_0 - F$, the retailer derives $\overline{\textup{\textsf{rs}}}_0(T^*)=F$, and social welfare is
\begin{align} \label{eq2:sw0 star}
\overline{\textup{\textsf{sw}}}^{*}_0 &= \sum_{i=1}^M \mathbb{E} \big[ S^i(D^i(\pi^*,\omega^i),\omega^i) - \lambda^{{\mbox{{\tiny $\top$}}}}D^i(\pi^*,\omega^i) \big].
\end{align}
\corollaryAlt{\label{cor:decentralized}
Under the tariff $T^*_{\textsc{dec}}$, customer-integrated DERs induce an expected total surplus $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}}) = \overline{\textup{\textsf{sw}}}^*_0 + \sum_{i=1}^M V^{\textsc{s}}(\overline{\lambda},\theta^i) + \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i)]$ that is independent of $F$ and
$\overline{\textup{\textsf{cs}}}(T^*_{\textsc{dec}}) = \overline{\textup{\textsf{cs}}}_0(T^*) + \sum_{i=1}^M V^{\textsc{s}}(\overline{\lambda},\theta^i) + \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i)]$.
}
The expressions $\overline{\textup{\textsf{cs}}}_0(T^*)$ and $\overline{\textup{\textsf{cs}}}(T^*_{\textsc{dec}})$ above characterize the tradeoff between the retailer's surplus target $F$ and consumers' surplus $\overline{\textup{\textsf{cs}}}$ induced by the tariffs $T^*$ and $T^*_{\textsc{dec}}$, respectively.
Indeed, noting the linear dependence of $A^*$ on $F$, it becomes clear that in both cases the $\overline{\textup{\textsf{rs}}}$-$\overline{\textup{\textsf{cs}}}$ tradeoff is linear, as illustrated in Fig. \ref{fig:PFdecentralized}.
Moreover, the fact that the social welfare achieved in both cases ($\overline{\textup{\textsf{sw}}}^{*}_0$ and $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{dec}})$) does not depend on $F$ implies that said tradeoff is not only linear but one-to-one (\mbox{\textit{i.e.},\ \/} the Pareto fronts in Fig. \ref{fig:PFdecentralized} have slope $-1$).
This means that while an increased net revenue target $F+\Delta F$ decreases consumer surplus in expectation, it does not decrease social surplus.
Conversely, the integration of DERs behind-the-meter increases both social and consumer surplus by $\sum_{i=1}^M V^{\textsc{s}}(\overline{\lambda},\theta^i) + \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^i(\omega^i)]$ in expectation regardless of the retailer's net revenue target $F$.
\begin{figure}[t]
\centering
\subcaptionbox{Optimal two-part tariffs.\label{fig:PFdecentralized:a}}
{\includegraphics[width=0.5\linewidth]{corollary2_0}}
\caption{Efficient Pareto fronts or $\overline{\textup{\textsf{rs}}}$-$\overline{\textup{\textsf{cs}}}$ tradeoff induced by optimal ex-ante two-part tariffs $T^*_{\textsc{dec}}$ (solid line) and $T^*$ (dashed line) with and without DERs.}
\label{fig:PFdecentralized}
\end{figure}
Another implication of the optimal two-part tariff $T^*_{\textsc{dec}}$ is the likely impact it would have on the rapid adoption of behind-the-meter DERs.
Prevailing tariffs that rely on retail markups to achieve revenue adequacy provide a strong incentive for customers to integrate Distributed Generation (DG).
This is because, under net-metering, the higher the retail prices, the more savings DG represents.
By eliminating retail markups and imposing virtually unavoidable connection charges, $T^*_{\textsc{dec}}$ generally reduces such savings.
Hence, $T^*_{\textsc{dec}}$ is likely to decelerate the adoption of decentralized DERs compared to the prevailing less efficient retail tariffs.
This suggests that there is a tradeoff between efficiency and the rapid adoption of behind-the-meter DERs.
\subsubsection{Optimality of Net Metering} \label{sec:performance with DER}
We have restricted the regulator to offer net-metering two-part tariffs.
There are, however, alternative mechanisms to compensate DERs that do not rely on net metering (\mbox{\textit{e.g.}, \/} feed-in tariffs).
We argue that the regulator cannot improve upon the efficiency attained by $T^*_{\textsc{dec}}$ with more complex ex-ante tariffs under a certain condition\ifThesis{\footnote{The optimality argument in \cite[Sec. III.C]{MunozTong16partIarxiv} without DERs applies to the case with retailer-integrated DERs presented in the following section since both problems are equivalent except for a difference in the parameter $F$.}}{}.
This holds true because $T^*_{\textsc{dec}}$ induces the same efficiency attained by the social planner, which provides an upper bound to the regulator's problem \eqref{eq2:reg problem}\ifThesis{\footnote{Due to the restriction to ex-ante tariffs, we restrict the social planner's decisions to be contingent on each customer's local state $\omega^i$.
This is because ex-ante tariffs cannot carry updated information of the global state $\xi=(\lambda,\omega)$, unlike real-time or ex-post tariffs.}}{}.
\theoremAlt{ \label{thm:optimalityDERs}
Suppose that customers have access to renewables and storage.
If wholesale prices $\lambda$ and customers' states $\omega$ are statistically independent (\mbox{\textit{i.e.},\ \/} $\lambda \perp \omega$), then $T^*_{\textsc{dec}}$ is an optimal solution of \eqref{eq2:reg problem} among the class of ex-ante tariffs.}
Namely, the restriction to two-part net metering tariffs, which are simple and thus practical tariffs, implies no loss of efficiency if $\lambda \perp \omega$.
The latter condition, however, makes the result somewhat restrictive as it applies to loads not affected by customers' local randomness such as washers and dryers, computers, batteries and EV charging \emph{but} not to HVAC loads or behind-the-meter solar and wind DG.
Nonetheless, said condition suggests that if the net load and $\lambda$ are poorly correlated (or either exhibits little uncertainty at the time the tariff is fixed), then $T^*_{\textsc{dec}}$ may perform well.
\subsection{Centralized (retailer-based) DER Integration} \label{sec:centralizedIntegration}
As an alternative to behind-the-meter DERs, we now consider the case where the retailer installs DERs within the distribution network.
To that end, suppose that the retailer has access to a renewable supply $r^o(\xi)\in\mathbb{R}^N_+$ and a storage capacity $\theta^o \in \mathbb{R}_+$.
Without loss of generality, we assume that the retailer determines the operation of storage before the billing cycle starts (\mbox{\textit{i.e.},\ \/} ex-ante)\footnote{Allowing storage operation to be contingent on partial observations of $\lambda\in\mathbb{R}^N$ (say $s^o(\lambda)\in\mathbb{R}^N$) only makes the maximum value of $\mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}}s^o(\lambda)]$ over $s^o(\lambda)\in \mathcal{U}(\theta)$ in \eqref{eq2:retailer separation} hard to compute under general assumptions.}.
Assuming that the retailer operates storage to maximize his net revenue, the resulting surplus induced by a tariff $T$ can be written as
\begin{small}
\begin{align}
\overline{\textup{\textsf{rs}}}(T) &= \max_{s \in \mathcal{U}(\theta^o)} \mathbb{E}\left[ \sum_{i=1}^{M} T(q^{i}(T,\omega^i)) \ - \lambda^{{\mbox{{\tiny $\top$}}}} (q(T,\omega)-r^o(\xi)-s) \right] \nonumber \\
&= \overline{\textup{\textsf{rs}}}_0(T) - \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)] - V^{\textsc{s}}(\overline{\lambda},\theta^o). \label{eq2:retailer separation}
\end{align}
\end{small}%
The fact that the last two terms in \eqref{eq2:retailer separation} do not depend on $T$ facilitates obtaining the following result, since both terms simply offset the surplus target $F$ when imposing $\overline{\textup{\textsf{rs}}}(T) = F$.
\theoremAlt{\label{thm:centralized}
Suppose that the retailer has access to renewables and storage as characterized in \eqref{eq2:retailer separation}.
Then the two-part tariff $T^*_{\textsc{cen}}$ that solves problem \eqref{eq2:reg problem} is given by
\begin{align}
\pi^*_{\textsc{cen}} = \overline{\lambda} + \mathbb{E}[\nabla_{\pi}D(\pi^*_{\textsc{cen}},\omega)]^{-1} \mathbb{E}[\nabla_{\pi}D(\pi^*_{\textsc{cen}},\omega) (\lambda-\overline{\lambda})], \nonumber
\end{align}
\begin{align} \label{eq2:A centralized}
A^*_{\textsc{cen}} = A^* - \mbox{$\frac{1}{M}$} \big( V^{\textsc{s}}(\overline{\lambda},\theta^o) + \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)] \big),
\end{align}
where $A^*$, the connection charge in the absence of DER, would be given by
$
A^* = \mbox{$\frac{1}{M}$} \left( F - \mathbb{E}[(\pi^*_{\textsc{cen}} - \lambda)^{{\mbox{{\tiny $\top$}}}}D(\pi^*_{\textsc{cen}},\omega)] \right).
$
}
We first note that, unlike Thm. \ref{thm:decentralized}, Thm. \ref{thm:centralized} does not require $\nabla_{\pi} D(\pi,\omega)$ and $\lambda$ to be uncorrelated.
However, if this condition is satisfied it holds that $\pi^*_{\textsc{cen}}=\pi^*_{\textsc{dec}}=\overline{\lambda}$.
In other words, under optimally set ex-ante two-part tariffs, the integration of DERs by either customers or their retailer does not require updating prices to maintain revenue adequacy.
Hence, in both cases, any potential feedback loop of DER integration on retail prices (and thus on consumption) is precluded.
In terms of the connection charge in \eqref{eq2:A centralized}, the integration of DERs by the retailer results in reductions relative to $A^*$.
These reductions contrast with the potential increments required by customer-integrated DERs (\mbox{\textit{cf.},\ \/} $A^*_{\textsc{dec}}$ in \eqref{eq2:A decentralized}).
The underlying reason for such a difference is intuitive, especially considering the identical retail prices $\pi^*_{\textsc{cen}}=\pi^*_{\textsc{dec}}$.
Decentralized DERs represent savings in volumetric charges for customers (with reduced net loads) whereas centralized DERs represent savings in electricity purchases for the retailer.
Because the latter savings cannot increase the retailer surplus beyond $F$, they are allocated uniformly among customers through reductions in the connection charge.
Unlike with decentralized DERs in general, the welfare gains brought by DERs integrated (and operated) by the retailer do not depend on retail tariffs.
This is formalized by the following result.
\corollaryAlt{\label{cor:centralized}
Under the tariff $T^*_{\textsc{cen}}$, retailer-integrated DERs induce an expected total surplus $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{cen}})= \overline{\textup{\textsf{sw}}}^*_0 + V^{\textsc{s}}(\overline{\lambda},\theta^o) +\mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)]$ that is independent of $F$ and
$\overline{\textup{\textsf{cs}}}(T^*_{\textsc{cen}}) = \overline{\textup{\textsf{cs}}}_0(T^*) + V^{\textsc{s}}(\overline{\lambda},\theta^o) +\mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)]$.
}
In Cor. \ref{cor:centralized}, $\overline{\textup{\textsf{cs}}}(T^*_{\textsc{cen}})$ characterizes a linear one-to-one tradeoff between $F$ and $\overline{\textup{\textsf{cs}}}$ induced by the $T^*_{\textsc{cen}}$.
This tradeoff---equivalent to the Pareto front induced by $T^*_{\textsc{dec}}$ in Fig. \ref{fig:PFdecentralized}---is characterized by the social welfare achieved by $T^*_{\textsc{cen}}$, $\overline{\textup{\textsf{sw}}}(T^*_{\textsc{cen}})$.
Consequently, similar to behind-the-meter DERs, the integration of centralized DERs increases both social and consumer surplus by $V^{\textsc{s}}(\overline{\lambda},\theta^o) + \mathbb{E}[\lambda^{{\mbox{{\tiny $\top$}}}} r^o(\xi)]$ in expectation regardless of the retailer's net revenue target $F$.
The equivalent \emph{collective} welfare effects of DERs integrated under both models (characterized by Cor. \ref{cor:decentralized} and \ref{cor:centralized}) are in contrast to their \emph{individual} welfare effects.
As suggested above, welfare gains (or losses) from decentralized DERs are captured individually by DER-integrating customers as reductions in their bills, and by all other customers as bill reductions or increments due to the adjustments to the connection charge $A^*_{\textsc{dec}}$.
This allocation of welfare gains constitutes an interclass cross-subsidy between customers.
Conversely, welfare gains from centralized DERs are uniformly captured by all customers as reductions in the connection charge $A^*_{\textsc{cen}}$.
Lastly, an implication of $T^*_{\textsc{cen}}$ is the likely impact it has on the adoption of DERs.
The reduction in the connection charge characterized by $A^*_{\textsc{cen}}$ relative to $A^*$ is the net benefit perceived by each customer due to the integrated centralized DERs.
Hence, customers should be willing to let the retailer integrate DERs even if they entail capital costs that offset a portion of said reductions in the connection charge.
\section{Introduction}
\label{intro}
Formally, the cross section for producing a heavy evaporation residue,
$\sigma_{\rm EVR}$, in a fusion reaction can be written as
\begin{equation}
\sigma_{\rm EVR}(E)=\frac{\pi \hbar^2}{2\mu E}\sum\limits_{\ell=0}^\infty
(2\ell+1)T(E,\ell)P_{\rm CN}(E,\ell)W_{\rm sur}(E,\ell)
\end{equation}
where $E$ is the center of mass energy, and $T$ is the probability of the
colliding nuclei to overcome the potential barrier in the entrance channel
and reach the contact point. (The term ``evaporation residue'' refers to the
product of a fusion reaction followed by the evaporation of a specific
number of neutrons.) $P_{\rm CN}$ is the probability that the
projectile-target system will evolve from the contact point to the
compound nucleus. $W_{\rm sur}$ is the probability that the compound
nucleus will decay to produce an evaporation residue rather than fissioning.
Conventionally the EVR cross section is separated into three individual
reaction stages (capture, fusion, survival) motivated, in part, by the
different time scales of the processes. However, one must remember that the
$W_{\rm sur}$ term effectively sets the allowed values of the spin. This
effect is shown in figure~\ref{WL-f1} where the capture cross sections for
several reactions are shown without and with the spin limitation posed by the
survival probabilities.
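For orientation, the partial-wave sum in equation (1) is easy to evaluate numerically once models for the three factors are supplied. The Python sketch below does this with deliberately crude, illustrative stand-ins for $T$, $P_{\rm CN}$ and $W_{\rm sur}$ (the barrier height, diffuseness, spin cutoff, and magnitudes are assumptions, not fitted values); the Gaussian spin cutoff in $W_{\rm sur}$ is what produces the ``spin-limited'' behavior of figure~\ref{WL-f1}(b).
\begin{verbatim}
import numpy as np

HBARC = 197.327  # MeV fm

def sigma_evr(E, mu, T, P_cn, W_sur, l_max=200):
    """Partial-wave sum of Eq. (1); returns the cross section in mb."""
    l = np.arange(l_max + 1)
    pref = np.pi * HBARC**2 / (2.0 * mu * E)   # pi * lambda-bar^2, in fm^2
    return 10.0 * pref * np.sum((2*l + 1) * T(E, l) * P_cn(E, l) * W_sur(E, l))

# Illustrative (not fitted) inputs for a hot fusion reaction:
Erot  = lambda l: 0.002 * l * (l + 1)                       # MeV
T     = lambda E, l: 1.0 / (1.0 + np.exp((190.0 + Erot(l) - E) / 4.0))
P_cn  = lambda E, l: 1e-3 * np.ones_like(l, dtype=float)    # fusion hindrance
W_sur = lambda E, l: 1e-4 * np.exp(-(l / 20.0)**2)          # spin-limited

mu = 931.5 * (48.0 * 248.0) / (48.0 + 248.0)  # reduced mass, 48Ca+248Cm
print(sigma_evr(210.0, mu, T, P_cn, W_sur))   # a few nb with these toy inputs
\end{verbatim}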
Several successful attempts have been made to describe the cross sections
for evaporation residue formation in cold fusion reactions \cite{r2,r3,r4,r5}.
In figure~\ref{WL-f2}(a), I show some typical examples of postdictions of the
formation cross sections for elements 104-113 in cold fusion reactions.
The agreement between theory and experiment is impressive because the cross
sections extend over six orders of magnitude, i.e., a robust agreement.
Because the values of $\sigma_{\rm capture}$ are well known or generally
agreed upon (see below), the values of the product
$P_{\rm CN}\cdot W_{\rm sur}$ are the same in most of these postdictions.
However, as seen in figure~\ref{WL-f2}(b),
the values of $P_{\rm CN}$ differ significantly in these
postdictions \cite{r2,r3,r4,r5}, and differ from measurements of $P_{\rm CN}$
\cite{r6}. A similar situation occurs in predictions of cross sections for
hot fusion reactions. These are clear-cut cases
in which a simple agreement between theory and experiment in postdicted
cross sections is not sufficient to indicate a real understanding of the
phenomena involved.
We might ask what the overall uncertainties are in the current
phenomenological models for predicting heavy element production cross
sections. This is an item of some controversy. Some feel the uncertainties
in typical predictions are factors of 2-4 \cite{r7} while others
estimate these uncertainties to be 1-2 orders of magnitude \cite{r8,r9}.
\begin{figure}[th]
\vspace*{-6mm}
\centering
\includegraphics[width=77mm]{Fig1a.eps}
\hspace*{-10mm}
\includegraphics[width=73mm]{Fig1b.eps}
\vspace*{-7mm}
\caption{(a) Calculated capture cross sections for some typical reactions.
(b) the ``spin-limited'' capture cross sections for the reactions in (a)
\cite{r1}.}
\label{WL-f1}
\end{figure}
\section{Capture Cross Sections}
\label{sec2}
The capture cross section is, in the language of coupled channel calculations,
the ``barrier crossing'' cross section. It is the sum of the quasifission,
fast fission, fusion-fission and fusion-evaporation residue cross sections.
The barriers involved are the interaction barriers and not the fusion
barriers. There are several models for capture cross sections. Each of them
has been tested against a number of measurements of capture cross sections
for reactions that, mostly, do not lead to the formation of the heaviest
nuclei. In general, these models are able to describe the magnitudes of the
capture cross sections within 50\% and the values of the interaction barriers
within 20\%. The most robust of these models takes into account the effects
of target and projectile orientation/deformation upon the collisions, the
couplings associated with inelastic excitations of the target and projectile,
and the possibility of one- or two-neutron transfer processes. Loveland
\cite{r10} has compared calculations of the capture cross sections for
reactions that synthesize heavy nuclei with the measured cross sections.
Good agreement between the measured and calculated values of the cross
sections occurs for all reactions. The ratio of calculated to observed
capture cross sections varies from 0.5 to 2. Nominally, given the other
uncertainties in estimating $\sigma_{\rm EVR}$, this seems generally
acceptable. However, from the point of view of an experimentalist, it is
not acceptable. The capture cross section is relatively easy to measure
and an uncertainty of 50\% may mean having to run an experiment
for several months longer to be successful in a synthetic effort.
\begin{figure}[t]
\vspace*{0mm}
\centering
\includegraphics[width=75mm]{Fig2.eps}
\hspace*{-10mm}
\includegraphics[width=75mm]{Fig3.eps}
\unitlength1mm
\begin{picture}(0,0)
\put(-55,50){(a)}
\put(15,50){(b)}
\end{picture}
\vspace*{-6mm}
\caption{(a) Typical predictions of the formation cross sections of
elements 104-113 using cold fusion reactions.
(b) $P_{\rm CN}$ values for the predictions in panel (a).
The references cited in the legends refer to Adamian \cite{r4},
Feng \cite{r5}, Swiatecki \cite{r2}, and Loveland \cite{r3}.
The additional reference in panel (b) is to the data of Naik
{\it et al.} \cite{r6}.}
\label{WL-f2}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=75mm]{Fig4-a.eps}
\hspace*{-10mm}
\includegraphics[width=75mm]{Fig4-b.eps}
\vspace*{-6mm}
\caption{The capture-fission excitation functions for the reaction
of $^{39}$K (left) and $^{46}$K (right) with $^{181}$Ta.}
\label{WL-f4}
\end{figure}
Future synthetic efforts with heavy nuclei may involve the use of very
neutron-rich beams such as radioactive beams. While we understand that
such efforts are not likely to produce new superheavy elements due to the
low intensities of the radioactive beams \cite{r3}, there may exist a window
of opportunity to make new neutron-rich isotopes of elements 104-108
\cite{r11}. Our ability to predict the capture cross sections for the
interaction of very neutron-rich projectiles with heavy nuclei is limited
\cite{r11} and this is especially true near the interaction barrier
where the predictions of models of the capture cross sections may differ
by orders of magnitude \cite{r11}. As part of the effort to study the
scientific issues that will be relevant at next generation radioactive
beam facilities, such as FRIB, we have started to use the ReA3 facility at
the NSCL to study capture processes with fast-releasing beams such as
the potassium isotopes. We chose to study the interaction of
$^{39,46}$K with $^{181}$Ta. The first {\em preliminary} results
from that experiment are shown in figure~\ref{WL-f4}. The $^{39}$K +$^{181}$Ta
reaction results seem to be well understood within conventional pictures
of capture \cite{r12,r13} while the neutron-rich $^{46}$K results suggest
an unusual near-barrier fusion enhancement.
\section{Survival Probabilities, $W_{\rm sur}$}
\label{sec3}
Formally $W_{\rm sur}$ can be written as
\begin{equation}
W_{\rm sur}(E_{\rm c.m.})=P_{xn}(E^*_{\rm CN})\prod\limits_{i=1}^x
\frac{\mathit{\Gamma}_n(E^*_i)}{\mathit{\Gamma}_n(E^*_i)+
\mathit{\Gamma}_f(E^*_i)}
\end{equation}
where $P_{xn}$ is the probability of emitting $x$ (and only $x$) neutrons
from a nucleus with excitation energy $E^*$, and $\mathit{\Gamma}_n$ and $\mathit{\Gamma}_f$
are the partial widths for decay of the completely fused system by either
neutron emission or fission, respectively. For the most part, the formalism
for calculating the survival, against fission, of a highly excited nucleus
is understood. There are significant uncertainties, however, in the input
parameters for these calculations and care must be used in treating some
situations. ``Kramers effects'' and the overall fission barrier height
are found \cite{r14} to have the biggest effect on the calculated cross
sections.
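Given estimates of the $x$n pre-factor and of $\mathit{\Gamma}_n/\mathit{\Gamma}_f$ at each evaporation step, equation (2) reduces to a product that is trivial to evaluate; the following sketch uses hypothetical step-wise ratios chosen only to illustrate the strong compounding of the fission competition.
\begin{verbatim}
import numpy as np

def w_sur(P_xn, gn_over_gf):
    """Eq. (2): survival probability of an xn evaporation channel.
    gn_over_gf[i] is Gamma_n / Gamma_f at the i-th evaporation step."""
    ratios = np.asarray(gn_over_gf, dtype=float)
    return P_xn * np.prod(ratios / (1.0 + ratios))

# Hypothetical 3n channel with Gamma_n/Gamma_f at each of three steps:
print(w_sur(P_xn=0.3, gn_over_gf=[1e-2, 3e-2, 1e-1]))   # ~8e-6
\end{verbatim}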
A recent experiment concerning survival probabilities in hot fusion
reactions showed the importance of ``Kramers effects'' \cite{r15}. The
nucleus $^{274}$Hs was formed at an excitation energy of 63~MeV using the
$^{26}$Mg+$^{248}$Cm reaction. $^{274}$Hs has several interesting properties.
The liquid drop model fission barrier height is zero and there is a
subshell at $N = 162$, $Z = 108$. In the formation reaction, $P_{\rm CN}$
is measured \cite{r16} to be 1.0. By measuring the angular distribution
of the fission associated neutrons, Yanez {\it et al.} \cite{r15} were able
to deduce a value of $\mathit{\Gamma}_n/\mathit{\Gamma}_{\rm total}$ for
the first chance
fission of $^{274}$Hs ($E^* = 63$~MeV) of $0.89\pm 0.13$! A highly excited
fragile nucleus with a vanishingly small fission barrier decayed $\sim 90$\%
of the time by emitting a neutron rather than fissioning. Conventional
calculations with various values of the fission barrier height were unable
to reproduce these results. The answer to this dilemma is to consider the
effects of nuclear viscosity to retard fission \cite{r17}, the so-called
Kramers effects. These Kramers effects are the reason that hot fusion
reactions are useful in heavy element synthesis, in that the initial high
excitation energies of the completely fused nuclei do not result in
catastrophic losses of surviving nuclei \cite{r18}.
With respect to fission barrier heights, most modern models do equally
well/poorly in describing fission barrier heights for Th-Cf nuclei.
Afanasjev {\it et al.} \cite{r19} found the average deviation between
the calculated and known inner barrier heights was $\sim 0.7$~MeV amongst
various models. Bertsch {\it et al.} \cite{r20} estimate the uncertainties
in fission barrier heights are 0.5-1.0~MeV in known nuclei. Kowal {\it et al.}
\cite{r21} found for even-even nuclei with $Z = 92$-$98$ the difference
between measured and calculated inner barrier heights was 0.8~MeV.
Baran {\it et al.} \cite{r22} found very large, i.e., several MeV,
differences between various calculated fission barrier heights for
$Z = 112$-$120$. In summary, fission barrier heights are known within
0.5-1.0~MeV. For super-heavy nuclei, the change of fission barrier height
by 1~MeV in each neutron evaporation step can cause an order of magnitude
uncertainty in the $4n$-channel. For the $3n$-channel, the uncertainty
is about a factor of four \cite{r14}.
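The quoted sensitivities can be rationalized with a rough statistical-model scaling in which $\mathit{\Gamma}_n/\mathit{\Gamma}_f$ grows as $\exp(\Delta B_f/T)$ when the barrier is raised by $\Delta B_f$ at nuclear temperature $T$; both the scaling and the temperature used below are illustrative assumptions, not values taken from \cite{r14}.
\begin{verbatim}
import numpy as np

# Effect of a 1 MeV barrier-height shift compounded over x evaporation
# steps, assuming Gamma_n/Gamma_f changes by exp(dBf/T) per step.
T_nuc, dBf = 1.7, 1.0                  # MeV, illustrative values
for x in (3, 4):
    print(x, np.exp(x * dBf / T_nuc))  # ~5.8 (3n) and ~10.5 (4n)
\end{verbatim}
These factors are in the range of the factor-of-four to order-of-magnitude uncertainties cited above.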
An additional problem is that at the high excitation energies characteristic
of hot fusion reactions, the shell effects stabilizing the fission barrier
are predicted \cite{r23} to be ``washed out'' with a resulting fission
barrier height $< 1$~MeV for some cases. Furthermore the rate of damping
of the shell effects differs from nucleus to nucleus. This point is well
illustrated by the calculations in reference~\cite{r24} of the ``effective''
fission barrier heights for the $^{48}$Ca+$^{249}$Bk reaction.
Measurements of fission barrier heights are difficult and the results
depend on the models used in the data analysis. Recently Hofmann {\it et al.}
\cite{r25} have deduced the shell-correction energies from the systematics
of the $Q_\alpha$ values for the heaviest nuclei and used these
shell-correction energies to deduce fission barrier heights. The deduced
barrier heights for elements 118 and 120 may be larger than expected.
\section{Fusion Probability, $P_{\rm CN}$}
\label{sec4}
The fusion probability, $P_{\rm CN}$, is the least known (experimentally)
of the factors affecting complete fusion reactions and perhaps the most
difficult to model. The essential task is to measure the relative amounts
of fusion-fission and quasifission in a given reaction. Experimentally this
is done using mass-angle correlations where it is difficult to measure,
with any certainty, the fraction of fusion reactions when that quantity is
less than 1\%. (For cold fusion reactions, $P_{\rm CN}$ is predicted to
take on values of $10^{-2}$ to $10^{-6}$ for reactions that make elements
104-113.) The reaction of $^{124}$Sn with $^{96}$Zr can be used to illustrate
the uncertainties in theoretical estimates of $P_{\rm CN}$: the various
estimates \cite{r10} range from 0.0002 to 0.56, whereas the measured value
is 0.05. The data shown in figure~\ref{WL-f2}(b) make the same point.
Where we have made progress in understanding $P_{\rm CN}$ is in the
excitation energy dependence of $P_{\rm CN}$. Zagrebaev and Greiner
\cite{r26} have suggested the following ad hoc functional form for the
excitation energy dependence of $P_{\rm CN}$
\begin{equation}
P_{\rm CN}(E^*,J)=\frac{P^0_{\rm CN}}{1+\mbox{exp}
[\frac{E^*_B-E^*_{\rm int}(J)}{\Delta}]}
\end{equation}
where $P_{\rm CN}^0$ is the fissility-dependent ``asymptotic'' (above
barrier) value of $P_{\rm CN}$ at high excitation energies, $E_B^*$ is
the excitation energy at the Bass barrier, $E_{\rm int}^*(J)$ is the internal
excitation energy ($E_{c.m.}+Q-E_{\rm rot}(J)$), $J$ is the angular momentum
of the compound nucleus, and $\Delta$ (an adjustable parameter) is taken to
be 4~MeV. This formula describes the extensive data of Knyazheva {\it et al.}
\cite{r27} for the $^{48}$Ca+$^{154}$Sm reaction very well \cite{r10}.
A generalization of this formula has been used to describe the excitation
energy dependence of $P_{\rm CN}$ for the reactions of $^{48}$Ca with
$^{238}$U, $^{244}$Pu and $^{248}$Cm \cite{r28}.
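Equation (3) is a one-line computation once a parameterization of the rotational energy is chosen. In the sketch below the rotational-energy coefficient and the remaining numerical inputs are illustrative assumptions, with $\Delta=4$~MeV as in the text.
\begin{verbatim}
import numpy as np

def p_cn(E_star, J, P0_cn, E_B_star, Delta=4.0, a_rot=0.002):
    """Eq. (3) with E*_int(J) = E* - E_rot(J), E_rot(J) = a_rot*J*(J+1)."""
    E_int = E_star - a_rot * J * (J + 1)
    return P0_cn / (1.0 + np.exp((E_B_star - E_int) / Delta))

E = np.array([20.0, 30.0, 40.0, 50.0])   # excitation energies, MeV
print(p_cn(E, J=0, P0_cn=0.05, E_B_star=35.0))
# rises sigmoidally from ~1e-3 toward the asymptotic value P0_cn
\end{verbatim}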
It is also clear that $P_{\rm CN}$ must depend on the entrance channel
asymmetry of the reaction. Numerous scaling factors to express this
dependence have been proposed and used. An extensive survey of $P_{\rm CN}$
in a large number of fusing systems was made by du~Rietz {\it et al.}
\cite{r29}. They thought that perhaps some fissility related parameter
would be the best way to organize their data on $P_{\rm CN}$ and its
dependence on the properties of the entrance channel in the reactions
they studied. They found the best fissility-related scaling variable
that organized their data was $x_{\rm du Rietz} = 0.75x_{\rm eff} +
0.25x_{\rm CN}$. The parameters $x_{\rm eff}$ and $x_{\rm CN}$ are the
associated fissilities for the entrance channel and the compound system,
respectively. The following equations can be used to calculate these
quantities
\begin{align*}
x_{\rm CN}&=\frac{(Z^2/A)_{\rm CN}}{(Z^2/A)_{\rm critical}}, \qquad
(Z^2/A)_{\rm critical}=50.883\left[1-1.7826\left(\frac{A-2Z}{A}\right)^2\right],\\
x_{\rm eff}&=\frac{4Z_1Z_2/[A_1^{1/3}A_2^{1/3}(A_1^{1/3}+A_2^{1/3})]}{(Z^2/A)_{\rm critical}}.
\end{align*}
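A direct transcription of these formulas is given below; the entrance channel in the usage line is chosen only as an example.
\begin{verbatim}
def fissilities(Z1, A1, Z2, A2):
    """Entrance-channel (x_eff), compound-nucleus (x_CN), and
    du Rietz (0.75*x_eff + 0.25*x_CN) fissility parameters."""
    Z, A = Z1 + Z2, A1 + A2
    za_crit = 50.883 * (1 - 1.7826 * ((A - 2 * Z) / A) ** 2)
    x_cn = (Z ** 2 / A) / za_crit
    x_eff = (4 * Z1 * Z2 / (A1 ** (1 / 3) * A2 ** (1 / 3)
             * (A1 ** (1 / 3) + A2 ** (1 / 3)))) / za_crit
    return x_eff, x_cn, 0.75 * x_eff + 0.25 * x_cn

print(fissilities(20, 48, 96, 248))   # 48Ca + 248Cm -> x_duRietz ~ 0.79
\end{verbatim}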
In figure~\ref{WL-f5}, I show most of the known data on $P_{\rm CN}$ using
the du~Rietz scaling variable. There is no discernible pattern.
Restricting the choice of cases to those in a narrow excitation energy
bin improves the situation somewhat but it is clear we are missing
something in our semi-empirical systematics.
Some progress has been made in calculating $P_{\rm CN}$ using TDHF
calculations. Wakhle {\it et al.} \cite{r30} made a pioneering study of
the $^{40}$Ca+$^{238}$U reaction. The capture cross sections predicted by
their TDHF calculations agreed with measured capture cross sections
\cite{r31} within $\pm 20$\%. In addition they were able to predict the
ratio of fusion to capture cross sections,
$\sigma_{\rm fus}/\sigma_{\rm capture}$, as $0.09\pm 0.07$ at 205.9~MeV and
$0.16\pm 0.06$ at 225.4~MeV, in agreement with reference~\cite{r31}, which
reported measured ratios of $0.06\pm 0.03$ and $0.14\pm 0.05$, respectively. Whether
TDHF calculations will become a predictive tool for heavy element synthesis
remains to be seen.
\begin{figure}[th]
\centering
\includegraphics[width=78mm]{Fig5-a.eps}
\hspace*{-16mm}
\includegraphics[width=78mm]{Fig5-b.eps}
\vspace*{-8mm}
\caption{Fissility dependence of $P_{\rm CN}$ with and without
excitation energy sorting.}
\label{WL-f5}
\end{figure}
\section{Predictions for the Production of Elements 119 and 120}
\label{sec5}
Loveland \cite{r10} has shown that the current predictions for the
production cross sections for elements 119 and 120 differ by 1-3 orders
of magnitude, reflecting the uncertainties discussed above.
For the reaction $^{50}$Ti+$^{249}$Bk$\rightarrow 119$, the uncertainties
in the predicted maximum cross sections for the $3n$ and $4n$ channels
differ by ``only'' a factor of 20-40 while larger uncertainties are found
in the predictions for the $^{54}$Cr+$^{248}$Cm$\rightarrow 120$ reaction.
The energies of the maxima of the $3n$ and $4n$ excitation functions are
uncertain to 3-4~MeV, a troublesome situation for the experimentalist.
\section{Conclusions}
\label{sec6}
I conclude that: (a) Capture cross sections should be measured for reactions
of interest. (b) We need better and more information on fission barrier
heights, and their changes with excitation energy for the heaviest nuclei.
(c) We need to devise better methods of measuring $P_{\rm CN}$ and more
TDHF calculations of $P_{\rm CN}$. (d) The current uncertainty in
calculated values of $\sigma_{\rm EVR}$ is at least 1-2 orders of magnitude.
(e) New opportunities for making neutron-rich actinides with RNBs may exist.\\
\begin{acknowledgement}
This work was supported, in part, by the Director, Office of Energy Research,
Division of Nuclear Physics of the Office of High Energy and Nuclear
Physics of the U.S.~Department of Energy under Grant DE-SC0014380 and
the National Science Foundation under award 1505043.
\end{acknowledgement}
\section{Introduction}
A fundamental problem of broad theoretical and practical interest is to characterize the maximum correlation between the outputs of a pair of functions of random sequences. Consider the two distributed agents shown in Figure \ref{fig:agents}.
A pair of correlated discrete memoryless sources (DMS) are fed to the two agents. These agents are to each make a binary decision. The goal of the problem is to maximize the correlation between the outputs of these agents subject to specific constraints on the decision functions. The study of this setup has had impact on a variety of disciplines, for instance, by taking the agents to be two encoders in the distributed source coding problem \cite{FinLen,arxiv2}, or two transmitters in the interference channel problem \cite{arxiv2}, or Alice and Bob in a secret key-generation problem \cite{security2, security3}, or two agents in a distributed control problem \cite{control}.
A special case of the problem is the study of common-information (CI) generated by the two agents. As an example, consider two encoders in a Slepian-Wolf (SW) setup. Let $U_1,U_2$, and $V$ be independent, non-constant binary random variables.
\begin{figure}[!t]
\centering
\includegraphics[height=1.2in]{agents}
\caption{Correlated Boolean decision functions.}
\label{fig:agents}
\end{figure}
Then, an encoder observing the DMS $X=(V,U_1)$, and an encoder observing $Y=(V,U_2)$ agree on the value of $V$ with probability one. The random variable $V$ is called the CI observed by the two encoders. These encoders require a sum-rate equal to $H(V)+H(U_1)+H(U_2)$ to transmit the source to the decoder. This gives a reduction in rate equal to the entropy of $V$, compared to the transmission of the sources over independent point-to-point channels. The gain in performance is directly related to the entropy of the CI. So, it is desirable to maximize the entropy of the CI between the encoders.
In \cite{ComInf1}, the authors investigated multi-letterization as a method for increasing the CI. They showed that multi-letterization does not lead to an increase in the CI. More precisely, they prove the following statement:
{ \textit{Let $X$ and $Y$ be two sequences of DMSs. Let $f_{n}(X^n)$ and $g_{n}(Y^n)$ be two sequences of functions which converge to one another in probability. Then, the normalized entropies $\frac{1}{n}H(f_{n}(X^n))$, and $\frac{1}{n}H(g_{n}(Y^n))$ are less than or equal to the entropy of the CI between $X$ and $Y$ for large $n$. }}
A stronger version of the result was proved by Witsenhausen \cite{ComInf2}, where maximum correlation between the outputs is upper-bounded subject to the following restrictions on the decision functions:
\\\textit{ 1) The entropy of the binary output is fixed.
\\2) The agents cooperate with each other.
}
It was shown that maximum correlation is achieved if both users output a single element of the string without further processing (e.g., each user outputs the first element of its corresponding string). This was used to conclude that common-information cannot be induced by multi-letterization. While the result has been used extensively in a variety of areas such as information theory, security, and control \cite{security2,security3, control}, in many problems there are additional constraints on the set of admissible decision functions. For example, one can consider constraints on the `effective length' of the decision functions. This is a valid assumption: for instance, in communication systems, the users have lower bounds on their effective lengths due to the rate-distortion requirements of the problem \cite{arxiv2}.
In this paper, the problem under these additional constraints is considered. A new upper-bound on the correlation between the outputs of arbitrary pairs of Boolean functions is derived. The bound is presented as a function of the dependency spectrum of the Boolean functions. This is done in several steps.
First, the effective length of an additive Boolean function is defined. Then, we use a method similar to \cite{ComInf2}, and map the Boolean functions to the set of real-valued functions. Using tools in real analysis, we find an additive decomposition of these functions. The decomposition components have well-defined effective lengths. Using the decomposition we find the dependency spectrum of the Boolean function. The dependency spectrum is a generalization of the effective length and is defined for non-additive Boolean functions. Lastly, we use the dependency spectrum to derive the new upper-bound.
The rest of the paper is organized as follows: Section \ref{sec:not} presents the notation used in the paper. Section \ref{sec:eff} develops useful mathematical machinery to analyze Boolean function. Section \ref{sec:corr} contains the main result of the paper. Finally, Section \ref{sec:con} concludes the paper.
\section{Notation}\label{sec:not}
In this section, we introduce the notation used in this paper. We represent random variables by capital letters such as $X, U$. Sets are denoted by calligraphic letters such as $\mathcal{X}, \mathcal{U}$. Particularly, the set of natural numbers and real numbers are shown by $\mathbb{N}$, and $\mathbb{R}$, respectively.
For random variables, the $n$-length vector $(X_1,X_2,\cdots,X_n), X_i\in \mathcal{X}$ is denoted by $X^n\in \mathcal{X}^n$.
The binary string $(i_1,i_2,\cdots,i_n), i_j\in \{0,1\}$ is written as $\mathbf{i}$.
The vector of random variables $(X_{j_1},X_{j_2},\cdots, X_{j_k})$, where $j_1,j_2,\cdots,j_k\in [1,n]$ are distinct, is denoted by $X_{\mathbf{i}}$, where $i_{j_l}=1, \forall l\in [1,k]$.
For two binary strings $\mathbf{i},\mathbf{j}$, we write $\mathbf{i}<\mathbf{j}$ if and only if $i_k<j_k, \forall k\in[1,n]$. For a binary string $\mathbf{i}$ we define $N_{\mathbf{i}}\triangleq w_H(\mathbf{i})$, where $w_H$ denotes the Hamming weight. Lastly, the vector $\sim \mathbf{i}$ is the element-wise complement of $\mathbf{i}$.
\section{The \textit{Dependency Spectrum} of a Function}\label{sec:eff}
In this section, we study the correlation between the output of a Boolean function with subsets of the input. Particularly, we are interested in the answers to questions such as `How strongly does the first element $X_1$ affect the output of $e(X^n)$?' `Is this effect amplified when we take $X_2$ into account as well?' `Is there a subset of random variables that (almost) determines the value of the output?'. We formulate these questions in mathematical terms, and find a characterization of the dependency spectrum of a Boolean function. The dependency spectrum is a vector which captures the correlation between different subsets of the input elements with each element of the output. As an intermediate step, we define the effective length of an additive Boolean function below:
\begin{Definition}
For a Boolean function $e:\{0,1\}^n\to \{0,1\}$ defined by $e(X^n)=\sum_{i\in \mathsf{J}}X_i, \mathsf{J}\subset [1,n] $, where the addition is modulo two, the effective length is defined as the cardinality of the set $\mathsf{J}$.
\end{Definition}
For a general Boolean function (e.g. non-additive), we find a decomposition of ${e}$ into a set of functions ${e}_{\mathbf{i}}, \mathbf{i}\in \{0,1\}^n$ whose effective length is well-defined. First, we provide a mapping from the set of Boolean functions to the set of real functions. This allows us to use the tools available in real analysis to analyze these functions.
Fix a discrete memoryless source $X$, and a Boolean function defined by ${e}:\{0,1\}^n\to \{0,1\}$. Let $P\left(e(X^n)=1\right)=q$. The real-valued function corresponding to $e$ is represented by $\tilde{e}$, and is defined as follows:
\begin{align}
\tilde{e}(X^n)= \begin{cases}
1-q, & \qquad e(X^n)=1, \\
-q. & \qquad\text{otherwise}.
\end{cases}
\end{align}
\begin{Remark}
Note that $\tilde{e}$ has zero mean and variance $q(1-q)$.
\label{Rem:exp_0}
\end{Remark}
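The mapping is easy to realize numerically. The sketch below is a minimal implementation, with the truth table stored in lexicographic order and the i.i.d.\ Bernoulli parameter of the source as assumed inputs; it computes $q=P(e(X^n)=1)$ and the centered function $\tilde{e}$.
\begin{verbatim}
import numpy as np
from itertools import product

def boolean_to_real(e, q_x):
    """Map e: {0,1}^n -> {0,1}, given as a truth table over the 2^n inputs
    in lexicographic order, to e_tilde = e - q with q = P(e(X^n) = 1)."""
    n = int(np.log2(len(e)))
    inputs = list(product([0, 1], repeat=n))
    p = np.array([q_x**sum(x) * (1 - q_x)**(n - sum(x)) for x in inputs])
    q = float(p @ np.asarray(e, dtype=float))
    return np.asarray(e, dtype=float) - q, q

e_tilde, q = boolean_to_real([0, 0, 0, 1], q_x=0.5)  # the binary `and'
print(q, e_tilde)  # 0.25 [-0.25 -0.25 -0.25 0.75]; mean 0, variance q(1-q)
\end{verbatim}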
The random variable $\tilde{e}(X^n)$ has finite variance on the probability space $(\mathcal{X}^n, 2^{\mathcal{X}^n}, P_{X^n})$. The set of all such functions is denoted by $\mathcal{H}_{X,n}$. More precisely, we define $\mathcal{H}_{X,n}\triangleq L_2(\mathcal{X}^n, 2^{\mathcal{X}^n}, P_{X^n})$ as the separable Hilbert space of all measurable functions $\tilde{h}:\mathcal{X}^n\to \mathbb{R}$. Since X is a DMS, the isomorphy relation
\begin{equation}
\mathcal{H}_{X,n}= \mathcal{H}_{X,1}\otimes \mathcal{H}_{X,1}\cdots \otimes \mathcal{H}_{X,1}
\label{eq:Hil_Dec1}
\end{equation}
holds \cite{Reed_and_Simon}, where $\otimes$ indicates the tensor product.
\begin{Example}
Let n=1. The Hilbert space $\mathcal{H}_{X,1}$ is the space of all measurable functions $\tilde{h}:\mathcal{X}\to \mathbb{R}$. The space is spanned by the two linearly independent functions $\tilde{h}_1(X)=\mathbbm{1}(X)$ and $\tilde{h}_2(X)=\mathbbm{1}(\bar{X})$, where $\bar{X}=X\oplus 1$. We conclude that the space is two-dimensional.
\label{Ex:ex1}
\end{Example}
\begin{Remark}
The tensor operation in $\mathcal{H}_{X,n}$ is real multiplication (i.e. $f_1, f_2\in \mathcal{H}_{X,1}: f_1(X_1)\otimes f_2(X_2)\triangleq f_1(X_1)f_2(X_2)$). Let $\{f_i(X)|i\in [1,d]\}$ be a basis for $\mathcal{H}_{X,1}$, then a basis for $\mathcal{H}_{X,n}$ would be the set of all the real multiplications of these basis elements: $\{\Pi_{j\in [1,n]}f_{i_j}(X_j), i_j\in [1,d]\}$.
\end{Remark}
Example \ref{Ex:ex1} gives a decomposition of the space $\mathcal{H}_{X,1}$. Next, we introduce another decomposition of $\mathcal{H}_{X,1}$ which turns out to be very useful. Let $\mathcal{I}_{X,1}$ be the subset of all measurable functions of $X$ which have 0 mean, and let $\gamma_{X,1}$ be the set of constant real functions of $X$. We argue that $\mathcal{H}_{X,1}=\mathcal{I}_{X,1}\oplus \gamma_{X,1}$ gives a decomposition of $\mathcal{H}_{X,1}$.
$\mathcal{I}_{X,1}$ and $\gamma_{X,1}$ are linear subspaces of $\mathcal{H}_{X,1}$. $\mathcal{I}_{X,1}$ is the null space of the linear functional which takes an arbitrary function $\tilde{f}\in \mathcal{H}_{X,1}$ to its expected value $\mathbb{E}_{X}(\tilde{f})$. The null space of any non-zero linear functional is a hyper-space in $\mathcal{H}_{X,1}$. So, $\mathcal{I}_{X,1}$ is a one-dimensional subspace of $\mathcal{H}_{X,1}$. From Remark \ref{Rem:exp_0}, $\tilde{e}_1\in \mathcal{I}_{X,1}$. We conclude that any element of $\mathcal{I}_{X,1}$ can be written as $c\tilde{e}_1(X^n), c\in \mathbb{R}$.
$\gamma_{X,1}$ is also one dimensional. It is spanned by the function $\tilde{g}(X)=1$.
Consider an arbitrary element $\tilde{f}\in \mathcal{H}_{X,1}$. One can write $\tilde{f}= \tilde{f}_1+\tilde{f}_2$ where $\tilde{f}_1=\tilde{f}-\mathbb{E}_{X}(\tilde{f})\in \mathcal{I}_{X,1}$, and $\tilde{f}_2= \mathbb{E}_{X}(\tilde{f})\in \gamma_{X,1}$. \label{Ex:one_dim}
Replacing $\mathcal{H}_{X,1}$ with $\mathcal{I}_{X,1}\oplus \gamma_{X,1}$ in \eqref{eq:Hil_Dec1}, we have:
\begin{align}
\mathcal{H}_{X,n}&=\otimes_{i=1}^n \mathcal{H}_{X,1}=\otimes_{i=1}^n (\mathcal{I}_{X,1}\oplus \gamma_{X,1})\nonumber
\\&\stackrel{(a)}{=} \oplus_{\mathbf{i}\in \{0,1\}^n} (\mathcal{G}_{i_1}\otimes \mathcal{G}_{i_2}\otimes\dotsb \otimes \mathcal{G}_{i_n}),
\label{eq:Hil_Dec2}
\end{align}
where
\begin{align*}
\mathcal{G}_j=
\begin{cases}
\gamma_{X,1} \qquad \ j=0,\\
\mathcal{I}_{X,1} \qquad \ j=1,
\end{cases}
\end{align*}
and, in (a), we have used the distributive property of tensor products over direct sums.
\begin{Remark}
Equation \eqref{eq:Hil_Dec2} can be interpreted as follows: for any $\tilde{e}\in \mathcal{H}_{X,n}, n\in \mathbb{N}$, we can find a decomposition $\tilde{e}=\sum_{\mathbf{i}}\tilde{e}_{\mathbf{i}}$, where $\tilde{e}_{\mathbf{i}}\in \mathcal{G}_{i_1}\otimes \mathcal{G}_{i_2}\otimes\dotsb \otimes \mathcal{G}_{i_n}$. The component $\tilde{e}_{\mathbf{i}}$ can be viewed as the part of $\tilde{e}$ which is only a function of $\{X_{j}\,|\,i_j=1\}$. In this sense, the collection $\{\tilde{e}_{\mathbf{i}}\,|\,\sum_{j\in[1,n]}i_j=k\}$ is the set of components of $\tilde{e}$ whose effective length is $k$.
\label{Rem:Dec}
\end{Remark}
In order to clarify the notation, we provide the following example:
\begin{Example}
Let $X$ be a binary symmetric source, and let $e(X_1,X_2)=X_1\wedge X_2$ be the binary `and' function. The corresponding real function is:
\begin{align*}
\tilde{e}(X_1,X_2)=
\begin{cases}
-\frac{1}{4}, & (X_1,X_2)\neq (1,1),\\
\frac{3}{4}, & (X_1,X_2)=(1,1).
\end{cases}
\end{align*}
Lagrange interpolation gives $\tilde{e}=X_1X_2-\frac{1}{4}$. The decomposition is given by:
\begin{align*}
&\tilde{e}_{1,1}= (X_1-\frac{1}{2})(X_2-\frac{1}{2}), \tilde{e}_{1,0}=\frac{1}{2}(X_1-\frac{1}{2}),
\\&\tilde{e}_{0,1}= \frac{1}{2}(X_2-\frac{1}{2}),\tilde{e}_{0,0}=0.
\end{align*}
The variances of these functions are given below:
\begin{align*}
&Var(\tilde{e})=\frac{3}{16}, Var(\tilde{e}_{0,1})=Var(\tilde{e}_{1,0})=Var(\tilde{e}_{1,1})=\frac{1}{16}.
\end{align*}
As we shall see in the next section, these variances play a major role in determining the correlation preserving properties of $\tilde{e}$. The vector consisting of these variances is called the dependency spectrum of $e$.
From the perspective of effective length, the function $\tilde{e}$ has $\frac{2}{3}$ of its variance distributed between $\tilde{e}_{0,1}$ and $\tilde{e}_{1,0}$, which have effective length one, and $\frac{1}{3}$ of the variance on $\tilde{e}_{1,1}$, which has effective length two.
\end{Example}
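The computations in this example are easily verified numerically; the following short check (a sketch with the component formulas copied from the example, uniform source assumed) confirms that the components sum to $\tilde{e}$ and reproduces the stated variances:
\begin{verbatim}
import numpy as np

# Check of the decomposition for the binary AND example with a symmetric
# source: e~ = X1 X2 - 1/4 splits into the three listed components, each
# of variance 1/16, and Var(e~) = 3/16.
pts = np.array([(x1, x2) for x1 in (0, 1) for x2 in (0, 1)], dtype=float)
p = np.full(4, 0.25)                              # uniform on {0,1}^2

e_t  = pts[:, 0] * pts[:, 1] - 0.25               # e~(X1, X2)
e_11 = (pts[:, 0] - 0.5) * (pts[:, 1] - 0.5)
e_10 = 0.5 * (pts[:, 0] - 0.5)
e_01 = 0.5 * (pts[:, 1] - 0.5)

assert np.allclose(e_t, e_11 + e_10 + e_01)       # components add up to e~
print([float(p @ c ** 2) for c in (e_01, e_10, e_11)])  # 1/16 each
print(float(p @ e_t ** 2))                        # total variance 3/16
\end{verbatim}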
Similar to the above examples, for arbitrary $\tilde{e}\in \mathcal{H}_{X,n}, n\in \mathbb{N}$, we find a decomposition $\tilde{e}=\sum_{\mathbf{i}}\tilde{e}_{\mathbf{i}}$, where $\tilde{e}_{\mathbf{i}}\in \mathcal{G}_{i_1}\otimes \mathcal{G}_{i_2}\otimes\dotsb \otimes \mathcal{G}_{i_n}$. We characterize $\tilde{e}_{\mathbf{i}}$ in terms of products of the basis elements of $\otimes_{j\in[1,n]}\mathcal{G}_{i_j}$ using the following result in linear algebra:
\begin{Lemma}[\cite{Reed_and_Simon}]
Let $\mathcal{H}_{i},i \in[1,n]$ be vector spaces over a field $F$. Also, let $\mathcal{B}_{i}=\{v_{i,j}|j\in [1,d_i]\}$ be a basis for $\mathcal{H}_i$, where $d_i$ is the dimension of $\mathcal{H}_i$. Then, any element $v\in \otimes_{i\in [1,n]}\mathcal{H}_i$ can be written as $v=\sum_{j_1\in [1,d_1]}\sum_{j_2\in [1,d_2]}\cdots \sum_{j_n\in [1,d_n]} c_{j^n}\, v_{1,j_1}\otimes v_{2,j_2}\otimes\cdots \otimes v_{n,j_n}$.
\label{Lem:tensor_dec}
\end{Lemma}
Since each $\mathcal{G}_{i_j}$, $j\in [1,n]$, is either $\mathcal{I}_{X,1}$ or $\gamma_{X,1}$, they are all one-dimensional. For the binary source $X$ with $P(X=1)=q$, define $\tilde{h}$ as:
\begin{align}
\tilde{h}(X)= \begin{cases}
1-q, & \text{if } X=1, \\
-q, & \text{if } X=0.
\end{cases}
\label{eq:basis}
\end{align}
Then, the single-element set $\{\tilde{h}(X)\}$ is a basis for $\mathcal{I}_{X,1}$. Also, the constant function $\tilde{g}(X)=1$ spans $\gamma_{X,1}$. So, using Lemma \ref{Lem:tensor_dec}, $\tilde{e}_{\mathbf{i}}(X^n)= c_{\mathbf{i}}\prod_{t:i_t=1}\tilde{h}(X_{{t}})$ for some $c_{\mathbf{i}}\in \mathbb{R}$. We are interested in the variance of the $\tilde{e}_{\mathbf{i}}$'s. In the next proposition, we show that the $\tilde{e}_{\mathbf{i}}$'s are uncorrelated and we derive an expression for the variance of $\tilde{e}_{\mathbf{i}}$.
\begin{Proposition}
\label{pr:partfun}
Define $\mathbf{P}_{\mathbf{i}}$ as the variance of $\tilde{e}_{\mathbf{i}}$. The following hold:
\\ 1) $\mathbb{E}(\tilde{e}_{\mathbf{i}}\tilde{e}_{\mathbf{j}})=0, \mathbf{i}\neq \mathbf{j}$, in other words $\tilde{e}_{\mathbf{i}}$'s are uncorrelated.
\\ 2) $\mathbf{P}_{\mathbf{i}}=\mathbb{E}(\tilde{e}_{\mathbf{i}}^2)=c_{\mathbf{i}}^2(q(1-q))^{w_{H}(\mathbf{i})}$, where $w_H(\mathbf{i})$ denotes the Hamming weight of $\mathbf{i}$.
\end{Proposition}
\begin{proof}
Part 1) follows by direct calculation, and part 2) follows from the independence of the $X_i$'s.
\end{proof}
Next, we find the characterization for $\tilde{e}_{\mathbf{i}}$.
\begin{Lemma}
$\tilde{e}_{\mathbf{i}}=\mathbb{E}_{X^n|X_{\mathbf{i}}}(\tilde{e}|X_{\mathbf{i}})-\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}}$ gives the unique orthogonal decomposition of $\tilde{e}$ into the Hilbert spaces $\mathcal{G}_{i_1}\otimes \mathcal{G}_{i_2}\otimes\cdots\otimes \mathcal{G}_{i_n}, \mathbf{i}\in \{0,1\}^n$. Here $X_{\mathbf{i}}$ denotes $(X_t)_{t:i_t=1}$, $\mathbf{j}\leq \mathbf{i}$ denotes the componentwise partial order on $\{0,1\}^n$, and $\mathbf{j}<\mathbf{i}$ means $\mathbf{j}\leq \mathbf{i}$, $\mathbf{j}\neq \mathbf{i}$.
\label{Lem:unique}
\end{Lemma}
\begin{IEEEproof}
Please refer to the Appendix.
\end{IEEEproof}
The following example clarifies the notation used in Lemma \ref{Lem:unique}.
\begin{Example}
Consider the case where $n=2$. We have the following decomposition of $\mathcal{H}_{X,2}$:
\begin{align}
& \mathcal{H}_{X,2}=(\mathcal{I}_{X,1} \otimes \mathcal{I}_{X,1}) \oplus\nonumber \\
&(\mathcal{I}_{X,1} \otimes \gamma_{X,1})\oplus (\gamma_{X,1}\otimes\mathcal{I}_{X,1}) \oplus (\gamma_{X,1}\otimes\gamma_{X,1}).
\label{eq:2dimdec}
\end{align}
Let $\tilde{e}(X_1,X_2)$ be an arbitrary function in $\mathcal{H}_{X,2}$. The unique decomposition of $\tilde{e}$ in the form given in \eqref{eq:2dimdec} is as follows:
\begin{align*}
\tilde{e}&= \tilde{e}_{1,1}+\tilde{e}_{1,0}+\tilde{e}_{0,1}+\tilde{e}_{0,0},\\
&\tilde{e}_{1,1}= \tilde{e}-\mathbb{E}_{X_2|X_1}(\tilde{e}|X_1)-\mathbb{E}_{X_1|X_2}(\tilde{e}|X_2)+\mathbb{E}_{X_1,X_2}(\tilde{e})
\\& \tilde{e}_{1,0}=\mathbb{E}_{X_2|X_1}(\tilde{e}|X_1)-\mathbb{E}_{X_1,X_2}(\tilde{e}),
\\& \tilde{e}_{0,1}= \mathbb{E}_{X_1|X_2}(\tilde{e}|X_2)-\mathbb{E}_{X_1,X_2}(\tilde{e}),
\\&\tilde{e}_{0,0}=\mathbb{E}_{X_1,X_2}(\tilde{e}).
\end{align*}
It is straightforward to show that each of the $\tilde{e}_{i,j}$'s, $i,j\in \{0,1\}$, belongs to its corresponding subspace. For instance, $ \tilde{e}_{0,1}$ is constant in $X_1$, and is a zero-mean function of $X_2$ (i.e.\ $\mathbb{E}_{X_2}\left(\tilde{e}_{0,1}(x_1,X_2)\right)=0, x_1\in \{0,1\}$), so $\tilde{e}_{0,1}\in \gamma_{X,1}\otimes\mathcal{I}_{X,1}$.
\end{Example}
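For completeness, here is a sketch of the decomposition of Lemma \ref{Lem:unique} for a uniform source (the function name \texttt{decompose} is hypothetical); it recovers the components of the AND example above:
\begin{verbatim}
import itertools
import numpy as np

def decompose(e_tilde, n):
    """Orthogonal decomposition e~ = sum_i e~_i via the lemma's recursion,
    e~_i = E(e~ | X_i) - sum_{j<i} e~_j, for a uniform binary source.
    `e_tilde` holds the values of e~ on {0,1}^n in itertools.product
    order; the result maps each index vector i to the values of e~_i."""
    points = list(itertools.product((0, 1), repeat=n))
    comps = {}
    # process index vectors in order of increasing Hamming weight so that
    # every j < i is already available when i is reached
    for i in sorted(itertools.product((0, 1), repeat=n), key=sum):
        cond = np.empty(len(points))
        for a, x in enumerate(points):
            # E(e~ | X_i = x_i): average over points agreeing with x
            # on the coordinates t with i_t = 1
            agree = [b for b, y in enumerate(points)
                     if all(y[t] == x[t] for t in range(n) if i[t])]
            cond[a] = e_tilde[agree].mean()
        lower = sum(comps[j] for j in comps
                    if j != i and all(jt <= it for jt, it in zip(j, i)))
        comps[i] = cond - lower
    return comps

# Recover the components of the AND example (e~ = X1 X2 - 1/4).
pts = list(itertools.product((0, 1), repeat=2))
et = np.array([x1 * x2 - 0.25 for x1, x2 in pts])
for i, c in decompose(et, 2).items():
    print(i, c)
\end{verbatim}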
The following proposition describes some of the properties of $\tilde{e}_{\mathbf{i}}$ which were derived in the proof of Lemma \ref{Lem:unique}:
\begin{Proposition}
\label{prop:belong2}
The following hold:
\\ 1) $\forall\mathbf{i}: \mathbb{E}_{X^n}(\tilde{e}_{\mathbf{i}})=0$.\\
2) $\forall \mathbf{i}\leq \mathbf{k}$, we have $\mathbb{E}_{X^n|X_{\mathbf{k}}}(\tilde{e}_{\mathbf{i}}|X_{\mathbf{k}})=\tilde{e}_{\mathbf{i}}$.\\
3) $\mathbb{E}_{X^n}(\tilde{e}_{\mathbf{i}}\tilde{e}_{\mathbf{k}})=0$, for $\mathbf{i}\neq \mathbf{k}$.\\
4) $\mathbb{E}_{X^n|X_{\mathbf{k}}}(\tilde{e}_{\mathbf{i}}|X_{\mathbf{k}})=0$ whenever $\mathbf{i}\not\leq \mathbf{k}$.
\end{Proposition}
Lastly, we derive an expression for $\mathbf{P}_{\mathbf{i}}$:
\begin{Lemma}
For arbitrary $e:\{0,1\}^n\to \{0,1\}$, let $\tilde{e}$ be the corresponding real function, and let $\tilde{e}=\sum_{\mathbf{i}}\tilde{e}_{\mathbf{i}}$ be the decomposition in the form of Equation \eqref{eq:Hil_Dec2}. The variance of each component in the decomposition is given by the following recursive formula $\mathbf{P}_{\mathbf{i}} =\mathbb{E}_{X_{\mathbf{i}}}(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}}))-\sum_{\mathbf{j}< \mathbf{i}}\mathbf{P}_{\mathbf{j}}, \forall \mathbf{i}\in \mathbb{F}_2^n$, where $\mathbf{P}_{\underline{0}}\triangleq 0$.
\label{Lem:power}
\end{Lemma}
\begin{IEEEproof}
\begin{align*}
\mathbf{P}_{\mathbf{i}}
&=Var_{X_{\mathbf{i}}}(\tilde{e}_{\mathbf{i}}(X^n))
= \mathbb{E}_{X_{\mathbf{i}}}(\tilde{e}^2_{\mathbf{i}}(X^n))-\mathbb{E}_{X_{\mathbf{i}}}^2(\tilde{e}_{\mathbf{i}}(X^n))\\
& \stackrel{(a)}{=} \mathbb{E}_{X_{\mathbf{i}}}\left(\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}(\tilde{e}|X_{\mathbf{i}})-\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}}\right)^2\right)-0
\\&=\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}(\tilde{e}|X_{\mathbf{i}})\tilde{e}_{\mathbf{j}}\right)+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&\stackrel{(b)}=\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}(\sum_{\mathbf{l}}\tilde{e}_{\mathbf{l}}|X_{\mathbf{i}})\tilde{e}_{\mathbf{j}}\right)
\\&+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&\stackrel{(c)}{=}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}\left(\sum_{\mathbf{l}}\mathbb{E}_{X^n|X_{\mathbf{i}}}(\tilde{e}_{\mathbf{l}}|X_{\mathbf{i}})\tilde{e}_{\mathbf{j}}\right)
\\&+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&\stackrel{(d)}{=}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}\left(\sum_{\mathbf{l}}\mathbbm{1}(\mathbf{l}\leq \mathbf{i})\mathbb{E}_{X^n|X_{\mathbf{i}}}(\tilde{e}_{\mathbf{l}}|X_{\mathbf{i}})\tilde{e}_{\mathbf{j}}\right)
\\&+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&\stackrel{(e)}{=}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}\left(\sum_{\mathbf{l}<\mathbf{i}}\tilde{e}_{\mathbf{l}}\tilde{e}_{\mathbf{j}}\right)+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&\stackrel{(f)}{=}\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\sum_{\mathbf{l}<\mathbf{i}}\mathbbm{1}(\mathbf{j}=\mathbf{l})\mathbb{E}_{X_{\mathbf{i}}}\left(\tilde{e}_{\mathbf{l}}\tilde{e}_{\mathbf{j}}\right)+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&=\mathbb{E}_{X_{\mathbf{i}}}\left(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}})\right)-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{j}}}(\tilde{e}^2_{\mathbf{j}})+\mathbb{E}_{X_{\mathbf{i}}}((\sum_{\mathbf{j}< \mathbf{i}} \tilde{e}_{\mathbf{j}})^2)
\\&=\mathbb{E}_{X_{\mathbf{i}}}(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}}))-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{j}}}(\tilde{e}^2_{\mathbf{j}})+\sum_{\mathbf{j}< \mathbf{i}} \sum_{\mathbf{k}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{i}}}(\tilde{e}_{\mathbf{j}}\tilde{e}_{\mathbf{k}})
\\&
\stackrel{(g)}{=}\mathbb{E}_{X_{\mathbf{i}}}(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}}))-2\sum_{\mathbf{j}<\mathbf{i}}\mathbb{E}_{X_{\mathbf{j}}}(\tilde{e}^2_{\mathbf{j}})+\sum_{\mathbf{j}< \mathbf{i}} \sum_{\mathbf{k}<\mathbf{i}}\mathbbm{1}(\mathbf{j}=\mathbf{k})\mathbb{E}_{X_{\mathbf{i}}}(\tilde{e}^2_{\mathbf{j}})
\\&=\mathbb{E}_{X_{\mathbf{i}}}(\mathbb{E}_{X^n|X_{\mathbf{i}}}^2(\tilde{e}|X_{\mathbf{i}}))-\sum_{\mathbf{j}< \mathbf{i}}\mathbf{P}_{\mathbf{j}},
\end{align*}
where (a) follows from 1) in Proposition \ref{prop:belong2}, (b) follows from the decomposition in Equation \eqref{eq:Hil_Dec2}, (c) uses linearity of expectation, (d) uses 4) in Proposition \ref{prop:belong2}, (e) holds from 2) in Proposition \ref{prop:belong2}, and in (f) and (g) we have used 3) in Proposition \ref{prop:belong2}.
\end{IEEEproof}
\begin{Corollary}
Let $e:\{0,1\}^n\to \{0,1\}$ be arbitrary with corresponding real function $\tilde{e}$ and decomposition $\tilde{e}=\sum_{\mathbf{j}}\tilde{e}_{\mathbf{j}}$, and let the variance of $\tilde{e}$ be denoted by $\mathbf{P}$. Then, $\mathbf{P}=\sum_{\mathbf{j}}\mathbf{P}_{\mathbf{j}}$.
\label{Cor:power}
\end{Corollary}
The corollary is a special case of Lemma \ref{Lem:power}, where we have taken $\mathbf{i}$ to be the all ones vector.
The following provides a definition of the dependency spectrum of a Boolean function:
\begin{Definition}[Dependency Spectrum]
For a Boolean function $e$, the vector of variances $(\mathbf{P}_{\mathbf{i}})_{\mathbf{i}\in \{0,1\}^n}$ is called the dependency spectrum of $e$.
\end{Definition}
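A direct implementation of the recursion in Lemma \ref{Lem:power} gives the dependency spectrum; the sketch below (uniform source assumed, \texttt{dependency\_spectrum} a hypothetical name) also verifies Corollary \ref{Cor:power} for the AND example:
\begin{verbatim}
import itertools
import numpy as np

def dependency_spectrum(e_vals, n):
    """Dependency spectrum (P_i) of a Boolean function via the recursion
    P_i = E[ E(e~|X_i)^2 ] - sum_{j<i} P_j, for a uniform binary source.
    `e_vals` lists e on {0,1}^n in itertools.product order."""
    points = list(itertools.product((0, 1), repeat=n))
    et = np.asarray(e_vals, dtype=float)
    et = et - et.mean()                  # centred real version e~ (uniform source)
    P = {}
    for i in sorted(itertools.product((0, 1), repeat=n), key=sum):
        # E(e~ | X_i): average over coordinates outside the support of i
        cond = np.array([
            np.mean([et[b] for b, y in enumerate(points)
                     if all(y[t] == x[t] for t in range(n) if i[t])])
            for x in points])
        below = sum(P[j] for j in P
                    if j != i and all(jt <= it for jt, it in zip(j, i)))
        P[i] = float(np.mean(cond ** 2)) - below
    return P

# AND on two uniform bits: P = 1/16 on each non-zero index, and the
# corollary's identity sum_i P_i = Var(e~) = 3/16 holds.
spec = dependency_spectrum([0, 0, 0, 1], 2)
print(spec, sum(spec.values()))
\end{verbatim}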
In the next section, we will use the dependency spectrum to upper-bound the maximum correlation between the outputs of two arbitrary Boolean functions.
\section{Correlation Preservation in Arbitrary Functions}\label{sec:corr}
We proceed with presenting the main result of this paper. Let $(X,Y)$ be a pair of DMS's. Consider two arbitrary Boolean functions $e:\mathcal{X}^n\to \{0,1\}$ and $f:\mathcal{Y}^n\to \{0,1\}$. Let $ q\triangleq P(e=1)$ and $r\triangleq P(f=1)$. Let $\tilde{e}=\sum_{\mathbf{i}}\tilde{e}_{\mathbf{i}}$ and $\tilde{f}=\sum_{\mathbf{i}}\tilde{f}_{\mathbf{i}}$ give the decompositions of these functions as defined in the previous section.
The following theorem provides an upper-bound on the probability of equality of $e(X^n)$ and $f(Y^n)$.
\begin{Theorem}
Let $\epsilon\triangleq P(X\neq Y)$. Then the following bounds hold:
\begin{align*}
&2\!\!\sqrt{\sum_{\mathbf{i}}\mathbf{P}_{\mathbf{i}}}\sqrt{\sum_{\mathbf{i}}\mathbf{Q}_{\mathbf{i}}}-2\!\!\sum_{\mathbf{i}}C_\mathbf{i}\mathbf{P}_{\mathbf{i}}^{\frac{1}{2}}\mathbf{Q}_{\mathbf{i}}^{\frac{1}{2}}
\leq \!\!P(e(X^n)\neq f(Y^n))
\\&\leq 1- 2\sqrt{\sum_{\mathbf{i}}\mathbf{P}_{\mathbf{i}}}\sqrt{\sum_{\mathbf{i}}\mathbf{Q}_{\mathbf{i}}}+2\sum_{\mathbf{i}}C_\mathbf{i}\mathbf{P}_{\mathbf{i}}^{\frac{1}{2}}\mathbf{Q}_{\mathbf{i}}^{\frac{1}{2}}
,
\end{align*}
where $C_{\mathbf{i}}\triangleq (1-2\epsilon)^{N_\mathbf{i}}$ with $N_{\mathbf{i}}\triangleq w_H(\mathbf{i})$, $\mathbf{P}_{\mathbf{i}}$ is the variance of $\tilde{e}_{\mathbf{i}}$ for the real function ${\tilde{e}}$ corresponding to ${e}$, and $\mathbf{Q}_{\mathbf{i}}$ is the variance of $\tilde{f}_{\mathbf{i}}$.
\label{th:sec3}
\end{Theorem}
\begin{IEEEproof}
Please refer to the appendix.
\end{IEEEproof}
\begin{Remark}
$C_{\mathbf{i}}$ is decreasing in $N_{\mathbf{i}}$. So, in order to decrease $ P(e(X^n)\neq f(Y^n))$, most of the variance should be distributed on the $\tilde{e}_\mathbf{i}$ which have lower $N_{\mathbf{i}}$ (i.e.\ operate on smaller blocks). In particular, the lower bound is minimized by setting
\begin{align*}
\mathbf{P}_{\mathbf{i}}=
\begin{cases}
1 \qquad & \mathbf{i}=\mathbf{i}_1,\\
0 \qquad & \text{otherwise}.
\end{cases}
\end{align*}
This recovers the result in \cite{ComInf2}.
\end{Remark}
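As a sanity check of Theorem \ref{th:sec3}, the following Monte Carlo sketch (an illustrative construction: $e=f$ the AND function on two uniform bits, with $Y$ obtained from $X$ by independent flips with probability $\epsilon$, and the spectrum taken from the earlier example) compares the empirical $P(e(X^n)\neq f(Y^n))$ with the two bounds:
\begin{verbatim}
import numpy as np

# Monte Carlo check of the theorem's bounds for e = f = AND on n = 2
# uniform bits, where Y flips each coordinate of X independently with
# probability eps (so P(X != Y) = eps).  Spectrum of AND: P_i = 1/16
# for each non-zero index vector i.
rng = np.random.default_rng(0)
eps, trials = 0.1, 200_000
P = {(0, 1): 1 / 16, (1, 0): 1 / 16, (1, 1): 1 / 16}

X = rng.integers(0, 2, size=(trials, 2))
Y = X ^ (rng.random((trials, 2)) < eps)
p_neq = np.mean((X[:, 0] & X[:, 1]) != (Y[:, 0] & Y[:, 1]))

tot = sum(P.values())                                    # sum P = sum Q here
cross = sum((1 - 2 * eps) ** sum(i) * P[i] for i in P)   # C_i sqrt(P_i Q_i)
lower = 2 * tot - 2 * cross
upper = 1 - 2 * tot + 2 * cross
print(lower, float(p_neq), upper)   # 0.095 <= ~0.095 <= 0.905
\end{verbatim}
In this particular configuration the empirical value essentially matches the lower bound up to Monte Carlo error.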
We have derived a relation between the dependency spectrum of a Boolean function and its correlation preserving properties. This can be used in a variety of disciplines. For example, in communication problems, cooperation among different nodes in a network requires correlated outputs, which can be linked to the dependency spectrum through the results derived here. On the other hand, there are restrictions on the dependency spectrum based on the rate-distortion requirements (better performance requires larger effective lengths). We investigate this in \cite{arxiv1}, and show that the large-blocklength single-letter coding strategies used in networks are sub-optimal in various problems.
\section{Conclusion}\label{sec:con}
We derived a new bound on the maximum correlation between Boolean functions operating on pairs of sequences of random variables. The bound was presented as a function of the dependency spectrum of the functions. We developed a new mathematical apparatus for analyzing Boolean functions, providing formulas for decomposing a Boolean function into additive components and for calculating the dependency spectrum of these functions. The new bound has wide-ranging applications in security, control and information theory.
\section{Introduction}
\label{sec:intro}
The pair correlation function is commonly considered the most
informative second-order summary statistic of a spatial point process
\citep{stoyan:stoyan:94,moeller:waagepetersen:03,illian:penttinen:stoyan:stoyan:08}.
Non-parametric estimates of the pair correlation function are useful
for assessing regularity or clustering of a spatial point pattern and
can moreover be used for inferring parametric models for spatial point
processes via minimum contrast estimation
\citep{stoyan:stoyan:96,illian:penttinen:stoyan:stoyan:08}.
Although alternatives exist \citep{yue:loh:13}, kernel estimation
is by far the most popular approach
\citep{stoyan:stoyan:94,moeller:waagepetersen:03,illian:penttinen:stoyan:stoyan:08}
which is closely related to kernel estimation of probability
densities.
Kernel estimation is computationally fast and works well except at small
spatial lags. For spatial lags close to zero, kernel estimators suffer
from strong bias, see e.g.\ the discussion at page 186 in
\cite{stoyan:stoyan:94}, Example~4.7 in
\cite{moeller:waagepetersen:03} and Section 7.6.2 in \cite{baddeley:rubak:turner:15}.
The bias is a major drawback if one attempts
to infer a parametric model from the non-parametric estimate since the
behavior
near zero is important for
determining the right parametric model \citep{jalilian:guan:waagepetersen:13}.
In this paper we adapt orthogonal series density estimators
\citep[see e.g.\ the reviews in][]{hall:87,efromovich:10}
to the estimation of the pair correlation function. We derive unbiased
estimators of the coefficients in an orthogonal series expansion of the
pair correlation function and propose a criterion for choosing a certain
optimal smoothing scheme. In the literature on orthogonal series
estimation of probability densities, the data are usually
assumed to consist of indendent observations from the unknown
target density. In our case the situation is more complicated as the
data used for estimation consist of spatial lags between observed pairs of
points. These lags are neither independent nor identically distributed and the sample of lags is
biased due to edge effects.
We establish consistency and asymptotic normality of our new
orthogonal series estimator and study its performance in a simulation
study and an application to a tropical rain forest data set.
\section{Background}
\label{sec:background}
\subsection{Spatial point processes}
We denote by $X$ a point process on ${\mathbb R}^d$, $d \ge 1$, that is, $X$ is
a locally finite random subset of ${\mathbb R}^d$. For $B\subseteq {\mathbb R}^d$, we let $N(B)$
denote the random number of points in $X \cap B$. That $X$ is locally finite
means that $N(B)$ is finite almost surely whenever $B$ is bounded. We
assume that $X$ has an intensity function $\rho$ and a second-order
joint intensity $\rho^{(2)}$ so that for bounded $A,B \subset {\mathbb R}^d$,
\begin{equation}
E\{N(B)\} = \int_B \rho(u) {\mathrm{d}} u, \quad
E\{N(A) N(B)\} = \int_{A\cap B} \rho(u) {\mathrm{d}} u
+ \int_A \int_B \rho^{(2)}(u,v) {\mathrm{d}} u {\mathrm{d}} v. \label{eq:moments}
\end{equation}
The pair correlation function $g$ is defined as
$g({u}}%{{\mathbf{u}},{v}}%{{\mathbf{v}}) = \rho^{(2)}({u}}%{{\mathbf{u}},{v}}%{{\mathbf{v}})/\{\rho({u}}%{{\mathbf{u}}) \rho({v}}%{{\mathbf{v}})\}$
whenever $\rho({u}}%{{\mathbf{u}})\rho({v}}%{{\mathbf{v}})>0$ (otherwise we define $g({u}}%{{\mathbf{u}},{v}}%{{\mathbf{v}})=0$).
By \eqref{eq:moments},
\[
\text{cov}\{ N(A), N(B) \} = \int_{A \cap B} \rho({u}}%{{\mathbf{u}}) {\mathrm{d}} {u}}%{{\mathbf{u}} +
\int_{A}\int_{B} \rho({u}}%{{\mathbf{u}})\rho({v}}%{{\mathbf{v}})\big\{ g({v}}%{{\mathbf{v}},{u}}%{{\mathbf{u}}) - 1 \big\} {\mathrm{d}}{u}}%{{\mathbf{u}}{\mathrm{d}}{v}}%{{\mathbf{v}}
\]
for bounded $A,B \subset {\mathbb R}^d$.
Hence, given the intensity function, $g$ determines
the covariances of count variables $N(A)$ and $N(B)$. Further, for
locations $u,v \in {\mathbb R}^d$, $g({u}}%{{\mathbf{u}},{v}}%{{\mathbf{v}})>1$ ($<1$)
implies that the presence of a point at ${v}}%{{\mathbf{v}}$ yields an elevated
(decreased) probability of observing yet another point in a small
neighbourhood of ${u}}%{{\mathbf{u}}$ \cite[e.g.\ ][]{coeurjolly:moeller:waagepetersen:15}.
In this paper we assume that $g$ is isotropic, i.e.\ with
an abuse of notation, $g({u}}%{{\mathbf{u}},{v}}%{{\mathbf{v}})=g(\|{v}}%{{\mathbf{v}}-{u}}%{{\mathbf{u}}\|)$.
Examples of pair correlation functions are shown in Figure~\ref{fig:gfuns}.
\subsection{Kernel estimation of the pair correlation function}
Suppose $X$ is observed within a bounded observation window $W
\subset {\mathbb R}^d$ and let $X_W= X \cap W$. Let $k_b(\cdot)$ be a
kernel of the form $k_b(r)=k(r/b)/b$, where $k$ is a
probability density
and
$b>0$ is the bandwidth.
Then a kernel density
estimator \citep{stoyan:stoyan:94,baddeley:moeller:waagepetersen:00} of $g$ is
\[
\hat{g}_k(r;b) = \frac{1}{{\text{sa}_d} r^{d-1}}
\sum_{u,v\in X_{W}}^{\neq}
\frac{ k_{b}(r - \|v - u\|)}{ \rho(u) \rho(v)|W \cap W_{v-u}|},
\quad r\geq 0,
\]
where ${\text{sa}_d}$ is the surface area of the unit sphere in ${\mathbb R}^d$,
$\sum^{\neq}$ denotes sum over all distinct points,
$1/|W \cap W_{h}|$, $h \in {\mathbb R}^d$,
is the translation edge correction factor with
$W_{h}=\{u-h: u\in W\}$, and $|A|$ is the volume (Lebesgue measure)
of $A\subset{\mathbb R}^{d}$.
Variations of this include \citep{guan:leastsq:07}
\[
\hat{g}_d(r;b) =\frac{1}{{\text{sa}_d}} \sum_{u,v\in X_{W}}^{\neq}
\frac{ k_{b}(r - \|v - u\|) }{ \|v - u\|^{d-1} \rho(u) \rho(v)|W \cap W_{v-u}|},
\quad r \geq 0
\]
and the bias corrected estimator \citep{guan:leastsq:07}
\[
\hat g_c(r;b) = \hat g_d(r;b) / c(r;b), \quad
c(r;b) = \int_{-b}^{\min\{r,b\}} k_b(t) {\mathrm{d}} t,
\]
assuming $k$ has bounded support $[-1,1]$.
Regarding the choice of kernel,
\cite{illian:penttinen:stoyan:stoyan:08}, p.~230, recommend to use the
uniform kernel $k(r)=\mathbbm{1}(|r|\le 1)/2$, where $\mathbbm{1}(\, \cdot\, )$
denotes the indicator function, but the Epanechnikov kernel $k(r)=(3/4)(1 - r^2)\mathbbm{1}(|r|\leq1)$
is another common choice.
The choice of the bandwidth $b$ highly affects
the bias and variance of the kernel estimator.
In the planar ($d=2$) stationary case,
\cite{illian:penttinen:stoyan:stoyan:08}, p.~236,
recommend $b=0.10/\surd{\hat{\rho}}$ based on practical experience where $\hat{\rho}$ is an estimate
of the constant intensity. The default in \texttt{spatstat} \citep{baddeley:rubak:turner:15}, following
\cite{stoyan:stoyan:94}, is to use
the Epanechnikov kernel with $b=0.15/\surd{\hat{\rho}}$.
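As a concrete reference point, the following is a minimal sketch of $\hat g_k$ in the planar stationary case, with the Epanechnikov kernel and the default bandwidth above; the constant intensity is estimated by the number of points divided by the window area, and the function name \texttt{pcf\_kernel} is an illustrative choice:
\begin{verbatim}
import numpy as np

def pcf_kernel(pts, w1, w2, r):
    """Sketch of the translation-corrected kernel estimator g_k(r; b) for
    a pattern in the rectangle [0,w1] x [0,w2] (d = 2), with constant
    intensity estimated by m/|W|, the Epanechnikov kernel and the
    rule-of-thumb bandwidth b = 0.15 / sqrt(rho_hat)."""
    m = len(pts)
    rho = m / (w1 * w2)
    b = 0.15 / np.sqrt(rho)
    diff = pts[None, :, :] - pts[:, None, :]        # pairwise lags v - u
    iu = np.triu_indices(m, 1)                      # each unordered pair once
    h = diff[iu]
    d = np.hypot(h[:, 0], h[:, 1])
    # translation edge correction |W ∩ W_{v-u}| for a rectangular window
    area = (w1 - np.abs(h[:, 0])) * (w2 - np.abs(h[:, 1]))
    g = np.empty_like(r)
    for j, rj in enumerate(r):
        u = (rj - d) / b
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2) / b, 0.0)
        # sa_2 = 2*pi; factor 2 since the double sum counts ordered pairs
        g[j] = 2 * np.sum(k / area) / (2 * np.pi * rj * rho ** 2)
    return g

rng = np.random.default_rng(1)
r = np.linspace(0.02, 0.25, 24)
print(pcf_kernel(rng.uniform(0, 1, (200, 2)), 1.0, 1.0, r).round(2))  # ~1
\end{verbatim}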
\cite{guan:composite:07} and \cite{guan:leastsq:07}
suggest to choose $b$ by composite
likelihood cross validation or by
minimizing an estimate of the mean integrated squared error defined over some interval $I$ as
\begin{equation}\label{eq:mise}
\textsc{mise}(\hat{g}_m, w) ={\text{sa}_d} \int_I
E\big\{ \hat{g}_m(r;b) - g(r)\big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r,
\end{equation}
where $\hat g_{m}$, $m=k,d,c$, is one of the aforementioned kernel
estimators, $w\ge 0$ is a weight function and ${r_{\min}} \ge 0$.
With $I=(0,R)$, $w(r)=r^{d-1}$ and $r_{\min}=0$,
\cite{guan:leastsq:07} suggests to estimate the mean integrated squared error by
\begin{equation}\label{eq:ywcv}
M(b) = {\text{sa}_d} \int_{0}^{R}
\big\{ \hat{g}_{m}(r;b) \big\}^2 r^{d-1} {\mathrm{d}} r
-2 \sum_{\substack{u,v\in X_{W}\\ \|v-u\| \le R}}^{\neq}
\frac{\hat{g}_{m}^{-\{u,v\}}(\|v-u\|;b)}{
\rho(u) \rho(v)|W \cap W_{v-u}|},
\end{equation}
where $\hat{g}_m^{-\{u,v\}}$, $m=k,d,c$, is defined as $\hat g_m$
but based on the reduced data $(X \setminus \{u,v\}) \cap
W$. \cite{loh:jang:10} instead use a spatial bootstrap for
estimating \eqref{eq:mise}. We return to \eqref{eq:ywcv} in Section~\ref{sec:miseest}.
\section{Orthogonal series estimation}\label{sec:ose}
\subsection{The new estimator}
For an $R>0$, the new orthogonal series estimator of $g(r)$,
$0\le {r_{\min}} < r < {r_{\min}} + R$, is based on an orthogonal series
expansion of $g(r)$ on $({r_{\min}}, {r_{\min}} + R)$ :
\begin{equation}\label{eq:expansion}
g(r) = \sum_{k=1}^{\infty} \theta_k \phi_k(r-{r_{\min}}),
\end{equation}
where $\{\phi_k\}_{k \ge 1}$ is an orthonormal basis of functions on
$(0, R)$ with respect to some weight function $w(r) \ge 0$, $r \in (0, R)$.
That is, $\int_{0}^R \phi_k(r) \phi_l(r) w(r) {\mathrm{d}} r = \mathbbm{1}(k=l)$
and the coefficients in the expansion are given by $\theta_k
=\int_{0}^{R} g(r+{r_{\min}}) \phi_k(r) w(r) {\mathrm{d}} r$.
For the cosine basis, $w(r)=1$ and $\phi_1(r) = 1/\surd{R}$, $\phi_k(r)= (2/R)^{1/2} \cos\{ (k - 1) \pi r/R \}$, $k \ge 2$. Another example is the Fourier-Bessel basis with $w(r)= r^{d-1}$
and $ \phi_k(r)=2^{1/2}J_{\nu}\left(r \alpha_{\nu,k}/R
\right)r^{-\nu}/\{ RJ_{\nu+1}(\alpha_{\nu,k})\}$, $k \ge 1$,
where $\nu=(d-2)/2$, $J_{\nu}$ is the Bessel function of the first kind of
order $\nu$, and $\{\alpha_{\nu,k}\}_{k=1}^\infty$ is the sequence of
successive positive roots of $J_{\nu}(r)$.
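A numerical sketch of the two bases for $d=2$ (so $\nu=0$) follows, using \texttt{scipy} for the Bessel function and its roots, together with a check of orthonormality with respect to the respective weights; the interval length $R$ below is an arbitrary illustrative value:
\begin{verbatim}
import numpy as np
from scipy.special import jv, jn_zeros

R = 0.125   # length of the estimation interval (illustrative value)

def cosine_basis(k, r):
    """Cosine basis on (0, R), orthonormal w.r.t. w(r) = 1."""
    if k == 1:
        return np.full_like(r, 1 / np.sqrt(R))
    return np.sqrt(2 / R) * np.cos((k - 1) * np.pi * r / R)

def bessel_basis(k, r):
    """Fourier-Bessel basis for d = 2 (nu = 0), orthonormal w.r.t.
    w(r) = r; alpha_{0,k} is the k-th positive root of J_0."""
    a = jn_zeros(0, k)[-1]
    return np.sqrt(2) * jv(0, r * a / R) / (R * jv(1, a))

# Numerical check of orthonormality: int_0^R phi_j phi_k w dr = 1(j = k).
r = np.linspace(1e-9, R, 100_001)
for j in (1, 2, 3):
    row = [np.trapz(bessel_basis(j, r) * bessel_basis(k, r) * r, r)
           for k in (1, 2, 3)]
    print([round(v, 3) for v in row])          # rows of the identity matrix
print(round(float(np.trapz(cosine_basis(2, r) * cosine_basis(3, r), r)), 3))
\end{verbatim}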
An estimator of $g$ is obtained by replacing the $\theta_k$ in \eqref{eq:expansion}
by unbiased estimators and
truncating or smoothing the infinite sum. A similar approach has a
long history in the context of non-parametric estimation of
probability densities, see e.g.\ the review in \citet{efromovich:10}.
For $\theta_k$ we propose the estimator
\begin{equation}\label{eq:thetahat}
\hat \theta_k=\frac{1}{{\text{sa}_d}} \sum_{\substack{u,v\in X_{W}\\ r_{\min} < \|u - v\| < r_{\min}+R}}^{\neq}
\frac{\phi_k(\|v -u\|-r_{\min}) w(\|v -u\|-r_{\min})}{\rho(u) \rho(v) \|v-u\|^{d-1}|W \cap W_{v-u}|},
\end{equation}
which is unbiased by the second order Campbell formula, see Section~S2 of the supplementary
material.
This type of estimator has some similarity to the coefficient estimators used for probability
density estimation but is based on spatial lags $v-u$ which are
not independent nor identically distributed. Moreover the estimator is
adjusted for the possibly inhomogeneous intensity $\rho$ and corrected
for edge effects.
The orthogonal series estimator is finally of the form
\begin{equation}\label{eq:orthogpcf}
\hat g_o(r; b) = \sum_{k=1}^{\infty} b_k \hat \theta_k \phi_k(r-{r_{\min}}),
\end{equation}
where $b=\{ b_k \}_{k=1}^\infty$ is a smoothing/truncation scheme.
The simplest smoothing scheme is $b_k=\mathbbm{1}[k \le K]$ for some cut-off
$K\geq1$. Section~\ref{sec:smoothing} considers several other smoothing schemes.
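Putting the pieces together, here is a minimal sketch of $\hat\theta_k$ and of $\hat g_o$ under the simple truncation scheme, for a stationary planar pattern with the Fourier-Bessel basis and ${r_{\min}}=0$; the function name \texttt{pcf\_series} is an illustrative choice, and for $w(r)=r^{d-1}$ the weight cancels the $\|v-u\|^{d-1}$ factor in \eqref{eq:thetahat}:
\begin{verbatim}
import numpy as np
from scipy.special import jv, jn_zeros

def pcf_series(pts, w1, w2, r, R=0.125, K=10):
    """Sketch of the orthogonal series estimator with the simple scheme
    b_k = 1(k <= K): Fourier-Bessel basis, d = 2, r_min = 0, stationary
    pattern in [0,w1] x [0,w2], intensity estimated by m/|W|."""
    m = len(pts)
    rho = m / (w1 * w2)
    diff = pts[None, :, :] - pts[:, None, :]
    iu = np.triu_indices(m, 1)
    h = diff[iu]
    d = np.hypot(h[:, 0], h[:, 1])
    keep = (d > 0) & (d < R)
    h, d = h[keep], d[keep]
    area = (w1 - np.abs(h[:, 0])) * (w2 - np.abs(h[:, 1]))
    ghat = np.zeros_like(r)
    for a in jn_zeros(0, K):                       # roots alpha_{0,k}
        phi = lambda x: np.sqrt(2) * jv(0, x * a / R) / (R * jv(1, a))
        # w(d) = d cancels the 1/||v-u|| factor; factor 2 for unordered
        # pairs; sa_2 = 2*pi
        theta = 2 * np.sum(phi(d) / area) / (2 * np.pi * rho ** 2)
        ghat += theta * phi(r)
    return ghat

rng = np.random.default_rng(2)
r = np.linspace(0.005, 0.12, 24)
print(pcf_series(rng.uniform(0, 1, (200, 2)), 1.0, 1.0, r).round(2))
\end{verbatim}
As discussed in Section~\ref{sec:g-1}, expanding $g-1$ rather than $g$ improves convergence near ${r_{\min}}+R$ for this basis; the sketch uses the plain expansion for brevity.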
\subsection{Variance of $\hat \theta_k$}
\label{sec:varthetak}
The factor $\|v-u\|^{d-1}$ in \eqref{eq:thetahat} may
cause problems when $d>1$, where the presence of two
very close points in $X_W$ could imply
division by a quantity close to zero.
The expression for the variance of $\hat \theta_k$ given in Section~S2 of the supplementary
material
indeed shows that the variance is not finite
unless $g(r)w(r-{r_{\min}})/r^{d-1}$ is bounded for ${r_{\min}}<r<{r_{\min}}+R$. If ${r_{\min}}>0$ this is always satisfied for bounded $g$. If ${r_{\min}}=0$ the condition is still satisfied in case of the Fourier-Bessel basis and bounded $g$.
For the cosine basis $w(r)=1$ so if ${r_{\min}}=0$ we need
the boundedness of $g(r)/r^{d-1}$.
If $X$ satisfies a hard core condition
(i.e.\ two points in $X$ cannot be closer than some
$\delta>0$), this is trivially satisfied. Another
example is a determinantal point process \citep{LMR15} for
which $g(r)=1-c(r)^2$ for a correlation function $c$. The boundedness
is then e.g.\ satisfied if $c(\cdot)$ is the
Gaussian ($d \le 3$) or exponential ($d \le 2$) correlation function.
In practice, when using the cosine basis, we take ${r_{\min}}$ to be a small positive number to avoid issues with infinite variances.
\subsection{Mean integrated squared error and smoothing schemes}
\label{sec:smoothing}
The orthogonal series estimator \eqref{eq:orthogpcf}
has the mean integrated squared error
\begin{align}
\textsc{mise}\big(\hat{g}_{o},w\big)
&= {\text{sa}_d} \int_{{r_{\min}}}^{{r_{\min}}+R}
E\big\{ \hat{g}_{o}(r;b) - g(r) \big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r \nonumber \\
&= {\text{sa}_d} \sum_{k=1}^{\infty} E(b_k\hat{\theta}_{k} - \theta_k)^2
= {\text{sa}_d} \sum_{k=1}^{\infty} \big[ b_{k}^2 E\{(\hat{\theta}_{k})^2\}
-2b_k \theta_{k}^2 + \theta_{k}^2 \big]. \label{eq:miseo}
\end{align}
Each term in~\eqref{eq:miseo} is minimized with $b_k$ equal
to \citep[cf.][]{hall:87}
\begin{equation}\label{eq:bstar}
b_{k}^{*} = \frac{\theta_{k}^2}{E\{(\hat{\theta}_{k})^2\} }
=\frac{\theta_{k}^2}{\theta_{k}^2 + \text{var}(\hat{\theta}_{k})},
\quad k\geq1,
\end{equation}
leading to the minimal value ${\text{sa}_d}\sum_{k=1}^{\infty} b_{k}^{*}
\text{var}(\hat{\theta}_{k})$ of the mean integrated squared error. Unfortunately, the $b_k^*$ are unknown.
In practice we consider a parametric class of smoothing schemes $b(\psi)$. For practical
reasons we need a finite sum in \eqref{eq:orthogpcf} so one component
in $\psi$ will be a cut-off index $K$ so that $b_k(\psi)=0$ when
$k>K$. The simplest smoothing scheme is
$b_k(\psi)=\mathbbm{1}(k\le K)$. A more refined scheme is
$b_k(\psi)=\mathbbm{1}(k\le K)\hat b_k^*$ where $\hat b_k^* = \widehat{\theta_k^2}/(\hat
\theta_k)^2$ is an estimate of the optimal smoothing coefficient
$b_k^*$ given in \eqref{eq:bstar}. Here $\widehat{\theta_k^2}$
is an asymptotically unbiased estimator of $\theta_k^2$ derived
in Section~\ref{sec:miseest}. For these two smoothing schemes
$\psi=K$. Adapting the scheme suggested by \cite{wahba:81},
we also consider $\psi=(K,c_1,c_2)$, $c_1>0,c_2>1$,
and $b_k(\psi)=\mathbbm{1}(k\le K)/(1 + c_1 k^{c_2})$.
In practice we choose the smoothing parameter $\psi$ by minimizing
an estimate of the mean integrated squared error, see Section~\ref{sec:miseest}.
\subsection{Expansion of $g(\cdot)-1$}\label{sec:g-1}
For large $R$, $g({r_{\min}}+R)$ is typically close to one. However, for the Fourier-Bessel basis,
$\phi_k(R)=0$ for all $k \ge 1$ which implies $\hat g_o({r_{\min}}+R)=0$.
Hence the estimator cannot be consistent for $r={r_{\min}}+R$ and the
convergence of the estimator for $r \in ({r_{\min}},{r_{\min}}+R)$ can be quite
slow as the number of terms $K$ in the estimator increases.
In practice we obtain quicker convergence by applying the Fourier-Bessel
expansion to $g(r)-1=\sum_{k \ge 1} \vartheta_{k} \phi_k(r-{r_{\min}})$
so that the estimator becomes $\tilde g_o(r;b)=1+ \sum_{k=1}^{\infty}
b_k\hat{\vartheta}_{k} \phi_k(r-{r_{\min}})$ where $\hat{\vartheta}_k = \hat
\theta_k - \int_{0}^{{r_{\min}}+R} \phi_k(r) w(r) {\mathrm{d}} r$ is an estimator
of $\vartheta_k = \int_{0}^R \{ g(r+{r_{\min}})-1 \} \phi_k(r) w(r) {\mathrm{d}}
r$. Note that $\text{var}(\hat{\vartheta}_k)=\text{var}(\hat{\theta}_k)$
and $\tilde g_o(r;b)- E\{\tilde{g}_o(r;b)\}= \hat g_o(r;b)- E\{\hat g_o(r;b)\}$.
These identities imply that the results regarding consistency and asymptotic normality established for $\hat g_o(r;b)$ in Section~\ref{sec:asympresults} are
also valid for $\tilde g_o(r;b)$.
\section{Consistency and asymptotic normality}\label{sec:asympresults}
\subsection{Setting}
To obtain asymptotic results we assume that $X$ is observed through an increasing sequence of observation windows
$W_n$. For ease of presentation we assume square
observation windows $W_n= \times_{i=1}^d [-n a_i , n a_i]$ for some
$a_i >0$, $i=1,\ldots,d$. More general sequences of windows can be used at the
expense of more notation and assumptions. We also consider an associated sequence
$\psi_n$, $n \ge 1$, of smoothing parameters satisfying conditions to
be detailed in the following. We let $\hat \theta_{k,n}$ and $\hat
g_{o,n}$ denote the estimators of $\theta_k$ and $g$ obtained from
$X$ observed on $W_n$. Thus
\[
\hat \theta_{k,n} = \frac{1}{{\text{sa}_d}|W_n|}
\sum_{\substack{u,v\in X_{W_n}\\ v - u \in B_{r_{\min}}^R}}^{\neq}
\frac{\phi_k(\|v -u\|-r_{\min})w(\|v-u\|-r_{\min})}{\rho(u) \rho(v) \|v -u\|^{d-1}e_n(v-u)},
\]
where
\begin{equation}\label{eq:edge}
B_{r_{\min}}^R=\{ h \in {\mathbb R}^d \mid r_{\min} < \|h\| < r_{\min}+R\} \quad \text{and}\quad e_n(h)= |W_n
\cap (W_n)_h|/|W_n|.
\end{equation}
Further,
\[
\hat g_{o,n} (r;b) = \sum_{k=1}^{K_n} b_k(\psi_n) \hat \theta_{k,n} \phi_k(r-r_{\min})
= \frac{1}{{\text{sa}_d}|W_n|}
\sum_{\substack{u,v\in X_{W_n}\\ v - u \in B_{r_{\min}}^R}}^{\neq}
\frac{w(\|v-u\|-r_{\min})\varphi_{n}(v - u,r)}{\rho(u) \rho(v) \|v -u\|^{d-1}e_n(v-u)},
\]
where
\begin{equation}\label{eq:hn}
\varphi_{n}(h,r) = \sum_{k=1}^{K_n} b_k(\psi_n) \phi_k(\|h\|-r_{\min}) \phi_k(r-r_{\min}).
\end{equation}
In the results below we refer to higher order normalized
joint intensities $g^{(k)}$ of $X$.
Define the $k$'th order joint intensity of $X$ by the identity
\[
E\left\{\sum_{u_1,\ldots,u_k \in X}^{\neq} \mathbbm{1}( u_1 \in A_1,\ldots,u_k \in A_k) \right\}
= \int_{A_1\times \cdots \times A_k} \rho^{(k)}(v_1,\ldots,v_k) {\mathrm{d}} v_1\cdots{\mathrm{d}} v_k
\]
for bounded subsets $A_i \subset {\mathbb R}^d$, $i=1,\ldots,k$, where the
sum is over distinct $u_1,\ldots,u_k$.
We then let $g^{(k)}(v_1,\ldots,v_k)=\rho^{(k)}(v_1,\ldots,v_k)/\{\rho(v_1) \cdots \rho(v_k)\}$ and assume with an abuse of notation that the $g^{(k)}$ are translation invariant for $k=3,4$, i.e.\ $g^{(k)}(v_1,\ldots,v_k)=g^{(k)}(v_2-v_1,\ldots,v_k-v_1)$.
\subsection{Consistency of orthogonal series estimator}
\label{sec:consistency}
Consistency of the orthogonal series estimator can be established under
fairly mild conditions following the approach in \cite{hall:87}.
We first state some conditions that ensure (see Section~S2 of the supplementary
material)
that $\text{var}(\hat \theta_{k,n}) \le C_1/|W_n|$ for some $0<C_1 < \infty$:
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{V\theenumi}
\item \label{cond:rho} There exist $0< \rho_{\min} < \rho_{\max} < \infty$
such that for all $u\in{\mathbb R}^{d}$, $\rho_{\min}\leq \rho(u)\leq \rho_{\max}$.
\item \label{cond:gandg3} For any $h, h_1,h_2\in B_{r_{\min}}^{R}$,
$g(h) w(\|h\|-r_{\min}) \leq C_2 \|h\|^{d-1}$ and $g^{(3)}(h_1,h_2)\leq C_3$
for constants $C_2,C_3 < \infty$.
\item \label{cond:boundedg4integ} A constant $C_4<\infty$ can be
found such that $\sup_{h_1,h_2\in B_{r_{\min}}^{R}}
\int_{{\mathbb R}^{d}} \big| g^{(4)}(h_1, h_3,h_2+h_3) - g(h_1)g(h_2)
\big| {\mathrm{d}} h_3 \leq C_4$.
\end{enumerate}
The first part of V\ref{cond:gandg3} is needed to ensure finite variances of the $\hat \theta_{k,n}$ and is discussed in detail in Section~\ref{sec:varthetak}. The second part simply requires that $g^{(3)}$ is bounded.
The condition V\ref{cond:boundedg4integ} is a weak dependence condition which is also used for asymptotic normality in Section~\ref{sec:asympnorm} and for estimation of $\theta_k^2$ in Section~\ref{sec:miseest}.
Regarding the smoothing scheme, we assume
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{S\theenumi}
\item $B=\sup_{k,\psi} \big|b_k(\psi)\big|< \infty$ and for all $\psi$,
$\sum_{k=1}^\infty \big|b_k(\psi)\big| <\infty$.
\item $\psi_n \rightarrow \psi^*$ for some $\psi^*$, and
$\lim_{\psi \rightarrow \psi^*} \max_{1 \le k \le m} \big|b_k(\psi)-1\big|=0$
for all $m\geq1$.
\item $|W_n|^{-1} \sum_{k=1}^\infty \big| b_k(\psi_n)\big| \rightarrow 0$.
\end{enumerate}
E.g.\ for the simplest smoothing scheme, $\psi_n = K_n$,
$\psi^*=\infty$ and we assume $K_n/|W_n| \rightarrow 0$.
Assuming the above conditions we now verify that the mean integrated
squared error of $\hat g_{o,n}$ tends to zero as $n \rightarrow
\infty$. By \eqref{eq:miseo}, $\textsc{mise}\big(\hat{g}_{o,n}, w \big)/{\text{sa}_d}
= \sum_{k=1}^{\infty} \big[ b_{k}(\psi_n)^2 \text{var}(\hat{\theta}_{k,n})+
\theta_{k}^2\{b_k(\psi_n) - 1\}^2 \big]$.
By V1-V3 and S1 the right hand side is bounded by
\[ B C_1 |W_n|^{-1} \sum_{k=1}^\infty \big| b_k(\psi_n)\big| + \max_{1 \le k \le
m}\theta_k^2 \sum_{k=1}^m (b_k(\psi_n)-1)^2 + (B^2+1) \sum_{k=m+1}^\infty
\theta_k^2.
\]
By Parseval's identity, $\sum_{k=1}^{\infty} \theta_k^2 < \infty$.
The last term can thus be made arbitrarily small by choosing $m$
large enough. It also follows that $\theta_k^2$ tends to zero as $k \rightarrow \infty$.
Hence, by S2, the middle term
can be made arbitrarily small by choosing $n$ large enough for any choice of $m$. Finally, the first term can be made arbitrarily small by S3 and choosing $n$ large enough.
\subsection{Asymptotic normality}\label{sec:asympnorm}
The estimators $\hat \theta_{k,n}$ as well as the estimator $\hat g_{o,n}(r;b)$
are of the form
\begin{equation}\label{eq:decomp2}
S_n = \frac{1}{{\text{sa}_d} |W_n|} \sum_{\substack{u,v\in X_{W_n}\\v-u \in B_{r_{\min}}^{R}}}^{\neq}
\frac{f_n(v-u)}{\rho(u)\rho(v)e_n(v-u)}
\end{equation}
for a sequence of even functions $f_n:{\mathbb R}^d \rightarrow
{\mathbb R}$. We let $\tau^2_n=|W_n|\text{var}(S_n)$.
To establish asymptotic normality of estimators of the form
\eqref{eq:decomp2} we need certain mixing
properties for $X$ as in \cite{waagepetersen:guan:09}. The strong mixing coefficient for the point process $X$
on ${\mathbb R}^d$ is given by~\citep{ivanoff:82,politis:paparoditis:romano:98}
\begin{align*}
\alpha_{\mathbf{X}}(m;a_1,a_2) =
\sup\big\{& \big| \text{pr}(E_1\cap E_2) - \text{pr}(E_1)\text{pr}(E_2) \big|:
E_1\in\mathcal{F}_{X}(B_1), E_2\in\mathcal{F}_{X}(B_2), \\
&|B_1|\leq a_1, |B_2|\leq a_2,
\mathcal{D}(B_1, B_2)\geq m, B_1,B_2\in\mathcal{B}({\mathbb R}^d) \big\},
\end{align*}
where $\mathcal{B}({\mathbb R}^d)$ denotes the Borel $\sigma$-field on ${\mathbb R}^d$,
$\mathcal{F}_{X}(B_i)$
is the $\sigma$-field generated by $X\cap B_i$ and
\[
\mathcal{D}(B_1, B_2) = \inf\big\{\max_{1\leq i\leq d}|u_i-v_i|:
u=(u_1,\ldots,u_d)\in B_1,\ v=(v_1,\ldots,v_d)\in B_2 \big\}.
\]
To verify asymptotic normality we need the following assumptions as well as V1 (the conditions V2 and V3 are not needed due to conditions N\ref{cond:boundedgfuns} and N\ref{cond:unifbound} below):
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{N\theenumi}
\item \label{cond:mixingcoef}
The
mixing coefficient satisfies $\alpha_{X}(m;(s+2R)^d,\infty) =
O(m^{-d-\varepsilon})$ for some $s,\varepsilon>0$.
\item \label{cond:boundedgfuns}
There exist an $\eta>0$ and $L_{1}<\infty$
such that $g^{(k)}(h_1,\ldots,h_{k-1})\leq L_{1}$ for $k=2,\ldots,
2(2+\lceil \eta \rceil )$ and all $h_1,\ldots,h_{k-1}\in{\mathbb R}^{d}$.
\item \label{cond:liminfvar}
$\liminf_{n \rightarrow \infty} \tau^2_n >0$.
\item \label{cond:unifbound}
There exists $L_2 < \infty$ so that
$| f_n(h) | \le L_2$ for all $n \ge 1$ and $h \in B_{r_{\min}}^{R}$.
\end{enumerate}
The conditions N1-N\ref{cond:liminfvar} are standard in the point process
literature, see e.g.\ the discussions in \cite{waagepetersen:guan:09}
and \cite{coeurjolly:moeller:14}.
The condition N\ref{cond:liminfvar} is difficult to verify and is usually
left as an assumption, see \cite{waagepetersen:guan:09},
\cite{coeurjolly:moeller:14} and \cite{dvovrak:prokevov:16}.
However, at least in the stationary case, and in case
of estimation of $\hat \theta_{k,n}$, the expression
for $\text{var}(\hat \theta_{k,n})$ in Section~S2 of the supplementary
material
shows that $\tau_n^2=|W_n| \text{var}(\hat \theta_{k,n})$ converges to
a constant which supports the plausibility of condition N\ref{cond:liminfvar}.
We discuss N\ref{cond:unifbound} in further detail
below when applying the general framework to $\hat \theta_{k,n}$ and
$\hat g_{o,n}$.
The following theorem is proved in Section~S3 of the supplementary
material.
\begin{theorem}\label{theo:coefnormality}
Under conditions V1, N1-N4,
$\tau_{n}^{-1} |W_n|^{1/2} \big\{ S_n - E(S_n) \big\} \stackrel{D}{\longrightarrow} N(0, 1)$.
\end{theorem}
\subsection{Application to $\hat \theta_{k,n}$ and $\hat g_{o,n}$}
In case of estimation of $\theta_{k}$, $\hat{\theta}_{k,n}=S_n$ with
$f_n(h)= \phi_k(\|h\|-r_{\min})w(\|h\|-r_{\min})/\|h\|^{d-1}$.
The assumption N\ref{cond:unifbound} is then straightforwardly
seen to hold in the case of the Fourier-Bessel
basis where $|\phi_k(r)|\le |\phi_k(0)|$ and $w(r)=r^{d-1}$. For the
cosine basis, N\ref{cond:unifbound} does not hold in general and further assumptions are needed, cf.\ the discussion in Section~\ref{sec:varthetak}. For simplicity we here just assume ${r_{\min}}>0$.
Thus we state the following corollary.
\begin{corollary}
Assume V1, N1-N4, and, in case of the cosine basis, that ${r_{\min}}>0$. Then
\[
\{\text{var}(\hat \theta_{k,n})\}^{-1/2} (\hat \theta_{k,n} -\theta_{k}) \stackrel{D}{\longrightarrow} N(0, 1). \]
\end{corollary}
For $\hat g_{o,n}(r;b)=S_n$,
\[
f_n(h)=\frac{\varphi_n(h,r) w(\|h\|-r_{\min})}{\|h\|^{d-1}}
= \frac{w(\|h\|-r_{\min})}{\|h\|^{d-1}}
\sum_{k=1}^{K_n} b_k(\psi_n) \phi_k(\|h\|-r_{\min}) \phi_k(r-r_{\min}),
\]
where $\varphi_n$ is defined in \eqref{eq:hn}.
In this case, $f_n$ is typically not uniformly bounded since the
number of not necessarily decreasing terms in the sum
defining $\varphi_n$ in \eqref{eq:hn} grows with $n$. We therefore
introduce one more condition:
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{N\theenumi}
\setcounter{enumi}{4}
\item \label{cond:Knbound} There exist an $\omega>0$ and $M_\omega<\infty$ so that
\[ K_{n}^{-\omega} \sum_{k=1}^{K_n}
b_k(\psi_n)\big |\phi_k(r-r_{\min})\phi_k(\|h\|-r_{\min}) \big| \leq M_{\omega} \]
for all $h\in B_{r_{\min}}^{R}$.
\end{enumerate}
Given N\ref{cond:Knbound}, we can simply rescale: $\tilde{S}_n:= K_n^{-\omega} S_n$
and $\tilde \tau^2_n:=K_n^{-2\omega} \tau^2_n$.
Then, assuming $\liminf_{n \rightarrow \infty} \tilde \tau_n^2 >0$,
Theorem~\ref{theo:coefnormality} gives the asymptotic normality of
$\tilde \tau_n^{-1}|W_n|^{1/2} \{\tilde{S}_n- E(\tilde{S}_n)\}$
which is equal to $\tau_n^{-1}|W_n|^{1/2}\{S_n- E(S_n)\}$.
Hence we obtain
\begin{corollary}
Assume V\ref{cond:rho},
N\ref{cond:mixingcoef}-N\ref{cond:boundedgfuns}, N\ref{cond:Knbound} and
$\liminf_{n \rightarrow \infty} K_n^{-2\omega} \tau_n^2>0$.
In case of the cosine basis, assume further ${r_{\min}}>0$.
Then for $r\in({r_{\min}},{r_{\min}}+R)$,
\[
\tau_n^{-1}|W_n|^{1/2} \big[ \hat{g}_{o,n}(r;b)- E\{\hat g_{o,n}(r;b)\} \big] \stackrel{D}{\longrightarrow} N(0, 1).
\]
\end{corollary}
In case of the simple smoothing scheme $b_k(\psi_n)=\mathbbm{1}(k \le K_n)$,
we take $\omega=1$ for the cosine basis. For the
Fourier-Bessel basis we take $\omega=4/3$ when $d=1$ and
$\omega=d/2+2/3$ when $d>1$ (see the derivations in Section~S6 of the
supplementary material).
\section{Tuning the smoothing scheme}\label{sec:miseest}
In practice we choose $K$, and other parameters in
the smoothing scheme $b(\psi)$, by minimizing an estimate of the
mean integrated squared error.
This is equivalent to minimizing
\begin{equation} \label{eq:Ipsi} {\text{sa}_d} I(\psi) = \textsc{mise}(\hat g_{o}, w)
- \int_{r_{\min}}^{r_{\min}+R} \big\{ g(r) - 1 \big\}^2 w(r) {\mathrm{d}} r =
\sum_{k=1}^{K} \big[ b_{k}(\psi)^2 E\{(\hat{\theta}_{k})^2\}
-2b_k(\psi) \theta_{k}^2 \big].
\end{equation}
In practice we must replace \eqref{eq:Ipsi} by an
estimate. Define $\widehat{\theta^2_k}$ as
\[ \sum_{\substack{u,v,u',v' \in X_W\\ v-u,\,v'-u' \in B_{r_{\min}}^R }}^{\neq}
\frac{\phi_k(\|v-u\|-r_{\min})\phi_k(\|v'-u'\|-r_{\min})w(\|v-u\|-r_{\min})w(\|v'-u'\|-r_{\min})}{{\text{sa}_d}^2
\rho(u)\rho(v)\rho(u')\rho(v')
\|v-u\|^{d-1} \|v'-u'\|^{d-1} |W \cap W_{v-u}| | W \cap W_{v'-u'}|}.
\]
Then, referring to the set-up in Section~\ref{sec:asympresults} and assuming V\ref{cond:boundedg4integ},
\[
\lim_{n \rightarrow \infty} E(\widehat{\theta^2_{k,n}}) =
\left\{\int_{0}^{R} g(r+{r_{\min}}) \phi_k(r) w(r) {\mathrm{d}} r \right\}^2
=\theta_k^2
\]
(see Section~S4 of the supplementary material)
and hence
$\widehat{\theta^2_{k,n}}$
is an asymptotically unbiased estimator of $\theta_{k}^2$. The
estimator is obtained from $(\hat \theta_k)^2$ by retaining only terms
where all four points $u,v,u',v'$ involved are
distinct. In simulation studies, $\widehat{\theta_k^2}$ had a smaller root mean squared error than $(\hat \theta_k)^2$ for estimation of $\theta_k^2$.
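In code, $\widehat{\theta_k^2}$ can be obtained from the per-pair terms already computed for $\hat\theta_k$ by squaring their sum and removing all products of terms whose pairs share a point; a quadratic-time sketch follows (the helper name \texttt{theta2\_hat} and its inputs are illustrative notation):
\begin{verbatim}
import numpy as np

def theta2_hat(T, pairs):
    """Sketch of the four-point estimator of theta_k^2: square the sum of
    the per-pair terms and subtract every product whose two pairs share a
    point.  T[p] is the symmetrised contribution of the unordered pair
    pairs[p] = (i, j) to theta_hat_k (point indices i < j)."""
    T = np.asarray(T, dtype=float)
    total = T.sum() ** 2 - np.sum(T ** 2)         # drop the diagonal p = p'
    for a in range(len(pairs)):
        for b in range(len(pairs)):
            if a != b and set(pairs[a]) & set(pairs[b]):
                total -= T[a] * T[b]              # pairs sharing one point
    return total
\end{verbatim}
The double loop makes this quadratic in the number of close pairs; for large patterns the shared-point products can instead be accumulated per point.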
Thus
\begin{equation}\label{eq:Ipsiestm}
\hat I(\psi) = \sum_{k=1}^{K}
\big\{ b_{k}(\psi)^2 (\hat{\theta}_{k})^2
-2 b_k(\psi) \widehat{\theta_{k}^2} \big\}
\end{equation}
is an asymptotically unbiased estimator of~\eqref{eq:Ipsi}. Moreover, \eqref{eq:Ipsiestm} is equivalent to the following slight modification of
\cite{guan:leastsq:07}'s criterion \eqref{eq:ywcv}:
\[
\int_{r_{\min}}^{r_{\min}+R} \big\{ \hat{g}_{o}(r;b) \big\}^2 w(r-r_{\min}) {\mathrm{d}} r
-\frac{2}{{\text{sa}_d}} \sum_{\substack{u,v\in X_{W}\\ v-u\in B_{r_{\min}}^R}}^{\neq}
\frac{\hat{g}_{o}^{-\{u,v\}}(\|v-u\|;b)w(\|v -u\|-r_{\min})}{
\rho(u) \rho(v)|W \cap W_{v-u}|}.
\]
For the simple smoothing scheme $b_k(K)=\mathbbm{1}(k\leq K)$, \eqref{eq:Ipsiestm}
reduces to
\begin{equation}\label{eq:Isimple}
\hat I(K) = \sum_{k=1}^{K}
\big\{ (\hat{\theta}_{k})^2 -2 \widehat{\theta_{k}^2} \big\}
= \sum_{k=1}^{K} (\hat{\theta}_{k})^2 ( 1 -2 \hat{b}^{*}_{k}),
\end{equation}
where $\hat{b}^{*}_{k}=\widehat{\theta_{k}^2}/(\hat{\theta}_{k})^2$ is
an estimator of $b^{*}_{k}$ in~\eqref{eq:bstar}.
In practice, uncertainties
of $\hat \theta_{k}$ and $\widehat{\theta_{k}^{2}}$
lead to numerical instabilities in the minimization of~\eqref{eq:Ipsiestm}
with respect to $\psi$. To obtain a numerically stable procedure we first determine $K$ as
\begin{equation}\label{eq:Kestim}
\hat K = \inf \{2 \le k \le K_{\max}: (\hat{\theta}_{k+1})^2
-2 \widehat{\theta_{k+1}^2} > 0 \}
= \inf \{2 \le k \le K_{\max}: \hat{b}^{*}_{k+1} < 1/2 \}.
\end{equation}
That is, $\hat K$ is the first local minimum of \eqref{eq:Isimple}
larger than 1 and smaller than an upper limit $K_{\max}$ which we chose to be
49 in the applications. This choice of $K$ is also used for the refined and the Wahba smoothing schemes.
For the refined smoothing scheme we thus let $b_{k}=\mathbbm{1}(k\leq \hat K)\hat{b}_{k}^{*}$. For the Wahba smoothing
scheme $b_{k}=\mathbbm{1}(k\leq \hat K)/(1 + \hat c_1k^{\hat c_2})$, where $\hat c_1$ and $\hat c_2$ minimize $ \sum_{k=1}^{\hat K}
\left\{ (\hat{\theta}_{k})^2/(1 + c_1k^{c_2})^2 -
2 \widehat{\theta_{k}^2}/(1 + c_1k^{c_2}) \right\}$
over $c_1>0$ and $c_2>1$.
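The selection rules above translate directly into code; the following sketch (hypothetical helper names, with a simple grid search standing in for the numerical minimization over $(c_1,c_2)$) computes $\hat K$ from \eqref{eq:Kestim} and the Wahba weights:
\begin{verbatim}
import numpy as np

def choose_K(theta_hat, theta2_hat, K_max=49):
    """Cut-off rule: K_hat is the smallest k >= 2 such that the estimated
    optimal weight b*_{k+1} = theta2_hat_{k+1} / theta_hat_{k+1}^2 drops
    below 1/2 (first local minimum of the estimated risk).  Array index
    k-1 stores coefficient k of the 1-based numbering in the text."""
    b_star = theta2_hat / theta_hat ** 2
    for k in range(2, K_max):
        if b_star[k] < 0.5:       # entry k is coefficient k+1 (0-based)
            return k              # = K_hat in 1-based numbering
    return K_max

def wahba_weights(theta_hat, theta2_hat, K):
    """Grid-search sketch for (c1, c2) in the Wahba scheme
    b_k = 1/(1 + c1 k^{c2}), minimising the estimated risk."""
    ks = np.arange(1, K + 1)
    best, arg = np.inf, (1.0, 2.0)
    for c1 in np.logspace(-3, 3, 40):
        for c2 in np.linspace(1.01, 4.0, 30):
            b = 1.0 / (1.0 + c1 * ks ** c2)
            risk = np.sum(b ** 2 * theta_hat[:K] ** 2
                          - 2.0 * b * theta2_hat[:K])
            if risk < best:
                best, arg = risk, (c1, c2)
    return arg
\end{verbatim}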
\section{Simulation study}
\label{sec:simstudy}
\begin{figure}
\centering
\includegraphics[width=\textwidth,scale=1]{gfuns.pdf}
\caption{Pair correlation functions for the point processes considered in the simulation study.}
\label{fig:gfuns}
\end{figure}
We compare the performance of the orthogonal series estimators and
the kernel estimators for data simulated on $W=[0,1]^2$ or
$W=[0,2]^2$ from four point processes with constant intensity $\rho=100$.
More specifically, we consider $n_{\text{sim}}=1000$ realizations from a Poisson process,
a Thomas process (parent intensity $\kappa=25$, dispersion standard deviation $\omega=0.0198$),
a Variance Gamma cluster process \citep[parent intensity
$\kappa=25$, shape parameter $\nu=-1/4$, dispersion parameter $\omega=0.01845$,][]{jalilian:guan:waagepetersen:13}, and a determinantal point process
with pair correlation function $g(r)=1-\exp\{-2 (r/\alpha)^2\}$
and $\alpha=0.056$. The pair correlation functions of these point processes are shown in Figure~\ref{fig:gfuns}.
For each realization,
$g(r)$ is estimated for $r$ in $({r_{\min}}, {r_{\min}}+ R)$, with ${r_{\min}}=10^{-3}$ and $R=0.06, 0.085, 0.125$,
using the kernel estimators $\hat{g}_{k}(r; b)$, $\hat{g}_{d}(r; b)$ and
$\hat{g}_{c}(r; b)$ or the orthogonal series estimator $\hat{g}_{o}(r;b)$.
The Epanechnikov kernel with bandwidth $b=0.15/\surd{\hat{\rho}}$
is used for $\hat{g}_{k}(r; b)$
and $\hat{g}_{d}(r; b)$ while the bandwidth of $\hat{g}_{c}(r; b)$
is chosen by minimizing \cite{guan:leastsq:07}'s estimate \eqref{eq:ywcv} of
the mean integrated squared error.
For the orthogonal series estimator, we consider both the cosine and the Fourier-Bessel
bases with simple, refined or Wahba smoothing schemes.
For the Fourier-Bessel basis we use the modified orthogonal series
estimator described in Section~\ref{sec:g-1}. The parameters for the smoothing
scheme are chosen according to Section~\ref{sec:miseest}.
From the simulations we estimate the mean integrated squared error \eqref{eq:mise} with $w(r)=1$ of each estimator $\hat g_m$, $m=k,d,c,o$,
over the intervals $[{r_{\min}}, 0.025]$ (small spatial lags) and
$[{r_{\min}}, {r_{\min}}+R]$ (all lags).
We consider the kernel estimator $\hat{g}_{k}$
as the baseline estimator and compare any of the other estimators $\hat g$
with $\hat{g}_{k}$ using the log relative efficiency
$e_{I}(\hat{g}) = \log \{ \widehat{\textsc{mise}}_{I}(\hat{g}_{k})/\widehat{\textsc{mise}}_{I}(\hat{g}) \}$, where $\widehat{\textsc{mise}}_{I}(\hat{g})$ denotes the estimated mean squared integrated error over the interval $I$ for the estimator $\hat{g}$. Thus
$e_{I}(\hat{g}) > 0$ indicates that $\hat{g}$ outperforms
$\hat{g}_{k}$ on the interval $I$.
Results for $W=[0,1]^2$ are summarized in Figure~\ref{fig:efficiencies}.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=\textwidth]{plot11new.pdf}
\end{tabular}
\caption{Plots of log relative efficiencies for small lags $({r_{\min}},
0.025]$ and all lags $({r_{\min}}, R]$, $R=0.06,0.085,0.125$, and $W=[0,1]^2$. Black: kernel estimators. Blue and red: orthogonal series estimators with the Bessel and
cosine bases, respectively. Lines serve to ease visual interpretation.}\label{fig:efficiencies}
\end{figure}
For all types of point processes, the orthogonal series estimators
outperform or do as well as the kernel estimators both at small lags
and over all lags. The detailed conclusions depend on whether the non-repulsive Poisson, Thomas and Var Gamma processes or the repulsive determinantal process are considered. Orthogonal-Bessel with refined or Wahba smoothing is superior for Poisson, Thomas and Var Gamma but only better than $\hat g_c$ for the determinantal point process. The performance of the orthogonal-cosine estimator is between or better than the performance of the kernel estimators for Poisson, Thomas and Var Gamma and is as good as the best kernel estimator for determinantal. Regarding the kernel estimators, $\hat g_c$ is better than $\hat g_d$ for Poisson, Thomas and Var Gamma and worse than $\hat g_d$ for determinantal.
The above conclusions are stable over the three $R$ values
considered. For $W=[0,2]^2$ (see Figure~S1 in the supplementary
material) the conclusions are similar but with clearer superiority of the orthogonal series
estimators for Poisson and Thomas. For Var Gamma the performance of
$\hat g_c$ is similar to the orthogonal series estimators. For determinantal and
$W=[0,2]^2$, $\hat g_c$ is better than
orthogonal-Bessel-refined/Wahba but still inferior to
orthogonal-Bessel-simple and orthogonal-cosine.
Figures~S2 and S3 in the supplementary material give more
detailed insight into the bias and variance properties of $\hat g_k$,
$\hat g_c$, and the orthogonal series estimators with simple smoothing scheme.
Table~S1 in the supplementary material shows that the selected $K$ in
general increases when the observation window is enlarged, as
required for the asymptotic results. The general
conclusion, taking into account the simulation results for all four
types of point processes, is that the best
overall performance is obtained with orthogonal-Bessel-simple, orthogonal-cosine-refined or orthogonal-cosine-Wahba.
To supplement our theoretical results in Section~\ref{sec:asympresults}
we consider the distribution of the simulated $\hat g_o(r;b)$ for $r=0.025$
and $r=0.1$ in case of the Thomas process and using the Fourier-Bessel
basis with the simple smoothing scheme. In addition to $W=[0,1]^2$ and
$W=[0,2]^2$, also $W=[0,3]^2$ is considered. The mean, standard error,
skewness and kurtosis of $\hat{g}_{o}(r)$ are given in Table~\ref{tab:fsampleghat}
while histograms of the estimates are shown in Figure~S3.
The standard error of $\hat g_{o}(r;b)$ scales as $|W|^{-1/2}$
in accordance with our theoretical results. Also the bias decreases and
the distributions of the estimates become increasingly normal as $|W|$ increases.
\begin{table}
\caption{Monte Carlo mean, standard error, skewness (S)
and kurtosis (K) of $\hat{g}_{o}(r)$ using the Bessel basis
with the simple smoothing scheme in the case of the Thomas process on observation
windows $W_1=[0,1]^2$, $W_2=[0,2]^2$ and $W_3=[0,3]^2$.}%
\begin{tabular}{ccccccc}
& $r$ & $g(r)$ & $\hat{E}\{\hat{g}_{o}(r)\}$ & $[\hat{\text{var}}\{\hat{g}_{o}(r)\}]^{1/2}$
& $\hat{\text{S}}\{\hat{g}_{o}(r)\}$ & $\hat{\text{K}}\{\hat{g}_{o}(r)\}$ \\
$W_1$ & 0.025 & 3.972 & 3.961 & 0.923 & 1.145 & 5.240 \\
$W_1$ & 0.1 & 1.219 & 1.152 & 0.306 & 0.526 & 3.516 \\
$W_2$ & 0.025 & 3.972 & 3.959 & 0.467 & 0.719 & 4.220 \\
$W_2$ & 0.1 & 1.219 & 1.187 & 0.150 & 0.691 & 4.582 \\
$W_3$ & 0.025 & 3.972 & 3.949 & 0.306 & 0.432 & 3.225 \\
$W_3$ & 0.1 & 1.219 & 1.202 & 0.095 & 0.291 & 2.957
\end{tabular}
\label{tab:fsampleghat}
\end{table}
\section{Application}
\label{sec:example}
We consider point patterns of locations of \emph{Acalypha diversifolia}
(528 trees), \emph{Lonchocarpus heptaphyllus} (836 trees) and
\emph{Capparis frondosa} (3299 trees) species in the 1995 census for
the $1000\text{m}\times 500\text{m}$ Barro Colorado Island plot \citep{hubbell:foster:83,condit:98}.
To estimate the intensity function of each species, we use a log-linear regression model
depending on soil condition (contents of copper,
mineralized nitrogen, potassium and phosphorus and soil acidity) and topographical
(elevation, slope gradient, multiresolution
index of valley bottom flatness, incoming mean solar radiation
and the topographic wetness index) variables. The regression parameters
are estimated using the quasi-likelihood approach
in~\cite{guan:jalilian:waagepetersen:15}. The point patterns and fitted intensity functions are shown in Figure~S5 in the supplementary material.
The pair correlation function of each species is then estimated using
the bias corrected kernel estimator $\hat{g}_{c}(r;b)$ with $b$ determined by minimizing~\eqref{eq:ywcv} and the orthogonal series estimator
$\hat{g}_{o}(r;b)$ with both Fourier-Bessel and cosine basis,
refined smoothing scheme and the optimal cut-offs $\hat{K}$ obtained from~\eqref{eq:Kestim};
see Figure~\ref{fig:bcipcfs}.
For {\em Lonchocarpus} the three estimates are
quite similar, while for {\em Acalypha} and {\em Capparis} the estimates deviate markedly
at small lags and then become similar for lags
greater than 2 and 8 meters, respectively. For {\em Capparis} and the
cosine basis, the number of selected coefficients coincides with the chosen upper limit 49 for the number of coefficients. The cosine estimate displays oscillations which appear to be artefacts of
using high frequency components of the cosine basis. The function
\eqref{eq:Isimple} decreases very slowly after $K=7$ so we also tried
the cosine estimate with $K=7$ which gives a more reasonable
estimate.
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{acapcfest.pdf}%
\includegraphics[width=0.33\textwidth]{loncopcfest.pdf}%
\includegraphics[width=0.33\textwidth]{capppcfest.pdf}
\caption{Estimated pair correlation functions for tropical rain forest
trees.}
\label{fig:bcipcfs}
\end{figure}
\section*{Acknowledgement}
Rasmus Waagepetersen is supported by the Danish Council for Independent Research | Natural Sciences, grant ``Mathematical and Statistical Analysis of Spatial Data'', and by the ``Centre for Stochastic Geometry and Advanced Bioimaging'', funded by the Villum Foundation.
\section*{Supplementary material}
Supplementary material
includes proofs of consistency and asymptotic normality results
and details of the simulation study and data analysis.
\bibliographystyle{apalike}
\section{Introduction}
Ultracold atoms confined in optical lattices provide a fascinating synthetic platform for quantum simulations of various lattice Hamiltonians in a controllable fashion beyond what is possible in natural crystals~\cite{1998_Zoller_Jaksch_PRL,2002_Hofstetter_Cirac_PRL,2007_Lewenstein_AP,2008_Bloch_Dalibard_RMP,2010_Esslinger_CMP,2015_Lewenstein_RPP,2016_Li_Liu_RPP}. For example, synthesizing optical lattices with artificial gauge fields has attracted considerable recent research efforts, and has now become one of the most important developments in ultracold atomic physics~\cite{2011_Dalibard_Gerbier_RMP}. In particular, $\pi$ flux models, which are in general difficult to find in solid state materials, have been realized with atoms in shaken optical lattices~\cite{2013_Chin_NatPhys,2013_Sengstock_NatPhys}. The artificial-gauge-field quantum simulator has become a versatile ground for investigating exotic many-body physics such as bosonic chiral condensates~\cite{2013_Paramekanti_PRB,2015_Edmonds_EPL,2014_Zaletel_PRB,2016_Li_Liu_RPP} and fermionic quantum Hall states~\cite{1980_Klitzing_PRL}.
Another optical lattice based quantum simulator recently attracting interest is laser-assisted spin-orbit coupling (SOC) ~\cite{2005_Ruseckas_PRL,2005_Osterloh_Zoller_PRL,2009_Liu_PRL,2011_Lin_Spielman_Nature,2012_Pan_SOC,2012_Zhang_Zhai_PRL,2012_Cheuk_Zwierlein_PRL,2013_Galitski_NatReview,2013_Xu_You_PRA,2013_Anderson_Spielman_PRL,2015_Zhai_RPP,2016_Huang_Zhang_NatPhys,2016_Wu_Pan_Science,2017_Bloch_SOC}, aiming for exotic Rashba ring condensate of bosons~\cite{2015_Zhai_RPP} and symmetry-protected topological states of fermions~\cite{2010_Hasan_RMP,2016_Chiu_RMP}. For bosons, it has been demonstrated that one-dimensional SOC leads to crystalline condensates~\cite{2013_Galitski_NatReview,2014_Ji_Pan_NatPhys}, and a more recent experiment realizing two-dimensional SOC has further advanced this subject~\cite{2016_Wu_Pan_Science}. For fermions, the SOC effects have been observed in the single-particle energy dispersion~\cite{2012_Zhang_Zhai_PRL,2012_Cheuk_Zwierlein_PRL,2016_Huang_Zhang_NatPhys}.
The developments in SOC quantum simulators together with the recently demonstrated capability of measuring atomic spin currents~\cite{2016_Bloch_Schweizer_arXiv} are bridging the fields of spintronics and ultracold atomic physics.
However, the experimentally realized SOCs are all single-particle effects extrinsically induced by laser, causing experimental challenges (e.g., heating problems), in studying many-body quantum effects with more sophisticated SOCs. It is thus worthwhile to find alternative ways to generate SOC-like effects, e.g. with interactions, so that nontrivial strong correlations and many-body effects can be studied. Previous mean field analysis shows that SOC effects can spontaneously emerge due to two-body interactions~\cite{2014_Li_NatComm}, but whether this exotic phenomenon could survive (beyond mean field theory) against strong fluctuations is unknown, raising possible experimental difficulties in realizing this unconventional paradigm for SOC engineering.
In this paper, we study a spinor (two-component) Bose gas in a one-dimensional double-valley lattice in the absence of any bare SOC. The double-valley lattice model describes the $sp$-orbital coupled optical lattice system~\cite{li2013topological} as realized in Ref.~\onlinecite{2013_Chin_NatPhys} or the shaking-induced two-dimensional $\pi$-flux triangular lattice~\cite{2013_Sengstock_NatPhys} reduced to one dimension. For certain repulsive spin-miscible interactions~\cite{pethick2002bose}, it is found through an effective field theory analysis that the ground state could be a chiral spin condensate in the presence of strong fluctuations in one dimension. This chiral spin condensate exhibits spin-charge mixing, which turns out to be the major ``quantum-fluctuation source'' in selecting the nontrivial chiral spin order in a classically degenerate manifold. For a specific $\pi$-flux triangular ladder model, we confirm the existence of the chiral spin condensate using the density matrix renormalization group (DMRG) method~\cite{White1992,Schollwock2011}, which is a variational algorithm within the class of matrix product states and numerically ``exact'' for one-dimensional systems due to their entanglement properties. This strongly correlated state features spontaneous staggered chiral spin loop currents where the two spin components counterflow, i.e. they move along opposite directions (Fig.~\ref{Figure1}). We emphasize that such spin-orbit effects arise purely from interactions in the theory, distinct from previously explored single-particle SOC effects. By computing various correlation functions, the conformal-field-theory central charge, and entanglement scaling, we conclude that the relevant low-energy effective field theory here is a two-component Luttinger liquid.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure1.pdf}
\caption{{(a) Schematics of the $\pi$ flux triangular ladder model. The two legs are labeled as I and II. The two components are represented using up and down arrows. In the thermodynamic limit, the two components develop currents along opposite directions as indicated by the arrows along the bond. (b) The single-particle band structure with two valleys at momenta $Q_{\pm}$.}}
\label{Figure1}
\end{figure}
\section{Field Theory Analysis}
Consider two-component (pseudospin $\sigma=\uparrow,\downarrow$) bosons in a double-valley band, e.g. the experimentally realized one dimensional $sp$-orbital coupled optical lattice~\cite{2013_Chin_NatPhys} or the $\pi$-flux triangular lattice reduced to one dimension~\cite{2013_Sengstock_NatPhys}. Assuming the band minima are at ${\pm Q}$ (see Fig.~\ref{Figure1}), the low energy degrees of freedom $\phi_{\sigma,\pm}$, describing fluctuations near the band minima, are introduced as $\phi_{p = \pm, \sigma} (x) = \int \frac{dk}{2\pi} b_{\sigma} (\pm Q + k) e^{ik x}$, with $b_{\sigma}(q)$ being the annihilation operator for the eigenmode of the lower band (Fig.~\ref{Figure1}). The effective field theory has the action $S = \int dx dt {\cal L}$ with the Lagrangian
\begin{eqnarray}
{\cal L} &=& \phi_{p \sigma } ^* ( x,t) \left [ i\partial _t + \frac{1}{2M} \frac{\partial^2} {\partial x^2} -
i p\lambda \frac{\partial^3}{\partial x^3} + \mu \right] \phi_{p \sigma } (x,t ) \\
&+& \sum_{\sigma \sigma' } g_{\sigma \sigma'}
\left [\sum_{p p' } |\phi_{p \sigma } |^2 | \phi_{p' \sigma } | ^2
+ \sum_p \phi_{p \sigma } ^* \phi_{-p \sigma} \phi_{-p \sigma'} ^* \phi_{p\sigma' } \right ], \nonumber
\end{eqnarray}
where the effective mass $M$ and the third order derivative term $\lambda$ can be extracted from the energy dispersion near the band minima, and $g_{\sigma\sigma'}$ characterizes the interaction strengths in the spinor Bose gas. We introduce
$g_{\uparrow \uparrow} = g_{\downarrow \downarrow} = g_d$
and
$g_{\uparrow \downarrow} = g_{\downarrow \uparrow} = g_{od} $.
For spin-miscible repulsive interactions, we have $g_d >0$, $g_{od} >0$, and $g_d ^2 > g_{od}^2$. Because of the exchange term $\sum_p \phi_{p \sigma } ^* \phi_{-p \sigma} \phi_{-p \sigma'} ^* \phi_{p\sigma' } $, we cannot have coexistence of $\phi_{+ \sigma}$ and $\phi_{-\sigma}$ at low energy. This leads to a spontaneous Ising symmetry breaking at the classical level which remains robust even in the presence of quantum fluctuations at zero temperature. We thus have either chiral charge or chiral spin superfluid as characterized by
\begin{equation}
\left[
\begin{array}{c}
\phi_{\uparrow +} \\
\phi_{\downarrow+}
\end{array}
\right]
\text{or}
\left[
\begin{array}{c}
\phi_{\uparrow +} \\
\phi_{\downarrow -}
\end{array}
\right]
=
\left[
\begin{array}{c}
\sqrt{\rho_0 + \delta \rho_\uparrow } e^{i\theta_\uparrow} \\
\sqrt{\rho_0 + \delta \rho_\downarrow} e^{i\theta_\downarrow}
\end{array}
\right],
\end{equation}
respectively, where the fields $\delta \rho_\sigma$ and $\theta_\sigma$ represent density and phase fluctuations at low energy.
We define charge and spin degrees of freedom as $ \theta_{c,s} = \frac{1}{\sqrt{2}} \left( \theta_\uparrow \pm \theta_\downarrow \right) $, whose conjugate momenta $\Pi_{c,s}$ are introduced correspondingly. For the chiral charge superfluid, we find a spin-charge separated Hamiltonian
\begin{eqnarray}
{\cal H}_{\chi_c} &=& \sum_{\nu = c,s} \frac{1}{2} v_\nu \left[ K_\nu \Pi_\nu ^2 + \frac{1}{K_\nu} (\partial_x \theta_\nu)^2 \right] \nonumber \\
&& + \lambda \left[ \Pi_c \frac{\partial^3}{\partial x^3 } \theta_c + \Pi_s \frac{\partial^3}{\partial x^3 } \theta_s \right],
\label{eq:chic}
\end{eqnarray}
with $K_{c/s} = \sqrt{2(g_d \pm g_{od} ) /\rho_0} $
and $v_{c/s} = \sqrt{2 \rho_0 (g_d \pm g_{od}) } $.
Here we have neglected higher order nonlinear terms as we shall show later that the classical degeneracy between chiral spin and charge superfluids is already broken at the harmonic order. Performing the expansion in terms of the eigenmodes for the quantum fields,
\begin{eqnarray}
\theta_{\nu} &=& \sqrt{K_\nu} \int_{-\Lambda} ^\Lambda \frac{dk}{2\pi} \frac{1}{\sqrt{2 |k|}}
\left[ a_\nu (k) e^{ikx} + a_\nu ^\dag (k) e^{-ikx} \right] \\
\Pi_{\nu} &=& \frac{-i}{\sqrt{ K_\nu } } \int_{-\Lambda} ^\Lambda \frac{dk}{2\pi} \sqrt{ \frac{|k|}{2} }
\left[ a_\nu (k) e^{ikx} - a_\nu ^\dag (k) e^{-ikx} \right]
\end{eqnarray}
we find that the energy density of the ground state is $\frac{1}{4\pi} [v_c + v_s] \Lambda^2$, with $\Lambda$ the high-momentum cutoff in the field theory. Note that the third order derivative term $\lambda$ in Eq.~\eqref{eq:chic} actually does not contribute to the ground state energy. For the chiral spin superfluid, we find a spin-charge mixed Hamiltonian
\begin{eqnarray}
{\cal H}_{\chi_s} &=& \sum_{\nu = c,s} \frac{1}{2} v_\nu \left[ K_\nu \Pi_\nu ^2 + \frac{1}{K_\nu} (\partial_x \theta_\nu)^2 \right] \nonumber \\
&& + \lambda \left[ \Pi_c \frac{\partial^3}{\partial x^3 } \theta_s + \Pi_s \frac{\partial^3}{\partial x^3 } \theta_c \right].
\end{eqnarray}
When compared to the chiral charge superfluid, the spin-charge mixing term leads to an additional energy-density correction
\begin{equation}
\Delta {E}/L = - \frac{\lambda^2 \Lambda^6} {24 \pi} \frac{ (K_c-K_s)^2}{ K_c K_s (v_c + v_s)}.
\label{eq:DeltaE}
\end{equation}
and makes the chiral spin superfluid the actual ground state of our system.
In optical lattice experiments, the chiral spin superfluid should be detectable using spin-resolved time-of-flight techniques as two spins spontaneously (quasi-) condense in different valleys, forming a momentum space antiferromagnet. In the language of conformal field theory, the chiral spin superfluid is a critical phase with two gapless normal modes and is formally described by two Virasoro algebras with central charge $c=1$~\cite{1990_Frahm-Korepin-PRB}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure2.pdf}
\caption{The superfluid [(a) and (b)] and density [(c) and (d)] correlation functions for the $\pi$-flux triangular ladder. In the calculation of correlation function, we choose the left-most lattice site to be $m=10$ and the distance $r=n-m$. In panels (a) and (b), the blue stars are numerical results and the red dots are the least square fitting results using Eq.~\ref{MomentumFunction}. The fitting parameters are given in the panels.}
\label{Figure2}
\end{figure}
\medskip
\section{$\pi$-flux Triangular Ladder Model}
To further demonstrate the existence of the chiral spin superfluid, we choose a concrete optical lattice --- a $\pi$ flux triangular two-leg ladder (see the experiment in Ref.~\onlinecite{2013_Sengstock_NatPhys}), and carry out numerical calculations. As shown in Fig.~\ref{Figure1}, we consider a two-leg ladder with the legs labeled as ${\rm I}$ and ${\rm II}$, aligned along the $x$ direction. The number of sites along the $x$ direction is $L$ and these sites are indexed by $m$. The creation (annihilation) operators are denoted as $b^{\dagger}_{\sigma,\alpha,m}$ ($b_{\sigma,\alpha,m}$) with the spin, leg, and site indices, $\sigma=\uparrow,\downarrow$, $\alpha={\rm I},{\rm II}$, and $m\in[1,2,\cdots,L]$. The single-particle Hamiltonian consists of hopping terms connecting all the nearest neighbors with a {\em positive} coefficient as given by
\begin{eqnarray}
H_{0} &=& \sum_{\sigma=\uparrow,\downarrow} \sum_{m} \Big[ b^{\dagger}_{\sigma,{\rm I},m} b_{\sigma,{\rm I},m+1} + b^{\dagger}_{\sigma,{\rm I},m} b_{\sigma,{\rm II},m} \nonumber \\
&+& b^{\dagger}_{\sigma,{\rm I},m} b_{\sigma,{\rm II},m+1} + b^{\dagger}_{\sigma,{\rm II},m} b_{\sigma,{\rm II},m+1} + {\rm H.c.} \Big]
\end{eqnarray}
The positive coefficient can be obtained using $\pi$ magnetic flux in each unit cell, which is impossible to reach in electronic systems but has been implemented in ultracold atoms~\cite{2013_Sengstock_NatPhys}. With periodic boundary condition along the $x$ direction, the single-particle Hamiltonian in momentum space reads
\begin{eqnarray}
\left[
\begin{array}{cc}
2\cos{k} & 1+\cos{k}+i\sin{k} \\
1+\cos{k}-i\sin{k} & 2\cos{k} \\
\end{array}
\right]
\end{eqnarray}
The two Bloch bands have energy eigenvalues $E_{\pm}=2\cos{k}\pm\sqrt{\sin^2{k}+(1+\cos{k})^2}$. The lower band has two equal minimal values at momenta $Q_{\pm}=\pm\arccos(-7/8)\approx\pm{2.6362}$ rad and their degeneracy is protected by time-reversal symmetry. We study many-body systems with the numbers of bosons denoted as $N_{\sigma}$ and define the filling factor as $(N_{\uparrow}+N_{\downarrow})/(2L)$. The interactions between the bosons are described by the terms
\begin{eqnarray}
V &=& \sum_{\sigma=\uparrow,\downarrow} \sum_{\alpha={\rm I},{\rm II}} \sum_{m} U_{0} b^{\dagger}_{\sigma, \alpha,m} b^{\dagger}_{\sigma,\alpha,m} b_{\sigma,\alpha, m} b_{\sigma,\alpha,m} \nonumber \\
&+& \sum_{\alpha={\rm I},{\rm II}} \sum_{m} U_{1} b^{\dagger}_{\uparrow,\alpha,m} b^{\dagger}_{\downarrow,\alpha,m} b_{\downarrow,\alpha,m} b_{\uparrow,\alpha,m}
\end{eqnarray}
where $U_{0}=\infty$ (we choose hard-core bosons for computational ease) and $U_{1}$ is a finite number. When computing the many-body ground state of our system using the DMRG method, we employ open boundary conditions as this is more efficient than periodic boundary conditions. We extract useful information only using the lattice sites far from the edges to suppress any edge effects.
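As a quick numerical check of the single-particle dispersion $E_{\pm}$ given above (a Python sketch; the grid resolution is arbitrary), the band minima $Q_{\pm}$ can be recovered as follows:
\begin{verbatim}
# Sketch: locate the minima of the lower Bloch band
# E_-(k) = 2 cos k - sqrt(sin^2 k + (1 + cos k)^2).
import numpy as np

k = np.linspace(-np.pi, np.pi, 200001)
E = 2*np.cos(k) - np.sqrt(np.sin(k)**2 + (1 + np.cos(k))**2)
print(abs(k[np.argmin(E)]))     # ~2.6362
print(np.arccos(-7.0/8.0))      # analytic value arccos(-7/8)
\end{verbatim}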
\section{Numerical Results}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figure3.pdf}
\caption{The panels (a) and (b) show the charge (red dots) and spin (blue stars) current correlation functions for the same system as in Fig. \ref{Figure2}. The panels (c) and (d) show the spin current correlation functions for two systems at several different $U_{1}$ values. The left-most lattice site in calculating the correlation is chosen to be $m=10$ and the distance $r=n-m$.}
\label{Figure3}
\end{figure*}
We first analyze the physics of spinless bosons. If the bosons do not interact with each other, they would occupy one of the momenta $Q_{\pm}$ and we have a macroscopic ground state degeneracy. As the interaction is turned on, the bosons prefer to occupy one of the momenta $Q_{\pm}$ and produce a chiral condensate with spontaneous time-reversal symmetry breaking. For a two-component system with no inter-species interaction, the two components can condense in either of the two valleys so we have four degenerate ground states schematically denoted as $|(\uparrow, Q_{+}), (\downarrow, Q_{+})\rangle$, $|(\uparrow, Q_{+}), (\downarrow, Q_{-})\rangle$, $|(\uparrow, Q_{-}), (\downarrow, Q_{+})\rangle$, and $|(\uparrow, Q_{-}),(\downarrow, Q_{-})\rangle$, with $|(\sigma, Q), (\sigma', Q')\rangle$ referring to the spin component $\sigma$ ($\sigma'$) condensing at $Q$ ($Q'$). The degeneracy of $|(\uparrow, Q_{+}), (\downarrow, Q_{+})\rangle$ and $|(\uparrow, Q_{-}), (\downarrow, Q_{-})\rangle$ is protected by time-reversal symmetry, and the degeneracy of $|(\uparrow, Q_{+}), (\downarrow, Q_{-})\rangle$ and $|(\uparrow, Q_{-}), (\downarrow, Q_{+})\rangle$ is protected by inversion symmetry. The former pair of states possess chiral charge order while the latter pair of states possess chiral spin order. These two orders are degenerate at the classical level, so the question to address is which order the inter-species interaction $U_1$ would select in the presence of strong quantum fluctuations.
We compute the ground states at $1/2$ and $1/3$ fillings (the physics at other fillings is qualitatively the same). In one dimension, a superfluid phase features a quasi-long-range order in the correlation $\Gamma_{mn}=\langle b^{\dagger}_{\sigma,\alpha,m} b_{\sigma,\alpha,n} \rangle$ due to strong quantum fluctuations. We show two examples of $\Gamma_{mn}$ for spin-up bosons on leg-I in Fig. \ref{Figure2} (a) and (b) and fit them according to
\begin{eqnarray}
\Gamma_{mn}=f\cos[q(m-n)]/|m-n|^{\alpha}
\label{MomentumFunction}
\end{eqnarray}
In both cases, the coefficient $q$ is very close to $|Q_{\pm}|$ so we conclude that the ground states are indeed superfluids in which the bosons (quasi-) condense at $Q_{\pm}$. As another check, we have also computed the density-density correlation functions $\Delta_{mn}=\langle \rho_{\sigma,\alpha,m} \rho_{\sigma,\alpha,n} \rangle - \langle \rho_{\sigma,\alpha,m} \rangle \langle \rho_{\sigma,\alpha,n} \rangle $ ($\rho_{\sigma,\alpha,m}=b^{\dagger}_{\sigma,\alpha,m} b_{\sigma,\alpha,m}$). The two examples for spin-up bosons on leg-I shown in Fig. \ref{Figure2} (c) and (d) decay to zero very quickly so there is no long-range density order. Because the Hamiltonian is symmetric between spin-up and spin-down bosons, the same correlation functions are expected for spin-down bosons, which we have confirmed explicitly.
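The fit can be sketched as follows (a Python sketch; the synthetic arrays below are stand-ins for the measured DMRG correlations):
\begin{verbatim}
# Sketch: fit Gamma(r) = f cos(q r) / r^alpha to correlation data;
# r and gamma are synthetic stand-ins for DMRG output.
import numpy as np
from scipy.optimize import curve_fit

def model(r, f, q, alpha):
    return f * np.cos(q * r) / r**alpha

r = np.arange(1.0, 40.0)
gamma = model(r, 0.3, 2.6362, 0.25)      # replace with measured data
popt, _ = curve_fit(model, r, gamma, p0=[0.3, 2.6, 0.3])
print(popt)    # fitted (f, q, alpha); q should be close to |Q_pm|
\end{verbatim}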
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure4.pdf}
\caption{The von Neumann entanglement entropy. The blue stars are numerical results and the red lines are the least square fitting results using Eq. \ref{EntropyFunction} without the oscillating term. The fitting parameters are given in the panels.}
\label{Figure4}
\end{figure}
The superfluid and density correlation functions demonstrate that the ground states are superfluid but they do not tell us whether the two components condense in the same or different valleys. To distinguish between chiral charge and chiral spin orders, we define the current operator on the bond $m$ as $J_{{\sigma}m} = i( b^{\dagger}_{\sigma,{\rm I},m} b_{\sigma,{\rm II},m} - b^{\dagger}_{\sigma,{\rm II},m} b_{\sigma,{\rm I},m} )$ and compute its correlation functions. The two types of chiral orders will give rise to two different long-range ordered correlation functions---$\Theta^{c}_{mn}=\langle J^{c}_{m} J^{c}_{n} \rangle$ and $\Theta^{s}_{mn}=\langle J^{s}_{m} J^{s}_{n} \rangle$, respectively, where $J^{c}_{m}=J_{{\uparrow}m}+J_{{\downarrow}m}$ is the charge current operator and $J^{s}_{m}=J_{{\uparrow}m}-J_{{\downarrow}m}$ is the spin current operator. The chiral spin ordered phase breaks the inversion symmetry spontaneously and the spin current correlation function exhibits true long-range order because the broken symmetry is discrete. Fig. \ref{Figure3} shows the correlation functions $\Theta^{c}_{mn}$ and $\Theta^{s}_{mn}$. We find that $\Theta^{c}_{mn}$ decays to zero quickly as $m-n$ increases while $\Theta^{s}_{mn}$ saturates to a constant value in the bulk. The quantity $\Theta^{s}_{mn}$ is an ``order parameter'' to identify the regime where the chiral spin condensate is stabilized. Fig. \ref{Figure3} (c) and (d) show $\Theta^{s}_{mn}$ for different interaction strengths $U_{1}$. In both cases, we find that as we increase $U_1$ the ground state exhibits chiral spin current when $U_1$ is not too large, but eventually $\Theta^{s}_{mn}$ becomes a decaying function of $m-n$ (meanwhile $\Theta^{c}_{mn}$ always decays to zero quickly). This implies that the chiral spin order gets weaker as the inter-species interaction strength increases, which is consistent with the field theory results in Eq.~\eqref{eq:DeltaE}. The large $U_1$ regime in our system is expected to resemble the physics described in Ref. \onlinecite{2004_Kuklov_PRL}.
To confirm that the low-energy effective field theory is indeed a two-component Luttinger liquid, we calculate the scaling of the entanglement entropy using DMRG. We choose a subsystem $A$ with $L_{A}$ sites on the left of the system, trace out the other sites to obtain the reduced density matrix $\rho_{A}$ of $A$, and compute the von Neumann entropy $S=-{\rm Tr}(\rho_{A} \ln\rho_{A})$. For a system that is open in the $x$ direction, $S$ is predicted to take a functional form of~\cite{2004_Calabrese-Cardy}
\begin{eqnarray}
S(L_{A}) = \frac{c}{6} \ln\left[ \frac{L}{\pi} \sin\left(\pi\frac{L_{A}}{L}\right) \right] + g + F
\label{EntropyFunction}
\end{eqnarray}
based on conformal field theory, where $g$ is a constant and $F$ is a non-universal oscillating term. In our model, we find that the $F$ term becomes less important as the system size increases so the value of $c$ can be extracted using sufficiently large systems. To avoid edge effects, we discard the data points for which $L_{A}$ is close to zero or $L$ and only use those in the middle. The two examples shown in Fig. \ref{Figure4} both give central charge $2$ as expected for a two-component Luttinger liquid.
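The extraction of $c$ can be sketched as a linear fit in the Calabrese--Cardy variable (a Python sketch with stand-in entropy data; the oscillating term $F$ is neglected):
\begin{verbatim}
# Sketch: fit S(L_A) = (c/6) ln[(L/pi) sin(pi L_A/L)] + g, read off c.
import numpy as np

L = 96
LA = np.arange(20, 77)                  # middle region only (edges dropped)
S = (2.0/6.0)*np.log((L/np.pi)*np.sin(np.pi*LA/L)) + 1.1  # stand-in data
x = np.log((L/np.pi)*np.sin(np.pi*LA/L))
slope, g = np.polyfit(x, S, 1)
print(6*slope)                          # central charge, here c = 2
\end{verbatim}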
\section{Conclusion}
Based on effective field theory analysis and numerical simulations, we have established that the ground state of two-component bosons in a one-dimensional lattice with double-valley band could be a chiral spin superfluid. This phase spontaneously breaks inversion symmetry and exhibits chiral spin loop currents. Our predictions should be readily testable in ultracold atoms using spin-resolved time-of-flight techniques since the system exhibits momentum space antiferromagnetism. The chiral spin condensate may lead to applications in quantum information processing and topological state engineering. For example, it may be used as building blocks of chiral spin networks for multiparticle entanglement generation~\cite{2015_Pichler_Zoller_PRA}. In Bose-Fermi mixtures, chiral spin condensate of bosons may provide a background for the fermions to form topological phases. The prospect of inducing topological phases from spontaneous symmetry breaking ~\cite{2008_Raghu_Qi_PRL,2009_Sun_PRL,2015_Li_NatComm,2015_Xu_Li_PRL,2016_Zhu_Sheng_PRB} is worth further exploration.
The $\pi$-flux triangular ladder model may also host other interesting physics. With optical lattices~\cite{2013_Sengstock_NatPhys}, the hopping constants along the legs and between the legs could in principle be changed independently. The interaction strengths are also tunable by changing the lattice depth and by Feshbach resonances. Varying these parameters, an even richer phase diagram is anticipated. In particular, the bosons may form a Mott insulator rather than a superfluid at integer filling factors. It remains to be seen if the insulating state can also possess nontrivial chiral spin order.
\section{Acknowledgement}
YHW thanks Hong-Hao Tu for helpful discussions. Work at MPQ was supported by the European Union Project {\em Simulators and Interfaces with Quantum Systems}. Work at Maryland was supported by JQI-NSF-PFC, LPS-MPO-CMTC, and the PFC seed grant ``Emergent phenomena in interacting spin-orbit coupled gases'' (XL). XL acknowledges KITP for hospitality, where part of the manuscript was finished. XL is supported by the National Program on Key Basic Research Project of China (Grant No. 2017YFA0304204), the National Natural Science Foundation of China (Grant No. 117740067), and the Thousand-Youth-Talent Program of China.
\bibliographystyle{apsrev4-1}
\section{Detection Method}\seclab{method}
Cosmic $\nu_{\tau}$ can produce taus\footnote{Other neutrino flavors can be neglected, as the electron range in matter at these energies is too short and the muon decay length too large.} below the Earth's surface through charged-current interactions. Taus may then exit and decay in the atmosphere, generating Earth-skimming extensive air showers
(EAS)~\cite{Fargion00,Bertou04}. EAS emit coherent electromagnetic radiation at frequencies of a few to hundreds of MHz, detectable by radio antennas for shower energies $E \gtrsim 3\cdot10^{16}$ eV~\cite{CODALEMA2005,LOPES2005}.
The strong beaming of the electromagnetic emission, combined with the transparency of the atmosphere to radio waves, will allow the radiodetection of EAS initiated by tau decays at distances up to several tens of kilometers (see \figref{fig}), making radio antennas ideal instruments for the search for cosmic neutrinos. Furthermore, antennas offer practical advantages (e.g. limited unit cost, ease of deployment) that allow the deployment of an array over very large areas, as required by the expected low neutrino event rate.
Remote sites, with low electromagnetic background, should obviously be considered for the array location. In addition, mountain ranges are preferred, first because they offer an additional target for the neutrinos, and also because mountain slopes are better suited to the detection of Earth-skimming showers compared to flat areas which are parallel to the neutrino-induced EAS trajectories.
GRAND antennas are foreseen to operate in the $30-100$\,MHz band. Below this range, short-wave background prevents detection, while coherence of radio emission fades above it.
However, an extension of the antenna response up to 200 or 300\,MHz would enable us to better observe the Cherenkov ring associated with the air shower \cite{Alvarez2012}, which represents a sizable fraction of the total electromagnetic signal at these frequencies. This could provide an unambiguous signature for background rejection.
\section{GRAND layout and neutrino sensitivity}\seclab{sensitivity}
We present here a preliminary evaluation of the potential of GRAND for the detection of cosmic neutrinos, based on the simulated response of a 90\,000-antenna setup deployed on a square layout of 60\,000\,km$^2$ in a remote mountainous area, the Tianshan mountains in the XinJiang province, China.
{\bf Simulation method.}
We perform a 1D tracking of a primary $\nu_{\tau}$, simulated down to the converted tau decay.
We assume standard rock with a density of 2.65~g/cm$^3$ at sea level and above, while the Earth core is modeled following the Preliminary Reference Earth Model \cite{Dziewonski81}.
The simulation of the deep inelastic scattering of the neutrinos is performed with Pythia6.4, using the CTEQ5d probability distribution functions (PDF) combined with \cite{Gandhi98} for cross section calculations. The propagation of the outgoing tau is simulated using randomized values from parameterisations of GEANT4.9 PDFs for
tau path length and proper time. Photonuclear interactions in GEANT4.9 have been extended above PeV energies following \cite{Dutta00}. The tau decay is simulated using the TAUOLA package. The radiodetection of neutrino-initiated EAS is simulated in the following way (a toy sketch of the antenna-selection step is given after this list):
\begin{itemize}
\item for a limited set of $\nu_{\tau}$ showers simulated with ZHaireS \cite{Zhaires} at various energies (see \figref{fig}), we determine a conical volume inside which the electric field is above the expected detection threshold of the GRAND antennas (30~$\mu$V/m in an aggressive scenario, 100~$\mu$V/m in a conservative one);
\item from this set of simulations, we parametrize the shape (top angle and height) of this detection cone as a function of energy;
\item for each neutrino-initiated EAS in our simulation, we compute the expected cone shape and position, and select the antennas located inside the corresponding volume, taking into account signal shadowing by mountains;
\item if a cluster of 8 neighbouring units can be found among these selected antennas, we consider that the primary $\nu_{\tau}$ is detected.
\end{itemize}
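The following toy Python sketch illustrates the antenna-selection step; all geometry, names and numbers are hypothetical, and the 8-unit cluster test is crudely replaced by a simple count:
\begin{verbatim}
# Toy sketch: select antennas inside the detection cone of a shower.
import numpy as np

def antennas_in_cone(pos, apex, axis, half_angle, height):
    # pos: (N,3) antenna coordinates; apex/axis/half_angle/height
    # parametrize the detection cone (all hypothetical inputs).
    v = pos - apex
    d = v @ axis                          # distance along the cone axis
    r = np.linalg.norm(v - np.outer(d, axis), axis=1)
    return (d > 0) & (d < height) & (r < d * np.tan(half_angle))

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 60e3, size=(90000, 3)); pos[:, 2] = 0.0
apex = np.array([0.0, 0.0, 2000.0])       # tau decay point, 2 km a.g.l.
axis = np.array([1.0, 0.0, -0.05]); axis = axis / np.linalg.norm(axis)
sel = antennas_in_cone(pos, apex, axis, half_angle=0.03, height=30e3)
detected = sel.sum() >= 8   # crude stand-in for the 8-unit cluster test
\end{verbatim}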
{\bf Results and implications.}
Assuming a 3-year observation with no neutrino candidate on this 60\,000 km$^2$ simulated array, a 90\%\,C.L. integral limit of $6.6\times10^{-10}$~GeV$^{-1}$~cm$^{-2}$~s$^{-1}$ can be derived for an $E^{-2}$ neutrino flux in our aggressive scenario~($1.3\times10^{-9}$ in our conservative scenario). This is a factor $\ge 5$ better than other projected giant neutrino telescopes at EeV energies \cite{ARA2016}.
This preliminary analysis also shows that mountains constitute a sizable target for neutrinos, with $\sim$50\% of down-going events coming from neutrinos interacting inside the mountains.
It also appears that specific parts of the array (large mountain slopes facing another mountain range at distances of $30-80$\,km) are associated with a detection rate well above the average. By splitting the detector into smaller sub-arrays of a few tens of thousands of km$^2$ each, deployed solely on favorable sites, an order-of-magnitude improvement in sensitivity could be reached with only a factor-of-3 increase in detector size, compared to the 60\,000 km$^2$ simulation area. This is the envisioned GRAND setup.
This neutrino sensitivity corresponds to a detection rate of 1 to 60 cosmogenic events per year. Besides, the angular resolution on the arrival directions, computed following \cite{Ardouin11}, could be as low as 0.05$^\circ$ for a 3~ns precision on the antenna trigger timing, opening the door for neutrino astronomy.
\begin{figure*}
\centering
\includegraphics[width=5cm,clip]{taudecay.png}
\includegraphics[width=5cm,clip]{sens.png}
\caption{ {\it Left:} Expected radio footprint for a $5\cdot10^{17}$~eV horizontal shower induced by a tau decay at the origin. The color coding corresponds to the maximum electric-field amplitude integrated over the $30-80$\,MHz range (in $\mu$V/m). The sky background level is $\sim$15$\mu$V/m in this frequency range. Note the different $x$ and $y$ scales. {\it Right:} Differential sensitivity of the 60\,000 km$^2$ simulated setup (brown region, top limit: conservative, bottom: aggressive) and of the projected GRAND array (brown thick curve). The integral sensitivity limit for GRAND is shown as a thick line. We also show the expected limit for the projected final configuration of ARA~\cite{ARA2016} and theoretical estimates for cosmogenic neutrino fluxes \cite{KAO10}: the blue line stands for the most pessimistic fluxes, the gray-shaded region for the ``reasonable'' parameter range. All curves are for single-flavor neutrino fluxes.}
\figlab{fig}
\end{figure*}
\section{Background rejection}\seclab{bg}
A few tens of cosmogenic neutrinos per year are expected in GRAND. The rejection of events initiated by high-energy particles other than cosmic neutrinos should be manageable \cite{ICRC2015}. The event rates associated with terrestrial sources (human activities, thunderstorms, etc.) are difficult to evaluate, but an estimate can be derived from the results of the Tianshan Radio Experiment for Neutrino
Detection (TREND). TREND~\cite{Ardouin11} is an array of 50 self-triggered antennas deployed over a surface $\gtrsim1$\,km$^2$ in a populated valley of the Tianshan mountains, with antenna design and sensitivity similar to what is foreseen for GRAND. The observed rate of events triggering six selected TREND antennas separated by $\sim$800~m over a sample period of 120 live days was found to be around 1~day$^{-1}$, with two-thirds of them coming in bursts of events, mostly due to planes. Direct extrapolation from TREND results thus leads to an expected event rate of $\sim1$~Hz for GRAND for a trigger algorithm based on coincident triggers on neighbouring antennas and a rejection of events bursts.
Amplitude patterns on the ground (emission beamed along the shower axis and signal enhancement on the Cherenkov ring \cite{Alvarez2012}), as well as wave polarization \cite{Aab14} are strong signatures of neutrino-initiated EAS that could provide efficient discrimination tools for the remaining background events.
These options are being investigated within GRAND, through simulations and experimental work. In 2017 the GRANDproto project \cite{Gou15} will deploy a hybrid detector composed of 35 3-arm antennas (allowing for a complete measurement of the wave polarization) and 24 scintillators, that will cross-check the EAS nature of radio-events selected from a polarization signature compatible with EAS.
\section{GRAND development plan}\seclab{engineering}
Before considering the complete GRAND layout, several validation steps have to be considered. The first one will consist of establishing the autonomous radiodetection of very inclined EAS with high efficiency and excellent background rejection, with a dedicated setup of size $\sim 300$\,km$^2$. This array will be too small to perform a neutrino search, but cosmic rays should be detected above $10^{18}\,$eV. Their reconstructed properties (energy spectrum, composition) will enable us to validate this stage. The absence of events below the horizon will confirm our EAS identification strategy. A second array, 10 times larger, will allow us to test the technological choices for the DAQ chain, trigger algorithm and data transfer. This will mark the start of GRAND data taking, foreseen in the mid-2020s.
\section{Conclusion}
The GRAND project aims at building the ultimate next-generation neutrino telescope. Preliminary simulations indicate that
a sensitivity guaranteeing the detection of cosmogenic neutrinos is achievable. Work is ongoing to assess GRAND achievable scientific goals and the corresponding technical constraints. Background rejection strategies and technological options are being investigated.\\
\noindent\footnotesize{{\it Acknowledgements.} The GRAND and GRANDproto projects are supported by the Institut Lagrange de Paris, the France China Particle Physics Laboratory, the Natural Science Foundation of China (Nos.11135010, 11375209), the Chinese Ministry of Science and Technology and the S\~ao Paulo Research Foundation FAPESP (grant 2015/15735-1).}
\section{Appendix}
\subsection{Cases with More than Two Areas}
In this subsection, we generalize GCTS to cases with more than two areas. For each area, the network equivalence is illustrated in Fig. \ref{fig:threearea}. Therein, internal buses are eliminated, and the equivalent admittance matrix $Y_{\bar{1}\bar{1}}$ and injection $\tilde{g}_i$ are still calculated by (\ref{eq:yeq}). The calculation of $Y_{\bar{1}\bar{1}}$ and $\tilde{g}_i$ only requires local information.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{Fig/MoreArea1.eps}
\caption{\small Network equivalence with more than two areas}\label{fig:threearea}
\end{figure}
Thereby, the equivalent model of the global power system, corresponding to Fig.~\ref{fig:1}, can be obtained by eliminating all internal buses. An example of a three-area system can be seen in Fig.~\ref{fig:sys3}. In the clearing of GCTS with $n$ areas, the constraint (\ref{eq:mc_bndpf}) becomes
\begin{equation}
\left[
\begin{array}{cccc}
\tilde{Y}_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}&\dots&Y_{\bar{1}\bar{n}}\\
Y_{\bar{2}\bar{1}}&\tilde{Y}_{\bar{2}\bar{2}}&\dots&Y_{\bar{2}\bar{n}}\\
\dots & \dots & \dots & \dots\\
Y_{\bar{n}\bar{1}}&Y_{\bar{n}\bar{2}}&\dots&\tilde{Y}_{\bar{n}\bar{n}}\\
\end{array}
\right]\!\left[\!
\begin{array}{c}
\bar{\theta}_{1}\\
\bar{\theta}_{2}\\
\dots \\
\bar{\theta}_{n}
\end{array}\!
\right]\!=\!\left[\!
\begin{array}{c}
M_1 \\
M_2 \\
\dots \\
M_n
\end{array}\!\right]s.\label{eq:threearea_bndpf}
\end{equation}
The clearing problem can be solved in a distributed fashion via existing solutions like \cite{GuoTongetc:16TPS}, which are capable of solving problems with more than two areas.
The real-time problem of Area $i$ and the settlement process are similar to those of the two-area case in Subsection III-D.
\subsection{Proof of Theorem \ref{thm:JED}}
We first show that the given assumptions ensure that the matrix $M=[M_1;M_2]$ has full row rank after removing an arbitrary phase angle reference bus. Without loss of generality, we consider bids buying from Area 1 and selling to Area 2 and assume the phase angle reference bus is in Area 1. We have assumed that there are infinitely many bids for each pair of locations when $N\rightarrow \infty$. Therefore, we can rearrange interface bids and write the matrix $M$ as follows
\begin{equation}
\includegraphics[width=0.7\textwidth]{Fig/Mrank.eps}, \label{eq:Mrank}
\end{equation}
where the first block of columns includes all bids that buy at the reference bus in Area 1, the second block of columns includes all bids that buy at the first boundary bus in Area 1, etc. Scalars $n_1$ and $n_2$ are the numbers of boundary buses in Area 1 and Area 2, respectively. One can pick the first block and the first column of each other block, as highlighted in red in \eqref{eq:Mrank}, to obtain $n_1+n_2-1$ linearly independent columns. Therefore, the $M$ matrix in \eqref{eq:Mrank} has full row rank. The same conclusion holds for the other bidding direction.
Consider the optimal JED solution $\{g_{i}^{\textrm{JED}},\bar{\theta}^{\textrm{JED}}, \theta_i^{\textrm{JED}}\}$. For $N$ sufficiently large, there is an $\hat{s}$ such that the additional constraint (\ref{eq:mc_bndpf}) holds for the JED solution in GCTS. Therefore, $\{g_{i}^{\textrm{JED}},\bar{\theta}^{\textrm{JED}},$ $\theta_i^{\textrm{JED}}, \hat{s}\}$ is a feasible solution of the GCTS clearing problem (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}).
Because $\Delta \pi \rightarrow 0$ as $N\rightarrow \infty$, we have
\begin{equation}
\lim_{N\rightarrow \infty} c(g_1^{\textrm{JED}},g_2^{\textrm{JED}},\hat{s}) = \sum_i c_i(g_i^{\textrm{JED}}),
\end{equation}
which is the lowest possible total generation cost. The claim holds since we assume JED and GCTS are both convex programs, each having a unique global optimum.
\subsection{Proof of Theorem \ref{thm:CTS}}
When there is a single tie-line, variables $\bar{\theta}_1$ and $\bar{\theta}_2$ are scalars. The equivalent network in Fig.\ref{fig:net_eq} is simplified to Fig.\ref{fig:neteq_CTS}:
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{Fig/neteq_CTS.eps}
\caption{\small Equivalent model with single tie-line}\label{fig:neteq_CTS}
\end{figure}
Setting $\bar{\theta}_1=0$ as the phase angle reference and Area 2 exporting to Area 1 as positive, the power interchange is given by $q=-Y_{\bar{1}\bar{2}} \bar{\theta}_2$. Thereby, the GCTS clearing problem (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) changes to
\begin{align}
\min \limits_{\{g_{i},s,\bar{\theta}_2, \theta_i, q\}}\! & c(g_1, g_2, s)=\sum \limits_{i=1}^{2}c_{i}(g_{i})+ \Delta \pi^T s, \label{eq:mc_obj_cts}\\
\textrm{subject to}&\hspace{0.1cm}H_{i}\theta_{i}+H_{\bar{i}}\bar{\theta}_{i}\leq f_{i},i=1,2, \label{eq:mc_linecons_cts}\\
&\hspace{0.1cm}q \leq \bar{f}=q_{\textrm{max}}, \label{eq:mc_tieline_cts}\\
&\hspace{0.1cm}\check{g}_i\leq g_i\leq \hat{g}_i, i=1,2,\label{eq:glimits_cts}\\
&\hspace{0.1cm} 0\leq s\leq s_{\textrm{max}}, \label{eq:slimits_cts}\\
&\hspace{-1.0cm}
\left[\!
\begin{array}{cc}
Y_{11}&Y_{1\bar{1}}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}
\end{array}\!\right]\left[\!
\begin{array}{c}
\theta_{1}\\
0
\end{array}
\right]=\left[\!
\begin{array}{c}
g_{1}-d_{1}\\
q\\
\end{array}\!\right], \label{eq:mc_pf1_cts} \\
&\hspace{-1.0cm}
\left[\!
\begin{array}{cc}
Y_{22}&Y_{2\bar{2}}\\
Y_{\bar{2}2}&Y_{\bar{2}\bar{2}}
\end{array}\!
\right]\left[\!
\begin{array}{c}
\theta_{2}\\
\bar{\theta}_2\\
\end{array}
\right]=\left[\!
\begin{array}{c}
g_{2}-d_{2}\\
-q\\
\end{array}\!\right], \label{eq:mc_pf2_cts} \\
& \hspace{0.1cm} q=M_2 s.\label{eq:mc_bndpf_cts}
\end{align}
Here the matrix $M_2$ is composed of 1 and -1, depending on directions of interface bids. Note that when $q$ is fixed, the problem (\ref{eq:mc_obj_cts})-(\ref{eq:mc_bndpf_cts}) can be decoupled into three sub-problems:
i) Local economic dispatch in Area 1:
\begin{equation}
\begin{array}{ll}
\min \limits_{\{g_{1}, \theta_1\}} & c_{1}(g_{1}) \\
\textrm{subject to:} & \hspace{0.1cm}H_{1}\theta_{1}\leq f_{1},\\
&\hspace{0.1cm}\check{g}_1\leq g_1\leq \hat{g}_1,\\
&\hspace{-1.0cm}
\left[\!
\begin{array}{cc}
Y_{11}&Y_{1\bar{1}}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}
\end{array}\!\right]\left[\!
\begin{array}{c}
\theta_{1}\\
0
\end{array}
\right]=\left[\!
\begin{array}{c}
g_{1}-d_{1}\\
q\\
\end{array}\!\right].\\
\end{array}\label{eq:proof_cts1}
\end{equation}
ii) Local economic dispatch in Area 2, which is similar in form with (\ref{eq:proof_cts1});
iii) The optimal clearing of interface bids given the total amount $q$:
\begin{equation}
\begin{array}{ll}
\min \limits_{s} & \Delta \pi^T s \\
\textrm{subject to:} & q=M_2 s,\\
&\hspace{0.1cm} 0\leq s\leq s_{\textrm{max}}.
\end{array}\label{eq:proof_cts2}
\end{equation}
Therefore, GCTS searches for the optimal $q^*\in [0, q_{\textrm{max}}]$ that minimizes the sum of the objective functions of the three sub-problems. Note that in Fig.\ref{fig:ctscurve}, $\pi_1 (q)$ is the derivative curve of problem (\ref{eq:proof_cts1}) with respect to $q$, $\pi_2 (q)$ is the negative derivative curve of the local problem in Area 2, and $\Delta \pi (q)$ is the negative derivative of (\ref{eq:proof_cts2}). Therefore, the solution of CTS in Fig.\ref{fig:ctscurve} or Fig.\ref{fig:ctscurveCongested} is the same as that of GCTS (\ref{eq:mc_obj_cts})-(\ref{eq:mc_bndpf_cts}).
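Sub-problem iii) is a small linear program; a Python sketch of its solution for a fixed $q$, with hypothetical bid prices and limits, reads:
\begin{verbatim}
# Sketch: clear interface bids for a fixed interchange q
# (sub-problem iii); prices and limits are hypothetical.
import numpy as np
from scipy.optimize import linprog

dpi  = np.array([2.0, 3.5, 1.0])     # bid prices Delta pi
smax = np.array([30.0, 20.0, 25.0])  # bid quantity limits s_max
M2   = np.ones((1, 3))               # all bids export from Area 2

res = linprog(c=dpi, A_eq=M2, b_eq=[40.0],
              bounds=[(0.0, s) for s in smax])
print(res.x, res.fun)   # cheapest bids are cleared first
\end{verbatim}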
\subsection{Proof of Theorem \ref{thm:Revenue}}
For Area 1, from the optimality condition for (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}) we have
\begin{equation}
\nabla_{\{\theta_1\}} L_1 = \left[
\begin{array}{cc}
Y_{11}&Y_{1\bar{1}}
\end{array}\right]\left[
\begin{array}{c}
\lambda_1^R\\
\bar{\lambda}_1^R
\end{array}\!
\right]+H_1^T \eta_1^R = 0,\label{eq:th2eq1}
\end{equation}
where $L_1$ is the Lagrangian of (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}).
By left-multiplying $(\theta_1^*)^T$ to (\ref{eq:th2eq1}) we have
\begin{equation}
\begin{array}{ll}
(\theta_1^*)^T\nabla_{\{\theta_1\}} L_1\!&=\! (g_1\!-\!d_1)^T\lambda_1^R+f_1^T\eta_1^R - [\bar{\theta}_1^T \hspace{0.2cm}\bar{\theta}_2^T](\!\left[\!
\begin{array}{cc}
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}\\
&Y_{\bar{2}\bar{1}}
\end{array}\!
\right]\!\!\left[\begin{array}{c}
\lambda_1^R\\
\bar{\lambda}_1^R
\end{array}\!
\right]\!+\!\left[\!
\begin{array}{c}
H_{\bar{1}}^T \eta_1^R \\
0
\end{array}\!
\right]\!)\\
&=\! (g_1\!-\!d_1)^T\lambda_1^R\!+\!f_1^T\eta_1^R\!-\!s^T M^T \!\left[\!
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}\!&\!Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}\!&\!\tilde{Y}_{\bar{2}\bar{2}}
\end{array}\!
\right]^{-1}\!\!\!\!\!\nabla_{\bar{\theta}} c_1^*\\
&=\! (g_1\!-\!d_1)^T\lambda_1^R\!+\!f_1^T\eta_1^R\!-s^T\mu_1=0.
\end{array}\label{eq:th2eq2}
\end{equation}
In the absence of tie-line congestion, Equation \eqref{eq:th2eq2} already proves revenue adequacy. When tie-line congestion occurs in the look-ahead dispatch, the corresponding congestion rent yields
\begin{equation}
\!\!\!
\bar{f}^T\bar{\eta}\!=\![\bar{\theta}_1^T \hspace{0.2cm}\bar{\theta}_2^T]\!\left[\!\!
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}\!\!&\!\!Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}\!\!&\!\!\tilde{Y}_{\bar{2}\bar{2}}
\end{array}\!\!
\right]\!\!\left[\!\!
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}\!\!&\!\!Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}\!\!&\!\!\tilde{Y}_{\bar{2}\bar{2}}
\end{array}\!\!
\right]^{-1}\!\!\left[\!\!\begin{array}{c}
\bar{H}_1^T\\
\bar{H}_2^T
\end{array}\!\!
\right]\!\bar{\eta}\!=\!s^T\rho.\label{eq:th2eq3}
\end{equation}
From \eqref{eq:th2eq2} and \eqref{eq:th2eq3}, we finally have
\begin{equation}
(d_1\!-\!g_1)^T\lambda_1^R\!+\!s^T(\mu_1+\frac{\rho}{2})\!=\!f_1^T\eta_1^R\!+\frac{\bar{f}^T\bar{\eta}}{2}>0. \label{eq:th2eq4}
\end{equation}
Equation \eqref{eq:th2eq4} proves Theorem \ref{thm:Revenue}. The left-hand side is the net revenue that ISO 1 collects from internal and external market participants, and the right-hand side is the sum of the internal congestion rent and a half of the tie-line congestion rent which is afforded by Area 1. Note that the net revenue in \eqref{eq:th2eq4} is non-negative because all terms on the right-hand side are non-negative.
\subsection{Proof of Theorem \ref{thm:Surplus}}
Let $\tilde{q}=1^T \tilde{s}$ and $\hat{q}=1^T \hat{s}$. We first prove that $\tilde{q}\geq\hat{q}$. Similar to the proof of Theorem \ref{thm:CTS}, the separate clearing in Area 1 is
\begin{equation}
\begin{array}{ll}
\min \limits_{\{g_{1}, \theta_1, s, q\}} & c_{1}(g_{1}) + \pi_1^T s\\
\textrm{subject to:} & \hspace{0.1cm}H_{1}\theta_{1}\leq f_{1},\\
&\hspace{0.1cm}\check{g}_1\leq g_1\leq \hat{g}_1,\\
&\hspace{-1.0cm}
\left[\!
\begin{array}{cc}
Y_{11}&Y_{1\bar{1}}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}
\end{array}\!\right]\left[\!
\begin{array}{c}
\theta_{1}\\
0
\end{array}
\right]=\left[\!
\begin{array}{c}
g_{1}-d_{1}\\
q\\
\end{array}\!\right],\\
&q=M_2 s,\\
&0\leq s \leq s_{\textrm max}.
\end{array}\label{eq:proof_surplus1}
\end{equation}
Next we parameterize \eqref{eq:proof_surplus1} with respect to $q$. According to \cite{GuoBoseTong:17TPS}, the optimal value of \eqref{eq:proof_surplus1} is a piecewise affine and convex function of $q$, denoted by $J_1(q)$. We can similarly derive $J_2(q)$ for Area 2.
Letting $q_i$ be the optimal solution of $J_i(q)$, we have $\hat{q}\leq\min\{q_1,q_2\}$. Also $\tilde{q}$ is the optimal solution of $J_1(q)+J_2(q)$. If $q_1=q_2=q$, then $\tilde{q}\geq\hat{q}$. Otherwise, we have
\begin{equation}
J_1(q_1) < J_1(\tilde{q})< J_1(q_2), \label{eq:proof_surplus2}
\end{equation}
\begin{equation}
J_2(q_2) < J_2(\tilde{q})<J_2(q_1). \label{eq:proof_surplus3}
\end{equation}
There are two possibilities that $\tilde{q}$ is smaller than both $q_1$ and $q_2$: (i) $\tilde{q}<q_2<q_1$, which violates \eqref{eq:proof_surplus2}; (ii) $\tilde{q}<q_1<q_2$, which violates \eqref{eq:proof_surplus3}. Therefore, $\tilde{q}\geq\hat{q}$ always holds.
Considering the importing area, whose real-time dispatch model is \eqref{eq:proof_cts1}, we have $\hat{\bar{\lambda}}_1^R >\tilde{\bar{\lambda}}_1^R$ and
\begin{equation}
\begin{array}{ll}
\tilde{LS}_i-\hat{LS}_i&=c_i(\hat{g}_i)-c_i(\tilde{g}_i)+\hat{\bar{\lambda}}_1^R\hat{q}-\tilde{\bar{\lambda}}_1^R\tilde{q}\\
&\geq \tilde{\bar{\lambda}}_1^R (\tilde{q}-\hat{q})+\hat{\bar{\lambda}}_1^R \hat{q}-\tilde{\bar{\lambda}}_1^R \tilde{q} \\
& = \hat{q} (\hat{\bar{\lambda}}_1^R-\tilde{\bar{\lambda}}_1^R) >0.
\end{array}
\end{equation}
Considering the exporting area, whose real-time dispatch model is \eqref{eq:proof_cts1} with $q$ replaced by $-q$, we have $\hat{\bar{\lambda}}_1^R < \tilde{\bar{\lambda}}_1^R$ and
\begin{equation}
\begin{array}{ll}
\tilde{LS}_i-\hat{LS}_i&=c_i(\hat{g}_i)-c_i(\tilde{g}_i)-\hat{\bar{\lambda}}_1^R\hat{q}+\tilde{\bar{\lambda}}_1^R\tilde{q}\\
&\geq \tilde{\bar{\lambda}}_1^R (\hat{q}-\tilde{q})-\hat{\bar{\lambda}}_1^R \hat{q}+\tilde{\bar{\lambda}}_1^R \tilde{q} \\
& = \hat{q} (\tilde{\bar{\lambda}}_1^R-\hat{\bar{\lambda}}_1^R) >0.
\end{array}
\end{equation}
Therefore, CTS achieves better local surpluses in both local markets.
\subsection{Motivation}
Much of the power grid in North America is operated by independent system operators (ISOs). Each ISO is responsible for the administration of the electricity market in its operating area. Neighboring areas are connected by tie-lines, which makes it physically possible for one ISO to import power from or export power to its neighbors. It thus makes economic sense that an ISO with high generation price imports power from its neighbors that have excess and less costly resources.
The process of setting power transfer from one regional market to another, generally referred to as the \textit{interchange scheduling}, is nontrivial. If maximizing the overall system efficiency is the objective, the power flow across different operating areas should be determined by the \textit{joint economic dispatch} (JED) that treats the entire operating region as one and minimizes the overall generation cost. But efficiency is not the only goal that governs the operations of ISOs.
ISOs in the deregulated electricity market must operate under the principles of fair representations of stakeholders (including market participants) and the financial neutrality, \textit{i.e.}, the independence with respect to traders of the market \cite[Page 152-153]{FERC2000}. As a result, ISOs rely on market participants to set the level of power transfer across market boundaries. These market participants aim to profit from arbitrage opportunities by submitting bids to buy from one area and offers to sell in another. It is then the responsibility of ISOs who assume no financial position in the process to clear and settle these bids in a fair and transparent fashion.
As pointed out in \cite{Oren1998}, short-term operational efficiency is not always aligned with financial neutrality; there is an inherent cost associated with any market solution. A market solution to interchange scheduling creates the so-called ``seams problem'' as defined by the additional price gap over the interface compared against the seamless operation by a single ``super ISO''. The hope is that a well-designed market solution becomes more efficient as the number of market participants increases, ultimately achieving seamless interfaces, and improves the long-term performance. To our knowledge, there is no existing market solution that provably achieves the efficiency of JED. The goal of this paper is to fill, at least partially, this gap.
\subsection{Literature Review}
The interchange scheduling is a classical problem, which goes back to the 1980s \cite{EarlyHED}. Existing solutions can be classified into two categories. The first is based on JED and aims to achieve by neighboring ISOs the best efficiency in a distributed fashion. To this end, there is an extensive literature based on primal decomposition methods \cite{Bakirtzis&Biskas:03TPS,Zhao&LitvinovZheng:14TPS,Li&Wu&Zhang&Wang::15TPS,GuoTongetc:16TPS,GuoBoseTong:17TPS} and dual decomposition methods \cite{ConejoAguado98TPS,Binetti&Etal:14TPS,Erseghe:15TPS,Chen&Thorp&Mount:03HICCS}. These methods, unfortunately, are not compatible with the existing market structure because they remove arbitrage opportunities for external market participants, in essence, requiring ISOs to trade directly with each other.
The second category includes market-based solutions that optimize the net interchange by clearing interface bids with the coordination among ISOs \cite{White&Pike:11WP,Chatterjee&Baldic:14COR}. These techniques ensure the financial neutrality of ISOs but increase the overall generation cost. The state of the art is the \textit{coordinated transaction scheduling} (CTS), which has been recently approved by FERC for implementations in ISO-NE, NYISO, PJM, and MISO \cite{FERCCTS_NENY,FERCCTS_PJMNY,FERCCTS_PJMMISO}. A key component of these methods is the use of proxy buses as trading points for external market participants. In the clearing process, power interchange is represented by injections and withdrawals at proxy buses. Consequently, the actual power flow may differ from the schedule, causing the so-called \textit{loop flow problem} \cite{LoopFlowReport}. Furthermore, CTS is limited to setting the interchange between two neighboring areas. In practice, an ISO may need to set multiple interfaces simultaneously. An extension to interchange scheduling involving more than two areas is nontrivial, see \cite{JiTong16PESGM}. Generalizations to CTS to a stochastic setting is considered in \cite{JiZhengTong:17TPS}.
\subsection{Contribution}
This paper aims to bridge the gap between the ultimate seamless solution achieved by JED and the more practical and necessarily market-based solutions. In particular, we propose a generalization of CTS, referred to as GCTS, which achieves asymptotically seamless interfaces under certain conditions. GCTS retains the structure of bidding, clearing, and settlement of CTS, thus causing no interruption of existing market operations. A key improvement over CTS is that GCTS eliminates the proxy bus approximation (thus the associated loop flow problem) inherent in all existing market-based interchange solutions. Another advantage over existing techniques is that GCTS is shown to be revenue adequate, \textit{i.e.}, the net revenue of each ISO is non-negative and is equal to its congestion rent.
GCTS can also be viewed as a generalization of JED with two modifications. First, similar to CTS but different from JED, GCTS minimizes not only the total generation cost but also the market cost of clearing interface bids. Second, similar to JED but different from CTS, GCTS solves a distributed optimization problem with all network constraints as well as an additional constraint that uses cleared interface bids to define the boundary state. This constraint is consistent with the principle of an independent and financially neutral ISO in the sense that it is the market participants who set the interchange.
GCTS does have its own shortcoming. Because GCTS solves a distributed optimization problem as in JED, it has the computation cost similar to that of JED and is more costly than CTS. We acknowledge but do not address this issue here except pointing out that some recent techniques \cite{Zhao&LitvinovZheng:14TPS,GuoTongetc:16TPS,GuoBoseTong:17TPS} that enjoy a finite-step convergence, which alleviate to some degree such costs.
The remainder of this paper is organized as follows. Section II reviews the CTS approach. Section III presents the model of multi-area power systems, the structure of interface bids, and their clearing and settlement process. Properties of GCTS are established in Section IV, and Section V presents the simulation results.
\subsection{Network Model}
\input Model
\subsection{Definition of Interface Bids}
GCTS uses the same format of bids as CTS. Namely, an interface bid $i$ is defined by a triple
\begin{equation}
\mathcal{B}\triangleq\{\langle B_{pm},B_{qn}\rangle, \Delta \pi_i, s_{{\rm max},i}\},\nonumber
\end{equation}
where
\begin{enumerate}
\item $\langle B_{pm},B_{qn}\rangle$ is an ordered pair of boundary buses specifying that the bid withdraws at bus $m$ in Area $p$ and injects the same amount at bus $n$ in Area $q$. The two buses need not be directly connected by a tie-line;
\item $\Delta \pi_i$ is its bid price on the anticipated price gap at which the bid is settled in the two real-time markets\footnote{This may not be equal to the LMP difference. See Subsection III-D and Remark 2 after Theorem \ref{thm:JED} for mathematical and economic interpretations.};
\item $s_{\rm max,i}$ is its maximum quantity.
\end{enumerate}
The only difference between the bid formats of CTS and GCTS is that, instead of using a single proxy bus in each area, GCTS allows bids to be submitted between all pairs of boundary buses across the interface, as illustrated in Fig.~\ref{fig:net_eq}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.29\textwidth]{Fig/net_eq1.eps}
\caption{\small Network equivalence on the boundary. Dotted-line arrows represent three interface bids in the example below: $s_1$ injects at $B_{11}$ and withdraws at $B_{21}$; $s_2$ injects at $B_{11}$ and withdraws at $B_{22}$; $s_3$ injects at $B_{12}$ and withdraws at $B_{22}$.}\label{fig:net_eq}
\end{figure}
We aggregate all interface bids with an incidence matrix $M_i$ associated with the boundary buses of Area $i$. Specifically, each row of $M_i$ corresponds to a boundary bus of Area $i$, and each column corresponds to an interface bid. The entry $M_{i}(m,k)$ equals one if interface bid $k$ buys power at boundary bus $B_{im}$ from Area $i$, minus one if it sells power at bus $B_{im}$ to Area $i$, and zero otherwise. For example, for the three bids illustrated in Fig.~\ref{fig:net_eq}, the matrices $M_i\ (i=1,2)$ are
\begin{equation}
M_1\!=\!
\left[\!
\begin{array}{ccc}
1\!&\!1\!&\!\!0\\
0\!&\!0\!&\!\!1
\end{array}\!
\right] \!\!\!\begin{array}{c}
(B_{11})\\
(B_{12})
\end{array}, M_2\!=\!
\left[\!
\begin{array}{ccc}
-1\!\!&\!0\!&\!0\\
0\!\!&\!\!-1\!\!&\!\!-1
\end{array}\!
\right]\!\!\! \begin{array}{c}
(B_{21})\\
(B_{22})
\end{array}.
\end{equation}
Consequently, let $s$ be the vector whose $i$th entry $s_i$ is the cleared quantity of bid $i$. Then $M_i s$ represents the aggregated equivalent power injection induced by the cleared interface bids at the boundary buses of Area $i$. Substituting $M_i s$ for the right-hand side of (\ref{eq:bnd_lf}), we have
\begin{equation}
\left[
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}&\tilde{Y}_{\bar{2}\bar{2}}
\end{array}
\right]\hspace{-0.1cm}\left[\!
\begin{array}{c}
\bar{\theta}_{1}\\
\bar{\theta}_{2}\\
\end{array}\!
\right]\!=\!\left[\!
\begin{array}{c}
M_1 s\\
M_2 s
\end{array}\!\right].
\label{eq:bnd_dclf}
\end{equation}
In (\ref{eq:bnd_dclf}), the interchange schedule is solely determined by the cleared interface bids from market participants. In the market clearing process of GCTS, presented in the next subsection, Equation (\ref{eq:bnd_dclf}) is incorporated as an equality constraint in the optimization model, where the internal bids $g_i$ and the interface bids $s$ are cleared together.
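As a concrete illustration of the bid aggregation, the following Python sketch (numpy only; the bid list and cleared quantities are hypothetical) constructs $M_1$ and $M_2$ for the three example bids above and evaluates the equivalent boundary injections $M_i s$:
\begin{verbatim}
import numpy as np

# Boundary buses per area and the three example bids of Fig. fig:net_eq.
# Each bid buys power (entry +1) at one boundary bus and sells power
# (entry -1) at another, following the definition of M_i(m, k) above.
boundary_buses = {1: ["B11", "B12"], 2: ["B21", "B22"]}
bids = [("B11", "B21"),   # bid 1: buys at B11 (Area 1), sells at B21 (Area 2)
        ("B11", "B22"),   # bid 2
        ("B12", "B22")]   # bid 3

def incidence(area):
    rows = boundary_buses[area]
    M = np.zeros((len(rows), len(bids)))
    for k, (buy_bus, sell_bus) in enumerate(bids):
        if buy_bus in rows:
            M[rows.index(buy_bus), k] = +1.0
        if sell_bus in rows:
            M[rows.index(sell_bus), k] = -1.0
    return M

M1, M2 = incidence(1), incidence(2)
print(M1)        # [[1. 1. 0.], [0. 0. 1.]], matching the example above
print(M2)        # [[-1. 0. 0.], [0. -1. -1.]]

s = np.array([30.0, 0.0, 10.0])   # hypothetical cleared quantities (MW)
print(M1 @ s)    # equivalent boundary injections M_1 s = [30. 10.]
print(M2 @ s)    # M_2 s = [-30. -10.]
\end{verbatim}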
\subsection{Market Clearing Mechanism}
GCTS preserves the architecture of CTS; it assumes the presence of a coordinator who collects interface bids and clears them via a look-ahead dispatch, and the interface bids are settled separately in the real-time markets. GCTS removes the proxy bus approximation, and its clearing of interface bids is based on a generalization of JED. The key idea is to clear interface bids by optimizing the boundary state as follows:
\begin{align}
\min \limits_{\{g_{i},s,\bar{\theta}, \theta_i\}}\! & c(g_1,g_2,s)=\sum \limits_{i=1}^{2}c_{i}(g_{i})+ \Delta \pi^T s, \label{eq:mc_obj}\\
\textrm{subject to}&\hspace{0.1cm}\check{g}_i\leq g_i\leq \hat{g}_i, i=1,2,\label{eq:glimits}\\
&\hspace{0.1cm}0\leq s\leq s_{\textrm{max}}, \label{eq:slimits}\\
&\hspace{0.1cm}H_{i}\theta_{i}+H_{\bar{i}}\bar{\theta}_{i}\leq f_{i},i=1,2, \label{eq:mc_linecons}\\
&\hspace{0.1cm}\bar{H}_{\bar{1}}\bar{\theta}_{1}+\bar{H}_{\bar{2}}\bar{\theta}_{2}\leq \bar{f}, \label{eq:mc_tieline}\\
&\hspace{-2.0cm} \left[\!
\begin{array}{cccc}
Y_{11}&Y_{1\bar{1}}&\mbox{}&\mbox{}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}&\mbox{}\\
\mbox{}&Y_{\bar{2}\bar{1}}&Y_{\bar{2}\bar{2}}&Y_{\bar{2}2}\\
\mbox{}&\mbox{}&Y_{2\bar{2}}&Y_{22}
\end{array}\!
\right]\!\!\left[\!
\begin{array}{c}
\theta_{1}\\
\bar{\theta}_{1}\\
\bar{\theta}_{2}\\
\theta_{2}
\end{array}\!
\right]\!=\!\left[\!
\begin{array}{c}
g_{1}-d_{1}\\
0\\
0\\
g_{2}-d_{2}
\end{array}\!\right], \label{eq:mc_pf}\\
& \hspace{-0.5cm} \left[
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}&\tilde{Y}_{\bar{2}\bar{2}}
\end{array}
\right]\!\left[\!
\begin{array}{c}
\bar{\theta}_{1}\\
\bar{\theta}_{2}\\
\end{array}\!
\right]\!=\!\left[\!
\begin{array}{c}
M_1 s\\
M_2 s
\end{array}\!\right],\label{eq:mc_bndpf}
\end{align}
where the decision variables are the cleared internal generation bids $g_i$ subject to the quantity limits (\ref{eq:glimits}), the cleared interface bids $s$ subject to the quantity limits (\ref{eq:slimits}), and the system states ($\theta_i, \bar{\theta}$) subject to the internal and tie-line power limits (\ref{eq:mc_linecons}) and (\ref{eq:mc_tieline}). Any bid $i$ with $s_i=s_{{\rm max},i}$ is fully cleared, any with $s_i=0$ is rejected, and any with $0<s_i<s_{{\rm max},i}$ is partially cleared at amount $s_i$.
Note that the term $\Delta\pi^T s$ represents the market cost of clearing interface bids. Because the real-time price difference is in general different from that in the look-ahead dispatch, market participants carry a certain amount of risk. The bid price $\Delta\pi_i$ thus represents the willingness of bidder $i$ to take that risk. See \cite{MEAN_VAR_1995} for details on the quantification of risk.
The market clearing model of GCTS (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) differs from JED in two aspects: (i) the market cost of clearing interface bids, $\Delta\pi^T s$, in the objective function, and (ii) the additional equality constraint (\ref{eq:mc_bndpf}), which determines the boundary state from the cleared interface bids subject to their quantity limits. The coordinator sets the interchange by clearing the interface bids so as to minimize the overall cost subject to the operational constraints as well as (\ref{eq:slimits}) and (\ref{eq:mc_bndpf}) imposed by the interface bids.
The clearing problem (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) of GCTS is a look-ahead economic dispatch in which the load powers $d_i$ are predicted values. Because ISOs do not share internal network data, it should be solved in a hierarchical or decentralized manner; any effective multi-area economic dispatch method can be employed. See, \textit{e.g.}, \cite{GuoTongetc:16TPS,Zhao&LitvinovZheng:14TPS}, where (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) is solved in a finite number of iterations.
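To make the structure of (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) concrete, the following Python/cvxpy sketch solves a deliberately small instance: a four-bus chain with one internal and one boundary bus per area, a single tie-line, unit susceptances, one generator per area with linear offers, and a single interface bid. All data are hypothetical, and the problem is solved centrally here, whereas an actual implementation would solve it in a distributed manner:
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Hypothetical 4-bus chain: bus0 (Area 1, internal) - bus1 (Area 1, boundary)
# - bus2 (Area 2, boundary) - bus3 (Area 2, internal); susceptances 1 p.u.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])        # full DC network matrix
Yt = np.array([[ 1., -1.], [-1., 1.]])      # tie-line-only boundary matrix
M = np.array([[ 1.], [-1.]])                # one bid: buys at bus1, sells at bus2
d = np.array([1.0, 1.0])                    # predicted loads in Areas 1 and 2
c = np.array([10.0, 30.0])                  # linear generation offers ($/MWh)
dpi, s_max, f = 1.0, 0.8, 5.0               # bid price, bid limit, flow limit

g = cp.Variable(2)        # cleared internal generation, one unit per area
s = cp.Variable(1)        # cleared interface-bid quantity
theta = cp.Variable(4)    # bus angles; theta[1:3] is the boundary state

p = cp.hstack([g[0] - d[0], 0.0, 0.0, g[1] - d[1]])  # nodal injections
constraints = [
    L @ theta == p,                 # full DC power flow, cf. (eq:mc_pf)
    Yt @ theta[1:3] == M @ s,       # boundary constraint, cf. (eq:mc_bndpf)
    cp.abs(cp.diff(theta)) <= f,    # line limits, cf. (eq:mc_linecons/tieline)
    0 <= g, g <= 3.0,               # generation limits, cf. (eq:glimits)
    0 <= s, s <= s_max,             # interface-bid limits, cf. (eq:slimits)
    theta[0] == 0,                  # angle reference for this toy network
]
prob = cp.Problem(cp.Minimize(c @ g + dpi * cp.sum(s)), constraints)
prob.solve()
print(g.value, s.value)  # expect s = 0.8: the cheap area exports to the limit
\end{verbatim}
In this instance the bid is fully cleared ($s=s_{\rm max}$) because the offer-price gap between the areas exceeds the bid price $\Delta\pi$, which is exactly the clearing behavior described after (\ref{eq:mc_bndpf}).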
\subsection{Real-time Dispatch and Settlement}
Interface bids are settled in the real-time market together with internal bids. There is no coordination required at this step. Specifically, ISO 1 solves its local economic dispatch with fixed boundary state $\bar{\theta}$:
\begin{eqnarray}
\min \limits_{\{g_{1},\theta_1\}} & c_{1} (g_{1}), & \label{eq:rt_obj} \\
\textrm{subject to}&\hspace{0.1cm}H_{1}\theta_{1}+H_{\bar{1}}\bar{\theta}_{1}\leq f_{1}, &(\eta_1^R) \label{eq:rt_line} \\
&\hspace{0.1cm}\check{g}_1\leq g_1\leq \hat{g}_1,& (\bar{\xi}_1^R, \underline{\xi}_1^R)\label{eq:rt_glimits}\\
&\hspace{-2.5cm}
\left[\!
\begin{array}{ccc}
Y_{11}&Y_{1\bar{1}}&\mbox{}\\
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}
\end{array}\!
\right]\!\!\left[\!
\begin{array}{c}
\theta_1\\
\bar{\theta}_1\\
\bar{\theta}_2
\end{array}\!
\right]\!\!=\!\!\left[\!
\begin{array}{c}
g_1-d_1^R\\
0
\end{array}\!\right],\!\!&\!\!\begin{array}{c}
(\lambda_1^R)\\
(\bar{\lambda}_1^R)
\end{array}\label{eq:rt_lf1}
\end{eqnarray}
where $d_1^{R}$ represents the real-time internal loads, which may deviate from their predictions in the look-ahead dispatch (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}). The real-time internal dispatch in each area must comply with the pre-determined interchange schedule. To this end, the boundary state $\bar{\theta}$ is fixed at the solution of Equation (\ref{eq:mc_bndpf}) with $s$ being the cleared interface bids obtained from (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}). The multipliers are given to the right of the corresponding constraints.
ISO 1 simultaneously settles internal and interface bids in the real-time market. Internal bids are settled at the LMP $\lambda_1^R$. To settle the interface bids, we analyze the sensitivity of the local optimal cost in (\ref{eq:rt_obj}) with respect to $s$. In the real-time dispatch (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}), the impact of the interface bids $s$ enters via the fixed boundary state $\bar{\theta}$. The sensitivity of the local optimal cost with respect to $\bar{\theta}$ is
\begin{equation}
\nabla_{\bar{\theta}} c_1^*=
\left[\!
\begin{array}{cc}
Y_{\bar{1}1}&Y_{\bar{1}\bar{1}}\\
&Y_{\bar{2}\bar{1}}
\end{array}\!
\right]\left[\!
\begin{array}{c}
\lambda_1^R \\
\bar{\lambda}_1^R
\end{array}\!
\right]+\left[\!
\begin{array}{c}
H_{\bar{1}}^T \eta_1^R \\
0
\end{array}\!
\right].
\end{equation}
The sensitivity of local optimal cost with respect to $s$ is
\begin{equation}
\nabla_{s} c_1^* \!=\! [\nabla_{s} \bar{\theta}]^T \nabla_{\bar{\theta}} c_1^* \!=\! M^T \left[\!
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}&\tilde{Y}_{\bar{2}\bar{2}}
\end{array}\!
\right]^{-1}\!\!\! \nabla_{\bar{\theta}} c_1^*\triangleq\! \mu_1^R. \label{eq:rt_mu}
\end{equation}
In the absence of tie-line congestion, interface bids pay the prices $\mu_1^R$ in Area 1 and $\mu_2^R$ in Area 2 (they are paid if $\mu_i^R<0$). In general, interface bids are not settled at LMPs, because the change of the objective function \eqref{eq:rt_obj} with an increment of the cleared $s$ differs from that with an increment of the load power.
If tie-lines are congested then, similarly to CTS, we compute congestion rents according to the look-ahead dispatch (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) and subtract them from the payment to the interface bidders. The tie-line congestion prices associated with the interface bids are calculated as
\begin{equation}
\rho=M^T \tilde{S}^T \bar{\eta}, \tilde{S}= [\bar{H}_{\bar{1}} \hspace{0.1cm} \bar{H}_{\bar{2}}]\left[
\begin{array}{cc}
\tilde{Y}_{\bar{1}\bar{1}}&Y_{\bar{1}\bar{2}}\\
Y_{\bar{2}\bar{1}}&\tilde{Y}_{\bar{2}\bar{2}}
\end{array}
\right]^{-1},
\label{eq:bndcongestion}
\end{equation}
where $\bar{\eta}$ is the shadow price of (\ref{eq:mc_tieline}), and $\tilde{S}$ is the shift factor of the boundary buses with respect to the tie-lines in Fig.~\ref{fig:net_eq}. Similarly to CTS, we split the congestion price $\rho$ evenly between the two areas\footnote{When there are more than two areas, tie-line congestion may induce positive shadow prices $\rho$ for interface bids over other interfaces. Nevertheless, the calculation of $\rho$ is the same as in \eqref{eq:bndcongestion}, and the shadow price should be split evenly between the neighboring areas.}. Namely, market participants pay $\mu_i^{R}+\frac{\rho}{2}$ in Area $i$, $i=1,2$.
If $d_i^R=d_i$, one can prove that the real-time dispatch levels and prices are consistent with the look-ahead dispatch, because fixing some variables at their optimal values does not change the optimal values of the other primal and dual variables. If the real-time dispatch (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}) is infeasible, ad hoc adjustments such as relaxations of the flow limits can be employed in practice.
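The congestion charge of (\ref{eq:bndcongestion}) can be illustrated on the same toy network used earlier. In the snippet below, the shadow price $\bar{\eta}$ is a hypothetical value, and a pseudo-inverse stands in for the inverse of the boundary matrix, which is singular in this reduced example (we assume $\tilde{Y}$ is invertible in the formulation above):
\begin{verbatim}
import numpy as np

# Toy boundary data matching the 4-bus chain above: boundary buses
# (bus1, bus2), a single tie-line between them, and one interface bid.
Yt = np.array([[1.0, -1.0], [-1.0, 1.0]])   # tie-line-only boundary matrix
Hbar = np.array([[1.0, -1.0]])              # tie-line flow = theta1 - theta2
M = np.array([[1.0], [-1.0]])               # incidence of the single bid
eta_bar = np.array([2.5])                   # hypothetical shadow price ($/MWh)

# Shift factor of boundary buses w.r.t. tie-lines; a pseudo-inverse is used
# because the reduced boundary matrix of this toy network is singular.
S_tilde = Hbar @ np.linalg.pinv(Yt)
rho = M.T @ S_tilde.T @ eta_bar
print(rho)  # [2.5]: the bid bears the full tie-line shadow price,
            # charged as rho/2 on each side of the interface.
\end{verbatim}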
\subsection{Efficiency and price convergence of GCTS}
By removing the proxy bus approximation and adopting an exact DC OPF model in (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}), we are able to establish several important properties of GCTS. The proofs are relegated to the Appendix.
We first show that GCTS asymptotically achieves seamless interfaces as more and more bidders participate in the competition at all possible pairs of trading locations. Intuitively, for GCTS to achieve the cost of JED, two conditions are necessary in general. First, there have to be enough bidders trying to capture the arbitrage profits across the interface that they drive $\Delta \pi \rightarrow 0$; this follows the standard economic argument of perfect competition. Second, the bids need to be diverse enough to make the matrix $M$ full row rank, so that the tie-line flows of GCTS can match those of JED. Both conditions are captured by the assumptions of the following theorem.
\begin{theorem}
\label{thm:JED}
(Asymptotic efficiency) Consider a market with $N$ independent interface bidders. Assume that (i) both JED and GCTS are feasible and each has a unique optimum, (ii) the number of aggregated bids for each pair of source and sink buses grows unbounded with $N$, and (iii) the bidding prices of all participants go to zero as $N \rightarrow \infty$, \textit{i.e.}, $\lim_{N\rightarrow \infty} \Delta \pi =0$. Then the tie-line power flows and the generations in each area scheduled by GCTS converge to those of JED as $N \rightarrow \infty$.
\end{theorem}
\textbf{Remark 1}: Recall that JED by a ``super ISO'' provides the lowest possible generation cost, thus achieving overall market efficiency. In practice, the power system is partitioned into multiple subareas operated by financially neutral ISOs, and interchange scheduling has to rely on bids from market participants. Such operational regulations naturally create seams at the interfaces. Theorem \ref{thm:JED} shows, however, that GCTS asymptotically achieves seamless interfaces under mild conditions. This indicates that GCTS leads to price convergence between regional electricity markets.
\textbf{Remark 2}: The price convergence implies that there is no arbitrage opportunity and that the dispatch level of GCTS is the same as that of JED. Note that, due to congestion, boundary buses may have different LMPs even under the administration of a ``super ISO''. The price convergence is therefore for the shadow prices of $s$. This also explains why interface bids should be settled at $\mu_i^R$ in \eqref{eq:rt_mu} rather than at LMPs.
\textbf{Remark 3}: The assumptions that the bidding locations are diverse enough and that $\Delta\pi$ goes to zero as $N$ increases reflect the interpretation that, as the number of bidders increases, there are always enough bids that can be cleared to satisfy the desired interchange level. Individually, each bidder seeks trading locations with seams and reduces its bidding price so as to have a better chance of being cleared.
\subsection{Relation Between GCTS and CTS}
Next we establish connections between GCTS and CTS. Specifically, we show that the two mechanisms are equivalent in a particular simple setting.
\begin{theorem}
\label{thm:CTS}
When there is a single tie-line between two areas, the clearing process of GCTS (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}) provides the same interchange as that of CTS.
\end{theorem}
\textbf{Remark:} A natural corollary of Theorem \ref{thm:CTS} is that, when there is a single tie-line between two areas and the real-time load is the same as the load considered in the interchange scheduling, CTS provides the optimal interchange schedule in the sense that the posterior real-time dispatch $g_i^R$ minimizes the total cost of all internal and external market participants.
In practice, however, neither condition in these two theorems is likely to hold. In such cases, our simulations show that GCTS generally has a lower overall cost than CTS and that its dispatch satisfies the security constraints. CTS, on the other hand, may violate security constraints due to the loop flow problem engendered by its proxy-bus approximation. See Section V for details.
\subsection{Revenue Adequacy}
In this subsection, we establish revenue adequacy for the real-time market (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}). Recall that, in a single-area economic dispatch, each area has a non-negative net revenue, which is equal to its congestion rent. We prove in the following theorem that, in GCTS, each area of an interconnected power system achieves revenue adequacy in the same fashion.
\begin{theorem}
\label{thm:Revenue}
Assume that the real-time dispatch (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}) is feasible and that the settlement process follows the description in Subsection III-D. Then the net revenue of each area is non-negative and is equal to its congestion rent.
\end{theorem}
\subsection{Local Performance}
ISOs are mainly responsible for the efficiency of their own regional markets rather than for the overall efficiency. Therefore, an ISO may be reluctant to implement an interchange scheduling approach that worsens its local performance for the sake of the overall efficiency. We partly address this issue in this subsection.
In the conventional interchange scheduling used before CTS, market participants split their bidding prices as $\Delta \pi=\pi_1+\pi_2$ and submit them separately to the two neighboring ISOs, who clear these bids independently. Only bids cleared in both markets are scheduled \cite{NENY:BeforeCTS}; in essence, the scheduled quantity is the minimum of the two cleared quantities. In the following theorem, we prove that, in a simple setting, GCTS achieves higher local surpluses in all areas than the conventional approach:
\begin{theorem}
\label{thm:Surplus}
Assume that (i) there is a single tie-line between the two neighboring areas, (ii) the real-time load demands are the same as their look-ahead predictions, and (iii) each market clearing problem has a unique optimum. Then
\begin{equation}
\tilde{LS}_i \geq \hat{LS}_i,
\end{equation}
where $\tilde{LS}_i$ is the local surplus of area $i$ in its real-time market (\ref{eq:rt_obj})-(\ref{eq:rt_lf1}) with $\bar{\theta}$ determined by the optimal $\tilde{s}$ cleared in GCTS (\ref{eq:mc_obj})-(\ref{eq:mc_bndpf}). Specifically, it is defined as
\begin{equation}
\tilde{LS}_i \triangleq (D_i-(\tilde{\lambda}_i^R)^T d_i)+((\tilde{\lambda}_i^R)^T \tilde{g}_i - c_i(\tilde{g}_i))+f_i^T \tilde{\eta}_i^R,
\end{equation}
where $D_i$ is the constant utility of consumers. Variables with tildes are evaluated at the solution with $\tilde{s}$. The local total surplus of Area $i$ is the sum of its consumer surplus, supplier surplus, and the surplus of transmission owners. The local surplus $\hat{LS}_i$, with $\hat{s}$ the result of separate clearing, is defined similarly.
\end{theorem}
We omit this result from our journal submission because it pertains mainly to CTS. In general, when there are multiple tie-lines, Theorem \ref{thm:Surplus} may not hold for GCTS. Nevertheless, it is important to examine the performance of regional markets, especially for power systems that span multiple regions or even countries. Investigating weaker conditions for Theorem \ref{thm:Surplus} is an interesting direction for future work.
\section{Introduction}
\input intro_v1
\section{Coordinated Transaction Scheduling}
\input CTS
\section{Generalized CTS}
\input Method
\section{Properties of GCTS}
\input Properties
\section{Numerical Tests}
\input Tests
\section{Conclusion}
The aim of this paper is to unify two major approaches to interchange scheduling: JED, which achieves the ultimate economic efficiency, and CTS, the state-of-the-art market solution. GCTS partially meets this goal by maintaining the same market structure as CTS while asymptotically achieving the economic efficiency of JED under the stated assumptions. GCTS also ensures the revenue adequacy of each system operator.
Several important issues not considered here require further investigation. Among these are the impacts of strategic behavior of market participants, uncertainties in real-time operations, and the asynchronous mode of interchange scheduling among more than two areas.
\input{appendix.tex}
{
\bibliographystyle{plain}
\subsection{Two-area 44-bus system}
GCTS was tested on a two-area system composed of the IEEE 14-bus system (Area 1) and the 30-bus system \cite{Case30} (Area 2). The system configuration and the reactances and capacities of the tie-lines are shown in Fig.~\ref{fig:sys}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig/Figtestsys.eps}
\caption{\small Configuration of the two-area power system}
\label{fig:sys}
\end{figure}
Two groups of simulations were conducted. The first illustrates the market clearing process of GCTS; the second compares GCTS with JED and CTS and numerically demonstrates the asymptotic convergence of GCTS to JED stated in Theorem \ref{thm:JED}.
\subsubsection{Illustration of the market clearing process}
Eight interface bids were considered in the first group of simulations. Their trading locations and prices $\Delta \pi$ are listed in TABLE \ref{table:bid}. Some market participants traded between boundary buses without direct connections, such as bids 2 and 5. The maximum quantity of every bid was set to 30 MW.
Starting from the default prices in Area 1, we used a weighting factor $w$ to generate scenarios with various degrees of price discrepancy. For all scenarios, the cleared interface bids, tie-line power flows, marginal prices, and system costs are presented in TABLE \ref{table:result}:
\begin{table}[ht]
\centering
\caption{\small Profile of Interface Bids}
\begin{tabularx}{11cm}{cccc}
\hline
Indices&Sell to&Buy from&Price (\$/MWh)\\
\hline
1& Bus 15 (Area 2) & Bus 5 (Area 1) & 1\\
2& Bus 28 (Area 2) & Bus 5 (Area 1) & 2\\
3& Bus 5 (Area 1) & Bus 15 (Area 2) &1.5\\
4& Bus 5 (Area 1) & Bus 28 (Area 2) &0.5\\
5& Bus 15 (Area 2)& Bus 9 (Area 1) &1.0\\
6& Bus 28 (Area 2)& Bus 9 (Area 1) &2.0\\
7& Bus 9 (Area 1) &Bus 15 (Area 2)&1.5\\
8& Bus 9 (Area 1) &Bus 28 (Area 2)&0.5\\
\hline
\end{tabularx}\label{table:bid}
\end{table}
\begin{table}[ht]
\centering
\caption{\small Results of Interface Bid Clearing}
\begin{tabularx}{11cm}{cccccc}
\hline
\multicolumn{2}{c}{Weighting factor}&0.1&0.15&0.2&1.0\\
\hline
\multirow{8}{*}{\shortstack{Cleared\\quantities\\of\\interface\\bids\\(MW)}}&1& 30 & 30 & 5.56 & 5.56 \\
&2& 0 & 0 & 0 & 0 \\
&3& 0 & 0 & 0 & 0\\
&4& 0 & 0 & 30 & 30\\
&5& 0 & 0 & 0 & 0\\
&6& 0 & 0 & 0 & 0\\
&7& 10.40 & 30 & 30 & 30\\
&8& 30 & 30 & 30 & 30\\
\hline
Tie-line & bus 15 to 5 & -8.66 & 5.53 & 38.60 & 38.60 \\
flow (MW) & bus 28 to 9 & 19.05 & 41.31 & 45.84 & 45.84 \\
\hline
\multirow{4}{*}{\shortstack{Marginal \\prices\\(\$/MWh)}}& bus 5 & -0.10 & 0.02 & 0.82 & 15.62 \\
& bus 9 & 2.90 & 4.34 & 6.49 & 45.96\\
& bus 15 & 1.40 & 1.04 & 1.82 & 16.62\\
& bus 28 & 0.0 & 0.0 & 0.0 & 0.0\\
\hline
\multirow{3}{*}{\shortstack{Market \\costs\\(\$/h)}} & Internal & 923.2 & 1148.2 & 1371.0 & 4525.4 \\
& Interface & 58.1 & 90 & 80.56 & 80.56 \\
& Total & 983.0 & 1238.2 & 1451.6 & 4605.9\\
\hline
\end{tabularx}\label{table:result}
\end{table}
The second block (second to ninth rows) of TABLE \ref{table:result} lists the cleared amounts of the interface bids. As $w$ increased, bids delivering power from Area 2 to Area 1 were cleared at greater quantities (see the fifth and eighth rows), while those delivering power in the opposite direction were cleared at smaller quantities (see the second row).
The third block reports the tie-line power flows, which were determined by the boundary power flow equation (\ref{eq:bnd_dclf}) and the cleared bid amounts in the second block. When $w=0.1$, the two tie-line power flows were in opposite directions. As $w$ increased, signifying greater price discrepancies, the tie-line power flows became unidirectional from the low-price area to the high-price area.
The fourth block lists the marginal prices of all boundary buses in the market clearing process, \textit{i.e.}, the multipliers associated with the boundary equality constraint (\ref{eq:mc_bndpf}). All bids whose prices were lower than the marginal price gaps between their trading points were fully cleared (compare the second, twelfth, and fourteenth rows at $w=0.1$ with the second row of TABLE \ref{table:bid}). All bids whose prices were higher were rejected (compare the third, twelfth, and fifteenth rows at $w=0.1$ with the third row of TABLE \ref{table:bid}). For partially cleared interface bids, the marginal price gaps between their trading points were equal to their bidding prices (compare the eighth, thirteenth, and fourteenth rows at $w=0.1$ with the eighth row of TABLE \ref{table:bid}).
The last block reports the generation costs, the costs of market participants, and the total costs per hour of the proposed approach. GCTS accounted for the total market cost of internal and interface bidders.
\subsubsection{Comparison with Existing Benchmarks}
In the second group of simulations, we compared the proposed method with existing approaches on tie-line scheduling. Specifically, the following methods were compared:
i) JED that minimized the total generation cost;
ii) CTS wherein proxy buses were selected as bus 5 in Area 1 and bus 15 in Area 2;
iii) The proposed mechanism of GCTS.
Default generation prices were considered in this test. We used bids similar to those in TABLE \ref{table:bid} for GCTS, but their quantity limits and prices were uniformly set to $s_{\rm max}=100$ MW and $\Delta \pi=\$0.1$/MWh. In CTS, all bids were placed at the proxy buses with the same quantity limits and prices.
We compared the market costs of the look-ahead interchange scheduling as well as those of the real-time local dispatch. For the latter, we generated 100 normally distributed realizations of the real-time load consumption, whose mean values were the look-ahead predictions (default values in the system data) and whose standard deviations were 5\% of the mean values. Comparisons of the net interchange quantities, the look-ahead generation and total costs, and the average real-time total costs over all samples are recorded in TABLE \ref{table:compare1}:
\begin{table}[ht]
\centering
\caption{\small Comparison of JED, CTS, and GCTS for the Two-area Test}
\begin{tabularx}{11cm}{cccc}
\hline
& JED & CTS & GCTS \\
\hline
Net interchange amounts (MW) & 87.0 & 80.3 & 87.0 \\
Look-ahead generation costs (\$/h) & 4039.8 & 4109.9 & 4039.8\\
Look-ahead total costs (\$/h) & -- & 4118.0 & 4048.5\\
Average real-time total costs (\$/h) & 4096.2 & 4139.8 & 4115.7\\
\hline
\end{tabularx}\label{table:compare1}
\end{table}
From TABLE \ref{table:compare1} we observe that GCTS achieved lower look-ahead and average real-time costs than CTS; specifically, GCTS had lower real-time costs in 88 of the 100 samples. In addition, CTS suffered from the loop-flow problem in that the branch power flows solved from the global power flow equation and the real-time dispatch levels in both areas deviated from the internal real-time schedules. In this test, the average discrepancies of the tie-line power flows were 18.25\% for Area 1 and 16.38\% for Area 2. As a result, CTS caused unanticipated overflows of transmission lines in all 100 scenarios, with 2.72 overflowed lines per scenario on average and an average overflow ratio of 11.27\%. In GCTS, such problems did not exist because it is based on the exact DC power flow model. Another takeaway from TABLE \ref{table:compare1} is that, with sufficient bids and relatively low prices ($\Delta \pi=\$0.1$/MWh), the interchange scheduled by GCTS was the same as that of JED in this test.
We illustrate the price convergence of GCTS for different values of $w$ in Fig. \ref{fig:priceconvergence} by adjusting the uniform bidding price $\Delta \pi$. No bid was cleared when the bidding price was $\Delta \pi=\$100$/MWh. When $\Delta \pi$ decreased to a small enough value (\$0.1/MWh in this test), the generation costs of GCTS in all scenarios were equal to those of JED. In general, the more significant the price discrepancy, the faster the price converged. This is consistent with the intuition that market participants can be cleared at higher prices when there is more room for arbitrage.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig/PriceConvergence.eps}
\caption{\small Price convergence of GCTS with different bidding prices}
\label{fig:priceconvergence}
\end{figure}
Note that such price convergence did not occur in CTS. For the test in TABLE \ref{table:compare1}, for example, even if the bidding price of CTS were set to zero, the total generation cost would be \$4109.7 per hour, still higher than that of JED.
\subsection{Three area 189-bus system test}
The proposed method was also tested on a three-area system, shown in Fig. \ref{fig:sys2}, composed of the IEEE 14-, 57-, and 118-bus systems. The power flow limits of all lines were set to 100 MW. Eight interface bids were considered: for each tie-line, there were two interface bids trading at its terminal buses in opposite directions. The prices and maximum quantities of all interface bids were set to \$0.5/MWh and 100 MW, respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig/ThreeAreaSys.eps}
\caption{\small Configuration of the three-area power system}
\label{fig:sys2}
\end{figure}
The market clearing results are given in Fig. \ref{fig:sys3}, where the internal parts of all areas are represented by their network equivalents. The cleared interface bids are denoted by power injections at the boundary buses. The tie-line power flows, determined by the DC power flow equation (\ref{eq:bnd_dclf}) for the network in Fig. \ref{fig:sys3}, are also shown.
The total cost of the three-area system was $\$1.263\times 10^{5}$/h, of which $\$601.43$/h was the cost of market participants and the rest was the generation cost. As a reference, without any interchange the total generation cost would be $\$1.394\times 10^{5}$/h. The reduction in generation cost far exceeded the cost of market participants.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Fig/ThreeAreaSys_neteq.eps}
\caption{\small Clearing of interface bids and tie-line power flows (MW)}
\label{fig:sys3}
\end{figure}
We performed comparisons of JED, CTS, and GCTS for this three-area test, similar to those of the two-area test; the results are given in TABLE \ref{table:compare2}. In CTS, the interchange schedules were set in a pairwise manner, and the proxy buses were always chosen as the buses with the smallest indices on each side.
\begin{table}[ht]
\centering
\caption{\small Comparison of JED, CTS, and GCTS for the Three-area Test}
\begin{tabularx}{12cm}{cccc}
\hline
& JED & CTS & GCTS \\
\hline
\!\!Look-ahead generation costs (\$/h) \!\! &\!\! $\!\!1.255 \!\times\! 10^5\!$ & $\!\!\!1.261 \!\times\! 10^5\!$ & $\!\!1.257 \!\times \!10^5\!$\\
\!\!Look-ahead total costs (\$/h) & \!\!\!-- & $\!\!\!1.262 \!\times\! 10^5\!$ & $\!\!1.257 \!\times\! 10^5\!$\\
\!\!Average real-time total costs (\$/h) \!\! & \!\!$\!1.255 \!\times\! 10^5\!$ & $\!\!\!1.263 \!\times\! 10^5\!$ & $\!\!1.263 \!\times \!10^5\!$\\
\hline
\end{tabularx}\label{table:compare2}
\end{table}
Our conclusions were similar to those of the two-area test. GCTS had a lower look-ahead cost than CTS and was close to JED. Although its real-time costs were similar to those of CTS, GCTS removed the loop-flow problem of CTS: CTS suffered from overflow problems in 92 of the 100 scenarios with randomly generated load powers.
\section{Introduction}
\indent Iron nitrides, especially the iron-rich phases, have been under intense investigation owing to their strong ferromagnetism and the interest in its physical origin \cite{Coey1999Magnetic-nitrid,Frazer1958Magnetic-Struct}. The difficulty in obtaining a single phase has been a long-standing problem for ferromagnetic iron nitrides, hindering a fundamental understanding of their intrinsic physical properties \cite{Coey1994The-magnetizati,Komuro1990Epitaxial-growt,Ortiz1994Epitaxial-Fe16N}. Recently, the successful epitaxial growth of single-phase ferromagnetic $\bf{\gamma}'$-Fe$_{4}$N has been reported on various substrates, which has helped to clarify the crucial role of the hybridization between Fe and N states in the ferromagnetism of $\bf{\gamma}'$-Fe$_{4}$N
\cite{Atiq2008Effect-of-epita,Borsa2001High-quality-ep,Gallego2004Mechanisms-of-e,Ito2011Spin-and-orbita,Nikolaev2003Structural-and-,Kokado2006Theoretical-ana,Ito2015Local-electroni}. The
robust Fe-N bonding also renders an Fe$_{2}$N layer strongly
two-dimensional \cite{Fang2014Predicted-stabi}, which possibly
facilitates a layer-by-layer stacking of $\bf{\gamma}'$-Fe$_{4}$N on metals. This contrasts
with the case of elemental 3$d$ transition metals (TMs) deposited on 3$d$ TM substrates, in which inevitable atom
intermixing and exchange of constituents prevent the formation of ordered
overlayers \cite{Kim1997Subsurface-grow,Nouvertne1999Atomic-exchange,Torelli2003Surface-alloyin}. Therefore,
the investigation into the electronic and magnetic states of
$\bf{\gamma}'$-Fe$_{4}$N atomic layers can not only elucidate the
layer-/site-selective electronic and magnetic states of
$\bf{\gamma}'$-Fe$_{4}$N, but unravel the origin of the strongly thickness-dependent physical properties in a thin-film
limit of 3$d$ TM ferromagnets \cite{Srivastava1997Modifications-o,Farle1997Anomalous-reori,Farle1997Higher-order-ma,Schulz1994Crossover-from-,Li1994Magnetic-phases,Straub1996Surface-Magneti,Weber1996Structural-rela,Meyerheim2009New-Model-for-M}.\\
\indent Here, we report two growth modes of
$\bf{\gamma}'$-Fe$_{4}$N/Cu(001) depending on preparation methods. The scanning tunneling
microscopy/spectroscopy (STM/STS) observations indicated a successful
growth of ordered trilayer $\bf{\gamma}'$-Fe$_{4}$N, without extra
nitrogen bombardment onto the existing structures. X-ray absorption
spectroscopy/magnetic circular dichroism (XAS/XMCD) measurements
revealed the thickness dependence of the magnetic moments of Fe atoms,
the origin of which was well explained by
first-principles calculations. Based on an atomically-resolved
structural characterization of the system, the layer-by-layer electronic and
magnetic states of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers have been
understood from both experimental and theoretical points of view.\\
\section{Methods}
\indent A clean Cu(001) surface was prepared by repetition of sputtering
with Ar$^{+}$ ions and subsequent annealing at 820 K. Iron was deposited at room temperature (RT) in a preparation chamber under an ultrahigh vacuum (UHV) condition
($<1.0\times10^{-10}$ Torr), using an electron-bombardment-type
evaporator (EFM, FOCUS) from a high-purity Fe rod
(99.998 \%). The STM measurements were
performed at 77 K in UHV ($<3.0\times10^{-11}$ Torr) using
electrochemically etched W tips. The differential conductance d$I$/d$V$ was recorded for STS using a
lock-in technique with a bias-voltage modulation of 20 mV and 719
Hz. The XAS and XMCD measurements were
performed at BL 4B of UVSOR-I\hspace{-.1em}I\hspace{-.1em}I \cite{Gejo2003Angle-resolved-,Nakagawa2008Enhancements-of} in a total
electron yield (TEY) mode. The degree of circular
polarization was $\sim 65\ \%$, and the x-ray propagation vector lay within the (1\=10) plane of a Cu(001) substrate. All the
XAS/XMCD spectra were recorded at
$\sim8\ {\rm K}$, with external magnetic field $B$ up to $\pm5$ T
applied parallel to the incident
x-ray. The symmetry and quality of the surface were also checked by low energy electron
diffraction (LEED) in each preparation chamber. First-principles calculations were performed within the density
functional theory in the local density approximation
\cite{Perdew1992Accurate-and-si}, using a self-consistent full-potential Green function method specially
designed for surfaces and
interfaces \cite{Luders2001Ab-initio-angle,Geilhufe2015Numerical-solut}.\\
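As an aside, the lock-in detection of d$I$/d$V$ described above can be illustrated with a short numerical sketch. Only the 20 mV and 719 Hz modulation parameters are taken from the text; the $I$-$V$ curve and all other values are hypothetical, and an ideal low-pass filter replaces the actual lock-in electronics:
\begin{verbatim}
import numpy as np

# Toy tunneling I-V curve (arbitrary units); purely illustrative.
def current(V):
    return np.tanh(V / 0.3)

f_mod, A, V0 = 719.0, 0.020, 0.10            # modulation and DC set point (V)
t = np.linspace(0.0, 1.0, 2_000_000, endpoint=False)   # 1 s of sampling

V = V0 + A * np.sin(2 * np.pi * f_mod * t)
I = current(V)

# Demodulate: multiply by the reference and average (ideal low-pass filter).
# The first-harmonic amplitude of I(t) is A * dI/dV to leading order.
X = 2.0 * np.mean(I * np.sin(2 * np.pi * f_mod * t))
print(X / A)                                 # lock-in estimate of dI/dV
print((1 / 0.3) / np.cosh(V0 / 0.3) ** 2)    # analytic dI/dV for comparison
\end{verbatim}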
\section{Results and Discussion}
\subsection{\label{secbilayer}Monolayer and bilayer-dot $\bf{\gamma}'$-F\lowercase{e$_{4}$}N}
\begin{figure}
\includegraphics[width=86mm]{Fig1_Multilayer-01.eps}
\caption{\label{fig1} (Color online) Topography and atomic structure of the monolayer
$\bf{\gamma}'$-Fe$_{4}$N on Cu(001). (a) Topographic image (100$\times$50 nm$^2$,
sample bias $V_{\rm s}=+1.0\ {\rm V}$, tunneling current $I=0.1\ {\rm nA}$)
of the monolayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). White lines represent
step edges of the Cu(001)
terraces. Color contrast is enhanced within each terrace. (b) Close view (2.5$\times$2.5 nm$^2$, $V_{\rm s}=0.25\ {\rm V}$, $I=45\ {\rm
nA}$) of the surface Fe$_{2}$N layer. The dimerization of Fe atoms is indicated by encirclement. (c) LEED pattern obtained with an incident electron energy of 100
eV. (d) Bulk crystal structure of $\bf{\gamma}'$-Fe$_{4}$N. A dotted parallelogram represents an Fe$_{2}$N plane. (e) Atomic structure of the
monolayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). (f) Schema illustrating $p4g(2\times2)$ reconstruction in the surface
Fe$_{2}$N layer of $\bf{\gamma}'$-Fe$_{4}$N. Arrows indicate the shift of the Fe atoms from an
unreconstructed $c(2\times2)$ coordination (dotted circles). For (d) to (f), large blue
(yellow) and small red spheres represent Fe (Cu) and N
atoms, respectively.}
\end{figure}
\indent Monolayer Fe$_{2}$N on Cu(001) was prepared prior to any growth
of multilayer $\bf{\gamma}'$-Fe$_{4}$N by the following cycle: N$^{+}$ ion
bombardment with an energy of 0.5 keV to a clean Cu(001) surface,
subsequent Fe deposition at RT, and annealing at 600 K. Note that the
monolayer Fe$_{2}$N is identical to Fe$_{4}$N on Cu(001) in the monolayer limit, and is thus also referred to as ``monolayer $\bf{\gamma}'$-Fe$_{4}$N'' hereafter. A topographic image of the sample after one growth cycle is shown in
Fig. \ref{fig1}(a). The monolayer $\bf{\gamma}'$-Fe$_{4}$N is formed on the
Cu terraces at $\sim$ 0.85 ML coverage. An atomically-resolved
image of that surface displayed in
Fig. \ref{fig1}(b) reveals a clear
dimerization of the Fe atoms, typical of ordered
$\bf{\gamma}'$-Fe$_{4}$N on Cu(001)
\cite{Gallego20051D-Lattice-Dist,Takahashi2016Orbital-Selecti}. A LEED
pattern of the surface is shown in
Fig. \ref{fig1}(c), which exhibits sharp spots with the corresponding
$p4g(2\times2)$ symmetry. It is known that
\cite{Gallego20051D-Lattice-Dist,Gallego2004Self-assembled-,Navio2007Electronic-stru,Takahashi2016Orbital-Selecti} the topmost layer of the $\bf{\gamma}'$-Fe$_{4}$N on
Cu(001) always consists of the Fe$_{2}$N plane in a bulk Fe$_{4}$N crystal
shown in Fig. \ref{fig1}(d). A schematic model of the monolayer
$\bf{\gamma}'$-Fe$_{4}$N is given in Fig. \ref{fig1}(e), composed of
a single Fe$_{2}$N plane on Cu(001). Accordingly, the surface Fe$_{2}$N plane undergoes a reconstruction to the $p4g(2\times2)$ coordination \cite{Gallego20051D-Lattice-Dist}, in which the Fe
atoms dimerize in two perpendicular directions as illustrated in Fig. \ref{fig1}(f).\\
\begin{figure}
\includegraphics[width=86mm]{Fig2_Multilayer-01.eps}
\caption{\label{fig2} (Color online) Topography of the bilayer
$\bf{\gamma}'$-Fe$_{4}$N dot on Cu(001). (a) Topographic image (120$\times$60
nm$^2$, $V_{\rm s}=-0.1\ {\rm V}$, $I=0.1\ {\rm nA}$)
of the monolayer (darker area) and dot-like bilayer $\bf{\gamma}'$-Fe$_{4}$N
on Cu(001). White lines represent step edges of the Cu(001)
terraces. Color contrast is enhanced within each terrace. (b,c) Upper panels:
Atomically-resolved topographic images (7$\times$3 nm$^2$, $I=2.0\ {\rm
nA}$) taken at (b) $V_{\rm s}=-0.1\ {\rm V}$ and (c) $+0.1\ {\rm V}$. Lower panels:
Height profiles measured along lines indicated in the upper panels. (d)
Proposed atomic structure of the bilayer-dot $\bf{\gamma}'$-Fe$_{4}$N on
Cu(001). Large blue
(yellow) and small red spheres correspond to Fe (Cu) and N
atoms, respectively.}
\end{figure}
\indent After repeating the growth cycles, we found a new structure different from the monolayer
$\bf{\gamma}'$-Fe$_{4}$N. Figure \ref{fig2}(a) displays the surface after two
growth cycles in total, namely, another cycle of the N$^{+}$ ion
bombardment, Fe deposition, and annealing onto the existing
monolayer $\bf{\gamma}'$-Fe$_{4}$N surface. Then,
the surface
becomes mostly covered with the monolayer $\bf{\gamma}'$-Fe$_{4}$N, which contains a small
number of bright dots. For a structural
identification of these dots, we measured atomically-resolved topographic
images and line profiles at different $V_{\rm
s}$ as shown in Fig. \ref{fig2}(b) and
\ref{fig2}(c). The dot structure imaged at $V_{\rm s}=-0.1\ {\rm V}$ reveals the dimerization of the Fe atoms, as on the monolayer $\bf{\gamma}'$-Fe$_{4}$N surface. This indicates that the topmost part of the dot
consists of the reconstructed Fe$_{2}$N. At positive $V_{\rm s}$ of +0.1 V, in contrast, the dot is
recognized as a single protrusion both in the topographic image and line
profile, while the surrounding monolayer $\bf{\gamma}'$-Fe$_{4}$N still shows
the Fe dimerization. This implies that the electronic structure of the dot differs from that of the monolayer $\bf{\gamma}'$-Fe$_{4}$N, owing to a difference in the subsurface atomic structure.\\
\indent The observed height difference between the dot and the monolayer
$\bf{\gamma}'$-Fe$_{4}$N ranges from 4 to 10 pm depending on $V_{\rm
s}$. These values are of the same order as the lattice mismatch between the bulk lattice constants of $\bf{\gamma}'$-Fe$_{4}$N/Cu(001) (380 pm) and Cu(001) (362 pm) \cite{Gallego20051D-Lattice-Dist}, but an order of magnitude smaller than the lattice constant of the $\bf{\gamma}'$-Fe$_{4}$N/Cu(001) itself. This
suggests that the topmost layer of the dot is not located above the
monolayer $\bf{\gamma}'$-Fe$_{4}$N surface but shares the Fe$_{2}$N plane with it. Furthermore, the bright
dot is composed of only four pairs of the Fe dimer as imaged in
Fig. \ref{fig2}(b), indicating that the difference in the atomic and/or
electronic structures is restricted within a small area. Considering the
above, it is most plausible that one Fe atom is
embedded just under the surface N atom at the dot center, and thus a bilayer
$\bf{\gamma}'$-Fe$_{4}$N dot is formed as
schematically shown in Fig. \ref{fig2}(d). This structure corresponds to a
minimum unit of the bilayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001).\\
\begin{figure}
\includegraphics[width=86mm]{Fig3_Multilayer-01.eps}
\caption{\label{fig3} (Color online) Topographic images (15$\times$15
nm$^{2}$) of the surface
after repetition of (a) two and (b) three growth cycles. The set point
is ($V_{\rm s},\ I$) = (+0.25 V, 5.0 nA) for (a) and (+0.1 V, 3.0 nA) for (b).}
\end{figure}
\indent This bilayer dot formed clusters by a further
repetition of the growth cycles. Figure \ref{fig3}(a) shows an enlarged
view of the iron-nitride surface after two growth cycles. The
coverage of the dot is estimated to be $\sim$ 5 \% of
the entire surface. Another growth cycle onto this surface led to an
increase in the dot density up to $\sim$ 40 \%, as shown in
Fig. \ref{fig3}(b). However, further repetitions of the cycles resulted in neither a
considerable increase in the dot density nor the formation of a continuous bilayer film. This can be attributed to an inevitable sputtering
effect in every growth cycle: an additional N$^{+}$
ion bombardment to the existing surface not only implanted
N$^{+}$ ions but also sputtered the surface, which caused the loss
of the iron nitrides already formed at the surface, as well as an increase in the surface roughness.\\
\indent To compensate for this loss of surface Fe atoms by the sputtering
effect, we also tried to increase the amount of deposited Fe per
cycle. Nonetheless, the number of Fe atoms that remained at the surface after annealing did not increase, possibly because of the
thermal metastability of Fe/Cu systems
\cite{Detzel1994Substrate-diffu,Memmel1994Growth-structur,Shen1995Surface-alloyin,Bayreuther1993Proceedings-of-}. The
isolated Fe atoms without any bonds to N atoms easily diffused into and were embedded in the
Cu substrate during the annealing process. As a result, only the imperfect bilayer $\bf{\gamma}'$-Fe$_{4}$N was obtained
through this method.\\
\subsection{\label{sectrilayer}Trilayer $\bf{\gamma}'$-F\lowercase{e$_{4}$}N film}
\begin{figure}
\includegraphics[width=86mm]{Fig4_Multilayer-01.eps}
\caption{\label{fig4} (Color online) Topography of the trilayer $\bf{\gamma}'$-Fe$_{4}$N film on Cu(001). Topographic
images (100$\times$100 nm$^2$) after (a) two and (b) three cycles of the Fe
deposition under N$_{2}$ atmosphere and subsequent annealing onto the
monolayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). The setpoint is $I=0.1\
{\rm nA}$, $V_{\rm s}=-0.1\ {\rm
V}$ for (a) and -0.05 V for (b). White lines indicate step edges of the Cu terraces. Color contrast is enhanced within each terrace. (c) Atomically-resolved topographic image
(4$\times$4 nm$^2$, $I=5.0\ {\rm nA},\ V_{\rm s}=-0.1\ {\rm V}$) of the
trilayer $\bf{\gamma}'$-Fe$_{4}$N surface. An
inset represents a LEED pattern of the sample shown in (b), obtained with
an incident electron energy of 100 eV. (d) Height profile measured along
the line indicated in (b). (e) XAS edge jump spectra of the trilayer
(solid) and monolayer (dotted) samples at the Fe and Cu
$L$ edges. The intensity is normalized to the Cu edge jump. (f) Atomic model expected for the trilayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). Blue (yellow) large and red
small spheres represent Fe (Cu) and N atoms, respectively.}
\end{figure}
\indent Multilayer $\bf{\gamma}'$-Fe$_{4}$N films were obtained by the following procedure. First, the monolayer
$\bf{\gamma}'$-Fe$_{4}$N was
prepared on Cu(001) as above. Then, 2 ML Fe was deposited under N$_{2}$ atmosphere (5.0$\times$10$^{-8}$ Torr) \footnote{We checked the ionization of nitrogen molecules/atoms without bombardment by an ion gun. The ion flux monitored at the Fe evaporator increased in proportion to the N$_{2}$ pressure, even at evaporator parameters far below those at which Fe started to evaporate. This indicates ionization of the N$_{2}$ molecules and/or N atoms around the evaporator, possibly by the thermal electrons created inside it. The N$^{+}$ and N$_{2}^{+}$ ions could then reach the surface together with the evaporated Fe atoms, or iron nitride was already formed before landing.} at RT, and the sample was annealed at
600 K. Figures \ref{fig4}(a) and \ref{fig4}(b) show topographic images
after two and three of the above-mentioned cycles, respectively. In the images, the coverage of a new bright area, distinct from the imperfect bilayer dots, increases monotonically as the cycles are repeated. A close view of that
new surface is displayed in Fig. \ref{fig4}(c), revealing the dimerized
(or even $c(2\times2)$-like dot) structures. Because a LEED pattern shown in the inset of Fig. \ref{fig4}(c)
exhibits the $p4g(2\times2)$
symmetry without extra spots, the topmost layer of this surface is
composed of the reconstructed Fe$_{2}$N plane
\cite{Takahashi2016Orbital-Selecti}. Therefore, these observations suggest that the
new area consists of a $\bf{\gamma}'$-Fe$_{4}$N structure different from both the monolayer and the bilayer dot.\\
\indent In order to determine the structure of this newly obtained
$\bf{\gamma}'$-Fe$_{4}$N, a typical height profile of the surface was recorded as shown in
Fig. \ref{fig4}(d). It is clear that the new structure is higher than both the
Cu surface and the surface including the monolayer/dot-like bilayer
$\bf{\gamma}'$-Fe$_{4}$N. This suggests that the new area is composed of
$\bf{\gamma}'$-Fe$_{4}$N thicker than bilayer. Quantitative information on the
thickness of the new structure could be obtained from Fe $L\
(2p\rightarrow3d)$ edge jump spectra shown in Fig. \ref{fig4}(e),
whose intensity is roughly proportional to the amount of
surface/subsurface Fe atoms. The sample
prepared by the same procedure as that shown in Fig. \ref{fig4}(b) reveals an edge jump value of 0.32, while the monolayer $\bf{\gamma}'$-Fe$_{4}$N shows 0.12 \footnote{The amount of the Fe
atoms detected in the edge-jump spectra was smaller than that expected
from the initially deposited ones. This implies that a certain amount of Fe
atoms, not participating in forming any $\bf{\gamma}'$-Fe$_{4}$N structures,
was embedded into the Cu substrate during annealing, at least several nms
(probing depth in the TEY mode) below the surface.}. Considering that the new area
occupies $\sim$ 60 \% of the entire surface as deduced from
Fig. \ref{fig4}(b), the thickness of this $\bf{\gamma}'$-Fe$_{4}$N must be less than
a quadlayer to meet the experimental edge jump value of 0.32 (see Appendix
\ref{jumptoML}). Hence, the newly
obtained structure is identified as a trilayer $\bf{\gamma}'$-Fe$_{4}$N
film. An atomic structure expected for
the trilayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001) is presented in Fig. \ref{fig4}(f). The
growth without any ion bombardment to the monolayer surface possibly
stabilizes the subsurface pure Fe layer, which
could promote the formation of the
trilayer $\bf{\gamma}'$-Fe$_{4}$N film in a large area.\\
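The thickness assignment can be summarized by a crude consistency check in Python, assuming the TEY edge jump is proportional to the number of Fe-containing layers, with equal Fe content per layer and electron-escape-depth attenuation neglected (see Appendix \ref{jumptoML} for the proper treatment):
\begin{verbatim}
# Crude consistency check: the TEY edge jump is assumed proportional to the
# number of Fe-containing layers (equal Fe content per layer, no attenuation).
jump_monolayer, cov_monolayer = 0.12, 0.85       # monolayer reference sample
jump_per_layer = jump_monolayer / cov_monolayer  # ~0.14 per full layer

cov_new, cov_rest = 0.60, 0.40   # new phase vs. residual monolayer coverage
for n_layers in (2, 3, 4):
    jump = jump_per_layer * (cov_new * n_layers + cov_rest * 1)
    print(n_layers, round(jump, 2))
# -> 2: 0.23, 3: 0.31, 4: 0.40; only n = 3 is compatible with the
#    measured edge jump of 0.32.
\end{verbatim}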
\indent Finally, let us mention another growth method of the
$\bf{\gamma}'$-Fe$_{4}$N film. We previously reported a possible layer-by-layer growth of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers on Cu(001) by N$^{+}$ ion bombardment with a relatively low energy of 0.15 keV \cite{Takagi2010Structure-and-m}. This soft
implantation of N$^{+}$ ions successfully
avoids extra damage to the existing $\bf{\gamma}'$-Fe$_{4}$N structures
during the repetition of the growth cycles. The different electronic/magnetic states reported could then originate from the difference in the fabrication processes. Another
finding is that, in the current study, only the monolayer and trilayer $\bf{\gamma}'$-Fe$_{4}$N
could be obtained in a continuous film form. This
implies that an Fe$_{2}$N-layer termination is preferred with the present methods, possibly due to
the metastability of an interface between Cu and pure Fe layers \cite{Detzel1994Substrate-diffu,Memmel1994Growth-structur,Shen1995Surface-alloyin,Bayreuther1993Proceedings-of-}.\\
\subsection{\label{secelemag}Electronic and magnetic properties of $\bf{\gamma}'$-F\lowercase{e$_{4}$}N atomic layers}
\begin{figure}
\includegraphics[width=60mm]{Fig5_Multilayer-01.eps}
\caption{\label{fig5} (Color online) Surface electronic structures of
the $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). Experimental d$I$/d$V$
spectra recorded above the trilayer (solid) and monolayer (dotted)
$\bf{\gamma}'$-Fe$_{4}$N surfaces are presented. The d$I$/d$V$ intensity is arbitrary. A STM
tip was stabilized at $V_{\rm s}=+1.0\ {\rm V}$, $I=3.0$ and 7.0 nA for
the trilayer and monolayer surfaces, respectively. Gray lines are
guide to the eye.}
\end{figure}
\indent The surface
electronic structures of $\bf{\gamma}'$-Fe$_{4}$N showed a strong dependence on the sample thickness. Figure \ref{fig5} displays experimental d$I$/d$V$
spectra measured on the surfaces of the
trilayer and monolayer $\bf{\gamma}'$-Fe$_{4}$N. The
peaks located at $V_{\rm s}\sim$ +0.20, +0.55, and +0.80 V, mainly originating
from the unoccupied states in the down-spin band characteristic of
Fe local density of states (LDOS), are observed for both the trilayer and monolayer
surfaces. A significant
difference between the spectra is a dominant peak located around $V_{\rm
s}=-50\ {\rm mV}$ observed only for the trilayer surface. This peak possibly originates
from the LDOS peak located around $E-E_{\rm F}=-0.2\ {\rm eV}$, calculated
for the Fe atoms not bonded to N atoms in the subsurface Fe layer
[corresponding to the Fe4 site shown in Fig. \ref{fig7}(b)]. Because of the $d_{\rm 3z^2-r^2}$
orbital character, this peak could be dominantly detected in the STS
spectrum for the trilayer surface. Thus, the appearance of this additional peak could support the different subsurface structure of the trilayer sample,
especially the existence of the subsurface Fe layer proposed
above.\\
\begin{figure*}
\includegraphics[width=178mm]{Fig6_Multilayer-01.eps}
\caption{\label{fig6} (Color online) Thickness-dependent electronic
and magnetic
properties of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers on Cu(001). (a) Upper panels: XAS spectra under $B=\pm5\ {\rm T}$ of the trilayer (left) and monolayer (right) samples in the grazing (top) and normal
(bottom) incidence. Lower panels: Corresponding XMCD spectra in the grazing (solid) and normal
(dotted) incidence. All the
spectra are normalized to the Fe XAS $L$-edge
jump. (b) Upper [lower] panel: Experimental spin [orbital] magnetic moment in the grazing (circle) and normal
(square) incidence plotted with respect to the Fe $L$-edge jump values. The edge jump values of 0.12 and 0.32 correspond to
those of the monolayer and trilayer samples, respectively. Dotted lines are
guide to the eye. Error bars are indicated to all the data, and smaller
than the marker size if not seen. (c) Magnetization of the monolayer sample recorded in the grazing (circle and line)
and normal (square) incidence. A dotted line is the guide to
the eye. An inset shows an enlarged view of the curve recorded in the
grazing incidence.}
\end{figure*}
\indent The entire electronic and magnetic
properties of the sample, including both surface and subsurface information, were investigated by using XAS and XMCD techniques at the
Fe $L_{2,3}\ (2p_{1/2,3/2}\rightarrow 3d)$ absorption edges. Figure
\ref{fig6}(a) shows XAS ($\mu_{+},\ \mu_{-}$) and XMCD
($\mu_{+}-\mu_{-}$) spectra under $B=\pm5\
T$ of the trilayer and monolayer samples in the grazing
($\theta=55^{\circ}$) and normal incidence ($\theta=0^{\circ}$). Here, $\mu_{+}\ (\mu_{-})$
denotes an x-ray absorption spectrum with the photon helicity parallel (antiparallel) to the Fe 3$d$
majority spin, and an incident angle $\theta$ is
defined as that between the sample normal and incident x-ray. The
trilayer (monolayer) sample was prepared in the same procedure as that
shown in Fig. \ref{fig4}(b) [Fig. \ref{fig1}(a)]. It is clear that the XMCD intensity is larger in the trilayer
one, indicating an enhancement of magnetic moments with increasing
thickness.\\
\indent For a further quantitative analysis on the magnetic moments, we applied
XMCD sum rules \cite{Carra1993X-ray-circular-,Thole1992X-ray-circular-}
to the obtained spectra and estimated spin ($M_{\rm spin}$) and orbital
($M_{\rm orb}$) magnetic moments separately. Note that the average number of 3$d$
holes ($n_{\rm hole}$) of 3.2 was used in the sum-rule analysis, which
was estimated by comparing the area of the experimental XAS spectra with that of a reference spectrum of
bcc Fe/Cu(001) ($n_{\rm hole}=3.4$) \cite{Chen1995Experimental-Co}. The
thickness dependence of the $M_{\rm spin}$ and $M_{\rm orb}$ values is
summarized in Fig. \ref{fig6}(b). The value of $M_{\rm spin}$ increases
monotonically with the Fe $L$-edge jump value, namely, the average sample thickness, and finally saturates
at $\sim1.4\ \mu_{\rm B}$/atom in the trilayer sample (corresponding
edge jump value of 0.32). The change in $M_{\rm orb}$ is less systematic than that in $M_{\rm spin}$; however, the $M_{\rm orb}$ values appear to be enhanced in the grazing incidence. This implies
an in-plane easy magnetization of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers on
Cu(001), also consistent with the previous reports on the
$\bf{\gamma}'$-Fe$_{4}$N thin films on Cu(001) \cite{Gallego2004Mechanisms-of-e,Takagi2010Structure-and-m}. Figure \ref{fig6}(c) shows magnetization curves of the
monolayer sample, whose intensity corresponds to the $L_{3}$-peak XAS intensity normalized to the $L_{2}$ one. The curve
recorded in the normal incidence shows negligible remanent
magnetization. On the other hand, that in the grazing one traces a
rectangular hysteresis loop, which confirms the in-plane easy magnetization. The coercivity
of the monolayer sample is estimated to be $\sim$ 0.05 T at 8.0 K, larger
than $\sim$ 0.01 T for 5 ML Fe/Cu(001)
\cite{Li1994Magnetic-phases}, $\sim$ 1 mT for 5 ML
Fe/GaAs(100)-(4$\times$6) \cite{Xu1998Evolution-of-th} and the 30 nm
thick $\bf{\gamma}'$-Fe$_{4}$N film \cite{Gallego2004Mechanisms-of-e} at RT.\\
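The sum-rule analysis described above can be sketched in a few lines of Python. The spectra below are synthetic placeholders, and corrections for the $\sim65\ \%$ circular polarization, the incidence geometry, and the magnetic dipole term $\langle T_z\rangle$ are all omitted; the formulas follow the standard Fe $L_{2,3}$ form of \cite{Chen1995Experimental-Co} with $n_{\rm hole}=3.2$ as in the text:
\begin{verbatim}
import numpy as np

# Synthetic placeholder spectra on an energy grid (eV); assumed
# background-subtracted so the integrals below are white-line areas.
E = np.linspace(700.0, 740.0, 2001)
mu_plus = 1.2 * np.exp(-((E - 708) / 1.5) ** 2) \
        + 0.5 * np.exp(-((E - 721) / 1.5) ** 2)
mu_minus = 0.8 * np.exp(-((E - 708) / 1.5) ** 2) \
         + 0.6 * np.exp(-((E - 721) / 1.5) ** 2)

n_hole = 3.2            # average number of 3d holes (from the text)
E_split = 715.0         # energy separating the L3 and L2 regions (assumed)

integ = lambda y, x: float(np.sum(y) * (x[1] - x[0]))  # simple Riemann sum
xmcd, xas, L3 = mu_plus - mu_minus, mu_plus + mu_minus, E < E_split

p = integ(xmcd[L3], E[L3])   # XMCD integral over L3
q = integ(xmcd, E)           # XMCD integral over L3 + L2
r = integ(xas, E)            # XAS (white-line) integral

M_orb = -4.0 * q / (3.0 * r) * n_hole        # orbital sum rule
M_spin = -(6.0 * p - 4.0 * q) / r * n_hole   # spin sum rule, <T_z> neglected
print(M_orb, M_spin)                         # in mu_B/atom
\end{verbatim}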
\subsection{\label{sectheory}Theoretical analysis on the electronic and
magnetic states of $\bf{\gamma}'$-F\lowercase{e$_{4}$}N atomic layers on Cu(001)}
\begin{figure}
\includegraphics[width=86mm]{Fig7_Multilayer-01.eps}
\caption{\label{fig7} (Color online) Layer-by-layer electronic states of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers on Cu(001). Calculated layer-resolved DOS
projected onto each 3$d$ orbital of the (a) monolayer and (b) trilayer $\bf{\gamma}'$-Fe$_{4}$N on Cu(001). The DOS in the up- (down-)
spin band is shown in the upper (lower) panels. Note that the
states with $d_{\rm yz}$ and $d_{\rm zx}$ orbitals are
degenerate for the Fe3 and Fe4 sites in (b).}
\end{figure}
\begin{table}
\caption{\label{tab1} Calculated atomic magnetic
moments of the Fe atoms at each site (in units of $\mu_{\rm
B}$/atom). The site notation is the same as that used in Fig. \ref{fig7}.}
\begin{tabular}{c|>{\centering\arraybackslash}p{3.0em}|>{\centering\arraybackslash}p{3.0em}|>{\centering\arraybackslash}p{3.25em}|>{\centering\arraybackslash}p{3.25em}|>{\centering\arraybackslash}p{3.5em}|>{\centering\arraybackslash}p{3.5em}}
&\multicolumn{2}{>{\centering\arraybackslash}p{6.0em}|}{Surface Fe$_{2}$N}&\multicolumn{2}{>{\centering\arraybackslash}p{6.5em}|}{Subsurface Fe} &\multicolumn{2}{p{7.0em}}{Interfacial Fe$_{2}$N}\\
& Fe1& Fe2 & Fe3 & Fe4 & Fe5 & Fe6\\ \hline
Monolayer& 1.1 & 1.1 & -&-&-&-\\ \hline
Trilayer& 1.8 & 1.8 & 2.0 & 3.0 & 0.62 &0.62
\end{tabular}
\end{table}
\indent The observed thickness dependence of the magnetic moments can be
well understood with the help of first-principles calculations. Figures \ref{fig7}(a)
and \ref{fig7}(b) show the layer-resolved DOS of the monolayer and trilayer
$\bf{\gamma}'$-Fe$_{4}$N on Cu(001), respectively. Here, non-equivalent Fe sites in
each layer are distinguished by different numbering. In particular, the Fe atoms at the Fe3 (Fe4) site in the trilayer
$\bf{\gamma}'$-Fe$_{4}$N correspond to those with (without) a bond to
N atoms \footnote{The DOS of the two sites within each pair, (Fe1,
Fe2) in the monolayer $\bf{\gamma}'$-Fe$_{4}$N and (Fe1, Fe2) and (Fe5, Fe6) in the trilayer one, differ only by a switch of the orbital
assignment between $d_{\rm yz}$ and $d_{\rm zx}$. Therefore, the DOS of
Fe2 in the monolayer $\bf{\gamma}'$-Fe$_{4}$N, and of Fe2 and Fe6 in the
trilayer one, is not presented here.}. Table \ref{tab1} also lists the
calculated values of the atomic magnetic moment $M_{\rm atom}$, corresponding to
$M_{\rm {spin}}$ + $M_{\rm {orb}}$ along the easy magnetization direction. In the monolayer case, the
calculated $M_{\rm atom}$ is 1.1 $\mu_{\rm B}$/atom, in perfect
agreement with the experimental value. This supports the ideal
atomic structure of our monolayer sample.\\
\indent Interestingly, the value of $M_{\rm atom}$ for the Fe atoms in the
monolayer $\bf{\gamma}'$-Fe$_{4}$N is more than a factor of 1.5 smaller than
that in the topmost layer of the
trilayer one (1.83 $\mu_{\rm B}$/atom). Comparing with the DOS shown at the top of
Fig. \ref{fig7}(b), the impact of the hybridization with the Cu states
on the Fe DOS can be seen in Fig. \ref{fig7}(a): First, the DOS in the
up-spin band, especially that with $d_{\rm 3z^2-r^2}$ and $d_{\rm yz}$ orbitals, develops
a tail extending above $E_{\rm F}$. This
change means that the 3$d$ electrons in the up-spin band are no longer
fully occupied. Moreover, the spin asymmetry of the occupied 3$d$
electrons (the difference between the electron occupations of the two spin bands
normalized by their sum) is reduced, especially for the DOS with
$d_{\rm xy}$, $d_{\rm 3z^2-r^2}$ and $d_{\rm yz}$ orbitals. These changes could decrease $M_{\rm
spin}$ of the Fe atoms. Note that a similar reduction in the magnetic
moments of 3$d$ TMs due to hybridization with Cu states has been
reported, for example, in
Refs. \onlinecite{Tersoff1982Magnetic-and-el,Hjortstam1996Calculated-spin}.\\
\indent Then, comparing the two different
Fe$_{2}$N interfaces with the Cu substrate, it turns out that $M_{\rm atom}$ of the
monolayer $\bf{\gamma}'$-Fe$_{4}$N (1.1 $\mu_{\rm B}$/atom) is almost twice
that of the trilayer one (0.62 $\mu_{\rm B}$/atom). In the monolayer case, the Fe$_{2}$N layer faces vacuum and the
Fe atoms are under reduced atomic coordination. This results in a narrower
bandwidth, and thus the DOS intensity increases in the
vicinity of $E_{\rm F}$. Accordingly, a larger exchange splitting becomes
possible and the spin asymmetry of the occupied 3$d$ electrons increases, as shown in Fig. \ref{fig7}(a), compared to the interfacial Fe$_{2}$N layer of the trilayer
$\bf{\gamma}'$-Fe$_{4}$N [bottom panel of Fig. \ref{fig7}(b)]. This leads to larger magnetic
moments at the surface. As a result, the competition between
the enhancement at the surface and the
decrease at the interface makes the $M_{\rm atom}$ values quite layer-sensitive.\\
\indent In the subsurface Fe layer of
the trilayer $\bf{\gamma}'$-Fe$_{4}$N, the value of $M_{\rm atom}$ becomes
the largest owing to the bulk-like coordination of the Fe atoms. In particular, the Fe atoms not
bonded to N possess an $M_{\rm atom}$ of 3.0 $\mu_{\rm B}$/atom,
which is comparable to the values of Fe atoms at the same site in
bulk $\bf{\gamma}'$-Fe$_{4}$N \cite{Frazer1958Magnetic-Struct}. Consequently, by averaging the
layer-by-layer $M_{\rm atom}$ values of the trilayer
$\bf{\gamma}'$-Fe$_{4}$N, the total magnetic moment
detected in the XMCD measurement is expected to be 1.7 $\mu_{\rm
B}$/Fe, with the electron escape depth taken into account (see Appendix
\ref{jumptoML}). Considering the composition expected for the trilayer
sample, this value explains well the experimental one of $\sim$ 1.5 $\mu_{\rm
B}$/Fe.\\
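For orientation, the depth weighting behind this estimate can be summarized as follows (a minimal sketch; the full treatment is given in Appendix \ref{jumptoML}): the moment probed by XMCD is approximately the escape-depth-weighted average of the layer moments,
\begin{equation*}
\bar{M} \simeq \frac{\sum_{l} \bar{M}_{l}\, e^{-z_{l}/\lambda}}{\sum_{l} e^{-z_{l}/\lambda}},
\end{equation*}
where $\bar{M}_{l}$ is the average moment of layer $l$ (Table \ref{tab1}), $z_{l}$ is its depth below the surface, and $\lambda$ is the electron escape depth. Because this weighting favors the high-moment surface and subsurface layers over the low-moment interface, the result lies slightly above the unweighted average of $\sim$1.6 $\mu_{\rm B}$/Fe.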
\indent The theory also accounts for the direction of the easy magnetization
axis. The in-plane easy magnetization of
our $\bf{\gamma}'$-Fe$_{4}$N samples was confirmed by the magnetization
curves as well as by the incidence
dependence of the $M_{\rm orb}$ value. In contrast, pristine ultrathin Fe
films, which form either fct or fcc structures on Cu(001), show uncompensated
out-of-plane spins over a few surface layers
\cite{Pescia1987Magnetism-of-Ep,Meyerheim2009New-Model-for-M}. This change
of magnetic anisotropy upon nitridation can be understood from the orbital-resolved Fe DOS shown in Figs. \ref{fig7}(a) and \ref{fig7}(b). Unlike in
the pure Fe/Cu(001) system \cite{Lorenz1996Magnetic-struct}, the occupation of 3$d$ electrons in states with out-of-plane-oriented orbitals ($d_{\rm yz},\
d_{\rm zx},\ d_{\rm 3z^2-r^2}$) is considerably larger than that with
in-plane-oriented ones ($d_{\rm
xy},\ d_{\rm x^2-y^2}$). This could make $M_{\rm orb}$ prefer to align
within the film plane, resulting in the in-plane magnetization of the system
\cite{Bruno1989Tight-binding-a}.\\
\section{Summary}
\indent In conclusion, we have conducted a detailed study of the growth,
electronic, and magnetic properties of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers on
Cu(001). An ordered trilayer film of $\bf{\gamma}'$-Fe$_{4}$N can be
prepared by Fe deposition under an N$_{2}$ atmosphere onto the existing monolayer surface. On the other hand, repeating
growth cycles that include high-energy N$^{+}$ ion implantation
resulted in an imperfect bilayer of $\bf{\gamma}'$-Fe$_{4}$N. The STM and STS observations revealed changes in the surface topography
and electronic structure with increasing sample thickness. The XAS
and XMCD measurements likewise showed a thickness dependence of the spectra and the corresponding
evolution of the $M_{\rm spin}$ values. All of the observed thickness
dependence of the electronic and magnetic properties is well explained
by the layer-resolved DOS calculated from first
principles. The structural perfection of the system makes it possible
to fully comprehend the layer-by-layer electronic/magnetic states of the $\bf{\gamma}'$-Fe$_{4}$N atomic layers.\\
\section{Acknowledgement}
\indent This work was partly supported by the JSPS Grant-in-Aid for Young Scientists (A), Grant No. 16H05963, for Scientific Research (B),
Grant No. 26287061, the Hoso Bunka Foundation, Shimadzu Science
Foundation, Iketani Science and Technology Foundation, and
Nanotechnology Platform Program (Molecule and Material Synthesis) of the
Ministry of Education, Culture, Sports, Science and Technology (MEXT),
Japan. Y. Takahashi was supported by
the Grant-in-Aid for JSPS Fellows and the Program for Leading Graduate
Schools (MERIT). A.E. acknowledges funding by the German Research
Foundation (DFG Grants No. ER 340/4-1).\\
\section{Conclusion and Future Directions}\label{sec:conclusion}
We presented a framework for evaluating the performance of callout mechanisms in repeated auctions using historical data only. Our framework is general enough to enable the study of other heuristics in settings beyond those considered here (e.g. alternative auction mechanisms, bidding distributions, etc.).
In the future, we intend to investigate the performance of more complicated callout mechanisms, including ones with more sophisticated learning steps;
ones that combine multiple heuristics in a single score; ones that target bidders by means not easily represented by single-metric thresholding; and mechanisms that use online dynamic (as opposed to myopic) targeting.
\section*{Acknowledgments}
\bibliographystyle{apalike}
\section{Experiments}\label{sec:experiments}
In this section we demonstrate our framework on both synthetic and real-world auction data. By simulating each mechanism on such data, we estimate its immediate revenue impact and social welfare impact using the estimators proposed in Section \ref{subsec:estimators}, and compare them with the baselines in Section \ref{sec:baselines}. As predicted earlier, we see that most of our heuristics consistently outperform the baselines.
At a high level, our simulator receives the auction data and processes it one auction at a time using the specified heuristic. For any given item, the simulator decides which subset of bidders to call to the auction for that item (by setting the threshold value), simulates the auction mechanism among those bidders, and finally calculates the revenue and social welfare of the auction. By changing the threshold value $\theta$, the percentage of called-out bidders varies, which allows us to obtain a range of values for the performance metrics of each heuristic as a function of the percentage of bidders called out.
In our simulations, we assume the qps rates are constant across bidders ($q_i = c$ for all $i$ and some constant $c$). This not only simplifies the simulation but also, by enlarging the number of potential buyers in each auction, implicitly increases the scale of each simulation. In practice, when different bidders have different qps's, one can designate a different threshold value for each of them; these thresholds can be set to guarantee that no bidder is called out more than their qps allows.
More importantly, the above choice allows us to see how each mechanism's performance evolves as the percentage of bidders it can keep in the auction increases (i.e. as we vary $p$).
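A minimal sketch of this simulation loop is given below, assuming a second-price auction with reserve, at least two potential bidders per auction, and the winner's bid as the value proxy; the function and variable names are ours, and the actual simulator differs in bookkeeping details.
\begin{verbatim}
import numpy as np

def simulate(bids, scores, theta, reserve=1.0):
    """Replay historical auctions under a thresholding callout rule.
    bids:   (T, n) array, bids[t, i] = bidder i's bid for item t.
    scores: (T, n) array of heuristic scores available before item t.
    Returns total revenue and social welfare (bid-minus-cost proxy)."""
    revenue, welfare = 0.0, 0.0
    for t in range(bids.shape[0]):
        called = scores[t] >= theta              # callout step
        b = np.where(called, bids[t], -np.inf)   # uncalled bidders absent
        order = np.argsort(b)[::-1]
        top, second = b[order[0]], b[order[1]]
        if top >= reserve:                       # second price with reserve
            price = max(second, reserve)
            revenue += price
            welfare += top - price               # winner's bid as value
    return revenue, welfare
\end{verbatim}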
Based on the argument in Section~\ref{sec:performance}, a callout mechanism outperforms the baselines if:
(1) by calling the same percentage $p$ of bidders to auctions, it results in revenue higher than both RQT and GRA; and
(2) by calling the same percentage $p$ of bidders to auctions, it results in social welfare at least as large as that of RQT.
For example, in Figure \ref{fig:baseline} the hypothetical callout mechanism outperforms both baselines.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{figures/baseline.pdf}
\caption{A good callout mechanism (red) must exceed the baselines RQT (blue) and GRA (cyan) in terms of revenue (left panel) while maintaining social welfare at least as large as RQT (right panel).}
\label{fig:baseline}
\end{figure}
\subsection{Datasets}
\paragraph{Synthetic auction data}
In this dataset each bidder's bid is sampled from a fixed distribution specific to that bidder.
We generate the data as follows: We assume there is a total of $T$ items that arrive at time steps $1,2,\ldots,T$. We have $n$ bidders, and each of them samples their bid from a \textit{log-normal} distribution with a fixed but bidder-specific median and variance. Note that the assumption that bids follow a log-normal distribution is standard in the literature and is backed by multiple empirical studies; see for instance~\cite{wilson1998sequential,xiao2009optimal,ostrovsky2011reserve}. For each bidder, the median bid and the variance are sampled from \textit{uniform} distributions with support on $[0,\mu]$ and $[0,\sigma]$, respectively. For simplicity, we assume the reserve price is \textit{fixed} and equal to $r$ across all auctions. We generate $M$ datasets with these specifications. By repeating our calculations on these $M$ datasets, we obtain confidence intervals for our empirical results. Throughout we set $n=100$, $\mu=1$, $\epsilon = 0.05$, and $M=10$.
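The generation step can be sketched as follows; interpreting the sampled variance as that of the underlying normal distribution is our assumption.
\begin{verbatim}
import numpy as np

def make_synthetic_bids(n=100, T=100, mu=1.0, sigma=1.0, seed=0):
    """Draw each bidder's bids from a bidder-specific log-normal whose
    median and variance are sampled uniformly from [0,mu] and [0,sigma]."""
    rng = np.random.default_rng(seed)
    medians = rng.uniform(0.0, mu, size=n)
    variances = rng.uniform(0.0, sigma, size=n)
    # For a log-normal, median = exp(mean of the underlying normal).
    log_mu = np.log(np.maximum(medians, 1e-9))
    log_sd = np.sqrt(variances)
    return rng.lognormal(mean=log_mu, sigma=log_sd, size=(T, n))
\end{verbatim}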
\paragraph{Real auction data}
This dataset consists of the bids observed on Google's DoubleClick Ad Exchange for a random set of 100 buyers,
submitted over three consecutive weekdays for a set of similar auctions.
We ensure that the items are similar by restricting the queries to a small industrialized upper-middle-income country and one type of device.
For ease of interpretation, we scale the observed bids so they are in units of reserve price,
i.e. on the same scale as the simulated auction data above with $r = 1$.
For each bidder we generate the missing/unobserved bids by resampling from the empirical distribution of her observed bids.
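A sketch of this resampling step, under the assumption that the unobserved bids are i.i.d. draws from each bidder's empirical bid distribution:
\begin{verbatim}
import numpy as np

def fill_missing_bids(observed, T, seed=0):
    """observed: dict mapping bidder id -> 1-D array of observed bids.
    Returns T resampled bids per bidder from her empirical distribution."""
    rng = np.random.default_rng(seed)
    return {i: rng.choice(obs, size=T, replace=True)
            for i, obs in observed.items()}
\end{verbatim}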
\iffalse
Note that with the synthetic data, we have the luxury of having a \emph{complete} dataset: we observe every bidder's bid for all the previously auctioned items. In practice however, this is usually not the case. All bidders are not called to all auctions, and as result we have access to the bids \emph{only} for those who have been called to the previous auctions. To solve this problem when working with actual auction data, we propose making the following assumption: The counterfactual effect of a callout mechanism is the same as its scaled-up effect when applied to incomplete data. Figure \ref{fig:methodology} illustrates this.
\begin{SCfigure}
\centering
\includegraphics[width=0.57\textwidth]{meth.pdf}
\caption{Illustration of our assumption about the counterfactual effect of over-throttling. Suppose 50\% of the bidders, chosen uniformly at random, have been called to each auction; the bidding data is missing for half the bidders. We simulate the given callout mechanism on this incomplete data ---we \emph{over-throttle}--- and observe the effect (solid red curve). Now had the data been complete, the assumption is the mechanism's effect on revenue and welfare would have followed the same trend (dashed red curve).}
\label{fig:methodology}
\end{SCfigure}
\fi
\begin{figure*}[h!]
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r1mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r50mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r1mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r50mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r1mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r50mu1v10.pdf}}
\caption{The effect of reserve price on the performance of each callout mechanism. Here $T=100$ and $\sigma = 1$.}
\label{fig:reserve}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu1v1.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu1v5.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu1v1.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu1v5.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu1v1.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu1v5.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu1v10.pdf}}
\caption{The effect of variance on the performance of each callout mechanism. Here $T=100$ and $r=1$.}
\label{fig:variance}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m50r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m200r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m50r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m200r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m50r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu1v10.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m200r10mu1v10.pdf}}
\caption{The effect of $T$ on the performance of each callout mechanism. Here $r=1$ and $\sigma= 1$.}
\label{fig:m}
\end{figure*}
\subsection{Findings}
Figures \ref{fig:reserve}, \ref{fig:variance}, and \ref{fig:m} illustrate the performance of our callout mechanisms, along with the baselines, on the synthetic dataset for different settings of the parameters. The first row in each figure depicts the revenue, and the second row the social welfare, versus the percentage of bidders called out to each auction. The third row depicts the average revenue earned by each heuristic across all threshold values, which can be thought of as an overall score for each algorithm. Figure~\ref{fig:real} illustrates the performance of our callout mechanisms on the real dataset.
Figure \ref{fig:reserve} illustrates the effect of the reserve price $r$ on the performance of each callout mechanism when the average variance $\sigma$ across bidders is $1$ and the number of items to be auctioned off, $T$, is $100$. We observe that, regardless of the reserve price, the relative rank of the heuristics in terms of the revenue integral remains unchanged. For example, ShA always outperforms the other algorithms by a statistically significant margin; see Figure~\ref{fig:real} for a similar trend. As expected, the performance of BAR improves as $r$ grows larger.
Note that when the percentage of bidders called to each auction is high, WIN, SPD, and RVC fail to beat the RQT baseline. The reason is that these metrics give high scores to only a small subset of bidders and assign the remaining bidders scores that are about equal. This induces the flat interval in the corresponding curves.
In terms of social welfare, our heuristics always outperform RQT, with the exceptions of BAR and RNK. RNK does not maintain sufficient social welfare when the percentage of bidders called to the auction is low. The reason is obvious: RNK calls the bidders with the lowest (best) ranks, making the winner pay more and degrading the social welfare as a result.
Figure \ref{fig:variance} illustrates the effect of average bidding variance $\sigma$ on the performance of each callout mechanism when the reserve price $r$ is fixed and equal to $1$ and the number of items $T$ is $100$.
As variance increases, we see a divergence in the performance of various heuristics. In particular, the performances of GRA, BAR, RNK, and BID all start to deteriorate.
Also, when the percentage of called bidders is small, we observe a sudden jump in social welfare. The heuristics exhibiting this phenomenon fail to maintain the auction pressure, dropping the winner's close competitors. As a result, the winner pays less, which boosts the social welfare.
Figure \ref{fig:m} illustrates the effect of the number of items $T$ on the performance of each callout mechanism when the reserve price $r$ is fixed and equal to $1$ and the average variance $\sigma$ across bidders is $1$.
We see that as $T$ increases, the difference between the performance of the different heuristics begins to vanish: the performance of all algorithms (except RQT), even WIN, converges to that of ShA.
The main takeaway messages from the empirical results presented above are:
\begin{itemize}
\item \textit{It is easy to beat the RQT baseline.} Even our crudest heuristics, WIN and SPD, outperform RQT most of the time.
\item \textit{Some of our heuristics outperform both baselines.} More sophisticated heuristics, e.g. RVC, RNK, BID, and ShA, consistently outperform the baselines.
\item \textit{A good callout mechanism can significantly improve revenue.} For example, in certain settings ShA results in revenue up to 50\% more than that of RQT and 25\% more than that of GRA.
\item \textit{ShA $>$ BID $>$ \{RNK, RVC, GRA\} $>$ \{SPD, WIN, RQT, BAR\}.} These results are statistically significant across the settings investigated here.
\item \textit{The more information a heuristic contains about the revenue impact of bidders, the better it performs.} We believe this is why ShA outperforms all the other heuristics: as we noted earlier, ShA estimates the counterfactual revenue impact of each bidder and, as a result, improves revenue the most. Overall, better heuristics call out more specifically to bidders with greater revenue impact.
\end{itemize}
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.32\textwidth]{figures/rv-n100m100r10mu4v130.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/sw-n100m100r10mu4v130.pdf}}
\subfigure{\includegraphics[width=0.32\textwidth]{figures/scores-n100m100r10mu4v130.pdf}}
\caption{The performance of each callout mechanism on the real auction dataset. Here $n=100$, $\sigma= 130$, and $r=1$.}
\label{fig:real}
\end{figure*}
\section{Our Callout Algorithms}\label{sec:heuristics}
\subsection{Baselines}\label{sec:baselines}
We propose two baselines, RQT and GRA, against which we compare other callout mechanisms\footnote{Note that SCA~\cite{SelectiveCallouts} is not among our baselines, because the details of the algorithm have not been published.}.
\paragraph{Random Quota Throttling (RQT)}
As a naive comparison baseline, we consider random quota throttling, RQT: we drop each bidder with a fixed probability $p$.
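As a sketch (the bidder indexing and rng argument are ours):
\begin{verbatim}
import numpy as np

def rqt_callout(n, p, rng):
    """RQT baseline: drop each of the n bidders independently with
    probability p; return the indices of the bidders called out."""
    return np.flatnonzero(rng.random(n) >= p)
\end{verbatim}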
\iffalse
Usually when the auction mechanism is simple, the expected revenue of this approach can be computed analytically. For example, the expected revenue is equal to $\sum_i (i-1)p^2(1-p)^{i-2} b_{(i)}$ for second-price auctions,
where $b_i$ denotes bidder $i$'s bid if she is called to the auction and $b_{(i)}$ specifies the $i$th order statistic. For a more complicated auction where the computations become difficult, one can estimate this quantity empirically, repeating the randomized step $\frac{1}{\epsilon}$ times, $\epsilon>0$ an accuracy parameter, then taking the average of the observed revenue.
\fi
\paragraph{The Greedy Algorithm (GRA)}
In settings where the auction revenue is monotone and submodular\footnote{This is for example the case for revenue maximizing auction mechanisms in matroid markets with bidders whose bids are drawn independently (see \cite{Dughmi} for the details).}
a simple greedy algorithm, which greedily adds bidders by their marginal impact on revenue, is guaranteed to obtain revenue at least as large as $(1-1/e)$ of the (one-shot) optimal solution~\cite{Dughmi}. The details of the algorithm can be found in Appendix~\ref{app:other}.
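A minimal sketch of the greedy selection, assuming access to an oracle revenue_fn for the expected revenue of a callout set and a budget of a fraction $p$ of the $n$ bidders; the precise algorithm from \cite{Dughmi} is the one stated in the appendix.
\begin{verbatim}
def greedy_callout(n, p, revenue_fn):
    """GRA sketch: repeatedly add the bidder with the largest marginal
    expected revenue until a fraction p of bidders has been called."""
    chosen, budget = set(), int(p * n)
    while len(chosen) < budget:
        gains = {i: revenue_fn(chosen | {i}) - revenue_fn(chosen)
                 for i in range(n) if i not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:        # no bidder adds revenue; stop early
            break
        chosen.add(best)
    return chosen
\end{verbatim}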
\subsection{Our Heuristics}
Throughout this section we focus on callout mechanisms with two natural properties, \textit{symmetricity} and \textit{myopicity}.
We call a callout heuristic \textit{symmetric} if two bidders with exactly the same bidding history up to time $t$ have the same chance of being called to the auction of $a_t$. This basic property must be satisfied to ensure that the mechanism is fair.
A \textit{myopic} callout heuristic disregards the effect of today's targeting on the future bidding behavior.
The targeting in all of our algorithms is done via \emph{thresholding}: the algorithm calculates a score vector $\mathbf{s}_t$ for all bidders from the (relevant part of the) history matrix $\mathbf{H}_t$; by designating a suitable threshold vector $\boldsymbol{\theta}$, it selects which bidders to call out. See Algorithm \ref{alg:sym} for details. Next we present several natural choices for the metric $m(.)$ and specify how the scores are updated with it.
\begin{algorithm}
\caption{A Myopic Symmetric Thresholding Algorithm}
\label{alg:sym}
\begin{algorithmic}[1]
\State \textbf{Input:} metric $\mathbf{m}(.)$ and threshold $\theta$.
\State Start with $t = 0$ and equal scores for all bidders ($\mathbf{s}_0 = \mathbf{0}$).
\While{there exist more ad slots}
\State $t=t+1$.
\State Receive ad slot $a_t$.
\State Set $B_t = \{i : s^i_{t-1} \geq \theta\}$.
\State Run the auction for $a_t$ among bidders in $B_t$.
\State Update $\mathbf{m}_{t}$ by including the bids for $a_t$.
\State Update $\mathbf{s}_t$ using $\mathbf{s}_{t-1}$ and $\mathbf{m}_{t}$.
\EndWhile
\end{algorithmic}
\end{algorithm}
We end this section with a remark: we restrict ourselves to thresholding mechanisms---as opposed to the broader class of \emph{randomized} bidder targeting---because the expected revenue earned from any randomized targeting can be written as the weighted sum of revenue earned by different $\boldsymbol{\theta}$ vectors. This sum is maximized when all the weight is put on a single (best) threshold vector $\boldsymbol{\theta}$.
\subsection{A Linear Heuristic}\label{sec:linear}
Consider an auction with $n$ bidders and let $b_i$ denote the bid of bidder $i$. Let $\mathbf{b} = (b_{(1)},b_{(2)},\cdots ,b_{(n)})$ denote the ordered bid vector where the subscript $(j)$ denotes the $j$-th order statistic: $b_{(1)} \geq b_{(2)} \geq \cdots \geq b_{(n)}$.
For any $S \subseteq B$, let $R(S)$ denote the revenue that the exchange earns if it calls all the bidders in $S$ to the auction. We want to attribute to each bidder $i \in S$ part of the total revenue, which we denote by $R_i$, such that the attribution satisfies the following properties for any set $S$:\footnote{The axioms introduced here bear similarity to those leading to Shapley values in cooperative game theory~\cite{Shapley,Neyman}, hence the naming.}
\begin{enumerate}
\item \textit{Symmetry:} bidders with equal bids are attributed the same revenue: $i,j \in S$ and $b_i=b_j \Rightarrow R_i = R_j$.
\item \textit{Linearity:} $\mathbf{R}_S = \mathbf{A} \mathbf{b}_S $ for some fixed matrix $\mathbf{A}$. Throughout, for any vector $\mathbf{x}$ and any set $S$, $\mathbf{x}_S$ is a vector that is equal to 0 on any component $i \not\in S$ and equal to $\mathbf{x}$ everywhere else.
\item \textit{Conservation of Revenue:} the sum of attributions equals the total revenue: $\sum_{i\in S} R_i = R(S)$.
\end{enumerate}
\begin{proposition}\label{prop:ShA}
For a second-price auction properties 1--3 uniquely identify $\mathbf{A}$.
\end{proposition}
Shapley's Linear Heuristic (ShA) works by computing and thresholding on the average value of the above metric for each bidder.
We next argue that the above heuristic estimates the expected counterfactual revenue impact of adding a new bidder to an auction with respect to a particular distribution.
Let $S$ denote the subset of bidders called to the auction, not including bidder $i$. Consider two almost-duplicate worlds, one with the bidders $S \cup \{i\}$ called to the auctions (the observable one) and the other without the bidder in question, i.e. $S$ (not observable, i.e. counterfactual). If everybody participates, the impact on revenue of the intervention, namely including $i$, is $\left(R(S \cup \{i\}) - R(S)\right)$. However, the set of bidders who actually end up participating in the auction is a random variable. Suppose the probability of the subset $T \subseteq S$ of bidders ending up participating is $\Pr(T)$. Then we can write the expected revenue impact of adding $i$ to $S$ as follows:
\begin{equation*}
\mathbb{E}\left[R(S \cup \{i\})-R(S)\right] = \sum_{T \subseteq S} \Pr(T) \left(R(T \cup \{i\}) - R(T)\right).
\end{equation*}
It is easy to see that the above is exactly equivalent to the calculation in Proposition \ref{prop:ShA} if
$\Pr(T) = \frac{|T|!\,(|S| - |T| - 1)!}{|S|!}$.
The above distribution is uniquely imposed due to the symmetry property\footnote{
As a future generalization, one can discard this property and consider a more general linear approach in which $ A^{(P)} \times \vec{e}_k = \vec{p}_k$,
where $\vec{p}_k= (p_{k1},..,p_{kk},0,..,0),$ $\sum^k_{l=1} p_{kl} = 1$, and $p_{kl} \geq 0$.}.
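Since the weights $\Pr(T)$ above are exactly those induced by a uniformly random ordering of the bidders, the ShA attribution can also be estimated by permutation sampling. The sketch below does this for a single second-price auction with reserve; it is an illustrative Monte-Carlo estimator, not the closed form of Proposition \ref{prop:ShA}.
\begin{verbatim}
import numpy as np

def shapley_scores(bids_t, reserve=1.0, n_perm=200, seed=0):
    """Monte-Carlo Shapley revenue attribution for one auction."""
    rng = np.random.default_rng(seed)
    n = len(bids_t)
    phi = np.zeros(n)

    def rev(idx):                      # second-price revenue of a set
        if not idx:
            return 0.0
        b = np.sort(bids_t[list(idx)])[::-1]
        if b[0] < reserve:
            return 0.0
        return max(b[1], reserve) if len(b) > 1 else reserve

    for _ in range(n_perm):
        coalition, r_prev = [], 0.0
        for i in rng.permutation(n):   # random arrival order
            coalition.append(i)
            r_new = rev(coalition)
            phi[i] += r_new - r_prev   # marginal revenue contribution
            r_prev = r_new
    return phi / n_perm
\end{verbatim}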
\subsection{Non-linear Heuristics}\label{sec:non-linear}
We now turn to propose and discuss the following (nonlinear) heuristics.
\paragraph{History of bidding above reserve (BAR)} We call bidder $i$ to an auction if the number of times she has in the past bid above the reserve price for similar items exceeds some threshold $\theta_i$. Obviously, as we increase $\theta_i$, the number of bidders called to the auction decreases. As we will see in Section \ref{sec:experiments}, the performance of this heuristic depends heavily on the reserve price setting algorithm. The more accurate this algorithm is --- in predicting the auction winner, in predicting the winner's bid --- the better this heuristic performs. In the ideal case where the pricing algorithm can predict exactly the winner's bid, the BAR heuristic maximizes the revenue: we only need to call to the auction the person who is willing to pay the most and set the reserve price to a level just below her bid.
Conversely, consider the extreme case when the reserve price is 0 and therefore contains no information about bidder interest: the bid-above-reserve metric is equal for all bidders, and the heuristic BAR therefore performs poorly.
\paragraph{History of winning (WIN)}
We call bidder $i$ to an auction if the average number of times she has won in the past for similar items exceeds some threshold $\theta_i$. This algorithm performs well in the absence of an accurate reserve price setting algorithm for the following reason: the number of times a bidder wins in a segment indicates how interested she is in similar impressions, so calling these interested bidders increases competition (and, therefore, the second price). The problem with this approach arises when multiple bidders have equal interest in an impression segment: instead of splitting the impressions among them, this heuristic calls them all simultaneously, driving competition up and dissipating bidders' resources. To the extent that the price setting algorithm is accurate, WIN is wasteful.
In addition to the above drawback, a bidder may not win often in a segment, but still succeed in setting a high second price for the winner. The WIN heuristic ignores this effect and does not call such price-pressuring bidders to the auction.
\paragraph{Total spend (SPD)}
We call bidder $i$ to an auction if her total spend for similar items so far exceeds some threshold $\theta_i$.
This heuristic can be thought of as an extension of WIN, one weighted not only by how many times a bidder wins in a segment, but also by how much she spends upon winning.
\paragraph{Average ranking (RNK)}
We call bidder $i$ to an auction if her average rank in past auctions for similar items lies below some threshold $\theta_i$.
This heuristic can be thought of as a generalized and smoothed version of WIN. With this heuristic the winner (i.e. the first ranked bidder) is not the only one who receives credit. Rather, every bidder increases her score proportional to the placement of where her bids stand relative to others.
\paragraph{Total bid (BID)}
We call bidder $i$ to an auction if her total past bids for similar items exceeds some threshold $\theta_i$.
The problem with this heuristic is the following: consider a bidder who bids low most of the time but every once in a while submits an unreasonably high bid to inflate her total. This heuristic cannot distinguish such a bidder from one who consistently submits reasonably high bids.
\paragraph{Total attributed revenue (RVC)}
We call bidder $i$ to an auction if her total attributed revenue for similar items exceeds some threshold $\theta_i$. Note that a bidder's revenue impact manifests not only when she directly provides the winning bid, but also indirectly when she influences the price of any other winners.
The problem with this heuristic is that it completely disregards the roles that bidders other than the first- and second-highest could have played in the auction. When the number of repetitions is not high, we expect ShA to outperform this heuristic; as the number of repetitions increases, this heuristic converges to ShA.
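For reference, the per-auction score increments behind several of these heuristics can be written compactly as below (the names and sign conventions are ours; RVC and ShA are omitted since they require counterfactual replays of the auction):
\begin{verbatim}
import numpy as np

def metric_updates(bids_t, winner, price, reserve=1.0):
    """Score increments from one auction, given the realized bids,
    the winner's index, and the clearing price."""
    n = len(bids_t)
    won = np.zeros(n)
    won[winner] = 1.0
    ranks = np.argsort(np.argsort(-bids_t))        # 0 = highest bid
    return {
        "BAR": (bids_t >= reserve).astype(float),  # bid above reserve
        "WIN": won,                                # indicator of winning
        "SPD": won * price,                        # spend upon winning
        "RNK": -ranks.astype(float),               # negated rank
        "BID": bids_t,                             # raw bid
    }
\end{verbatim}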
Table \ref{tab:rt} compares the running times of our algorithms and the baselines. In Section \ref{sec:experiments} we will see not only that our heuristics are faster, but also that they outperform both baselines. One heuristic that deserves particular attention is ShA: it does not suffer from the problems pointed out for the non-linear heuristics above, and we therefore expect it to outperform them in practice. Section \ref{sec:experiments} shows that this is indeed the case.
\section{Introduction}
For online businesses advertising is a major source of monetization. Every day companies like Bing, Facebook, Google, and Yahoo run auctions --- billions of auctions --- to determine which advertising impressions to show. In particular, online display ad space is usually bought and sold through high-volume auction-based exchanges, of which AppNexus, Google's DoubleClick, and Microsoft Ad Exchange are examples.
In these online display ad exchanges, impressions are continuously received from publishers and auctioned off among real-time bidders. Economic theory holds that such auctions allocate resources efficiently. The auction also determines the price paid by the winner. Of this payment, a fixed percentage goes to the exchange and the remainder is paid to the publisher. These transactions constitute the two revenue streams of online advertising, those of ad exchanges and of publishers.
On the exchange side the process of running an auction usually consists of two steps: (1) Once a query arrives from the publisher side, the exchange calls a subset of buyers\footnote{While technically not the same, for simplicity in this work we use the terms ``buyer" and ``bidder" interchangeably.} to participate in the auction (the callout step). (2) Then, among the responding buyers the exchange runs an auction to determine the winner and price (the auction step). There are multiple reasons for the existence of the first step --- the focus of this work. First, a significant percentage
of bidders are limited by the number of calls per second they can respond to, their \emph{quota}~\cite{SelectiveCallouts,CEG,DJBW}. The exchange must protect these bidders from receiving more calls than their servers can handle\footnote{Besides such technological limitations, bidders may have financial constraints (see for example~\cite{Borgs,Dobzinski}) and/or specify volume limits to control exposure (see~\cite{Lahaie}).}. Furthermore, the exchange itself may need to limit the number of callouts sent to bidders to conserve its own resources.
In practice, \emph{random quota throttling (RQT)} is the principal solution by which these constraints are enforced. At a high level RQT decides which buyers to call randomly and with probabilities proportional to the quota-per-seconds (qps). Given that the exchange interacts with its bidders repeatedly and over time, it has access to \emph{historical auction data} containing information about each bidder's segments of interest. By \emph{learning} this information the exchange can \emph{target} bidders more effectively. The combination of the learning and targeting is what we call a \emph{``callout mechanism"}. An ideal callout mechanism reduces resources when bidders are unlikely to be interested, and increases calls when bidders are more likely to perceive value.
Finding the optimal callout set is equivalent to solving the following optimization problem: Subject to bidders' quota, which callouts should be sent to them for valuation and bidding so that the exchange's \emph{long-term revenue} is maximized?
Finding the revenue-maximizing callout is computationally hard (Section~\ref{sec:setting}).
As a consequence, the exchange has to rely on heuristics and approximation callout mechanisms.
Different callout mechanisms can impact the long-term revenue differently, and the space of all possible callout mechanisms is extremely large.
It is therefore not feasible for the exchange to evaluate every mechanism by running an actual experiment\footnote{Experiments can be unpredictably costly. In addition, there are usually restrictions in place on buyers the exchange can run experiments on.}.
This necessitates the design of a framework that utilizes \emph{historical data only} to compare the performance of various callout mechanisms.
This is the main contribution of the present work.
Rather than focusing on any particular mechanism, here we lay out a framework for evaluating any given callout mechanism in terms of its impact on long-term revenue.
The paper is organized as follows:
We formalize the setting in Section \ref{sec:setting}.
In Section~\ref{sec:performance} we start by observing that different callout mechanisms can impact the long-term revenue differently, mainly for the following two reasons:
(1) Different mechanisms satisfy the quota differently.
(2) Bidders are strategic and adapt their response to the choice of the callout mechanism.
Measuring the former can be readily done using historical data, however, to measure the latter we need to have a model for the way bidders adapt their behavior to a new callout mechanism. We propose in Section~\ref{subsec:game} a game-theoretic model that captures the repeated interaction between the exchange and its bidders.
This model motivates two performance metrics:
\emph{immediate revenue impact} and \emph{social welfare}, both of which can be estimated from historical data (Section~\ref{subsec:estimators}).
To establish baselines for comparison, in Section \ref{sec:baselines} we consider two mechanisms: RQT, as well as a greedy algorithm (GRA) for which theoretical guarantees have been established, albeit under certain restrictive assumptions.
In Section \ref{sec:heuristics} we propose several natural callout heuristics. Finally in Section \ref{sec:experiments}, we demonstrate our empirical framework, measuring the performance of these callout mechanisms on both real-world and synthetic auction data.
We characterize the conditions under which each heuristic performs well and show that, in addition to being computationally faster, in practice our heuristics consistently and significantly outperform the baselines.
\iffalse
To quantify this, we follow the standard modeling approach in microeconomic theory for \emph{repeated} interactions among strategic agents. In particular in Section \ref{subsec:game} we analyze a \emph{two-stage} game between the exchange and a bidder in which the exchange chooses the callout mechanism at the beginning and the bidder reacts to it in two stages. We assume the bidder seeks to \emph{maximize their utility} (or return on investment) by choosing in each round an exchange to participate in. An equilibrium argument implies that achieving high long-term revenue requires that the exchange provide its bidders high value.
These considerations motivate two performance metrics: immediate revenue and social welfare. In Section \ref{subsec:estimators} we present estimators for these metrics.
To establish baselines for comparison, in Section \ref{sec:baselines} we consider two mechanisms: RQT, as well as a greedy algorithm (GRA) for which theoretical guarantees have been established, albeit under several restrictive assumptions~\cite{Dughmi}.
In Section \ref{sec:heuristics} we propose several natural callout heuristics including (but not limited to) algorithms that call bidders with the highest average bids, or highest spend, or highest auction rank.
In Section \ref{sec:experiments}, we demonstrate our empirical framework, measuring the performance of these callout mechanisms on simulated auction data.
Indeed, our heuristics consistently outperform the baselines both on performance metrics and in computational speed.
One heuristic that deserves particular attention is Shapley's linear heuristic (ShA). This heuristic outperforms all others in the settings studied here. As we discuss in Section \ref{sec:linear}, ShA is derived from an axiom system and carries with it an interesting interpretation, one based on the \emph{counterfactual} concept of considering the changes in revenue from including or excluding a given bidder in an auction.
In summary, our work lays out the necessary framework for the design and comparison of callout mechanisms in display ad exchanges using historical data, and opens the door to many interesting future research avenues (see Section~\ref{sec:conclusion}). The framework is general enough to enable the study of other mechanisms in settings beyond those considered here. Finally, we assert that our work further illustrates the potential for utilizing historical data for decision making in online environments; it offers a tool for utilizing historical auction data to guide potential future callout experiments in online ad exchanges.
\fi
\section{Performance}\label{sec:performance}
We start by observing---via a series of examples in Appendix~\ref{app:ex}---that different callout mechanisms result in different levels of long-term revenue for the following two reasons:
(1) Different mechanisms satisfy quota differently.
(2) Bidders are strategic and adapt their response to the choice of the callout mechanism.
Measuring the first type of impact (illustrated in Example~\ref{ex:1}) is readily possible using historical data: one simply runs each callout mechanism on the data and observes the change in revenue. As Example~\ref{ex:1} illustrates, by selling previously unfilled impressions a smart callout mechanism can increase the exchange's revenue while maintaining, and perhaps improving, bidders' utilities.
In order to improve long-term revenue, however, it does not suffice for the exchange to find a callout mechanism with high revenue performance on historical data: in the real world, bidders are strategic and adapt their responses to the choice of the callout mechanism.
In Example \ref{ex:2}, for instance, callout mechanism (2) does not result in selling a previously unfilled part of the inventory. Rather, it merely increases the competition and, as a result, the price for item 2. While this does not hurt revenue immediately, it reduces the utility that the bidders (in particular, bidder 3) earn, and bidders potentially react to this change in their payoffs in future rounds. To quantify this type of impact on long-term revenue, we need a \emph{model} for bidders' reactions to the choice of the callout mechanism.
\subsection{A Two-Stage Game Formulation}\label{subsec:game}
There are usually many options (i.e. exchanges) available for bidders to choose from (i.e. to participate in).
We make the following simple, natural assumption about a bidder's reaction to the choice of the callout mechanism in one particular exchange: bidders seek to maximize their \emph{utility} (ROI); that is, they always choose to participate in the exchange that provides them with the highest utility (value minus cost). In what follows, we make two simplifying assumptions: first, we limit the bidder's action space to the choice of which exchange to \emph{participate} in; second, we assume that by participating in an exchange, all the bidder observes is their utility from that exchange. We acknowledge that, compared to the real world, these assumptions drastically simplify the action space and the information structure for both the bidders and the exchange. Nonetheless, as the following analysis shows, this simplified model suffices to capture some of the most important aspects of the interaction between the exchange and bidders.
We follow the standard modeling approach in microeconomic theory for repeated interactions among strategic agents (see~\cite{MS}).
Consider the following two-stage game between two players, the exchange and one bidder. The action space for the exchange consists of all possible callout mechanisms, denoted by the set $E$. The action space for the bidder consists of two actions: participating or taking the outside option.
At the beginning of the first stage, the exchange commits to a callout mechanism $e \in E$. (Note that our analysis does not rely on a particular choice of $e$.) Next, the bidder, without knowledge of $e$, decides whether to participate in the exchange in the first round. If they do, their expected utility is equal to
$$u^e = v^e - c^e $$
where $v^e$ is the bidder's average valuation for the items they win and $c^e$ is the average cost they pay for those items under callout mechanism $e$. If the bidder chooses to participate, the expected revenue that the exchange receives equals $c^e$. If they do not participate in the auction, they take their outside option, which provides them with utility $u$ and the exchange with an incremental revenue of $0$.
In the second stage of the game, which captures the future interaction of the bidder with the exchange, the bidder again chooses whether or not to participate. This time, however, the bidder is aware of the utility they earn from participation, provided they chose to participate in the first stage. Note that this is in line with anecdotal evidence suggesting that bidders do, in practice, run experiments to estimate the utility they can earn from different exchanges and adjust their rates of participation accordingly. Utilities for this stage are defined similarly to the first stage. Denote by $\delta$ the players' discount factor for the future (i.e. second-stage) payoffs.
\begin{proposition}\label{prop:eq}
Among all callout mechanisms $e \in E$ for which $u^e \geq u$, let $e^*$ be the one for which $c^e$ is maximized.
If $\frac{\max_e c^e}{c^{e^*}} -1 \leq \delta$, the following strategy profile is the unique sub-game perfect equilibrium of the above game: The exchange chooses $e^*$, and the bidder chooses to participate in each stage if and only if according to their beliefs at that stage $\mathbb{E} u^e \geq u$.
\end{proposition}
The above proposition suggests that when players value their future utilities --- or more precisely, when the discount factor $\delta$ is large enough --- the ideal callout mechanism increases the immediate revenue as much as possible (i.e. maximizes $c^e$) while providing the bidder with a utility level at least as large as what they can earn from their outside option (i.e. maintaining the constraint $u^e \geq u$).
In other words, when choosing a callout mechanism, the exchange faces a trade-off between immediate- and long-term revenue: Of course, for a given callout mechanism, increasing its callouts can boost the revenue. However, unless the exchange induces sufficiently high value for bidders, such increases in callouts ultimately discourage their participation in future rounds --- they find their outside option more profitable. This in turn translates into less revenue for our exchange in the long run.
\subsection{Performance Metrics and Estimators}\label{subsec:estimators}
The argument above leads us to two metrics for evaluating the performance of a callout mechanism: immediate revenue impact and social welfare. Next we propose ways for estimating these metrics, $c^e$, $u^e$, as well as the outside option utility $u$, from historical bidding data. Throughout we assume we know and can therefore simulate the auction mechanism on any given set of bids.
More precisely, suppose we have access to the bidding data over a period of length $S$ for $n$ bidders (i.e. $b^t_i$ for all $t=1,\cdots,S$ and $i=1,\cdots,n$).
We will need the following notation:
Let $\tilde{c}_{i}$ be the average cost bidder $i$ pays for the items they win over the time period $t=1,\cdots,S$, before a new callout mechanism is implemented, and let $\tilde{b}_{i}$ be the average bid submitted by bidder $i$ for the items they win over this time period.
Analogously, let $\tilde{c}^e_{i}$ be the average cost bidder $i$ pays for the items they win over this time period under callout mechanism $e$; this can be easily computed by simulating the auction mechanism on the historical bidding data and the new callout set. Similarly, let $\tilde{b}^e_{i}$ be the average bid submitted by bidder $i$ for the items they win over this time period under callout mechanism $e$.
\paragraph{Immediate revenue impact} The following is an unbiased estimator of $c^e$:
$\bar{c}^e = \frac{1}{n} \sum_{i=1}^n \tilde{c}^e_{i}.$
\paragraph{Social welfare} To estimate the social welfare, we need some proxy of the bidders' valuations. Since we do not have access to actual valuations, for practical reasons we are constrained to rely on bids as a proxy for value. In our setting, the assumption of bid as a proxy for valuation is relatively benign: any bias in measuring the utility of winning auctions in one exchange is likely the same bias for winning auctions in any other exchange. Further, the choice of bid-as-value enables bid-minus-cost as the residual value for the buyer, one that is visible both to each buyer and to the exchange. In that sense, bid-minus-cost represents the good-faith estimate of the residual value, one that the exchange can actively work to preserve over the set of buyers.
Assuming that a bidder's average bid reflects their true valuation, the following is an unbiased estimator for $u^e$:
$\bar{u}^e = \frac{1}{n} \sum_{i=1}^n (\tilde{b}^e_{i} - \tilde{c}^e_{i}).$
\paragraph{Outside option utility} The following is an estimator for $u$:
$\bar{u}^e = \frac{1}{n} \sum_{i=1}^n (\tilde{b}_i - \tilde{c}_i).$
We argue that the above is an unbiased estimator under the assumption that bidders are rational and participate at positive rates in both the exchange and their outside option.
Here is why:
consider a strategic bidder who participates at positive rates in both our exchange and the outside option. Both options must then provide them with equal utilities on average; otherwise, they would have been better off participating only in the higher-paying one.
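Putting the three estimators together (the variable names are ours):
\begin{verbatim}
import numpy as np

def performance_estimators(b_hist, c_hist, b_sim, c_sim):
    """Per-bidder average winning bids/costs under the status quo
    (b_hist, c_hist) and replayed under mechanism e (b_sim, c_sim)."""
    c_bar = np.mean(c_sim)            # immediate revenue impact
    u_bar_e = np.mean(b_sim - c_sim)  # social welfare under e
    u_bar = np.mean(b_hist - c_hist)  # outside-option utility proxy
    return c_bar, u_bar_e, u_bar
\end{verbatim}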
\subsection{Related Work}\label{app:rel}
Microeconomics has a long line of research literature on auctions; see~\cite{Klem} for a survey. Repeated auctions, in which the auctioneer interacts with bidders multiple times, have received considerable attention~\cite{Bikhchandan,OW,Thomas}. In particular, the problem of \emph{pricing} in repeated auctions has been studied extensively~\cite{ARS'13,KL,BKRW,BHW,CGM,MM}. That said, most of the previous work has not considered strategic buyers. Some consider random bidders~\cite{MM}, some study bidders who participate only in a single round~\cite{KL,BKRW,BHW,CGM}, and some focus on bidders who participate in multiple rounds of the auction, but are interested in acquiring exactly one copy of a single good~\cite{HKP}. In none of these cases do bidders react to the seller's past behavior to gain higher payoffs in future rounds. However, in many real-world applications, the same set of buyers interacts repeatedly with the same seller. There is empirical evidence suggesting that these buyers behave strategically, adapting to their history to induce better payoffs \cite{EO}. Indeed, a growing literature enumerates the various strategies buyers can follow in order to improve their payoff \cite{CDE,KL'4,JP,Lucier,GKP}.
More recently, several studies have focused on the strategic aspect of bidders' behavior in repeated auctions and, in particular, on the problem of setting near-optimal prices. \cite{AV,KN} study the impact of Intertemporal Price Discrimination (IPD), i.e. conditioning the price on bidder's past behavior, and examine the conditions under which IPD becomes profitable. \cite{ARS'13,ARS'14} investigate repeated posted-price auctions with strategic buyers, and present adaptive pricing algorithms for auctioneers. With these algorithms the auctioneer's regret increases as a function of bidders' discount factors.
In this work, rather than optimizing reserve prices, we focus on improving the callout routine. Of course, targeting bidders to call to an auction has already been reduced to practice. Consider \emph{Selective callouts}, as implemented by Google's display ad exchange~\cite{SelectiveCallouts}: Unlike RQT, Google's selective callout algorithm (SCA) identifies the types of impressions a bidder values, thereby increasing the number of impressions the bidder bids on and reducing the number of callouts the bidder ignores.
To our knowledge, \cite{CEG} is the first work to study the callout optimization problem from a purely theoretical perspective. The authors model callouts by an online recurrent Bayesian decision-making algorithm, one with bandwidth constraints and multiple performance guarantees. The optimization criteria considered in \cite{CEG} are different from ours --- here we consider long-term revenue as the principal criterion and assert that it is the most natural choice. Also, unlike \cite{CEG}, we study the effect of strategic buyer behavior on system-wide, long-term outcomes. Relatedly, \cite{Dughmi} investigates the conditions under which the expected revenue of a one-shot auction is a submodular function of the set of participating bidders. While these conditions are restrictive and do not often hold in practice, we adopt Dughmi's greedy algorithm as a baseline and compare other algorithms to it.
Our work is indirectly related to the large and growing body of research on budget constraints in online auctions. Papers in this literature can be divided into two main categories: (a) those concerned with the design and analysis of auction mechanisms with desirable properties --- truthfulness, optimal revenue, and so on --- for budget constrained bidders (see for example~\cite{Borgs,Hafalir,Dobzinski,BBGW}); and (b) those that present optimal or near-optimal bidding algorithms for such bidders (see for example~\cite{Chakrabarty,Archak}).
\textbf{Incentive Issues:} We close this section with a remark for readers familiar with the literature on incentive issues and truthfulness in mechanism design (see Chapter 23 in \cite{Mas-Colell} for an overview of the main results of this topic). While we are concerned with, and shortly present a model for, the bidders' strategic reactions to the choice of the callout mechanism and its impact on long-term revenue, we do not claim that a bidder is better off by \emph{bidding truthfully} when the system is in equilibrium. Indeed, in a setting as complicated as that with which we are dealing here --- in which bidders have complex strategy spaces and information structures --- the auction mechanism itself already fails to maintain incentive compatibility; see \cite{Borgs,Gonen} for related hardness and impossibility results. Rather than setting the ambitious goal of restoring bidding truthfulness, here we consider a model in which both the action space and the information structure are simplified (see Section \ref{subsec:game} for further details). In spite of the simplicity, the analysis of our model provides us with important insights about the performance of callout mechanisms.
\section{Setting and Preliminaries}\label{sec:setting}
Let $B = \{1,2,\cdots,n\}$ denote the set of bidders active in the exchange. Each bidder $i \in B$ has a quota-per-second constraint denoted by $q_i>0$. This quantity is known to the exchange and specifies the number of auctions bidder $i$ can be called to per second.
Consider a particular time interval of length one second, and assume that during this period the exchange receives a sequence of ad slots $A = \{a_t\}_{t=1}^T$ in an online fashion.
Let $v^t_i$ denote the value (or ROI) of ad slot $a_t$ to bidder $i$. The exchange does not know the valuations in advance, but can learn about them through the bids. Let $b^t_{i}$ specify bidder $i$'s bid for the ad slot $a_t$.
At time $t=1,2,\cdots$ when ad slot $a_t$ arrives, the exchange must choose a subset $B_t \subseteq B$ to call for participation in the auction for $a_t$ while respecting all the quota-induced constraints. A \textit{callout mechanism/algorithm} is any logic used to decide which subset of bidders to call at each time step $t$ using the history of bidding behavior observed up to time $t$. More precisely, let the matrix $\mathbf{H}_t$ denote the bidding history observed by the exchange up to time $t$. Given $\mathbf{H}_{t-1}$ as the input, a callout mechanism selects $B_t$ for every $t=1,2,\cdots$.
Once called, each bidder $i \in B_t$ decides whether to participate in the auction. The auction mechanism then specifies the winner (if any) and price. The recent bids along with $\mathbf{H}_{t-1}$ are then used to set $\mathbf{H}_{t}$. Figure \ref{fig:flowchart}
illustrates the flow of a typical callout mechanism.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/flowchart.pdf}
\caption{The high-level flowchart of the system. The two main building blocks of a callout mechanism are the components for \textit{learning} and \textit{targeting}. We treat the auction mechanism as fixed and given throughout.} \label{fig:flowchart}
\end{figure}
Throughout, we treat the auction step as given and fixed; we exert no control over how the auction is run or how its parameters, e.g. the reserve price, are tuned. While much of our work extends readily to other auctions, unless otherwise specified we assume the auction mechanism is second price with reserve. Let $r_t$ be the (given) reserve price for auction $t$. Bidders do not observe $r_t$ before submitting their bids.
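As a concrete reference for the revenue computations used throughout, the following is a minimal Python sketch of the second-price-with-reserve rule; the function name and the representation of bids as a plain list are our own illustrative choices, not part of any exchange's interface:
\begin{verbatim}
def second_price_revenue(bids, reserve):
    # Revenue of one second-price auction with reserve price.
    # The item sells only if some bid meets the reserve; the winner
    # then pays the larger of the reserve and the second-highest bid.
    eligible = sorted((b for b in bids if b >= reserve), reverse=True)
    if not eligible:
        return 0.0
    runner_up = eligible[1] if len(eligible) > 1 else 0.0
    return max(reserve, runner_up)
\end{verbatim}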
For the exchange, the ultimate goal is to maximize the \emph{long-term revenue}. Therefore we define
the performance of a callout mechanism by its impact on long-term revenue. Long-term revenue is simply the (possibly discounted) sum of the revenue earned at each time step $t=1,2,\cdots$. We denote the discount factor by $\delta$ ($0 \leq \delta \leq 1$).
The optimal callout mechanism is the one that maximizes the long-term revenue.
We conclude this section by demonstrating that identifying the optimal callout is computationally hard. This remains true even if we assume that bids are sampled from fixed distributions, that the exchange knows these distributions ahead of time, and that it can accurately predict the sequence of ad slots it receives.
\begin{proposition}\label{prop:hardness}
Suppose when called to an auction for item $t$, bidder $i$ samples their bid, $b^t_{i}$, from a distribution $D^t_i$ ($i=1,\cdots,n$ and $t=1,\cdots,T$). Let variable $x_{i,t}$ be the indicator of whether bidder $i$ is called to the auction for $a_t$. Solving the following optimization problem is NP-hard:
\begin{eqnarray*}
\max && \sum_{t=1}^T \delta^t \mathbb{E} R(x_{1,t},\cdots,x_{n,t})\\
\text{s.t. } &&\forall i: \sum_{t=1}^T x_{i,t} \leq q_i \\
&& \forall i,t: x_{i,t} \in \{0,1\}
\end{eqnarray*}
where the random variable $R(x_{1,t},\cdots,x_{n,t})$ denotes the revenue of auction $t$ with participants $\{i \vert x_{i,t}=1\}$, and the expectation is taken with respect to $D^t_i$'s.
\end{proposition}
We establish this by showing that the above class of problems includes a known NP-hard problem---maximum 3-dimensional matching---as a special case. Omitted proofs and other technical material can be found in Appendix~\ref{app:tech}.
Proposition~\ref{prop:hardness} motivates our approach in this work. Consider the hypothetical world in which we have access to a perfectly accurate \emph{bid forecasting} model. In this setting one natural proposal is to use this model to forecast the bids, then call the two bidders with the highest bids in every auction. We emphasize that in practice this is simply not possible: even the best forecasting model can only forecast a \textit{bidding distribution}, not a particular number. Furthermore, even if we assume this distribution is fully compliant with the true bidding behavior, according to Proposition~\ref{prop:hardness} finding the optimal callout set is computationally hard. This implies that the exchange has to rely on heuristics and approximation algorithms to construct the callout set. In this work we present a data-driven approach for evaluating and comparing various callout algorithms in terms of their average impact on long-term revenue.
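To make this evaluation criterion concrete, the expected revenue of a candidate callout set under known bidding distributions can be estimated by straightforward Monte Carlo sampling. The sketch below builds on the second-price routine given earlier; the sampler interface and the uniform distributions are illustrative assumptions only:
\begin{verbatim}
import random

def expected_revenue(callout_set, bid_dists, reserve, n_samples=10000):
    # Monte Carlo estimate of E[R] for one auction; bid_dists maps
    # each bidder id to a zero-argument sampler for D_i^t.
    total = 0.0
    for _ in range(n_samples):
        bids = [bid_dists[i]() for i in callout_set]
        total += second_price_revenue(bids, reserve)
    return total / n_samples

# Example: two bidders with uniform bidding distributions.
dists = {1: lambda: random.uniform(0.0, 1.0),
         2: lambda: random.uniform(0.0, 2.0)}
print(expected_revenue({1, 2}, dists, reserve=0.5))
\end{verbatim}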
\section{Introduction}
\label{introduction}
In the new online era, auctions are an important part of monetization. Every day, Bing, Facebook, Google, Yahoo, and many other exchanges run billions of auctions to determine allocations and prices for streams of queries and advertising impressions.
To understand an auction, a core challenge is determining the value of any bidder. For example, in second price auctions, a bidder's impact manifests not only directly, when he/she provides the winning bid, but also indirectly, by setting the price paid by the winner. So a bidder has an impact both on a particular auction and on a set of auctions -- those run in a day, or those comprising an advertising campaign, say.
From the point of view of game theory, the bidders are the players. The theory of rational economic players suggests each bidder pose bids to represent their value (~\cite{chen2011real}, \cite{harstad2000dominant}, \cite{irwin2004balancing}). From the point of view of auction theory (\cite{engelbrecht1980state}), there is a complementary role, the {\em seller}. For the seller, the value of a bidder derives from the change in revenue that derives from that bidder's participation. Thus, we have available two notions of value: that from the point of view of the buyer/bidder and that from the point of view of the seller. The focus of this paper is upon the latter, the value of the set of bids and bidders from the point of view of the seller.
At its core, the value to the seller of any bidder inherently involves a counterfactual: the value comes from comparing the auction that includes that bidder with the auction that does not include that bidder. Only one of these two auctions can be observed; the other always goes unobserved.
To address this counterfactual, we calculate the marginal impact of adding a new bidder to each auction, then compute the expected marginal impact. This approach directly considers both possibilities, both that of including the bidder and that of not including him/her.
For the seller, calculating changes in marginal revenue does not perfectly capture changes in auction value: Suppose we have four bidders all with bids equal to $\$100$. Removing any individual bidder has $\$0$ impact on the revenue -- even though all bidders highly value the query. Further, were other bidders to not attend the auction, each could win this value.
In other words, by removing or adding a bidder, we can not assume the other bidders in the auction have no variability nor can we assume that the set of participating bidders is itself stable. This motivates a different valuation method to address the above issues. The key idea is that auctions actually have a cooperative dimension: the bidders come together, as a set, and as a set, they implicitly set the cost/revenue value of the auction. We seek to attribute this value -- the value of the auction to the seller -- fairly across all bidders.
In this paper, we provide an estimator of the revenue impact of each bidder on a class of auctions. This estimate has good fairness properties. Further, we set out to answer the key counterfactual question: what is the revenue impact of removing a bidder from an auction? We draw from two points of view: the cooperative game-theoretic approach of dividing value fairly based on Shapley values (~\cite{shapley1952value}) and the statistical approach of answering counterfactual questions properly (~\cite{rubin2005causal}). We make a connection between Shapley values, their modifications, and the potential outcomes view. We then provide an algebraic form for modifying Shapley values, as well as a polynomial time computation of them, for a class of auctions including first price and second price auctions.
The attribution problem in advertising usually concerns attributing user conversions to different ads (~\cite{jordan2011multiple}); here, in contrast, we attribute the revenue of an auction to the different bidders.
Cooperative game theory and Shapley values have been used to understand collusion in auctions (~\cite{Leyton-Brown:2000:BCI:352871.352899,Bachrach:2010:HTC:1838206.1838287}); however, these works consider neither the attribution problem nor the application of Shapley values and their modifications to the multi-agent game defined by auction revenue.
Variations (modifications) of Shapley values have been studied extensively (~\cite{kalai1987weighted,monderer2002variations}). We focus on the specific case of auctions and provide a polynomial time computation for this case, as well as an algebraic framework to manipulate the assumptions of the evaluation.
In section \ref{preliminary}, we introduce the Shapley valuation (~\cite{shapley1952value}) and its modifications, together with the setting of potential outcomes, and discuss the connection between the Shapley value and the potential outcomes setting. In section \ref{connection} we extend this connection to a modification of Shapley values. Having established the utility of the Shapley value framework, we define an additive group and provide an algebraic approach to Shapley valuation. The algebraic approach provides the flexibility to modify the properties of this valuation as well as to establish its computational efficiency. We parameterize a model, essentially by dropping the Shapley Symmetry axiom.
\section{Preliminary}
\label{preliminary}
In this section, we recap some key ideas from cooperative game theory as well as some statistical theory associated with causal inference. We use the notion of Shapley values from cooperative game theory and potential outcomes from the causal inference setting. In section \ref{variation}, we apply these notions to attribute auction value to each bidder.
\subsection{Shapley values}
Suppose we have a set of agents (bidders) $N= \{1,2,..,n\}$, a coalition game (here, an auction), and an associated coalition value function $v$ such that $v(S)$ for $S \subseteq N$ is the value we receive from the coalition of elements in $S$. We want to design a fair reward system (or fair attribution system) $\phi_i(v)$. We start with the following desirable properties:\\
\begin{enumerate}
\item Efficiency: The sum of rewards (attributions) equals the value of the coalition as a whole: $\sum_i \phi_i(v) = v(N)$.\\
\item Symmetry: For any coalition value function $v$, if for all $S$ such that $i,j \not\in S$, we have $v(S \cup \{i\}) = v(S \cup \{j\})$, then $\phi_i(v) = \phi_j(v)$.\\
\item Dummy Player: For any coalition value function
$v$, if for all $S$ such that $i \not\in S$, we have $v(S \cup \{i\}) = v(S) + v(\{i\})$, then $\phi_i(v) = v(\{i\})$.\\
\item Strong Positivity: For any agent $i$ and for two games $v$ and $w$, if for all $S \subseteq N$ we have $v(S \cup \{i\})-v(S) \le w(S \cup \{i\})-w(S)$ then $\phi_i(v) \le \phi_i(w)$.\\
\item Additivity: For any agent $i$ and for two coalition
value functions $v$ and $w$, if we define the addition of
games as $(v + w)(S) = v(S) + w(S)$ then we have
$\phi_i(v + w) = \phi_i(v) + \phi_i(w)$.\\
\end{enumerate}
These properties all seem plausible:
Given the Efficiency property, such an evaluation system
distributes all value generated by the grand coalition completely among the participants. There is no leakage of value, nor any ``rent'' extracted by the seller or another economic agent.
The Symmetry property asserts that any ties as identified by the value function $v$ imply ties in the associated reward (attribution) system $\phi_i(v)$. This property says that the value function $v$ has complete information; there is no supplementary store of second-order criteria for tie-breaking. In this sense, the symmetry property asserts that $v$ plays the role analogous to a sufficient statistic, and acts as the single point of reference for attributing the entire budget of reward available.
The Dummy Player property asserts that an agent whose marginal contribution to any coalition is exactly his/her standalone value $v(\{i\})$ receives exactly that value as reward. Among other things, this requires the value and the reward to have the same units and to share the same zero point.
The Strong Positivity property considers two games $v$ and $w$, and posits a meaningful relationship between value performance and the agent-assigned share value across these two games: When an agent provides to a game higher marginal value, this agent receives for that game correspondingly higher reward or attribution.
The Additivity property likewise considers two games and asserts that the additivity of the value of two games $v$ and $w$ implies the additivity of the agent-attributed reward functions $\phi_i$. So when two games are sufficiently separable that their values add, then likewise the bidder-specific attributions associated with these two games also add.
The following is well known:
\begin{theorem}\label{Shapley} (Uniqueness of Shapley (~\cite{dubey1975uniqueness,neyman1989uniqueness})) The combination of the properties 1 to 4 implies a unique reward/attribution system, the Shapley value:
\begin{align}
Sh_v(N, i) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!(|N|-|S|-1)!}{|N|!}(v(S \cup \{i\}) - v(S))
\end{align}
\end{theorem}
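For small $n$, the formula in Theorem \ref{Shapley} can be evaluated directly by enumerating subsets. Below is a minimal Python sketch (exponential in the number of agents, so purely illustrative; the second-price value function at the end is our own example):
\begin{verbatim}
from itertools import combinations
from math import factorial

def shapley(agents, v):
    # Exact Shapley values by enumerating all coalitions S not
    # containing i, weighted by |S|!(n-|S|-1)!/n!.
    n = len(agents)
    phi = {i: 0.0 for i in agents}
    for i in agents:
        others = [a for a in agents if a != i]
        for k in range(n):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = frozenset(S)
                phi[i] += w * (v(S | {i}) - v(S))
    return phi

# Coalition value = second-price revenue of the coalition's bids.
bids = {1: 1.0, 2: 1.0, 3: 0.4}
v = lambda S: (sorted((bids[i] for i in S), reverse=True)[1]
               if len(S) > 1 else 0.0)
print(shapley(list(bids), v))
\end{verbatim}
By Efficiency, the printed attributions sum to the revenue $v(N)$ of the grand coalition.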
\begin{proposition}(Additivity of Shapley) The Shapley Value satisfies the Additivity property. Indeed, the Shapley value is also unique among methods for value division that satisfy Efficiency, Symmetry, Dummy Player, and Additivity.
\end{proposition}
One way to understand additivity is that the Shapley value for an agent is an expectation, taken with respect to a distribution over coalitions $S$, of the increase in value brought by that agent to the coalition $S$. And as an expectation, the Shapley value is linear in the value function.
Based on the foregoing set of 5 properties, it is possible to modify the Shapley valuation system. In particular, the following result highlights the Symmetry property and proves central to the remainder of this paper:
\begin{theorem}\label{VariationTheorem}(~\cite{monderer2002variations})
Any valuation consistent with Efficiency, Dummy Player, and Additivity properties can be represented in the following form:
\begin{align}
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \Pr(S) (v(S \cup \{i\}) - v(S))
\end{align}
\end{theorem}
By comparing Theorem \ref{VariationTheorem} with Theorem \ref{Shapley}, we see that the Symmetry property effectively constrains the probability weights to the unique values listed in Theorem \ref{Shapley}.
\subsection{Potential outcome setting}
A key concept for estimating the effect of a treatment is the comparison of potential outcomes under different exposures of the experimental units -- the auctions -- to one of two possible treatments.
For example, the revenue of an auction is $v(S)$ for a coalition $S$ of bidders; if we add a new bidder $i$ to the coalition then the revenue is going to be $v(S \cup \{i\})$. The difference $v(S \cup \{i\})-v(S)$ is a reasonably obvious definition for the attribution of bidder $i$ to the coalition $S$. Note how this difference compares two potential outcomes.
However, these incremental attributions are not going to be the same for different coalitions, and the remaining question is what combination of the marginal impacts should be considered as the attribution of bidder $i$.
The Shapley value defined in the previous section is a weighted average of the marginal attributions over all coalitions that do not include bidder $i$, with weights that are functions of the size of the coalition sets. We will provide a more general connection between modified Shapley values and weighted averages with arbitrary weights in the next section.
\section{The Potential Outcome View and Counterfactual Questions}\label{connection}
Suppose we want to estimate the impact on value from a bidder in an auction. We can start with the potential outcome approach defined in section \ref{preliminary}. Consider two almost-duplicate worlds, one with the specific bidder attending the auctions (the observable) and the other without the bidder in question (not observable, i.e. counterfactual).
For a given auction that includes bidder $i$, designate the set of other bidders as $S$. The impact on revenue of the intervention -- including $i$ -- is $(v(S \cup \{i\}) - v(S))$. However, the set $S$ is a random variable. Suppose the subset $S$ of bidders that attend the auction has probability $\Pr(S)$. Then we can write the intervention's impact as:
\begin{align}
\Delta_i = \sum_{S \subseteq N \setminus \{i\}} \Pr(S) (v(S \cup \{i\}) - v(S))
\end{align}
Comparing the above equation with the formulation of generalized Shapley, we notice that the counterfactual question proposed above has exactly the same form as the modified Shapley value. Therefore, we conclude the Symmetry property imposes this particular probability distribution on the Shapley value:
$$\Pr(S) = \frac{|S|!(|N| - |S| - 1)!}{|N|!}$$
The above calculation highlights why the modified Shapley value gives an appropriately richer family of estimates
for the key counterfactual question: the value accrued to the seller that results from including any given bidder. However, the Shapley value and its modifications both involve combinatorially many subsets, so both are computationally expensive to evaluate directly.
In the next sections, we develop an algebraic framework for computing Shapley values and the modified Shapley values for a class of auctions. The new algebraic framework helps us in designing modifications of Shapley values.
\section{An algebraic approach to Shapley values}\label{variation}
We now present a decomposition of the bids in an auction using a basis set. Our framework expresses the Shapley valuation as a matrix operator. Further, it allows us to modify some properties and build corresponding valuations.
\subsection{Additive groups for auctions and their sub-groups}
Consider the auction with $|N| = n$ bidders represented by an ordered bid vector $\vec{b} = (b_{(1)},b_{(2)},...b_{(n)})$ where $b_{(1)} \ge b_{(2)} \ge ... \ge b_{(n)} \ge 0$. Following standard notation, the subscript $(j)$ denotes the $j$-th largest order statistic.
Define a revenue function for an auction involving some subset $S$ of these $n$ bidders as $v(\vec{b}^S)$. Note that in a first price auction, $v(\vec{b}^S) = b^S_{(1)}$, where the superscript $S$ corresponds to the ordered bids from subset $S \subseteq N$.
Define $\vec{e}_k = (1,1,..,1,0,..,0)$, the vector that places ones in the first $k$ positions, and zeros in the $n-k$ positions thereafter. With this notation, the following linear decomposition of $\vec{b}$ holds:
\begin{align}
\vec{b} = \sum^{n-1}_{k=1} (b_{(k)}-b_{(k+1)}) \vec{e}_k + b_{(n)} \vec{e}_n
\end{align}
In this way, we establish an additive group generated from $\vec{e}_k$ for $k=1,...,n$. (We note in passing that the matrix consisting of rows $\vec{e}_k$ defines the cumulative sum operator; the inverse of this corresponds to the difference operator.)
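The decomposition is easy to check numerically; a short NumPy sketch with an arbitrary ordered bid vector (the example values are our own):
\begin{verbatim}
import numpy as np

b = np.array([0.9, 0.7, 0.3, 0.3])   # ordered bids b_(1) >= ... >= b_(n)
n = len(b)
# e[k] (0-indexed) has ones in its first k+1 positions, i.e. e[k] = e_{k+1}.
e = [np.r_[np.ones(k), np.zeros(n - k)] for k in range(1, n + 1)]

recon = sum((b[k] - b[k + 1]) * e[k] for k in range(n - 1)) + b[-1] * e[-1]
assert np.allclose(recon, b)
\end{verbatim}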
\begin{theorem}\label{linearity}
The revenue functions for first price and second price auctions are linear for the additive group defined by $\vec{e}_k$ bids, that is,
\begin{align}
v(\vec{b}^S) = \sum^{n-1}_{k=1} (b^S_{(k)}-b^S_{(k+1)}) v(\vec{e}^S_k) + b^S_{(n)} v(\vec{e}^S_n)
\end{align}
\end{theorem}
\begin{proof}
For a first price auction, the proof follows directly from a telescoping sum, since the $v(\vec{e}^S_k)$ are all equal to $1$. The sum collapses to $b^S_{(1)}$, which is the revenue of a first price auction.
For a second price auction, $v(\vec{e}^S_1)$ is $0$ and all the other $v(\vec{e}^S_k)$ are $1$, which removes the $(b^S_{(1)}-b^S_{(2)})$ term from the sum. The right hand side then adds up to $b^S_{(2)}$, which is the revenue of a second price auction.
\end{proof}
\subsection{Algebraic form for the Shapley value and proofs of uniqueness}
Define $v_k$ as the subgame of the auction $v$ restricted to the $\vec{e}_k$ component of the bids, meaning $$v_k(S) = v(S)$$ for the auction run on the bid vector $\vec{e}_k$. Given the linearity of the revenue function for first and second price auctions in Theorem \ref{linearity}, we can apply this linearity in the definition of Shapley values, leading to:
\begin{align}
\phi_i(v) = \sum^{n-1}_{k=1} (b_{(k)}-b_{(k+1)}) \phi_i(v_k) + b_{(n)} \phi_i(v_n),
\end{align}
where $v_k$ is the auction with bids $e_k$. Define $\vec{\phi}(v_k) = (\phi_1(v_k),..,\phi_{n}(v_k))$.
\begin{theorem}\label{uniqueness1st}
For a first price auction, using properties 1, 2 and 3 of the Shapley valuation, we have
$$\vec{\phi}(v_k) = \frac{1}{k} \times \vec{e}_k$$
and
$$\vec{\phi}(v) = A_f \times \vec{b}$$ where,
$$ A_f \times \vec{e}_k = \frac{1}{k} \times \vec{e}_k $$
for $k=1,..,n$. This last set of equations identifies the matrix $A_f$ uniquely. Matrix $A_f$ has the following form:
$$
A_f = \left(
\begin{array}{cccccc}
1 & -\frac{1}{2} & -\frac{1}{6} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & +\frac{1}{2} & -\frac{1}{6} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & 0 & +\frac{1}{3} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
\vdots & \vdots & \vdots & \ldots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & 0 & 0 & \ldots & +\frac{1}{n-1} & -\frac{1}{n(n-1)}\\
0 & 0 & 0 & \ldots & 0 & \frac{1}{n}\\
\end{array} \right)
$$
\end{theorem}
\begin{proof}
We have $A_f \times \vec{e}_k = \frac{1}{k} \times \vec{e}_k$ for $k=1,..,n$; since the $\vec{e}_k$ form a basis, these $n$ eigenvector equations uniquely define $A_f$ as above.
\end{proof}
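Since the $\vec{e}_k$ form a basis, the eigenvector equations can also be solved numerically by a change of basis, which doubles as a check on the closed form above. A short NumPy sketch follows (the helper name is our own):
\begin{verbatim}
import numpy as np

def valuation_matrix(n, eigvals):
    # Matrix A with A e_k = eigvals[k-1] * e_k, where e_k has ones
    # in its first k positions; the columns of E are e_1, ..., e_n.
    E = np.tril(np.ones((n, n))).T
    return E @ np.diag(eigvals) @ np.linalg.inv(E)

n = 5
A_f = valuation_matrix(n, [1.0 / k for k in range(1, n + 1)])
for k in range(1, n + 1):
    e_k = np.r_[np.ones(k), np.zeros(n - k)]
    assert np.allclose(A_f @ e_k, e_k / k)
\end{verbatim}
Setting the first eigenvalue to zero instead reproduces the second price operator $A_s$ of Theorem \ref{uniqueness2nd} below.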
We can use the same approach to characterize the Shapley value matrix operator for second price auctions, as follows:
\begin{theorem}\label{uniqueness2nd}
For a second price auction, using properties 1, 2 and 3 of the Shapley valuation, we have
$$\vec{\phi}(v_k) = {1\over k} \times \vec{e}_k$$
and,
$$\vec{\phi}(v) = A_s \times \vec{b}$$ where,
$$ A_s \times \vec{e}_k = \frac{1}{k} \times \vec{e}_k, \ \ A_s \times \vec{e}_1 = 0 $$
for $k=2,..,n$. This last set of equations identifies the matrix $A_s$ uniquely, leading to an algebraic proof of uniqueness. Matrix $A_s$ has the following form:
$$
A_s = \left(
\begin{array}{cccccc}
0 & \frac{1}{2} & -\frac{1}{6} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & \frac{1}{2} & -\frac{1}{6} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & 0 & +\frac{1}{3} & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
\vdots & \vdots & \vdots & \ldots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & -\frac{1}{(n-1)(n-2)} & -\frac{1}{n(n-1)}\\
0 & 0 & 0 & \ldots & +\frac{1}{n-1} & -\frac{1}{n(n-1)}\\
0 & 0 & 0 & \ldots & 0 & \frac{1}{n}\\
\end{array} \right)
$$
\end{theorem}
\begin{proof}
We have $A_s \times \vec{e}_k = \frac{1}{k} \times \vec{e}_k$ for $k=2,..,n$, which gives $n-1$ eigenvectors of the matrix $A_s$, together with the null-space equation $A_s \times \vec{e}_1 = 0$; since the $\vec{e}_k$ form a basis, these equations uniquely define $A_s$ as above.
\end{proof}
\subsection{Flexible properties}
Theorems \ref{uniqueness1st} and \ref{uniqueness2nd} suggest a flexible approach to valuation. The Additivity property ensures the linear form of $\vec{\phi}(v)$. The Dummy Player property transfers zeros from bid vector into zeros in the valuation vector. The Symmetry property ensures similar valuations for similar bids in $\vec{e}_k$s and Efficiency provides the ${1/k}$ coefficient for normalizing the valuation so that the total value is always exactly conserved.
\subsection{Computation complexity}
For games with many agents, Shapley values are generally computationally intractable, overwhelmed in short order by the exponential growth in the number of subsets. However, when an analytical solution is available, we can achieve computations requiring only polynomial time.
\section{A Modified Shapley Value Model}
\label{variation1}
Given the aforementioned algebraic form, we can challenge and change the properties of a valuation. For example, if we discard the Symmetry property, the following set of equations results:
$$ A^{(P)} \times \vec{e}_k = \vec{p}_k, $$
where $\vec{p}_k= (p_{k1},..,p_{kk},0,..,0),$ $\sum^k_{l=1} p_{kl} = 1$, and $p_{kl} \geq 0.$
Without the Symmetry property, we no longer have a unique valuation result. As a consequence, we have available some degrees of freedom for choosing the valuation function. At the most general level, we can parameterize by probability vectors $\vec{p}_k$.
\subsection{Interpretation}
One interpretation of the $\vec{p}_k$, implicit in the notation, views them as vectors of probabilities. If $p_{ki} \neq p_{kj}$, then even if the $i$th and $j$th buyers bid the same, the attribution of value between them is not necessarily symmetric; instead, it depends on the parameters $\vec{p}_k$. For example, over multiple auctions, the entries of the $\vec{p}_k$ vectors can be chosen to correspond to the fraction of such auctions for which each buyer can actually spend the claimed bid.
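The same change-of-basis construction yields the entire parameterized family once Symmetry is dropped: one replaces the uniform targets $\frac{1}{k}\vec{e}_k$ by arbitrary probability vectors $\vec{p}_k$. A sketch follows; the particular $\vec{p}_k$ used here are illustrative only:
\begin{verbatim}
import numpy as np

def modified_valuation_matrix(p_vectors):
    # Matrix A^(P) with A^(P) e_k = p_k; each p_k is a probability
    # vector supported on its first k coordinates.
    n = len(p_vectors)
    E = np.tril(np.ones((n, n))).T        # columns e_1, ..., e_n
    P = np.column_stack(p_vectors)        # columns p_1, ..., p_n
    return P @ np.linalg.inv(E)

# n = 3: tilt the attribution toward the highest bidder.
p = [np.array([1.0, 0.0, 0.0]),
     np.array([0.7, 0.3, 0.0]),
     np.array([0.5, 0.3, 0.2])]
A_P = modified_valuation_matrix(p)
b = np.array([0.9, 0.6, 0.2])             # ordered bids
print(A_P @ b)                            # asymmetric attributions
\end{verbatim}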
%
%
\section{Conclusion}
\label{conclusion}
The theory of rational economics provides a unique solution to the problem of attributing value, the Shapley valuation. We illustrate that the terms of this model give answers to the counterfactual questions associated with auctions -- the value to the seller of each bidder's participation. However, the empirical validity of the underlying assumptions, the Shapley properties, remains open. We build an algebraic framework to relax the properties as well as to provide computationally feasible valuations. These we explore by posing parameterized models. We work out one of these, the one that results from relaxing the Symmetry property, and term the result modified Shapley values. The new modified Shapley valuation can be applied to attribute revenue to bidders in first and second price auctions.
\bibliographystyle{plainnat}
\section{Omitted Technical Details}\label{app:tech}
\subsection{Proof of Proposition~\ref{prop:hardness}}
We show that this class of problems includes a known NP-hard problem---maximum 3-dimensional matching~\cite{3DimMatching}---as a special case. Consider the following setting: Let $\delta=1$. Given an instance of 3-dimensional matching over disjoint sets $X$, $Y$ and $Z$, construct an instance of our problem as follows: consider an item $i_y$ for each $y \in Y$ and a bidder $b_x$ for each $x \in X \cup Z$. All bidders have capacity 1 (i.e. $q_x = 1$). If there is an edge between $x \in X \cup Z$ and $y \in Y$ in the 3-dimensional matching problem, $b_x$ bids 1 for $i_y$, otherwise she bids $0$ (that is, the bidding distributions are all degenerate with all the mass on either 0 or 1). Let the reserve price $r_y$ be 0 for all $y \in Y$. This means that to sell item $i_y$ for a price greater than zero we need to call at least two bidders to the auction for $i_y$. In this case the winner will pay exactly $\$ 1$ for the item. Also note that calling more than two people to the auction cannot increase the auction's revenue.
With this reduction it is easy to see that the maximum 3-dimensional matching in the original problem is precisely the callout that maximizes the exchange's revenue and vice versa. \hfill$\rule{2mm}{3mm}$
\subsection{Examples}\label{app:ex}
\begin{example}\label{ex:1}
Consider the following setting (See Figure~\ref{fig:ex1}): Suppose $A =\{a_1, a_2 \}$, $B = \{1,2\}$, $q_1= q_2 = 1$, and $\delta=1$.
Assume $b^1_{1} = 1, b^2_{1}=0$ and $b^1_{2} = b^2_{2}=0.5$. Also $r_1= 0.5$ and $r_2= 0.4$. The callout mechanism (a) assigns bidder 1 to $a_1$ and randomly decides which auction to call the second bidder to. This results in an expected revenue to the exchange equal to $0.5 + 0.5 \times 0.4=0.7$ and utility for the bidders equal to $0.5$ and $0.05$, respectively. The callout mechanism (b) assigns bidder 1 to $a_1$ and bidder 2 to $a_2$ and results in exchange revenue equal to 0.9 and bidder utilities equal to $0.5$ and $0.1$. In this example, (b) has a better performance than (a) because (b) brings higher expected revenue to the exchange.
\end{example}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{figures/ex1.pdf}
\caption{Illustration of Example \ref{ex:1}. Callout mechanism (a) on the left and (b) on the right.}
\label{fig:ex1}
\end{figure}
\begin{example}\label{ex:2}
Consider the following setting (See Figure~\ref{fig:ex2}). Suppose $A =\{a_1, a_2 \}$, $B = \{1,2,3\}$, $q_1= q_2 = q_3 = 1$, and $\delta=1$.
Let's assume $b^1_{1} = 0.1, b^2_{1}=0.5$, $b^1_{2} = 0.1$ and $b^2_{2}=0$, and finally $b^1_{3} = 0$ and $b^2_{3}=1$. Also $r_1= 0.1$ and $r_2= 0.4$.
Both mechanisms (a) and (b) call bidder 2 to item 1 and bidder 3 to item 2. Mechanism (a) calls bidder 1 to item 1, while mechanism (b) calls her to item 2. In this example callout mechanism (a) results in a revenue of $0.5$ while (b) results in $0.6$, and both clear the market. However, mechanism (b) decreases the utility that bidder 3 earns---the utilities earned by bidders 1 and 2 remain unchanged and equal to 0, while that of bidder 3 decreases from 0.6 to 0.5.
\end{example}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{figures/ex2.pdf}
\caption{Illustration of Example \ref{ex:2}. Callout mechanism (a) on the left and (b) on the right.}
\label{fig:ex2}
\end{figure}
\subsection{Proof of Proposition~\ref{prop:eq}}
Let us assume that the bidder's initial belief about the exchange's choice of $e$ is such that she chooses to participate in the first stage.
If from the resulting information set she observes $u^e \geq u$, then the sub-game perfect criterion requires her to participate in the second stage as well. That results in a payoff equal to $c^e(1+\delta)$ for the exchange and $u^e(1+\delta)$ for the bidder.
On the other hand, if the bidder observes a utility of $u^e < u$ after participating in the first stage, then the sub-game perfect criterion requires that she stop participating in the second stage. This choice in turn results in a payoff equal to $c^e$ for the exchange and $(u^e+\delta u)$ for the bidder.
Let $e$ specify the exchange's strategy (i.e. its choice of callout mechanism). Two cases are possible: either $u^e \geq u$ or $u^e < u$. If the former is the case, then the exchange's payoff is maximized when $e=e^*$ (by definition), and if the latter is the case, then the exchange's payoff is maximized when $e= \arg\max_{e'} c^{e'}$. Therefore when $\max_e c^e \leq c^{e^*} (1+\delta)$ (or equivalently $\frac{\max_e c^e}{c^{e^*}} -1 \leq \delta$) the exchange's best response to the bidder's strategy is to choose $e^*$. \hfill$\rule{2mm}{3mm}$
\subsection{The Greedy Algorithm}\label{app:other}
Algorithm~\ref{alg:greedy} describes our baseline, GRA.
\begin{algorithm}
\caption{The Greedy Algorithm (GRA)}
\label{alg:greedy}
\begin{algorithmic}[1]
\State \textbf{Input:} $K \in \mathbb{N}$, $\epsilon>0$.
\State Start with $t = 0$.
\State Let $\hat{D}_i$ be the estimated bidding distribution for bidder $i$. Start with identical estimates for all bidders.
\While{there exist more ad slots}
\State $t=t+1$.
\State Receive ad slot $a_t$.
\State $B_t = \emptyset$.
\For{$k=1,2,\cdots,K$}
\State Approximate the marginal revenue impact of adding each candidate bidder to $B_t$, using $\lceil 1/\epsilon \rceil$ repetitions.
\State Add the bidder with the highest marginal revenue impact to $B_t$.
\EndFor
\State Run the auction among bidders in $B_t$.
\State Update $\hat{D}_i$ for all $i$ using $\mathbf{b}_{t}$.
\EndWhile
\end{algorithmic}
\end{algorithm}
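A hedged Python sketch of the greedy selection in Algorithm \ref{alg:greedy} follows; the revenue estimator is passed in as a function (for instance the Monte Carlo sketch of Section \ref{sec:setting}), and the online updating of $\hat{D}_i$ is omitted:
\begin{verbatim}
def greedy_callout(candidates, rev_estimator, K):
    # Greedily build a callout set of size at most K, adding at each
    # step the bidder with the highest estimated marginal revenue.
    # candidates is a set of bidder ids; rev_estimator maps a set of
    # bidders to an estimate of the expected auction revenue.
    B_t = set()
    for _ in range(K):
        base = rev_estimator(B_t)
        gains = {i: rev_estimator(B_t | {i}) - base
                 for i in candidates - B_t}
        if not gains:
            break
        B_t.add(max(gains, key=gains.get))
    return B_t
\end{verbatim}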
\subsection{Proof of Proposition~\ref{prop:ShA}}
Let $B_k$ denote the set of bidders with the highest $k$ bids. Consider the case where all bidders submit bids equal to $1$.
Define $\mathbf{e}_k = (1,1,..,1,0,..,0)$ to be the vector that places ones in the first $k$ positions, and zeros in the $n-k$ positions thereafter. For $k=2,3,\cdots,n$ properties 1, 2, and 3 can then be translated to:
\begin{enumerate}
\item Symmetry: there exists $c$ such that $\mathbf{R}_{B_k} = c \mathbf{e}_k$.
\item Linearity: $\mathbf{R}_{B_k} = \mathbf{A} \mathbf{e}_k$ for some fixed matrix $\mathbf{A}$.
\item Conservation of Revenue: $\textbf{1}^T \mathbf{R}_{B_k}= 1$.
\end{enumerate}
From 1 we have that $R_1=\cdots=R_k=c$. Combining this with 3, we have $c = \frac{1}{k}$, or $\mathbf{R}_{B_k} = \frac{1}{k} \mathbf{e}_k$. Combining the latter with property 2 we have $\frac{1}{k} \mathbf{e}_k = \mathbf{A} \mathbf{e}_k$ for $k=2,3,\ldots,n$. Similarly we have that $\mathbf{A} \mathbf{e}_1 = 0$. This uniquely identifies $\mathbf{A}$ as follows\footnote{For a second-price auction with reserve price $r$ the derivation is similar. The only difference is that we must treat bids below $r$ as $0$, because such bids cannot have positive revenue impact.}:
$$
\mathbf{A} = \left(
\begin{array}{cccccc}
0 & \frac{1}{2} & -{1 \over 6} & \ldots &-{1 \over (n-1)(n-2)} & -{1 \over n(n-1)}\\
0 & \frac{1}{2} & -{1 \over 6} & \ldots & -{1 \over (n-1)(n-2)} & -{1 \over n(n-1)}\\
0 & 0 & +{1 \over 3} & \ldots & -{1 \over (n-1)(n-2)} &-{1 \over n(n-1)}\\
\vdots & \vdots & \vdots & \ldots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & -{1 \over (n-1)(n-2)} & -{1 \over n(n-1)}\\
0 & 0 & 0 & \ldots & +{1 \over (n-1)} & -{1 \over n(n-1)}\\
0 & 0 & 0 & \ldots & 0 & {1 \over n}\\
\end{array} \right)
$$
This finishes the proof.
\hfill$\rule{2mm}{3mm}$
\subsection{Running Times}
\begin{table*}[ht!]
\centering
\caption{Running time of each algorithm in terms of the number of bidders $n$, auctions $m$, and accuracy $\epsilon$}\label{tab:rt}
\begin{tabular}{ |l|l|l|l|l|l|l|l|l|l|}
\hline
Algorithm & RQT & GRA & ShA & BAR & WIN & SPD & BID & RVC & RNK\\
\hline
Running Time & $\Theta(\frac{nm}{\epsilon})$ & $\Theta(\frac{n^2 m}{\epsilon})$ & $\Theta(n^2 m)$ & $O(n m)$ & $\Theta(m)$ & $\Theta(m)$ & $\Theta(n m)$ & $\Theta(m)$ & $\Theta(n \log(n) m)$ \\
\hline
\end{tabular}
\end{table*}
\section{Introduction}
For a long time, the measurement of the caloric curves of hot nuclei has been a way to study the phase transition of nuclear matter \cite{natowitz1}.
But we may wonder whether these caloric curves are really a robust signal of the phase transition in nuclei. We will try to answer this question.
With the different campaigns done by the INDRA collaboration we can build different caloric curves of Quasi-Projectiles for different symmetric or quasi symmetric systems.
The main advantage for this study is the possibility of using the same multidetector array and a single experimental protocol.
The calorimetry used here is a new one, called ``3D calorimetry'' \cite{vient1,legouee1, vient2}, which allows us to determine the excitation energy $E^{\ast}$ of the QP. The temperatures $T$ are estimated from the slopes of kinetic energy spectra defined in the reconstructed Quasi-Projectile frame. We have tried to optimize these two methods of measurement as much as possible.
\section{Presentation of the necessary event selections}
First, to properly characterize and reconstruct a hot nucleus experimentally, and especially a QP, we need to apply event selections.
Consequently, we use an event generator, HIPSE \cite{lacroix1}, and a software filter to simulate the experimental response of INDRA. We can thus understand and control our experimental procedure.
We verify the correct detection of particles and fragments coming from the QP with the two following conditions:
\[ 1.05 >\frac{\sum_{i=1}^{Total\,Mul}(ZV_{//})_{i}}{Z_{Proj}\times V_{Proj}} > 0.7 \]
and \[ 1.05 > \frac{\sum_{i=1}^{Forward\,Mul}(2\times Z_{i})}{Z_{Proj}+Z_{Targ}} > 0.7\]
Through this good measurement of the total charge in the forward hemisphere of the center of mass and this correct conservation of the total parallel pseudo-momentum, we obtain a criterion of completeness for the event detection in the forward hemisphere of the center of mass. With HIPSE, we can study the consequences of both of these selections on the impact parameter distribution. According to HIPSE, these selections keep only 28 $\%$ of the total cross section for the system Xe + Sn at 50 A.MeV.
We also need a supplementary selection. Indeed, we must try to control the geometry and the violence of the collision.
For that, we use the normalized total transverse kinetic energy of Light Charged Particles ($Z<3$), which is defined by the following relation:
\[E_{tr12}^{Norm}=\frac{\sum_{i=1}^{Mul_{LCP}} T_{k_i} \times \sin^2(\theta_i)} {(2E_{Available\,in\,c.m.}/3)}\]
where $T_{k_i}$ is the kinetic energy and $\theta_i$ the polar angle in the laboratory frame.\\
For a completely dissipative collision, this global variable is equal to one.
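For illustration, both selections can be written as a few lines of analysis code operating on per-event arrays; everything below is schematic (the array names, units and the radian convention for the angles are our own assumptions, with the actual inputs coming from the INDRA event reconstruction):
\begin{verbatim}
import numpy as np

def is_complete(Z, v_par, Z_proj, v_proj, Z_targ, fwd_mask):
    # Completeness cuts: pseudo-momentum and forward-charge ratios.
    p_ratio = np.sum(Z * v_par) / (Z_proj * v_proj)
    z_ratio = np.sum(2 * Z[fwd_mask]) / (Z_proj + Z_targ)
    return 0.7 < p_ratio < 1.05 and 0.7 < z_ratio < 1.05

def etr12_norm(T_k, theta_lab, Z, E_avail_cm):
    # Normalized total transverse kinetic energy of LCPs (Z < 3);
    # theta_lab is given in radians.
    lcp = Z < 3
    return (np.sum(T_k[lcp] * np.sin(theta_lab[lcp]) ** 2)
            / (2.0 * E_avail_cm / 3.0))
\end{verbatim}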
In reference \cite{vient1}, a clear mean correlation between this variable and the impact parameter is shown with HIPSE for all the events seen by INDRA. The same correlation is also observed for the experimental data. For HIPSE, this mean correlation remains the same for the complete events in the forward hemisphere of the center of mass.
We therefore chose to select events with this global variable. We divide its distribution into ten slices corresponding to the same cross section.
\section{Fast description of the 3D calorimetry}
We now quickly recall what the 3D calorimetry is. The QP frame is reconstructed with IMF's and fragments located in the forward hemisphere of the center of mass. We define the direction of any particle in the QP frame with two angles: the azimuthal angle $\phi$ in the reaction plane, and the polar angle $\theta_ {spin}$, out-of-plane (see part A of figure \ref{fig01}).
In the QP frame, we decided to divide the whole space into six spatial zones with a selection using the azimuthal angle $\phi$, as shown in part B of figure \ref{fig01}. These spatial domains represent the same solid angle in this frame.
\begin{figure}[h]
\centerline{\includegraphics[width=12.46cm,height=3.4cm]{fig01.eps}}
\caption{A) Angle definitions in the QP frame. B) Presentation of angular domains of selection.}
\label{fig01}
\end{figure}
In figure \ref{fig02}, the kinetic energy histograms of protons in the QP frame for the six spatial zones are presented for semi-peripheral collisions, for the system Xe + Sn at 50 A.MeV. The black graphs correspond to the data, the pink to HIPSE. We have simply normalized the two histograms to the same number of events. The agreement between the data and HIPSE is remarkable. With HIPSE, the origin of the protons is known. Therefore, we know that the blue curves correspond to the protons emitted by the QP and the green curves to the other contributions. We see in this figure that we have a superposition of the blue and pink curves only for the angular zone ($0^{\circ}$, $-60^{\circ}$). We can note that there is also a small green contribution in this zone.
\begin{figure}[h]
\centerline{\includegraphics[width=9.53cm,height=10.14cm]{fig02.eps}}
\caption{Kinetic energy spectra for the different angular domains obtained for semi-peripheral Xe + Sn collisions at 50 A.MeV. The black curves correspond to the data, the pink to HIPSE, the blue to HIPSE for the protons emitted by the QP, and the green to protons with another origin.}
\label{fig02}
\end{figure}
For the 3D calorimetry of the QP, we have chosen to consider that all the particles located in the azimuthal angular range ($0^{\circ}$, $-60^{\circ}$) in the reconstructed frame of the QP are strictly particles evaporated by the QP.
We estimate the evaporation probability as a function of the kinetic energy and the angular domain by comparison with this reference domain. For example, for a given kind of particle, we divide the kinetic energy distribution of the reference domain by the kinetic energy distribution of another angular domain. We thus obtain an experimental distribution for the probability of emission by the QP for this kind of particle.
We can then use these probabilities $Prob_i$, defined for different particles, kinetic energies, $\theta_{spin}$, $\phi$ and normalized total transverse kinetic energies of LCP's, to do a calorimetry of the QP, event by event, using different formulas to obtain the charge, the mass and the excitation energy of the QP:
\[E^{\ast}_{QP}=\sum_{i=1}^{Multot} Prob_{i}\times Ec_{i}+ N_{neutron}\times 2\times\langle T \rangle_{p+\alpha}-Q -Ec_{QP} \]
\[Z_{QP}=\sum_{i=1}^{Multot} Prob_{i}\times Z_{i} \; and \; A_{QP}=Z_{QP} \times 129/54=\sum_{i=1}^{Multot} Prob_{i}\times A_{i}+N_{neutron}\]
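A schematic sketch of the event-by-event bookkeeping implied by these formulas is given below (the arrays of emission probabilities, kinetic energies, charges and masses, as well as $\langle T \rangle_{p+\alpha}$, the neutron number, the $Q$-value and the QP recoil energy $Ec_{QP}$, are all assumed inputs from the reconstruction):
\begin{verbatim}
import numpy as np

def qp_calorimetry(prob, Ec, Z, A, N_neutron, T_mean_p_alpha, Q, Ec_QP):
    # Excitation energy, charge and mass of the QP for one event.
    E_star = (np.sum(prob * Ec) + N_neutron * 2.0 * T_mean_p_alpha
              - Q - Ec_QP)
    Z_QP = np.sum(prob * Z)
    A_QP = np.sum(prob * A) + N_neutron
    return E_star, Z_QP, A_QP
\end{verbatim}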
\section{Application of the 3D Calorimetry to obtain caloric curves}
We have applied this 3D calorimetry to different symmetric or quasi-symmetric systems: Ni+Ni, Xe+Sn and Au+Au (see figure \ref{fig03}). The measured temperature is an average temperature calculated using the slopes obtained by fitting the spectra of protons, deuterons and tritons in the reference azimuthal angular domain.
Two important facts can be noted.
\begin{figure}[h]
\centerline{\includegraphics[width=13.69cm,height=5.53cm]{fig03.eps}}
\caption{Experimental caloric curves obtained for the systems Ni+Ni, Xe+Sn and Au+Au at different incident energies.}
\label{fig03}
\end{figure}
First, we see a systematic bump for the peripheral collisions and an apparent upward drift of the temperature with the size of the system.
It is a consequence of a complex effect due to the detection and to the criteria of completeness \cite{vient1, vient2}.
Second, we observe a systematic change of slope for all systems. It seems to correspond to a change of the de-excitation mode of the nuclei. The slope of this change seems to evolve with the size of the system. Does it mean that we have reached the limiting temperature of existence of hot nuclei?
In fact, it is difficult to conclude. We know that we have experimental limitations concerning the measurement of the temperature.
To better understand that, we can observe what happens with HIPSE. We applied the 3D calorimetry to filtered events generated by HIPSE for the system Xe+Sn for different beam energies. The obtained caloric curves are presented in figure \ref{fig04}. In this figure, we compare the data (black circles), HIPSE (red squares) and the data supplied by HIPSE when a perfect calorimetry is applied (blue triangles).
\begin{figure}[h]
\centerline{\includegraphics[width=9.72cm,height=7.56cm]{fig04.eps}}
\caption{Experimental caloric curves obtained for the system Xe+Sn at different incident energies for data and HIPSE, with the 3D calorimetry (black circles and red squares respectively) and with a perfect calorimetry (blue triangles).}
\label{fig04}
\end{figure}
With HIPSE, we know whether or not a nucleus was emitted by the QP. With this information, it is easy to do a perfect calorimetry.
We see clearly that there is an effect due to pollution by the other contributions in the azimuthal angular domain taken as the reference for the QP contribution.
The problem of separating the different contributions in the nuclear reaction remains experimentally challenging and is not completely resolved. It is our main experimental problem.
\section{What are the observed signals of the phase transition?}
We can then ask: is there a phase transition for the Quasi-Projectiles in these data? For Xe+Sn, if we study the evolution of the mean evaporation multiplicity of different types of particles and fragments of the QP as a function of its excitation energy per nucleon, we observe a rise and fall for the IMF's \cite{kreutz1} (see part A of figure \ref{fig05}). On these graphs, each symbol corresponds to a beam energy and each color to a range of QP mass.
\begin{figure}[h]
\centerline{\includegraphics[width=13.73cm,height=7.761cm]{fig05.eps}}
\caption{A) Evolution of the mean IMF multiplicity as a function of the excitation energy per nucleon for the system Xe+Sn at different incident energies. B) Evolution of $\ln((\sigma_{Z_{max}})^2)$ as a function of $\ln(\left\langle Z_{max}\right\rangle^2)$ for the system Xe+Sn at different incident energies.}
\label{fig05}
\end{figure}
We can also observe another possible signal of the phase transition: the delta-scaling in the liquid part \cite{frankland1}.
For that, we plot the logarithm of the variance of the charge of the largest fragment in the forward hemisphere as a function of the logarithm of the square of its average value (see part B of figure \ref{fig05}). We observe a scaling with $\Delta =1$ for the most central and violent collisions at the higher beam energies, as in reference \cite{frankland1}.
\section{Conclusions and outlooks}
Concerning the question of the existence of a phase transition, the answer is not clear enough for the moment. To get a more definitive answer, we must still improve the QP calorimetry. This means that we must determine the efficiency of the multi-detector array with a sufficiently realistic event generator or model and then correct for its effect. We must also optimize the separation criteria between the different contributions (QP, QT, pre-equilibrium), again with an event generator or a model. But these corrections are evidently model-dependent.
\section*{Results}
\subsection*{Optimal projective measurements for a single two-level system}
\label{optimalconditions}
Let us begin by considering a single two-level system interacting with an arbitrary environment. The Hamiltonian of the system is $H_S$, the environment Hamiltonian is $H_{B}$, and there is some interaction between the system and the environment that is described by the Hamiltonian $V$. The total system-environment Hamiltonian is thus $H = H_S + H_B + V$. At time $t = 0$, we prepare the system state $\ket{\psi}$. In the usual treatment of the quantum Zeno and anti-Zeno effects, repeated projective measurements described by the projector $\ket{\psi}\bra{\psi}$ are then performed on the system with time interval $\tau$. The survival probability of the system state after one measurement is then $s(\tau) = \text{Tr}_{S,B}[(\ket{\psi}\bra{\psi} \otimes \mathds{1}) e^{iH_S\tau}e^{-iH\tau} \rho(0) e^{iH\tau}e^{-iH_S\tau}]$, where $\text{Tr}_{S,B}$ denotes taking the trace over the system and the environment, $\rho(0)$ is the initial combined state of the system and the environment, and the evolution of the system state due to the system Hamiltonian itself has been eliminated via a suitable unitary operation just before performing the measurement \cite{MatsuzakiPRB2010,ChaudhryPRA2014zeno,Chaudhryscirep2016}. Assuming that
the system-environment correlations can be neglected, the survival probability after $N$ measurements can be written as $[s(\tau)]^N = e^{-\Gamma(\tau)N\tau}$, thereby defining the effective decay rate $\Gamma(\tau)$. It should be noted that the behaviour of the effective decay rate $\Gamma(\tau)$ as a function of the measurement interval allows us to identify the Zeno and anti-Zeno regimes. Namely, if $\Gamma(\tau)$ increases as $\tau$ increases, we are in the Zeno regime, while if $\Gamma(\tau)$ decreases if $\tau$ increases, we are in the anti-Zeno regime \cite{KurizkiNature2000, SegalPRA2007, ThilagamJMP2010, ChaudhryPRA2014zeno,Chaudhryscirep2016}.
We now consider an alternative way of repeatedly preparing the initial state with time interval $\tau$. Once again, we start from the initial system state $\ket{\psi}$. After time $\tau$, we know that the state of the system is given by the density matrix $\rho_S(\tau) = \text{Tr}_B[e^{iH_S\tau}e^{-iH\tau} \rho(0) e^{iH\tau}e^{-iH_S\tau}]$, where once again the evolution due to the free system Hamiltonian has been removed. Now, instead of performing the projective measurement $\ket{\psi}\bra{\psi}$, we perform an arbitrary projective measurement given by the projector $\ket{\chi}\bra{\chi}$. The survival probability is then $s(\tau) = \text{Tr}_S[(\ket{\chi}\bra{\chi})\rho_S(\tau)]$, and the post-measurement state is $\ket{\chi}$. By performing a unitary operation $U_R$ on the system state on a short timescale, where $U_R\ket{\chi} = \ket{\psi}$, we can again end up with the initial state $\ket{\psi}$ after the measurement. This process can then, as before, be repeated again and again to repeatedly prepare the system state $\ket{\psi}$. Once again, if the correlations between the system and the environment can be neglected, we can write the effective decay rate as $\Gamma(\tau) = -\frac{1}{\tau}\ln s(\tau)$. But now, we can, in principle, via a suitable choice of the projector $\ket{\chi}\bra{\chi}$, obtain a larger survival probability (and a correspondingly smaller decay rate) than what was obtained by repeatedly using the projective measurement given by the projector $\ket{\psi}\bra{\psi}$. The question, then, is what is this projector $\ket{\chi}\bra{\chi}$ that should be chosen to maximize the survival probability?
For an arbitrary quantum system, it is difficult to give a general condition or formalism that will predict this optimal projective measurement. However, most studies of the effect of repeated quantum measurements on quantum systems have been performed by considering the quantum system to be a single two-level system \cite{KoshinoPhysRep2005}. Let us now show that if the quantum system is a two-level system, then it is straightforward to derive a general method for calculating the optimal projective measurements that need to be performed as well as an expression for the optimized decay rate. We start from the observation that the system density matrix at time $\tau$, just before the measurement, can be written as
\begin{equation} \label{do}
\rho_{S}(\tau) = \frac{1}{2} \Big (\mathds{1} + n_{x}(\tau)\sigma_{x} + n_{y}(\tau)\sigma_{y} + n_{z}(\tau)\sigma_{z} \Big ) = \frac{1}{2} \Big ( \mathds{1} + \mathbf{n}(\tau) \cdot \mathbf{\sigma} \Big ),
\end{equation}
where $\mathbf{n}(\tau)$ is the Bloch vector of the system state.
We are interested in maximizing the survival probability $s(\tau) = \text{Tr}_S[(\ket{\chi}\bra{\chi})\rho_S(\tau)]$. It is clear that we can also write
\begin{equation} \label{dop}
\ket {\chi} \bra {\chi} = \frac{1}{2} \Big ( \mathds{1} + n'_{x}\sigma_{x} + n'_{y}\sigma_{y} + n'_{z}\sigma_{z} \Big ) = \frac{1}{2} \Big ( \mathds{1} + \mathbf{n'} \cdot \mathbf{\sigma} \Big ),
\end{equation}
where $\mathbf{n'}$ is a unit vector corresponding to the Bloch vector for the projector $\ket{\chi}\bra{\chi}$. Using Eqs.~\eqref{do} and \eqref{dop}, we find that the survival probability is
\begin{equation}
s(\tau) = \frac{1}{2}\left(1 + \mathbf{n}(\tau) \cdot \mathbf{n'} \right).
\end{equation}
It should then be obvious how to find the optimal projective measurement $\ket{\chi}\bra{\chi}$ that needs to be performed. The maximum survival probability is obtained if $\mathbf{n'}$ is parallel to $\mathbf{n}(\tau)$. If we know $\rho_S(\tau)$, we can find out $\mathbf{n}(\tau)$. Consequently, $\mathbf{n'}$ is simply the unit vector parallel to $\mathbf{n}(\tau)$. Once we know $\mathbf{n'}$, we know the projective measurement $\ket{\chi}\bra{\chi}$ that needs to be performed. The corresponding optimal survival probability is given by
\begin{equation}
\label{optimizedprobability}
s^{*}(\tau) = \frac{1}{2}\left(1 + \norm{\mathbf{n}(\tau)}\right).
\end{equation}
Now, if we ignore the correlations between the system and environment, which is valid for weak system-environment coupling, we can again derive the effective decay rate of the quantum state to be $\Gamma(\tau) = -\frac{1}{\tau}\ln s^{*}(\tau)$. We now investigate the optimal effective decay rate for a variety of system-environment models.
\subsection*{The population decay model}
\begin{figure}
{\includegraphics[scale = 0.55]{PopulationDecayMerged2.pdf}}\caption{\textbf{Behaviour of both the survival probability and the effective decay rate for the population decay model.} \textbf{(a)} $s(\tau)$ versus $\tau$. The purple dashed curve shows the survival probability if the excited state is repeatedly measured; the black curve shows the survival probability if the optimal projective measurement is repeatedly made. \textbf{(b)} $\Gamma(\tau)$ versus $\tau$. The blue dashed curve shows the decay rate if the excited state is repeatedly measured; the solid red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.01, \omega_c = 50$ and $\varepsilon = 1$. In this case, $\tau^* \approx 10.6$.}
\label{PopDecayMerged2}
\end{figure}
To begin, we consider the paradigmatic population decay model. The system-environment Hamiltonian is (we use $\hbar = 1$ throughout)
\begin{equation}
H = \frac{\varepsilon}{2}\sigma_{z} + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sum_{k} (g_{k}^{*}b_{k}\sigma^{+} + g_{k}b_{k}^{\dagger}\sigma^{-}),
\end{equation}
where $\varepsilon$ is the energy difference between the two levels, $\sigma_{z}$
is the standard Pauli matrix, $\sigma^{+}$ and $\sigma^{-}$ are the raising
and lowering operators, and $b_k$ and $b_k^{\dagger}$ are the annihilation and creation operators for mode $k$ of the environment. It should be noted that here we have made the rotating-wave approximation. This system-environment Hamiltonian is widely used to study, for instance, spontaneous emission \cite{Scullybook}. We consider the very low temperature regime. We initially prepare the system-environment state $\ket{\uparrow_z,0}$, in which the two-level system is in its excited state and the environment oscillators are in their ground state. Ordinarily, in studies of the QZE and the QAZE, the system is repeatedly projected onto the excited state with time interval $\tau$. As discussed before, we instead allow the system to be projected onto some other state such that the effective decay rate is minimized. To find this optimal projective measurement, we need to understand how the Bloch vector of the system evolves in time. Due to the structure of the system-environment Hamiltonian, the system-environment state at a later time $t$ can be written as $
| \psi(t) \rangle = f(t) \ket{\uparrow_z, 0} + \sum_{k} f_k(t) \ket{\downarrow_z, k}$,
where $\ket{\downarrow_z, k}$ means that the two-level system is in the ground state and that
mode $k$ of the environment has been excited. It then follows that the density matrix of the system at time $t$ is
\begin{align}\label{reduceddebsitymatrix}
\rho_{S}(t)& =
\begin{bmatrix}
|f(t)|^2 & 0 \\
0 & \displaystyle \sum_{k} |f_{k}(t) |^2
\end{bmatrix}.
\end{align}
We consequently find that the components of the Bloch vector of the system are $n_{x}(t) = n_{y} (t) = 0$, while $n_{z}(t) = 1 - 2 \displaystyle \sum_{k} |f_{k}(t) |^2$.
Thus, we have a simple interpretation of the system dynamics. Initially, the system is in the excited state. As time goes on, the coherences remain zero, while the probability that the system has made a transition to the ground state increases. In other words, the Bloch vector of the system starts as a unit vector with $n_z(0) = 1$, shrinks in magnitude (with the $x$ and $y$ components staying zero) until its length reaches zero, and thereafter flips direction and grows again until effectively $n_z(t) = -1$. Since the Bloch vector of the optimal measurement is parallel to the Bloch vector of the system, if the measurement interval is short enough that $n_z(\tau) > 0$, we should keep applying the projector $\ket{\uparrow_z}\bra{\uparrow_z}$. If instead the measurement interval is large enough that $n_z(\tau) < 0$, we should rather apply the projector $\ket{\downarrow_z}\bra{\downarrow_z}$ and, immediately after the measurement, apply a $\pi$ pulse so as to end up with the system state $\ket{\uparrow_z}$ again. The time $t = \tau^*$ at which the Bloch vector flips direction is therefore of central importance and must be determined in order to optimize the effective decay rate. To find this time, we assume that the system and the environment are weakly coupled, so that a master equation can be used to analyze the system dynamics. Since we solve this master equation numerically, we may as well restore the counter-rotating terms, so that the system-environment interaction used in solving the master equation is $\sigma_{x} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger})$. The master equation that we use can be written as
\begin{equation} \label{masterequation}
\frac{d \rho_S (t)}{dt} = i[\rho_S(t), H_{S}] + \int_{0}^{t} ds \Big \{ [\bar{F}(t,s)\rho_S(t), F ]C_{t s} + \; \text{h.c.} \Big \},
\end{equation}
where the system Hamiltonian is $H_S = \frac{\varepsilon}{2}\sigma_z$, the system-environment interaction Hamiltonian has been written as $F \otimes B$ with $F = \sigma_x$ and $B = \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger})$, $\bar{F}(t,s) = e^{iH_S(t -s)}Fe^{-iH_S(t -s)}$, and h.c.\ denotes the Hermitian conjugate. Here the environment correlation function $C_{ts}$ is defined as $C_{ts} = \text{Tr}_B [\widetilde{B}(t)\widetilde{B}(s) \rho_B]$, where $\widetilde{B}(t) = e^{iH_B t} B e^{-iH_B t}$ and $\rho_B = e^{-\beta H_B}/Z_B$, with $Z_B$ the partition function. To evaluate the environment correlation function, we introduce the spectral density function $J(\omega) = G \omega^s \omega_{c}^{1-s} e^{- \omega/ \omega_c}$, where $G$ parametrizes the system-environment coupling strength, $s$ characterizes the Ohmicity of the
environment, and $\omega_c$ is the cutoff frequency. We can then numerically solve this differential equation to find the system density matrix, and hence the Bloch vector of the system, at any time $t$. We consequently know the optimal projective measurement that needs to be performed.
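As a concrete illustration of this procedure, the following minimal sketch (not the code used to produce the figures) integrates the master equation for the Ohmic case $s = 1$ at zero temperature, for which the correlation function takes the closed form $C(u) = G/(1/\omega_c + iu)^2$, and locates the flip time $\tau^*$ at which $n_z$ changes sign; a simple Euler step is used, so the step size is an accuracy knob:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

G, wc, eps = 0.01, 50.0, 1.0        # parameters used in the text
HS, F = 0.5 * eps * sz, sx          # system Hamiltonian, coupling operator

def C(u):                           # zero-T correlation for J(w) = G w e^{-w/wc}
    return G / (1.0 / wc + 1j * u) ** 2

def Fbar(u):                        # e^{i HS u} sigma_x e^{-i HS u}
    return np.cos(eps * u) * sx - np.sin(eps * u) * sy

dt, tmax = 0.005, 30.0
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # initial state |up_z>
K = np.zeros((2, 2), dtype=complex)               # int_0^t du C(u) Fbar(u)
tau_star = None
for n in range(1, int(tmax / dt) + 1):
    t = n * dt
    K += 0.5 * dt * (C(t - dt) * Fbar(t - dt) + C(t) * Fbar(t))
    X = K @ rho @ F - F @ (K @ rho)               # [K(t) rho, F]
    rho = rho + dt * (1j * (rho @ HS - HS @ rho) + X + X.conj().T)
    if tau_star is None and np.trace(sz @ rho).real < 0.0:
        tau_star = t                              # Bloch vector has flipped
print("tau* ~", tau_star)   # should land near the tau* ~ 10.6 quoted above
\end{verbatim}
Because the master equation is time-local in $\rho_S(t)$, the memory integral collapses to the accumulated kernel $K(t) = \int_0^t du\, C(u) \bar{F}(u)$, which is what the sketch exploits.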
\begin{figure}
{\includegraphics[scale = 0.5]{Dephasing1.pdf}}
\caption{\label{Dephasing1}\textbf{Graphs of the effective decay rate under making optimal projective measurements in the pure dephasing model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1, 0, 0)$. The thick light gray curve shows the decay rate if the initial state is repeatedly measured; the green curve shows the decay rate if the optimal projective measurement is repeatedly made. It is clear from the figure that the two curves overlap identically. \textbf{(b)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3} )$. The blue dashed curve shows the decay rate if the initial state is repeatedly measured; the solid red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.1, \omega_c = 10, \beta = 0.5$. For $\tau = 1$ and $N = 3$, the difference in the survival probabilities is already $0.15$.}
\end{figure}
We now present a computational example. We plot the single-measurement survival probability [see Fig.~\ref{PopDecayMerged2}(a)] and the effective decay rate [see Fig.~\ref{PopDecayMerged2}(b)] as a function of the measurement interval $\tau$. The dashed curves illustrate what happens if we keep projecting the system state onto $\ket{\uparrow_z}$. For a small measurement interval, the optimal measurement is $\ket{\uparrow_z}\bra{\uparrow_z}$, since the system Bloch vector has only a positive $z$-component. On the other hand, when the measurement interval is long enough that the Bloch vector flips direction between measurements, then to maximize the survival probability we should instead project onto the state $\ket{\downarrow_z}$ and then apply a $\pi$ pulse. Doing so, we obtain a higher survival probability or, equivalently, a lower effective decay rate. This is precisely what we observe in the figure. For this population decay case, we find that if the measurement interval is larger than $\tau^* \approx 10.6$, then we are better off performing the measurement $\ket{\downarrow_z}\bra{\downarrow_z}$. There is also a small change in the anti-Zeno behaviour: with our modified strategy of repeatedly preparing the quantum state, beyond the measurement interval $\tau = \tau^*$ there is a sharper signature of anti-Zeno behaviour than with the usual strategy of repeatedly measuring the excited state of the system.
\subsection*{Pure dephasing model} \label{dephasing}
\begin{figure}
{\includegraphics[scale = 0.6]{Dephasing2.pdf}}
\caption{\label{Dephasing2}\textbf{Graphs of the effective decay rate and the optimal spherical angles under making optimal projective measurements in the pure dephasing model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1/\sqrt{10}, 0, \sqrt{9/10} )$. The same parameters as used in Fig.~\ref{Dephasing1} have been used for this case as well. \textbf{(b)} Graphs of the optimal spherical angles that maximize the survival
probability. $\theta$ is the polar angle and $\alpha$ is the azimuthal angle, which remains $0$ at all times.}
\end{figure}
We now analyze our strategy for the case of the pure dephasing model \cite{BPbook}. The system-environment Hamiltonian is now given by
\begin{equation}
H = \frac{\varepsilon}{2}\sigma_{z} + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sigma_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}).
\end{equation}
The difference compared to the previous population decay model is that the system-environment coupling term now contains $\sigma_z$ instead of $\sigma_x$. This difference implies that the diagonal entries of the system density matrix $\rho_S(t)$ (in the $\{\ket{\uparrow_z}, \ket{\downarrow_z}\}$ basis) cannot change; only dephasing can take place, which is why this model is known as the pure dephasing model\cite{ChaudhryPRA2014zeno}. Furthermore, this pure dephasing model is exactly solvable. The off-diagonal elements of the density matrix undergo both unitary time evolution due to the system Hamiltonian and non-unitary time evolution due to the coupling with the environment. Assuming the initial state of the total system is the standard product state $\rho_S(0) \otimes \rho_B$, the off-diagonal elements ($m \neq n$) of the density matrix, once the evolution due to the system Hamiltonian itself is removed, are given by $[\rho_S(t)]_{mn} = [\rho_{S}(0)]_{mn} e^{- \gamma(t)}$, where $\gamma(t) = \sum_k 4|g_k|^2 \frac{(1 - \cos (\omega_k t))}{\omega_k^2} \coth \left( \frac{\beta \omega_k}{2} \right)$\cite{ChaudhryPRA2014zeno}. Writing an arbitrary initial state of the system as $\ket \psi = \cos \Big (\frac{\theta}{2} \Big ) \ket{\uparrow_z} + \sin \Big (\frac{\theta}{2} \Big ) e^{i \phi} \ket{\downarrow_z}$, it is straightforward to find that
\begin{align} \label{DephasingBlochVector}
n_x(t) = e^{- \gamma(t)} n_{x}(0) , \; n_y(t) = e^{- \gamma(t)} n_{y}(0), \; n_z(t) & = n_{z}(0).
\end{align}
The optimal survival probability obtained using optimized measurements is then
\begin{equation}\label{probnew}
s^{*}(\tau) = \frac{1}{2} \Big (1 + \sqrt{n_z(0)^2 + e^{- 2\gamma(\tau)}\big(n_x(0)^2 + n_y(0)^2\big)} \; \Big ),
\end{equation}
where Eq.~\eqref{optimizedprobability} has been used. On the other hand, if we keep on preparing the initial state $\ket{\psi}$ by using the projective measurements $\ket{\psi}\bra{\psi}$, we find that
\begin{equation} \label{probold}
s(\tau) = \frac{1}{2} \Big (1 + n_z(0)^2 + e^{- \gamma(\tau)} \big ( n_x(0)^2 + n_y(0)^2 \big ) \; \Big ).
\end{equation}
We now analyze Eqs.~\eqref{probnew} and \eqref{probold} to find the conditions under which we can lower the effective decay rate by using optimized projective measurements. If the initial state, in the Bloch sphere picture, lies in the equatorial plane, then $n_z(0) = 0$ while $n_x(0)^2 + n_y(0)^2 = 1$. In this case, Eqs.~\eqref{probnew} and \eqref{probold} give the same survival probability, so there is no advantage in using our strategy of optimized measurements compared with the usual strategy. The reason is clear: in the Bloch sphere picture, the Bloch vector of the density matrix merely shrinks in magnitude, remaining parallel at all times to the Bloch vector of the initial pure state. As argued before, the optimal projector to measure at time $\tau$, $\ket \chi \bra \chi$, must be parallel to the Bloch vector of the density matrix at time $\tau$; hence, in this case, the optimal projector to measure is $\ket \psi \bra \psi$, corresponding to the initial state. The computational example shown in Fig.~\ref{Dephasing1}(a) illustrates this prediction.
On the other hand, if we make some other choice of the initial state that we repeatedly prepare, our optimized strategy can give us an advantage. We simply look for the case where the evolved Bloch vector (after removal of the evolution due to the system Hamiltonian) no longer remains parallel to the initial Bloch vector. Upon inspecting Eq.~\eqref{DephasingBlochVector}, we find that our optimized strategy can be advantageous if $n_z(0) \neq 0$ (excluding, of course, the cases $n_z(0) = \pm 1$). In other words, if the Bloch vector of the initial state does not lie in the equatorial plane, then the Bloch vector of this state at some later time will not remain parallel to the initial Bloch vector. In this case, our optimal measurement scheme gives a higher survival probability than repeatedly measuring the same initial state. This is illustrated in Fig.~\ref{Dephasing1}(b), where we show the effective decay rate for the initial state specified by the Bloch vector $(1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3} )$. After the time at which the transition between the Zeno and the anti-Zeno regimes occurs, we clearly observe that the decay rate is lower when one makes the optimal projective measurements. Although at first sight this difference may not appear significant, it accumulates over repeated measurements. For example, even for three measurements with measurement interval $\tau = 1$, we find that the quantum state has a survival probability greater by $0.15$ with the optimized measurements than with the usual unoptimized strategy of repeatedly preparing the quantum state.
Another computational example is provided in Fig.~\ref{Dephasing2}, where the initial state is now given by the Bloch vector $(1/\sqrt{10}, 0, \sqrt{9/10} )$. In Fig.~\ref{Dephasing2}(a) we have again illustrated that our optimized strategy of repeatedly preparing the quantum state is better at protecting the quantum state than the usual strategy. In Fig.~\ref{Dephasing2}(b) we have shown how the optimal projective measurement changes with the measurement interval $\tau$. In order to do so, we have parametrized the Bloch vector corresponding to $\ket{\chi}\bra{\chi}$ using the usual spherical polar angles $\theta$ and $\alpha$. Note that the value of the azimuthal angle $\alpha$ is expected to remain constant since we have
$\alpha(\tau) = \arctan ( n_{y}(\tau)/n_{x}(\tau) ) = \alpha(0)$.
On the other hand, the optimal value of the polar angle $\theta$ changes with the measurement interval. This is also expected: as the system dephases, $e^{- \gamma(\tau)} \rightarrow 0$, ensuring that $n_{x}(\tau), n_{y}(\tau) \rightarrow 0$. Thus, for long measurement intervals, the system Bloch vector becomes effectively parallel to the $z$-axis, and it follows that $\theta \rightarrow 0$. These predictions are borne out by the behaviour of $\theta$ and $\alpha$ in Fig.~\ref{Dephasing2}(b).
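As a quick numerical check of these expressions, the sketch below (using the parameters of Fig.~\ref{Dephasing1} and a crude quadrature for $\gamma(\tau)$; the frequency-grid limits are illustrative choices) compares the optimized and unoptimized decay rates for the initial Bloch vector $(1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3})$:
\begin{verbatim}
import numpy as np

G, wc, beta = 0.1, 10.0, 0.5             # parameters used in the text
w = np.linspace(1e-6, 20.0 * wc, 100000) # frequency grid for the quadrature
dw = w[1] - w[0]
J = G * w * np.exp(-w / wc)              # Ohmic spectral density (s = 1)

def gamma(t):
    # gamma(t) = int dw J(w) 4(1 - cos(w t))/w^2 coth(beta w / 2)
    return np.sum(J * 4.0 * (1.0 - np.cos(w * t)) / w**2
                  / np.tanh(0.5 * beta * w)) * dw

n0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for tau in (0.5, 1.0, 2.0):
    g = gamma(tau)
    s_opt = 0.5 * (1 + np.sqrt(n0[2]**2
                   + np.exp(-2 * g) * (n0[0]**2 + n0[1]**2)))
    s_usual = 0.5 * (1 + n0[2]**2 + np.exp(-g) * (n0[0]**2 + n0[1]**2))
    print(tau, -np.log(s_opt) / tau, -np.log(s_usual) / tau)
\end{verbatim}
For equatorial initial states ($n_z(0) = 0$) the two rates coincide, as argued above.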
\subsection*{The Spin-Boson Model}
We now consider the more general system-environment model given by the Hamiltonian
\begin{figure}
{\includegraphics[scale = 0.55]{SB1.pdf}}\caption{\textbf{Graphs of the effective decay rate under making optimal projective measurements in the spin-boson model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ (low temperature) for the state specified by $\theta = \pi/2$ and $\alpha = 0$. The blue dashed curve shows the decay rate in the spin boson model ($\Delta = 2, \varepsilon = 2$) if the initial state is repeatedly measured, and the solid red curve shows the effective decay rate with the optimal measurements. We have used $G = 0.01, \omega_c = 10$ and $s = 1$. \textbf{(b)} is the same as \textbf{(a)} except for the domain of the graph.}
\label{SB1}
\end{figure}
\begin{equation}\label{spinboson}
H = \frac{\varepsilon}{2} \sigma_z + \frac{\Delta}{2} \sigma_x + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sigma_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}),
\end{equation}
where $\Delta$ can be understood as the tunneling amplitude for the system, and the rest of the parameters are defined as before.
This is the well-known spin-boson model\cite{LeggettRMP1987,Weissbook,BPbook}, which can be considered as an extension of
the previous two cases in that we can now generally have
both population decay and dephasing taking place. Experimentally, such a model can be realized, for instance, using superconducting qubits \cite{ClarkeNature2008, YouNature2011,SlichterNJP2016}, and the properties of the environment can be appropriately tuned as well \cite{HurPRB2012}. Once again, assuming that the system and the environment interact weakly, we can use the master equation from before [see Eq.~\eqref{masterequation}] to find the system density matrix as a function of time. We now have $H_S = \frac{\varepsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x$ and $F = \sigma_z$. It should be remembered that once we find the density matrix just before the measurement, $\rho_S(\tau)$, we remove the evolution due to the system Hamiltonian via $\rho_S(\tau) \rightarrow e^{iH_S \tau} \rho_S(\tau)e^{-iH_S\tau}$.
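This post-processing step, and the read-off of the optimal measurement direction, can be sketched as follows (a minimal illustration; the matrix $\rho_S(\tau)$ below is a made-up placeholder that would in practice come from the master-equation solver):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def optimal_measurement(rho_tau, HS, tau):
    # Undo the system evolution, then read off the optimal projector:
    # its Bloch vector is parallel to that of e^{iHS tau} rho e^{-iHS tau}.
    U = expm(1j * HS * tau)
    r = U @ rho_tau @ U.conj().T
    n = np.array([np.trace(p @ r).real for p in (sx, sy, sz)])
    norm = np.linalg.norm(n)
    theta = np.arccos(n[2] / norm)      # polar angle of the projector
    alpha = np.arctan2(n[1], n[0])      # azimuthal angle of the projector
    return theta, alpha, 0.5 * (1.0 + norm)   # angles and s*(tau)

# Illustrative use with eps = Delta = 2 and a placeholder rho_S(tau)
HS = 0.5 * 2.0 * sz + 0.5 * 2.0 * sx
rho_tau = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
print(optimal_measurement(rho_tau, HS, tau=1.0))
\end{verbatim}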
Let us first choose as the initial state $n_x(0) = 1$ (or, in other words, the state parametrized by $\theta = \pi/2$ and $\alpha = 0$ on the Bloch sphere). In Fig.~\ref{SB1}, we plot the behaviour of the effective decay rate as a function of the measurement interval using both our optimized strategy (the solid red curves) and the usual unoptimized strategy (the dashed blue curves). It is clear from Fig.~\ref{SB1}(a) that for relatively short measurement intervals, there is little to be gained by using the optimal strategy. As we have seen in the pure dephasing case, for a state in the equatorial plane of the Bloch sphere there is no advantage to be gained by following the optimized strategy. On the other hand, for longer time intervals $\tau$, population decay becomes more significant. This means that we see a significant difference at long measurement intervals if we use the optimized strategy, which is precisely what we observe in Fig.~\ref{SB1}(b).
\begin{figure}
{\includegraphics[scale = 0.4]{SB2andSB4.pdf}}\caption{\label{SB2} \textbf{Graphs of the effective decay rate under making optimal projective measurements in the spin-boson model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ (low temperature) for the state specified by $\theta = \pi/2$ and $\phi = 0$. We have used the same parameters as in Fig.~\ref{SB1}, except that we have now modeled a sub-Ohmic environment with $s = 0.8$. \textbf{(b)} Same as \textbf{(a)}, except that we have now modeled a super-Ohmic environment with $s = 2.0$. \textbf{(c)} We have used $G = 0.025, \omega_c = 10$ and $s = 1$. For this case ($\varepsilon \gg \Delta$), we have used $\varepsilon = 6, \Delta = 2$. \textbf{(d)} Same as \textbf{(c)}, except that for this case ($\Delta \gg \varepsilon $), we have used $\varepsilon = 2, \Delta = 6$.}
\end{figure}
For completeness, let us also investigate how the effective decay rate depends on the functional form of the spectral density. In Fig.~\ref{SB2}(a) and Fig.~\ref{SB2}(b), we investigate the cases of a sub-Ohmic and a super-Ohmic environment, respectively. The sub-Ohmic environment with $s = 0.8$ behaves similarly to the Ohmic case $s = 1$: once again, the optimal projective measurements decrease the decay rate substantially only for long measurement intervals. For the super-Ohmic environment with $s=2$ [see Fig.~\ref{SB2}(b)], we find that the optimal projective measurements do not substantially lower the decay rate, even at long times. Thus, it is clear that the Ohmicity of the environment plays an important role in determining the usefulness of the optimal projective measurements.
Let us now revert to the Ohmic environment to present further computational examples. First, if $\varepsilon \gg \Delta$, the effect of dephasing dominates the effect of population decay. Results for this case are illustrated in Fig.~\ref{SB2}(c). We see that there is negligible difference upon using the optimal measurements, in agreement with what we found for the pure dephasing model. We also analyze the opposite case, in which population decay dominates dephasing, by setting $\Delta \gg \varepsilon$; we also consider a higher temperature in this case. We now observe differences between the unoptimized and optimized decay rates already at relatively short times [see Fig.~\ref{SB2}(d)], and the difference becomes even bigger at longer times. In fact, while we observe predominantly only the Zeno effect with the unoptimized measurements, we observe very distinctly both the Zeno and the anti-Zeno regimes with the optimized measurements.
\subsection*{Large Spin System}
\begin{figure}
{\includegraphics[scale = 0.55]{LSJ1.pdf}}
\caption{\label{LSJ1}\textbf{Graphs of the effective decay rate and the optimal spherical angles under making optimal projective measurements in the large spin model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for $J=1$. Here we have $\Delta = 0$. The blue dashed curve shows the decay rate if the initial state is repeatedly measured; the red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.01, \omega_c = 50, \beta = 1$ and we take $\theta = \pi/2$ and $\phi = 0$ as parameters for the initial state. \textbf{(b)} Same as (a), except now that $\Delta \neq 0$. The insets show how the optimal measurements change with the measurement interval $\tau$.}
\end{figure}
We extend our study to a collection of two-level systems interacting with a common environment. This Hamiltonian can be considered a generalization of the usual spin-boson model to a large spin $j = N_s/2$ \cite{ChaudhryPRA2014zeno,VorrathPRL2005,KurizkiPRL2011}, where $N_s$ is the number of two-level systems coupled to the environment. Physical realizations include a two-component Bose-Einstein condensate \cite{GrossNature2010,RiedelNature2010} that interacts with a thermal reservoir via collisions \cite{KurizkiPRL2011}. In this case, the system-environment Hamiltonian is given by
\begin{equation}
H = \varepsilon J_{z} + \Delta J_x + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + 2 J_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}),
\end{equation}
where $J_x$ and $J_z$ are the usual angular momentum operators and the environment is again modeled as a collection of harmonic oscillators. We first look at the pure dephasing case by setting $\Delta = 0$. In this case, the system dynamics can be found exactly. The system density matrix, in the eigenbasis of $J_z$, after removal of the evolution due to the system Hamiltonian, can be written as
$[\rho(t)]_{mn} = [\rho(0)]_{mn} e^{- i\triangle(t) (m^2 - n^2)} e^{- \gamma(t) (m - n)^2}$.
Here $\gamma(t)$ has been defined before, and $\triangle(t) = \sum_k 4|g_k|^2 \frac{[\sin(\omega_k t) - \omega_k t]}{\omega_k^2}$\text{\cite{ChaudhryPRA2014zeno}}
describes the indirect
interaction between the two-level systems due to their
interaction with a common environment. For
vanishingly small time $t$, $\triangle(t) \approx 0$. On the other
hand, as $t$ increases, the effect of
$\triangle(t)$ becomes more
pronounced. Thus, we expect significant differences compared to the single two-level system case for long measurement intervals. However, it is important to note that we can no longer find the optimal measurements using the formalism presented before, since our system is no longer a single two-level system. In principle, we then need to carry out a numerical optimization to find the projector $\ket{\chi}\bra{\chi}$ that maximizes the survival probability. Rather than looking at all possible states $\ket{\chi}$, we restrict ourselves to the SU(2) coherent states, since these projective measurements are more readily accessible experimentally. In other words, we look at $\ket{\chi}\bra{\chi}$ where
\begin{equation}
\ket{\chi} = \ket{\zeta, J} = (1 + |\zeta|^2)^{- J} \sum_{m=-J}^{m = J} \sqrt{\binom{2J}{J + m}} \zeta^{J + m} \ket{J, m},
\end{equation}
and $\zeta = e^{i \phi'} \tan(\theta'/2)$ with the states $\ket{J, m}$ being the angular momentum eigenstates of $J_z$. Suppose that we prepare the coherent state $\ket{\eta, J}$ with a fixed, pre-determined value of $ \eta = e^{i \phi} \tan(\theta/2)$ repeatedly. In order to do so, we project, with time interval $\tau$, the system state onto the coherent state $\ket{\zeta, J}$. After each measurement, we apply a suitable unitary operator to arrive back at the state $\ket{\eta,J}$. Again assuming the system-environment correlations are negligible, we find that
\begin{align}\label{largedephasingdecay}
\Gamma(\tau) & = - \frac{1}{\tau} \ln \Bigg \{ \Bigg [ \frac{|\zeta|}{1 + |\zeta|^2} \Bigg ]^{2J} \Bigg [ \frac{|\eta|}{1 + |\eta|^2} \Bigg ]^{2J} \sum_{m, n = -J}^{J} (\zeta^{*} \eta)^m (\eta^{*} \zeta)^n \; \binom{2J}{J + m}\binom{2J}{J + n} e^{- i\triangle(\tau) (m^2 - n^2)} e^{- \gamma(\tau) (m - n)^2} \Bigg \}.
\end{align}
For equally spaced measurement time intervals, we numerically optimize Eq.~\eqref{largedephasingdecay} over the variables $\phi'$ and $\theta'$. We present a computational example in Fig.~\ref{LSJ1}(a). We take as the initial state the SU(2) coherent state with $\theta = \pi/2$ and $\phi = 0$, and we let $J = 1$. This is simply the generalization to $J = 1$ of the pure dephasing model that we have looked at before. For the single two-level system there was no difference between the optimized and unoptimized probabilities for such a state; now, because of the indirect interaction, there is a very noticeable difference. Where we observe the Zeno regime with the unoptimized measurements, we instead see the anti-Zeno regime with the optimized measurements. Furthermore, the survival probability can be significantly enhanced using the optimized measurements.
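This optimization can be sketched as follows (a brute-force grid over the measurement angles; the values of $\gamma(\tau)$ and $\triangle(\tau)$ below are placeholders that would in practice be computed from the spectral density, e.g.\ by quadrature as in the dephasing sketch above):
\begin{verbatim}
import numpy as np
from math import comb

def coh(zeta, J):
    # amplitudes <J,m|zeta,J> of an SU(2) coherent state in the J_z basis
    m = np.arange(-J, J + 1)
    binom = np.array([comb(2 * J, int(J + k)) for k in m], dtype=float)
    return (1 + abs(zeta) ** 2) ** (-J) * np.sqrt(binom) * zeta ** (J + m)

def survival(th_p, ph_p, th, ph, gam, Dlt, J):
    zeta = np.exp(1j * ph_p) * np.tan(th_p / 2)   # measured coherent state
    eta = np.exp(1j * ph) * np.tan(th / 2)        # prepared coherent state
    cz, ce = coh(zeta, J), coh(eta, J)
    m = np.arange(-J, J + 1)
    M, N = np.meshgrid(m, m, indexing="ij")
    rho = np.outer(ce, ce.conj()) \
          * np.exp(-1j * Dlt * (M**2 - N**2) - gam * (M - N) ** 2)
    return np.real(cz.conj() @ rho @ cz)          # <zeta|rho(tau)|zeta>

J, tau = 1, 1.0
gam, Dlt = 0.05, -0.2   # placeholder values of gamma(tau), triangle(tau)
s_best = max(survival(tp, pp, np.pi / 2, 0.0, gam, Dlt, J)
             for tp in np.linspace(0.01, np.pi - 0.01, 120)
             for pp in np.linspace(0.0, 2 * np.pi, 120))
print("s* =", s_best, " Gamma =", -np.log(s_best) / tau)
\end{verbatim}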
For completeness, we have also considered the more general case with $\Delta \neq 0$. In this case, the system dynamics cannot be solved exactly, so we resort again to the master equation. With the system dynamics known, we again find the projector, parametrized by $\theta'$ and $\phi'$, that minimizes the decay rate. Results are illustrated in Fig.~\ref{LSJ1}(b). Once again, using optimized projective measurements changes the Zeno and anti-Zeno behaviour quantitatively as well as qualitatively.
\section*{Discussion}
Our central idea is that instead of repeatedly preparing the quantum state of a system using only projective measurements, we can repeatedly prepare the quantum state using a combination of projective measurements and unitary operations. This allows us to choose the projective measurements that yield the largest survival probability. If the central quantum system is a simple two-level system, we have derived an expression for the optimized survival probability, or equivalently the effective decay rate. This expression implies that the optimal projective measurement at time $\tau$ corresponds to the projector that is parallel to the Bloch vector of the system's density matrix at that time. We then applied this expression to various models. For the population decay model, we found that beyond a critical time $\tau^{*}$, we should flip the measurement and start measuring the ground state rather than the excited state. For the pure dephasing model, we found that for states prepared in the equatorial plane of the Bloch sphere, it is optimal to measure the initial state; determining and making the optimal projective measurement has no effect on the effective decay rate. In contrast, for states prepared outside the equatorial plane, making the optimal projective measurement substantially lowers the effective decay rate in the anti-Zeno regime. In the general spin-boson model, we found that there can be a considerable difference in the effective decay rate if we use the optimal measurements. We then extended our analysis to large spin systems, where the indirect interaction between the two-level systems makes the optimal measurements even more advantageous. The results of this paper show that, by exploiting the choice of the measurement performed, we can substantially decrease the effective decay rate in a variety of cases. This allows us to `effectively freeze' the state of the quantum system with a higher probability. Experimental implementations
of the ideas presented in this paper are expected to be
important for measurement-based quantum control.
\section{Introduction}
All this began in my last year at the university. The only thing that I knew of
\LaTeX\ was that it exists, and that it is ``good''. I started using it, but I needed to typeset some
algorithms. So I began searching for a good algorithmic style, and I found the \texttt{algorithmic}\ package.
It was a great joy for me, and I started to use it\dots\
Well\dots\ Everything went nicely, until I needed some block that wasn't defined in there. What to do?
I was no \LaTeX\ guru, in fact I only knew the few basic macros. But there was no other way, so I opened
the style file, and I copied one existing block, renamed a few things, and voil\`a! This (and some other
small changes) where enough for me\dots
One year later --- for one good soul --- I had to make some really big changes to the style. And there on
a sunny day came the idea. What if I wrote some macros to let others create blocks automatically?
And so I did! Since then the style was completely rewritten\dots\ several times\dots
I had fun writing it, may you have fun using it! I am still no \LaTeX\ guru, so if you are, and you find
something really ugly in the style, please mail me! All ideas for improvements are welcome!
Thanks go to Benedek Zsuzsa, Ionescu Clara, Sz\H ocs Zolt\'an, Cseke Botond, Kanoc
and many-many others. Without them I would have never started or continued \textbf{algorithmicx}.
\section{General informations}
\subsection{The package}
The package \textbf{algorithmicx} itself doesn't define any algorithmic commands, but gives
a set of macros to define such a command set. You may use only \textbf{algorithmicx}, and define
the commands yourself, or you may use one of the predefined command sets.
These predefined command sets (layouts) are:
\begin{description}
\item[algpseudocode] has the same look\footnote{almost :-)} as the one defined in the
\texttt{algorithmic}\ package. The main difference is that while the \texttt{algorithmic}\ package doesn't
allow you to modify predefined structures, or to create new ones, the \texttt{algorithmicx}\
package gives you full control over the definitions (ok, there are some
limitations --- you cannot, say, send mail with a \verb:\For: command).
\item[algcompatible] is fully compatible with the \texttt{algorithmic}\ package, it should be
used only in old documents.
\item[algpascal] aims to create a formatted pascal program, it performs
automatic indentation (!), so you can transform a pascal program into an
\textbf{algpascal} algorithm description with some basic substitution rules.
\item[algc] -- yeah, just like the \textbf{algpascal}\dots\ but for c\dots\
This layout is incomplete.
\end{description}
To create floating algorithms you will need \verb:algorithm.sty:. This file may or may not be
included in the \texttt{algorithmicx}\ package. You can find it on CTAN, in the \texttt{algorithmic}\ package.
\subsection{The algorithmic block}
Each algorithm begins with the \verb:\begin{algorithmic}[lines]: command, the
optional \verb:lines: controls the line numbering: $0$ means no line numbering,
$1$ means number every line, and $n$ means number lines $n$, $2n$, $3n$\dots\ until the
\verb:\end{algorithmic}: command, which ends the algorithm.
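For example (a minimal sketch, assuming a layout such as \textbf{algpseudocode} is
loaded), the following numbers every second line, so the second and fourth
\verb:\State: lines get the numbers 2 and 4:
\begin{verbatim}
\begin{algorithmic}[2]
\State $a\gets 1$
\State $b\gets 2$
\State $c\gets 3$
\State $d\gets 4$
\end{algorithmic}
\end{verbatim}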
\subsection{Simple lines}
A simple line of text begins with \verb:\State:. This macro marks the beginning of every
line. You don't need to use \verb:\State: before a command defined in the package, since
these commands automatically start a new line.
To obtain a line that is not numbered, and not counted when counting the lines for line numbering
(in case you choose to number lines), use the \verb:\Statex: macro. This macro jumps to a new line;
the line gets no number, and any label will point to the previous numbered line.
We will call the lines starting with \verb:\State: \textit{statement\/}s. The \verb:\Statex:
lines are not statements.
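A minimal sketch of the difference: here the two \verb:\State: lines get the numbers
1 and 2, while the \verb:\Statex: line between them gets no number and is not counted:
\begin{verbatim}
\begin{algorithmic}[1]
\State $x\gets 0$
\Statex (a remark without a line number)
\State $x\gets x+1$
\end{algorithmic}
\end{verbatim}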
\subsection{Placing comments in sources}\label{Putting comments in sources}
Comments may be placed everywhere in the source using the \verb:\Comment: macro
(there are no limitations like those in the \texttt{algorithmic}\ package), feel the freedom!
If you would like to change the form in which comments are displayed, just
change the \verb:\algorithmiccomment: macro:
\begin{verbatim}
\algrenewcommand{\algorithmiccomment}[1]{\hskip3em$\rightarrow$ #1}
\end{verbatim}
will result:
\medskip
\begin{algorithmic}[1]
\algrenewcommand{\algorithmiccomment}[1]{\hskip3em$\rightarrow$ #1}
\State $x\gets x+1$\Comment{Here is the new comment}
\end{algorithmic}
\subsection{Labels and references}
Use the \verb:\label: macro, as usual, to label a line. When you use \verb:\ref: to reference
the line, the \verb:\ref: will be substituted with the corresponding line number. When using the
\textbf{algorithmicx} package together with the \textbf{algorithm} package, you can label
both the algorithm and the line, and use the \verb:\algref: macro to reference a given line
from a given algorithm:
\begin{verbatim}
\algref{<algorithm>}{<line>}
\end{verbatim}
\noindent\begin{minipage}[t]{0.5\linewidth}
\begin{verbatim}
The \textbf{while} in algorithm
\ref{euclid} ends in line
\ref{euclidendwhile}, so
\algref{euclid}{euclidendwhile}
is the line we seek.
\end{verbatim}
\end{minipage}\begin{minipage}[t]{0.5\linewidth}
The \textbf{while} in algorithm \ref{euclid} ends in line \ref{euclidendwhile},
so \algref{euclid}{euclidendwhile} is the line we seek.
\end{minipage}
\subsection{Breaking up long algorithms}
Sometimes you have a long algorithm that needs to be broken into parts, each on a
separate float. For this you can use the following:
\begin{description}
\item[]\verb:\algstore{<savename>}: saves the line number, indentation, open blocks of
the current algorithm and closes all blocks. If used, then this must be the last command
before closing the algorithmic block. Each saved algorithm must be continued later in the
document.
\item[]\verb:\algstore*{<savename>}: Like the above, but the algorithm must not be continued.
\item[]\verb:\algrestore{<savename>}: restores the state of the algorithm saved under
\verb:<savename>: in this algorithmic block. If used, then this must be the first command
in an algorithmic block. A save is deleted while restoring.
\item[]\verb:\algrestore*{<savename>}: Like the above, but the save will not be deleted, so it
can be restored again.
\end{description}
See example in the \textbf{Examples} section.
\subsection{Multiple layouts in the same document}
You can load multiple algorithmicx layouts in the same document. You can switch between the layouts
using the \verb:\alglanguage{<layoutname>}: command. After this command all new algorithmic
environments will use the given layout until the layout is changed again.
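A minimal sketch (a complete, compilable version appears in the \textbf{Examples}
section):
\begin{verbatim}
\alglanguage{pseudocode}
\begin{algorithmic}[1]
\State \dots
\end{algorithmic}
\alglanguage{pascal}
\begin{algorithmic}[1]
\State \dots
\end{algorithmic}
\end{verbatim}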
\section{The predefined layouts}
\subsection{The \textbf{algpseudocode} layout}\label{algpseudocode}
\alglanguage{pseudocode}
If you are familiar with the \texttt{algorithmic}\ package, then you'll find it easy to
switch. You can use the old algorithms with the \textbf{algcompatible} layout, but please
use the \textbf{algpseudocode} layout for new algorithms.
To use \textbf{algpseudocode}, simply load \verb:algpseudocode.sty::
\begin{verbatim}
\usepackage{algpseudocode}
\end{verbatim}
You don't need to manually load the \textbf{algorithmicx} package, as this is done by
\textbf{algpseudocode}.
The first algorithm one should write is the first algorithm ever (ok,
an improved version), \textit{Euclid's algorithm}:
\begin{algorithm}[H]
\caption{Euclid's algorithm}\label{euclid}
\begin{algorithmic}[1]
\Procedure{Euclid}{$a,b$}\Comment{The g.c.d. of a and b}
\State $r\gets a\bmod b$
\While{$r\not=0$}\Comment{We have the answer if r is 0}
\State $a\gets b$
\State $b\gets r$
\State $r\gets a\bmod b$
\EndWhile\label{euclidendwhile}
\State \Return $b$\Comment{The gcd is b}
\EndProcedure
\end{algorithmic}
\end{algorithm}
Created with the following source:
\begin{verbatim}
\begin{algorithm}
\caption{Euclid's algorithm}\label{euclid}
\begin{algorithmic}[1]
\Procedure{Euclid}{$a,b$}\Comment{The g.c.d. of a and b}
\State $r\gets a\bmod b$
\While{$r\not=0$}\Comment{We have the answer if r is 0}
\State $a\gets b$
\State $b\gets r$
\State $r\gets a\bmod b$
\EndWhile\label{euclidendwhile}
\State \textbf{return} $b$\Comment{The gcd is b}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{verbatim}
The \verb:\State: stands at the beginning of each simple statement; the respective
statement is put in a new line, with the needed indentation.
The \verb:\Procedure: \dots\verb:\EndProcedure: and
\verb:\While: \dots\verb:\EndWhile: blocks (like any block defined in the
\textbf{algpseudocode} layout) automatically indent their content.
The indentation of the source doesn't matter, so
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Repeat
\Comment{forever}
\State this\Until{you die.}
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\Comment{forever}
\State this\Until{you die.}
\Statex
\end{algorithmic}
\AENDSKIP
But, generally, it is a good idea to keep the source indented, since you will find
errors much more easily. And your tex file looks better!
All examples and syntax descriptions will be shown as the previous
example --- the left side shows the \LaTeX\ input, and the right side
the algorithm, as it appears in your document. I'm cheating! Don't look
in the \verb:algorithmicx.tex: file! Believe what the examples state! I may use some
undocumented and dirty stuff to create all these examples. You might be more
confused after opening \verb:algorithmicx.tex: than you were before.
In the case of syntax
descriptions the text between $<$ and $>$ is symbolic, so if you type
what you see on the left side, you will not get the algorithm on the
right side. But if you replace the text between $<$ $>$ with a proper piece of
algorithm, then you will probably get what you want. The parts between
$[$ and $]$ are optional.
\subsubsection{The \textbf{for} block}
The \textbf{for} block may have one of the forms:
\ASTART
\begin{verbatim}
\For{<text>}
<body>
\EndFor
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\For{$<$text$>$}
\State $<$body$>$
\EndFor
\end{algorithmic}
\AEND
\ASTART
\begin{verbatim}
\ForAll{<text>}
<body>
\EndFor
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\ForAll{$<$text$>$}
\State $<$body$>$
\EndFor
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\For{$i\gets 1, n$}
\State $sum\gets sum+i$
\EndFor
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\For{$i\gets 1, n$}
\State $sum\gets sum+i$
\EndFor
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{while} block}
The \textbf{while} block has the form:
\ASTART
\begin{verbatim}
\While{<text>}
<body>
\EndWhile
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\While{$<$text$>$}
\State $<$body$>$
\EndWhile
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\While{$i\le n$}
\State $sum\gets sum+i$
\State $i\gets i+1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\While{$i\le n$}
\State $sum\gets sum+i$
\State $i\gets i+1$
\EndWhile
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{repeat} block}
The \textbf{repeat} block has the form:
\ASTART
\begin{verbatim}
\Repeat
<body>
\Until{<text>}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\State $<$body$>$
\Until{$<$text$>$}
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\Repeat
\State $sum\gets sum+i$
\State $i\gets i+1$
\Until{$i>n$}
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\Repeat
\State $sum\gets sum+i$
\State $i\gets i+1$
\Until{$i>n$}
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{if} block}
The \textbf{if} block has the form:
\ASTART
\begin{verbatim}
\If{<text>}
<body>
[
\ElsIf{<text>}
<body>
...
]
[
\Else
<body>
]
\EndIf
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{$<$text$>$}
\State $<$body$>$
\Statex [
\ElsIf{$<$text$>$}
\State $<$body$>$
\Statex \dots
\Statex ]
\Statex [
\Else
\State $<$body$>$
\Statex ]
\EndIf
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\If{$quality\ge 9$}
\State $a\gets perfect$
\ElsIf{$quality\ge 7$}
\State $a\gets good$
\ElsIf{$quality\ge 5$}
\State $a\gets medium$
\ElsIf{$quality\ge 3$}
\State $a\gets bad$
\Else
\State $a\gets unusable$
\EndIf
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{$quality\ge 9$}
\State $a\gets perfect$
\ElsIf{$quality\ge 7$}
\State $a\gets good$
\ElsIf{$quality\ge 5$}
\State $a\gets medium$
\ElsIf{$quality\ge 3$}
\State $a\gets bad$
\Else
\State $a\gets unusable$
\EndIf
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{procedure} block}
The \textbf{procedure} block has the form:
\ASTART
\begin{verbatim}
\Procedure{<name>}{<params>}
<body>
\EndProcedure
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Procedure{$<$name$>$}{$<$params$>$}
\State $<$body$>$
\EndProcedure
\end{algorithmic}
\AENDSKIP
\noindent Example: See Euclid's\ algorithm on page \pageref{euclid}.
\subsubsection{The \textbf{function} block}
The \textbf{function} block has the same syntax as the \textbf{procedure} block:
\ASTART
\begin{verbatim}
\Function{<name>}{<params>}
<body>
\EndFunction
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Function{$<$name$>$}{$<$params$>$}
\State $<$body$>$
\EndFunction
\end{algorithmic}
\AEND
\subsubsection{The \textbf{loop} block}
The \textbf{loop} block has the form:
\ASTART
\begin{verbatim}
\Loop
<body>
\EndLoop
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Loop
\State $<$body$>$
\EndLoop
\end{algorithmic}
\AEND
\subsubsection{Other commands in this layout}
The starting conditions for the algorithm can be described with the \textbf{require}
instruction, and its result with the \textbf{ensure} instruction.
A procedure call can be formatted with \verb:\Call:.
\ASTART
\begin{verbatim}
\Require something
\Ensure something
\Statex
\State \Call{Create}{10}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Require something
\Ensure something
\Statex
\State \Call{Create}{10}
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Require $x\ge5$
\Ensure $x\le-5$
\Statex
\While{$x>-5$}
\State $x\gets x-1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Require $x\ge5$
\Ensure $x\le-5$
\Statex
\While{$x>-5$}
\State $x\gets x-1$
\EndWhile
\Statex
\end{algorithmic}
\AEND
\subsubsection{Package options}\label{algpseudocode package options}
The \texttt{algpseudocode} package supports the following options:
\begin{description}
\item[compatible/noncompatible]\ \textit{Obsolete, use the algcompatible layout instead.}\\
If you would like to use old
algorithms, written with the \texttt{algorithmic}\ package without (too much)
modification, then use the \textbf{compatible} option. This option
defines the uppercase version of the commands. Note that you still need
to remove the \verb:[...]: comments (these comments appeared due to some
limitations in the \texttt{algorithmic}\ package, these limitations and comments are gone now).
The default \textbf{noncompatible} does not define the all-uppercase
commands.
\item[noend/end]\ \\With \textbf{noend} specified, all \textbf{end \dots}
lines are omitted (see the usage sketch after this list). You get a somewhat
smaller algorithm, and the ugly feeling that something is missing\dots{} The
\textbf{end} value is the default; it means that all \textbf{end \dots}
lines are in their right place.
\end{description}
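As a sketch of the usage, options are passed when loading the package, in the
usual \LaTeX\ way:
\begin{verbatim}
\usepackage[noend]{algpseudocode}
\end{verbatim}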
\subsubsection{Changing command names}
One common thing to do with pseudocode is to change the command names. Many people
use many different kinds of pseudocode command names. In \textbf{algpseudocode}
all keywords are declared as \verb:\algorithmic<keyword>:. You can change them
to output the text you need:
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\begin{algorithmic}[1]
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\Statex
\end{algorithmic}
\end{minipage}\bigskip
In some cases you may need to change even more (in the above example
\textbf{am\'\i g} and \textbf{v\'ege} should be interchanged in the \verb:\EndWhile:
text). Maybe the number of parameters taken by some commands must be changed too.
This can be done with the command text customizing macros (see section
\ref{custom text}). Here I'll give only some examples of the most common usage:
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\algrenewtext{EndWhile}{\algorithmicwhile\ \algorithmicend}
\begin{algorithmic}[1]
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x - 1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\algrenewtext{EndWhile}{\algorithmicwhile\ \algorithmicend}
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x - 1$
\EndWhile
\Statex
\end{algorithmic}
\end{minipage}
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algnewcommand\algorithmicto{\textbf{to}}
\algrenewtext{For}[3]%
{\algorithmicfor\ #1 \gets #2 \algorithmicto\ #3 \algorithmicdo}
\begin{algorithmic}[1]
\State $p \gets 1$
\For{i}{1}{n}
\State $p \gets p * i$
\EndFor
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algnewcommand\algorithmicto{\textbf{to}}
\algrenewtext{For}[3]%
{\algorithmicfor\ $#1 \gets #2$ \algorithmicto\ $#3$ \algorithmicdo}
\State $p \gets 1$
\For{i}{1}{n}
\State $p \gets p * i$
\EndFor
\Statex
\end{algorithmic}
\end{minipage}\bigskip
You could create a translation package, that included after the \textbf{algpseudocode}
package translates the keywords to the language you need.
\subsection{The \textbf{algpascal} layout}
\alglanguage{pascal}
The most important feature of the \textbf{algpascal} layout is that
\textit{it performs automatically the block indentation}. In
section \ref{algorithmicx} you will see how to define such
automatically indented loops. Here is an example to demonstrate this
feature:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\For{i=1}{n}\Comment{sum(i)}
\State $sum:=sum+i$;
\State writeln($sum$);
\End.
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\For{i=1}{n}\Comment{sum(i)}
\State $sum:=sum+i$;
\State writeln($sum$);
\End.
\Statex
\end{algorithmic}
\AENDSKIP
Note that the \verb:\For: is not closed explicitly; its end is
detected automatically. Again, the indentation in the source doesn't
affect the output.
In this layout every parameter passed to a command is put in
math mode.
\subsubsection{The \textbf{begin} \dots{} \textbf{end} block}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Begin
<body>
\End
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $<$body$>$
\End
\end{algorithmic}
\AENDSKIP
The \verb:\Begin: \dots{} \verb:\End: block and the
\verb:\Repeat: \dots{} \verb:\Until: block are the only blocks in
the \textbf{algpascal} style (instead of \verb:\Begin: you may write
\verb:\Asm:). This means that every other loop ends automatically
after the following command (another loop, or a block).
\subsubsection{The \textbf{for} loop}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\For{<assign>}{<expr>}
<command>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\For{<$\relax$assign$\relax$>}{<$\relax$expr$\relax$>}
\State $<$command$>$
\end{algorithmic}
\AENDSKIP
The \textbf{For} loop (like all other loops) ends after the following command (a block
also counts as a single command).
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\State $prod:=1$;
\For{i:=1}{10}
\Begin
\State $sum:=sum+i$;
\State $prod:=prod*i$;
\End
\End.
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\State $prod:=1$;
\For{i:=1}{10}
\Begin
\State $sum:=sum+i$;
\State $prod:=prod*i$;
\End
\End.
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{while} loop}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\While{<expression>}
<command>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\While{<$\relax$expression$\relax$>}
\State $<$command$>$
\end{algorithmic}
\AEND
\subsubsection{The \textbf{repeat}\dots\ \textbf{until} block}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Repeat
<body>
\Until{<expression>}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\State $<$body$>$
\Until{<$\relax$expression$\relax$>}
\end{algorithmic}
\AEND
\subsubsection{The \textbf{if} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\If{<expression>}
<command>
[
\Else
<command>
]
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{<$\relax$expression$\relax$>}
\State $<$command$>$
\Statex \hskip-\algorithmicindent\hskip-\algorithmicindent[
\Else
\State $<$command$>$
\Statex \hskip-\algorithmicindent\hskip-\algorithmicindent]
\end{algorithmic}
\AENDSKIP
Every \verb:\Else: matches the nearest \verb:\If:.
\subsubsection{The \textbf{procedure} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Procedure <some text>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Procedure $<$some text$>$
\end{algorithmic}
\AENDSKIP
\verb:\Procedure: just writes the word ``procedure'' on a new
line\dots\ You will probably put a \verb:\Begin:\dots\ \verb:\End:
block after it.
\subsubsection{The \textbf{function} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Function<some text>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Function $<$some text$>$
\end{algorithmic}
\AENDSKIP
Just like \textbf{Procedure}.
\subsection{The \textbf{algc} layout}
Sorry, the \textbf{algc} layout is unfinished.
The commands defined are:
\begin{itemize}
\item\verb:\{:\dots\ \verb:\}: block
\item\verb:\For: with 3 params
\item\verb:\If: with 1 param
\item\verb:\Else: with no params
\item\verb:\While: with 1 param
\item\verb:\Do: with no params
\item\verb:\Function: with 3 params
\item\verb:\Return: with no params
\end{itemize}
\section{Custom algorithmic blocks}\label{algorithmicx}
\alglanguage{default}
\subsection{Blocks and loops}
Most of the environments defined in the standard layouts (and most probably
the ones you will define) are divided into two categories:
\begin{description}
\item[Blocks] are the environments which contain an arbitrary number of
commands or nested blocks. Each block has a name, begins with a starting command
and ends with an ending command. The commands in a block are
indented by \verb:\algorithmicindent: (or another amount).
If your algorithm ends without closing all blocks, the \texttt{algorithmicx}\ package gives
you a nice error. So be good, and close them all!
Blocks are all the environments defined in the \verb:algpseudocode:
package, the \verb:\Begin: \dots \verb:\End: block in the
\verb:algpascal: package, and some other ones.
\item[Loops] (Let us call them loops\dots) The loops are environments
that include only one command, loop or block; a loop is closed
automatically after this command. So loops have no ending commands. If
your algorithm (or a block) ends before the single command of a loop,
then this is considered an empty command, and the loop is closed. Feel
free to leave open loops at the end of blocks!
Loops are most of the environments in the \verb:algpascal: and
\verb:algc: packages.
\end{description}
For some rare constructions you can create mixtures of the two
environments (see section \ref{setblock}).
Each block and loop may be continued with another one (like the \verb:If:
with \verb:Else:).
\subsection{Defining blocks}\label{defblocks}
There are several commands to define blocks. The difference is in what is defined
beyond the block. The macro \verb:\algblock: defines a new block with a starting and
an ending command.
\begin{verbatim}
\algblock[<block>]{<start>}{<end>}
\end{verbatim}
The defined commands have no parameters, and the text displayed by them is
\verb:\textbf{<start>}: and \verb:\textbf{<end>}:. You can change these texts later
(\ref{custom text}).
With \verb:\algblockdefx: you can give the text to be output by the starting
and ending commands and the number of parameters these commands take. In the text,
refer to parameter number $n$ with \#$n$. Observe that the text
is given in the same form as when you define or redefine macros, and really, this is what happens.
\begin{verbatim}
\algblockdefx[<block>]{<start>}{<end>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
This defines a new block called \verb:<block>:, \verb:<start>: opens the block,
\verb:<end>: closes the block,
\verb:<start>: displays \verb:<start text>:, and has \verb:<startparamcount>: parameters,
\verb:<end>: displays \verb:<end text>:, and has \verb:<endparamcount>: parameters.
For both \verb:<start>: and \verb:<end>:, if
\verb:<default value>: is given, then the first parameter is optional, and its default value
is \verb:<default value>:.
If you want to display different text (and to have a different number of parameters)
for \verb:<end>: at the end of different blocks, then use
the \verb:\algblockx: macro. Note that it is not possible to display different starting texts,
since it is not possible to start different blocks with the same command. The \verb:<start text>:
defined with \verb:\algblockx: has the same behavior as if defined with \verb:\algblockdefx:. All ending commands
not defined with \verb:\algblockx: will display the same text, and the ones defined with this
macro will display the different texts you specified.
\begin{verbatim}
\algblockx[<block>]{<start>}{<end>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
If in the above definitions the \verb:<block>: is missing, then the name of the starting command
is used as the block name. If a block with the given name
already exists, these macros don't define a new block; instead, the existing
block is used. If \verb:<start>: or \verb:<end>: is empty, then
the definition does not define a new starting/ending command for the block, and the
respective text must be omitted from the definition. You may have several starting and ending commands
for one block. If the block name is missing, then a starting command must be given.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algblock[Name]{Start}{End}
\algblockdefx[NAME]{START}{END}%
[2][Unknown]{Start #1(#2)}%
{Ending}
\algblockdefx[NAME]{}{OTHEREND}%
[1]{Until (#1)}
\begin{algorithmic}[1]
\Start
\Start
\START[One]{x}
\END
\START{0}
\OTHEREND{\texttt{True}}
\End
\Start
\End
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algblock[Name]{Start}{End}
\algblockdefx[NAME]{START}{END}%
[2][Unknown]{Start #1(#2)}%
{Ending}
\algblockdefx[NAME]{}{OTHEREND}%
[1]{Until (#1)}
\begin{algorithmic}[1]
\Start
\Start
\START[One]{x}
\END
\START{0}
\OTHEREND{\texttt{True}}
\End
\Start
\End
\End
\Statex
\end{algorithmic}
}
\end{minipage}
\subsection{Defining loops}
The loop defining macros are similar to the block defining macros. A loop has no ending command
and ends after the first statement, block or loop that follows it.
Since loops have no ending command, the macro \verb:\algloopx: would not make much sense.
The loop defining macros are:
\begin{verbatim}
\algloop[<loop>]{<start>}
\algloopdefx[<loop>]{<start>}
[<startparamcount>][<default value>]{<start text>}
\end{verbatim}
Both create a loop named \verb:<loop>: with the starting command \verb:<start>:.
The second also sets the number of parameters, and the text displayed by the starting command.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algloop{For}
\algloopdefx{If}[1]{\textbf{If} #1 \textbf{then}}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\For
\Begin
\If{$a < b$}
\For
\Begin
\End
\Begin
\End
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algloop{For}
\algloopdefx{If}%
[1]{\textbf{If} #1 \textbf{then}}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\For
\Begin
\If{$a < b$}
\For
\Begin
\End
\Begin
\End
\End
\Statex
\end{algorithmic}
}
\end{minipage}
\subsection{Continuing blocks and loops}
For each block/loop you may give commands that close the block or loop and open another
block or loop. A good example for this is the \textbf{if}~\dots~\textbf{then}~\dots~\textbf{else}
construct. The new block or loop can be closed or continued, as any other blocks and loops.
To create a continuing block use one of the following:
\begin{verbatim}
\algcblock[<new block>]{<old block>}{<continue>}{<end>}
\algcblockdefx[<new block>]{<old block>}{<continue>}{<end>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\algcblockx[<new block>]{<old block>}{<continue>}{<end>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
All three macros define a new block named \verb:<new block>:. If \verb:<new block>: is not given,
then \verb:<continue>: is used as the new block name. It is not allowed to have both
\verb:<new block>: missing, and \verb:<continue>: empty. The \verb:<continue>: command ends the
\verb:<old block>: block/loop and opens the \verb:<new block>: block. Since \verb:<continue>: may
end different blocks and loops, it can have different text
at the end of the different blocks/loops. If the \verb:<continue>: command doesn't find an
\verb:<old block>: to close, then an error is reported.
Create continuing loops with the following:
\begin{verbatim}
\algcloop[<new loop>]{<old block>}{<continue>}
\algcloopdefx[<new loop>]{<old block>}{<continue>}
[<continueparamcount>][<default value>]{<continue text>}
\algcloopx[<new loop>]{<old block>}{<continue>}
[<continueparamcount>][<default value>]{<continue text>}
\end{verbatim}
These macros create a continuing loop, the \verb:<continue>: closes the \verb:<old block>:
block/loop, and opens a \verb:<new loop>: loop.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algblock{If}{EndIf}
\algcblock[If]{If}{ElsIf}{EndIf}
\algcblock{If}{Else}{EndIf}
\algcblockdefx[Strange]{If}{Eeee}{Oooo}
[1]{\textbf{Eeee} "#1"}
{\textbf{Wuuuups\dots}}
\begin{algorithmic}[1]
\If
\If
\ElsIf
\ElsIf
\If
\ElsIf
\Else
\EndIf
\EndIf
\If
\EndIf
\Eeee{Creep}
\Oooo
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algblock{If}{EndIf}
\algcblock[If]{If}{ElsIf}{EndIf}
\algcblock{If}{Else}{EndIf}
\algcblockdefx[Strange]{If}{Eeee}{Oooo}
[1]{\textbf{Eeee} "#1"}
{\textbf{Wuuuups\dots}}
\begin{algorithmic}[1]
\If
\If
\ElsIf
\ElsIf
\If
\ElsIf
\Else
\EndIf
\EndIf
\If
\EndIf
\Eeee{Creep}
\Oooo
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algloop{If}
\algcloop{If}{Else}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\If
\Begin
\End
\Else
\If
\Begin
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algloop{If}
\algcloop{If}{Else}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\If
\Begin
\End
\Else
\If
\Begin
\End
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
\subsection{Even more customisation}\label{setblock}
With the following macros you can give the indentation used by the new block (or loop),
and the number of statements after which the ``block'' is automatically closed. This value is $\infty$
for blocks, 1 for loops, and 0 for statements. There is a special value, 65535, meaning that the
defined ``block'' does not end automatically, but if it is enclosed in a block, then the ending
command of the block closes this ``block'' as well.
\begin{verbatim}
\algsetblock[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
\algsetblockdefx[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\algsetblockx[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\algsetcblock[<new block>]{<old block>}{<continue>}{<end>}
{<lifetime>}{<indent>}
\algsetcblockdefx[<new block>]{<old block>}{<continue>}{<stop>}
{<lifetime>}{<indent>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\algsetcblockx[<new block>]{<old block>}{<continue>}{<stop>}
{<lifetime>}{<indent>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
The \verb:<lifetime>: is the number of statements after which the block is closed. An empty
\verb:<lifetime>: field means $\infty$. The \verb:<indent>: gives the indentation of the block.
Leave this field empty for the default indentation. The rest of the parameters have the same
function as in the previous macros.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algsetblock[Name]{Start}{Stop}{3}{1cm}
\algsetcblock[CName]{Name}{CStart}{CStop}{2}{2cm}
\begin{algorithmic}[1]
\Start
\State 1
\State 2
\State 3
\State 4
\Start
\State 1
\Stop
\State 2
\Start
\State 1
\CStart
\State 1
\State 2
\State 3
\Start
\State 1
\CStart
\State 1
\CStop
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algsetblock[Name]{Start}{Stop}{3}{1cm}
\algsetcblock[CName]{Name}{CStart}{CStop}{2}{2cm}
\begin{algorithmic}[1]
\Start
\State 1
\State 2
\State 3
\State 4
\Start
\State 1
\Stop
\State 2
\Start
\State 1
\CStart
\State 1
\State 2
\State 3
\Start
\State 1
\CStart
\State 1
\CStop
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
The created environments behave as follows:
\begin{itemize}
\item It starts with \verb:\Start:. The nested environments are
indented by 1 cm.
\item If it is followed by at least 3 environments (statements), then it closes
automatically after the third one.
\item If you put a \verb:\Stop: before the automatic closure, then this
\verb:\Stop: closes the environment. \verb:\CStart: closes a block called \verb:Name:
and opens a new one called \verb:CName:, with an indentation of 2 cm.
\item \verb:CName: can be closed with \verb:\CStop:, or it is closed automatically after
2 environments.
\end{itemize}
\subsection{Parameters, custom text}\label{custom text}
With \verb:\algrenewtext: you can change the number of parameters and the text displayed by the
commands. With \verb:\algnotext: you can make the whole output line disappear, but
it works only for ending commands; for beginning commands you would get incorrect output.
\begin{verbatim}
\algrenewcommand[<block>]{<command>}
[<paramcount>][<default value>]{<text>}
\algnotext[<block>]{<ending command>}
\end{verbatim}
If \verb:<block>: is missing, then the default text is changed, and if \verb:<block>: is given,
then the text displayed at the end of \verb:<block>: is changed.
To make a command output the default text at the end of a block (say, you have changed the text
for this block), use \verb:\algdefaulttext:.
\begin{verbatim}
\algdefaulttext[<block>]{<command>}
\end{verbatim}
If the \verb:<block>: is missing, then the default text itself will be set to the default value
(this is \verb:\textbf{<command>}:).
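A minimal sketch, assuming the \textbf{algpseudocode} layout is loaded: the first line
makes the \textbf{end if} line of every \verb:If: block disappear, and the second one
restores the default text later in the document:
\begin{verbatim}
\algnotext{EndIf}
...
\algdefaulttext{EndIf}
\end{verbatim}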
\subsection{The ONE defining macro}
All block and loop defining macros call the same macro. You may use this macro to gain
better access to what will be defined. This macro is \verb:\algdef:.
\begin{verbatim}
\algdef{<flags>}...
\end{verbatim}
Depending on the flags the macro can have many forms.
\begin{center}
\begin{tabular}{|c|l|}
\hline
\textbf{Flag}&\textbf{Meaning}\\
\hline
s&starting command, without text\\
S&starting command with text\\
c&continuing command, without text\\
C&continuing command, with default text\\
xC&continuing command, with block specific text\\
\hline
e&ending command, without text\\
E&ending command, with default text\\
xE&ending command, with block specific text\\
N&ending command, with default "no text"\\
xN&ending command, with no text for this block\\
\hline
b&block(default)\\
l&loop\\
L&loop closes after the given number of stataments\\
\hline
i&indentation specified\\
\hline
\end{tabular}
\end{center}
The \verb:<new block>: may be given for any combination of flags, and it is not allowed to have
\verb:<new block>: missing and \verb:<start>: missing/empty.
For c, C, xC an old block is expected. For s, S, c, C, xC the \verb:<start>: must be given.
For e, E, xE, N, xN the \verb:<end>: must be given. For L the \verb:<lifetime>: must be given.
For i the \verb:<indent>: must be given.
For S, C, xC the starting text and related info must be given. For E, xE the ending text must be given.
For each combination of flags give only the needed parameters, in the following order:
\begin{verbatim}
\algdef{<flags>}[<new block>]{<old block>}{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
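For instance (a sketch built from the \textbf{algpseudocode} keyword macros), the
flags \texttt{SE} ask for a starting command with text and an ending command with
default text, so a \textbf{do}~\dots~\textbf{while} block could be defined as:
\begin{verbatim}
\algdef{SE}[DOWHILE]{Do}{DoWhile}{\algorithmicdo}[1]{\algorithmicwhile\ #1}
\end{verbatim}
This is the same as calling \verb:\algblockdefx: with these arguments (see the
table below).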
The block and loop defining macros call \verb:\algdef: with the following flags:
\begin{center}
\begin{tabular}{|l|l|}
\hline
\textbf{Macro}&\textbf{Meaning}\\
\hline
\verb:\algblock:&\verb:\algdef{se}:\\
\hline
\verb:\algcblock:&\verb:\algdef{ce}:\\
\hline
\verb:\algloop:&\verb:\algdef{sl}:\\
\hline
\verb:\algcloop:&\verb:\algdef{cl}:\\
\hline
\verb:\algsetblock:&\verb:\algdef{seLi}:\\
\hline
\verb:\algsetcblock:&\verb:\algdef{ceLi}:\\
\hline
\verb:\algblockx:&\verb:\algdef{SxE}:\\
\hline
\verb:\algblockdefx:&\verb:\algdef{SE}:\\
\hline
\verb:\algcblockx:&\verb:\algdef{CxE}:\\
\hline
\verb:\algcblockdefx:&\verb:\algdef{CE}:\\
\hline
\verb:\algsetblockx:&\verb:\algdef{SxELi}:\\
\hline
\verb:\algsetblockdefx:&\verb:\algdef{SELi}:\\
\hline
\verb:\algsetcblockx:&\verb:\algdef{CxELi}:\\
\hline
\verb:\algsetcblockdefx:&\verb:\algdef{CELi}:\\
\hline
\verb:\algloopdefx:&\verb:\algdef{Sl}:\\
\hline
\verb:\algcloopx:&\verb:\algdef{Cxl}:\\
\hline
\verb:\algcloopdefx:&\verb:\algdef{Cl}:\\
\hline
\end{tabular}
\end{center}
\vfill
\section{Examples}
\subsection{A full example using \textbf{algpseudocode}}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\begin{document}
\begin{algorithm}
\caption{The Bellman-Kalaba algorithm}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\Statex
\Procedure {FindPathBK}{$v$, $u$, $p$}
\If {$v = u$}
\State \textbf{Write} $v$
\Else
\State $w \leftarrow v$
\While {$w \not= u$}
\State \textbf{Write} $w$
\State $w \leftarrow p(w)$
\EndWhile
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}[h]
\caption{The Bellman-Kalaba algorithm}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\Statex
\Procedure {FindPathBK}{$v$, $u$, $p$}
\If {$v = u$}
\State \textbf{Write} $v$
\Else
\State $w \leftarrow v$
\While {$w \not= u$}
\State \textbf{Write} $w$
\State $w \leftarrow p(w)$
\EndWhile
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\eject
\subsection{Breaking up an algorithm}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\begin{document}
\begin{algorithm}
\caption{Part 1}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State \Comment For some reason we need to break here!
\algstore{bkbreak}
\end{algorithmic}
\end{algorithm}
And we need to put some additional text between\dots
\begin{algorithm}[h]
\caption{Part 2}
\begin{algorithmic}[1]
\algrestore{bkbreak}
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}[h]
\caption{Part 1}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State \Comment For some reason we need to break here!
\algstore{bkbreak}
\end{algorithmic}
\end{algorithm}
And we need to put some additional text between\dots
\begin{algorithm}[h]
\caption{Part 2}
\begin{algorithmic}[1]
\algrestore{bkbreak}
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\eject
\subsection{Using multiple layouts}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{algpascal}
\begin{document}
\alglanguage{pseudocode}
\begin{algorithm}
\caption{A small pseudocode}
\begin{algorithmic}[1]
\State $s \gets 0$
\State $p \gets 0$
\For{$i \gets 1,\, 10$}
\State $s \gets s + i$
\State $p \gets p + s$
\EndFor
\end{algorithmic}
\end{algorithm}
\alglanguage{pascal}
\begin{algorithm}
\caption{The pascal version}
\begin{algorithmic}[1]
\State $s := 0$
\State $p := 0$
\For{i = 1}{10}
\Begin
\State $s := s + i$
\State $p := p + s$
\End
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}
\caption{A small pseudocode}
\begin{algorithmic}[1]
\State $s \gets 0$
\State $p \gets 0$
\For{$i \gets 1,\, 10$}
\State $s \gets s + i$
\State $p \gets p + s$
\EndFor
\end{algorithmic}
\end{algorithm}
\alglanguage{pascal}
\begin{algorithm}
\caption{The pascal version}
\begin{algorithmic}[1]
\State $s := 0$
\State $p := 0$
\For{i = 1}{10}
\Begin
\State $s := s + i$
\State $p := p + s$
\End
\end{algorithmic}
\end{algorithm}
\eject
\section{Bugs}
If you have a question or find a bug, you can contact me at:
\medskip
\textbf{[email protected]}
\medskip
\noindent If possible, please create a small \LaTeX{} example related to your problem.
\end{document}
\section{Introduction}
All this has begun in my last year at the university. The only thing that I knew of
\LaTeX\ was that it exists, and that it is ``good''. I started using it, but I needed to typeset some
algorithms. So I begun searching for a good algorithmic style, and I have found the \texttt{algorithmic}\ package.
It was a great joy for me, and I started to use it\dots\
Well\dots\ Everything went nice, until I needed some block that wasn't defined in there. What to do?
I was no \LaTeX\ guru, in fact I only knew the few basic macros. But there was no other way, so I opened
the style file, and I copied one existing block, renamed a few things, and voil\`a! This (and some other
small changes) where enough for me\dots
One year later --- for one good soul --- I had to make some really big changes on the style. And there on
a sunny day came the idea. What if I would write some macros to let others create blocks automatically?
And so I did! Since then the style was completely rewritten\dots\ several times\dots
I had fun writing it, may you have fun using it! I am still no \LaTeX\ guru, so if you are, and you find
something really ugly in the style, please mail me! All ideas for improvements are welcome!
Thanks go to Benedek Zsuzsa, Ionescu Clara, Sz\H ocs Zolt\'an, Cseke Botond, Kanoc
and many-many others. Without them I would have never started or continued \textbf{algorithmicx}.
\section{General informations}
\subsection{The package}
The package \textbf{algorithmicx} itself doesn't define any algorithmic commands, but gives
a set of macros to define such a command set. You may use only \textbf{algorithmicx}, and define
the commands yourself, or you may use one of the predefined command sets.
These predefined command sets (layouts) are:
\begin{description}
\item[algpseudocode] has the same look\footnote{almost :-)} as the one defined in the
\texttt{algorithmic}\ package. The main difference is that while the \texttt{algorithmic}\ package doesn't
allow you to modify predefined structures, or to create new ones, the \texttt{algorithmicx}\
package gives you full control over the definitions (ok, there are some
limitations --- you can not send mail with a, say, \verb:\For: command).
\item[algcompatible] is fully compatible with the \texttt{algorithmic}\ package, it should be
used only in old documents.
\item[algpascal] aims to create a formatted pascal program, it performs
automatic indentation (!), so you can transform a pascal program into an
\textbf{algpascal} algorithm description with some basic substitution rules.
\item[algc] -- yeah, just like the \textbf{algpascal}\dots\ but for c\dots\
This layout is incomplete.
\end{description}
To create floating algorithms you will need \verb:algorithm.sty:. This file may or may not be
included in the \texttt{algorithmicx}\ package. You can find it on CTAN, in the \texttt{algorithmic}\ package.
\subsection{The algorithmic block}
Each algorithm begins with the \verb:\begin{algorithmic}[lines]: command, the
optional \verb:lines: controls the line numbering: $0$ means no line numbering,
$1$ means number every line, and $n$ means number lines $n$, $2n$, $3n$\dots\ until the
\verb:\end{algorithmic}: command, witch ends the algorithm.
\subsection{Simple lines}
A simple line of text is beginned with \verb:\State:. This macro marks the begin of every
line. You don't need to use \verb:\State: before a command defined in the package, since
these commands use automatically a new line.
To obtain a line that is not numbered, and not counted when counting the lines for line numbering
(in case you choose to number lines), use the \verb:Statex: macro. This macro jumps into a new line,
the line gets no number, and any label will point to the previous numbered line.
We will call \textit{statament\/}s the lines starting with \verb:\State:. The \verb:\Statex:
lines are not stataments.
\subsection{Placing comments in sources}\label{Putting comments in sources}
Comments may be placed everywhere in the source using the \verb:\Comment: macro
(there are no limitations like those in the \texttt{algorithmic}\ package), feel the freedom!
If you would like to change the form in witch comments are displayed, just
change the \verb:\algorithmiccomment: macro:
\begin{verbatim}
\algrenewcommand{\algorithmiccomment}[1]{\hskip3em$\rightarrow$ #1}
\end{verbatim}
will result:
\medskip
\begin{algorithmic}[1]
\algrenewcommand{\algorithmiccomment}[1]{\hskip3em$\rightarrow$ #1}
\State $x\gets x+1$\Comment{Here is the new comment}
\end{algorithmic}
\subsection{Labels and references}
Use the \verb:\label: macro, as usual to label a line. When you use \verb:\ref: to reference
the line, the \verb:\ref: will be subtitued with the corresponding line number. When using the
\textbf{algorithmicx} package togedher with the \textbf{algorithm} package, then you can label
both the algorithm and the line, and use the \verb:\algref: macro to reference a given line
from a given algorithm:
\begin{verbatim}
\algref{<algorithm>}{<line>}
\end{verbatim}
\noindent\begin{minipage}[t]{0.5\linewidth}
\begin{verbatim}
The \textbf{while} in algorithm
\ref{euclid} ends in line
\ref{euclidendwhile}, so
\algref{euclid}{euclidendwhile}
is the line we seek.
\end{verbatim}
\end{minipage}\begin{minipage}[t]{0.5\linewidth}
The \textbf{while} in algorithm \ref{euclid} ends in line \ref{euclidendwhile},
so \algref{euclid}{euclidendwhile} is the line we seek.
\end{minipage}
\subsection{Breaking up long algorithms}
Sometimes you have a long algorithm that needs to be broken into parts, each on a
separate float. For this you can use the following:
\begin{description}
\item[]\verb:\algstore{<savename>}: saves the line number, indentation, open blocks of
the current algorithm and closes all blocks. If used, then this must be the last command
before closing the algorithmic block. Each saved algorithm must be continued later in the
document.
\item[]\verb:\algstore*{<savename>}: Like the above, but the algorithm must not be continued.
\item[]\verb:\algrestore{<savename>}: restores the state of the algorithm saved under
\verb:<savename>: in this algorithmic block. If used, then this must be the first command
in an algorithmic block. A save is deleted while restoring.
\item[]\verb:\algrestore*{<savename>}: Like the above, but the save will not be deleted, so it
can be restored again.
\end{description}
See example in the \textbf{Examples} section.
\subsection{Multiple layouts in the same document}
You can load multiple algorithmicx layouts in the same document. You can switch between the layouts
using the \verb:\alglanguage{<layoutname>}: command. After this command all new algorithmic
environments will use the given layout until the layout is changed again.
\section{The predefined layouts}
\subsection{The \textbf{algpseudocode} layout}\label{algpseudocode}
\alglanguage{pseudocode}
If you are familiar with the \texttt{algorithmic}\ package, then you'll find it easy to
switch. You can use the old algorithms with the \textbf{algcompatible} layout, but please
use the \textbf{algpseudocode} layout for new algorithms.
To use \textbf{algpseudocode}, simply load \verb:algpseudocode.sty::
\begin{verbatim}
\usepackage{algpseudocode}
\end{verbatim}
You don't need to manually load the \textbf{algorithmicx} package, as this is done by
\textbf{algpseudocode}.
The first algorithm one should write is the first algorithm ever (ok,
an improved version), \textit{Euclid's algorithm}:
\begin{algorithm}[H]
\caption{Euclid's algorithm}\label{euclid}
\begin{algorithmic}[1]
\Procedure{Euclid}{$a,b$}\Comment{The g.c.d. of a and b}
\State $r\gets a\bmod b$
\While{$r\not=0$}\Comment{We have the answer if r is 0}
\State $a\gets b$
\State $b\gets r$
\State $r\gets a\bmod b$
\EndWhile\label{euclidendwhile}
\State \Return $b$\Comment{The gcd is b}
\EndProcedure
\end{algorithmic}
\end{algorithm}
Created with the following source:
\begin{verbatim}
\begin{algorithm}
\caption{Euclid's algorithm}\label{euclid}
\begin{algorithmic}[1]
\Procedure{Euclid}{$a,b$}\Comment{The g.c.d. of a and b}
\State $r\gets a\bmod b$
\While{$r\not=0$}\Comment{We have the answer if r is 0}
\State $a\gets b$
\State $b\gets r$
\State $r\gets a\bmod b$
\EndWhile\label{euclidendwhile}
\State \textbf{return} $b$\Comment{The gcd is b}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{verbatim}
The \verb:\State: stands at the beginning of each simple statement; the respective
statement is put in a new line, with the needed indentation.
The \verb:\Procedure: \dots\verb:\EndProcedure: and
\verb:\While: \dots\verb:\EndWhile: blocks (like any block defined in the
\textbf{algpseudocode} layout) automatically indent their content.
The indentation of the source doesn't matter, so
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Repeat
\Comment{forever}
\State this\Until{you die.}
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\Comment{forever}
\State this\Until{you die.}
\Statex
\end{algorithmic}
\AENDSKIP
But, generally, it is a good idea to keep the source indented, since you will find
errors much easier. And your tex file looks better!
All examples and syntax descriptions will be shown as the previous
example --- the left side shows the \LaTeX\ input, and the right side
the algorithm, as it appears in your document. I'm cheating! Don't look
in the \verb:algorithmicx.tex: file! Believe what the examples state! I may use some
undocumented and dirty stuff to create all these examples. You might be more
confused after opening \verb:algorithmicx.tex: as you was before.
In the case of syntax
descriptions the text between $<$ and $>$ is symbolic, so if you type
what you see on the left side, you will not get the algorithm on the
right side. But if you replace the text between $<$ $>$ with a proper piece of
algorithm, then you will probably get what you want. The parts between
$[$ and $]$ are optional.
\subsubsection{The \textbf{for} block}
The \textbf{for} block may have one of the forms:
\ASTART
\begin{verbatim}
\For{<text>}
<body>
\EndFor
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\For{$<$text$>$}
\State $<$body$>$
\EndFor
\end{algorithmic}
\AEND
\ASTART
\begin{verbatim}
\ForAll{<text>}
<body>
\EndFor
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\ForAll{$<$text$>$}
\State $<$body$>$
\EndFor
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\For{$i\gets 1, n$}
\State $sum\gets sum+i$
\EndFor
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\For{$i\gets 1, n$}
\State $sum\gets sum+i$
\EndFor
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{while} block}
The \textbf{while} block has the form:
\ASTART
\begin{verbatim}
\While{<text>}
<body>
\EndWhile
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\While{$<$text$>$}
\State $<$body$>$
\EndWhile
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\While{$i\le n$}
\State $sum\gets sum+i$
\State $i\gets i+1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\While{$i\le n$}
\State $sum\gets sum+i$
\State $i\gets i+1$
\EndWhile
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{repeat} block}
The \textbf{repeat} block has the form:
\ASTART
\begin{verbatim}
\Repeat
<body>
\Until{<text>}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\State $<$body$>$
\Until{$<$text$>$}
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\Repeat
\State $sum\gets sum+i$
\State $i\gets i+1$
\Until{$i>n$}
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\State $sum\gets 0$
\State $i\gets 1$
\Repeat
\State $sum\gets sum+i$
\State $i\gets i+1$
\Until{$i>n$}
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{if} block}
The \textbf{if} block has the form:
\ASTART
\begin{verbatim}
\If{<text>}
<body>
[
\ElsIf{<text>}
<body>
...
]
[
\Else
<body>
]
\EndIf
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{$<$text$>$}
\State $<$body$>$
\Statex [
\ElsIf{$<$text$>$}
\State $<$body$>$
\Statex \dots
\Statex ]
\Statex [
\Else
\State $<$body$>$
\Statex ]
\EndIf
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\If{$quality\ge 9$}
\State $a\gets perfect$
\ElsIf{$quality\ge 7$}
\State $a\gets good$
\ElsIf{$quality\ge 5$}
\State $a\gets medium$
\ElsIf{$quality\ge 3$}
\State $a\gets bad$
\Else
\State $a\gets unusable$
\EndIf
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{$quality\ge 9$}
\State $a\gets perfect$
\ElsIf{$quality\ge 7$}
\State $a\gets good$
\ElsIf{$quality\ge 5$}
\State $a\gets medium$
\ElsIf{$quality\ge 3$}
\State $a\gets bad$
\Else
\State $a\gets unusable$
\EndIf
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{procedure} block}
The \textbf{procedure} block has the form:
\ASTART
\begin{verbatim}
\Procedure{<name>}{<params>}
<body>
\EndProcedure
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Procedure{$<$name$>$}{$<$params$>$}
\State $<$body$>$
\EndProcedure
\end{algorithmic}
\AENDSKIP
\noindent Example: See Euclid's\ algorithm on page \pageref{euclid}.
\subsubsection{The \textbf{function} block}The
\textbf{function} block has the same syntax as the \textbf{procedure} block:
\ASTART
\begin{verbatim}
\Function{<name>}{<params>}
<body>
\EndFunction
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Function{$<$name$>$}{$<$params$>$}
\State $<$body$>$
\EndFunction
\end{algorithmic}
\AEND
\subsubsection{The \textbf{loop} block}
The \textbf{loop} block has the form:
\ASTART
\begin{verbatim}
\Loop
<body>
\EndLoop
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Loop
\State $<$body$>$
\EndLoop
\end{algorithmic}
\AEND
\subsubsection{Other commands in this layout}
The starting conditions for the algorithm can be described with the \textbf{require}
instruction, and its result with the \textbf{ensure} instruction.
A procedure call can be formatted with \verb:\Call:.
\ASTART
\begin{verbatim}
\Require something
\Ensure something
\Statex
\State \Call{Create}{10}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Require something
\Ensure something
\Statex
\State \Call{Create}{10}
\end{algorithmic}
\AENDSKIP
\noindent Example:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Require $x\ge5$
\Ensure $x\le-5$
\Statex
\While{$x>-5$}
\State $x\gets x-1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Require $x\ge5$
\Ensure $x\le-5$
\Statex
\While{$x>-5$}
\State $x\gets x-1$
\EndWhile
\Statex
\end{algorithmic}
\AEND
\subsubsection{Package options}\label{algpseudocode package options}
The \texttt{algpseudocode} package supports the following options:
\begin{description}
\item[compatible/noncompatible]\ \textit{Obsolete, use the algcompatible layout instead.}\\
If you would like to use old
algorithms, written with the \texttt{algorithmic}\ package, without (too much)
modification, then use the \textbf{compatible} option. This option
defines the all-uppercase version of the commands. Note that you still need
to remove the \verb:[...]: comments (these comments appeared due to some
limitations in the \texttt{algorithmic}\ package; those limitations, and the comments, are gone now).
The default \textbf{noncompatible} does not define the all-uppercase
commands.
\item[noend/end]\ \\With \textbf{noend} specified, all \textbf{end \dots}
lines are omitted. You get a somewhat smaller algorithm, and the nagging
feeling that something is missing\dots{} The \textbf{end} value is the
default; it means that all \textbf{end \dots} lines are in their right
place.
\end{description}
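\noindent For example, to load the layout without the \textbf{end \dots} lines:
\begin{verbatim}
\usepackage[noend]{algpseudocode}
\end{verbatim}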
\subsubsection{Changing command names}
A common need with pseudocode is to change the command names; different people
use many different kinds of pseudocode keywords. In \textbf{algpseudocode}
all keywords are declared as \verb:\algorithmic<keyword>:. You can change them
to output the text you need:
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\begin{algorithmic}[1]
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\Statex
\end{algorithmic}
\end{minipage}\bigskip
In some cases you may need to change even more (in the above example
\textbf{am\'\i g} and \textbf{v\'ege} should be interchanged in the \verb:\EndWhile:
text). Maybe the number of parameters taken by some commands must be changed too.
This can be done with the command text customizing macros (see section
\ref{custom text}). Here I'll give only some examples of the most common usage:
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\algrenewtext{EndWhile}{\algorithmicwhile\ \algorithmicend}
\begin{algorithmic}[1]
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algrenewcommand\algorithmicwhile{\textbf{am\'\i g}}
\algrenewcommand\algorithmicdo{\textbf{v\'egezd el}}
\algrenewcommand\algorithmicend{\textbf{v\'ege}}
\algrenewtext{EndWhile}{\algorithmicwhile\ \algorithmicend}
\State $x \gets 1$
\While{$x < 10$}
\State $x \gets x + 1$
\EndWhile
\Statex
\end{algorithmic}
\end{minipage}
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algnewcommand\algorithmicto{\textbf{to}}
\algrenewtext{For}[3]%
{\algorithmicfor\ $#1 \gets #2$ \algorithmicto\ $#3$ \algorithmicdo}
\begin{algorithmic}[1]
\State $p \gets 1$
\For{i}{1}{n}
\State $p \gets p * i$
\EndFor
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\begin{algorithmic}[1]
\algnewcommand\algorithmicto{\textbf{to}}
\algrenewtext{For}[3]%
{\algorithmicfor\ $#1 \gets #2$ \algorithmicto\ $#3$ \algorithmicdo}
\State $p \gets 1$
\For{i}{1}{n}
\State $p \gets p * i$
\EndFor
\Statex
\end{algorithmic}
\end{minipage}\bigskip
You could create a translation package that, included after the \textbf{algpseudocode}
package, translates the keywords to the language you need.
\subsection{The \textbf{algpascal} layout}
\alglanguage{pascal}
The most important feature of the \textbf{algpascal} layout is that
\textit{it performs the block indentation automatically}. In
section \ref{algorithmicx} you will see how to define such
automatically indented loops. Here is an example to demonstrate this
feature:
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\For{i=1}{n}\Comment{sum(i)}
\State $sum:=sum+i$;
\State writeln($sum$);
\End.
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\For{i=1}{n}\Comment{sum(i)}
\State $sum:=sum+i$;
\State writeln($sum$);
\End.
\Statex
\end{algorithmic}
\AENDSKIP
Note that the \verb:\For: is not closed explicitly; its end is
detected automatically. Again, the indentation in the source doesn't
affect the output.
In this layout every parameter passed to a command is put in
mathematical mode.
\subsubsection{The \textbf{begin} \dots{} \textbf{end} block}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Begin
<body>
\End
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $<$body$>$
\End
\end{algorithmic}
\AENDSKIP
The \verb:\Begin: \dots{} \verb:\End: block and the
\verb:\Repeat: \dots{} \verb:\Until: block are the only blocks in
the \textbf{algpascal} style (instead of \verb:\Begin: you may write
\verb:\Asm:). This means that every other loop ends automatically
after the following command (another loop, or a block).
\subsubsection{The \textbf{for} loop}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\For{<assign>}{<expr>}
<command>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\For{<$\relax$assign$\relax$>}{<$\relax$expr$\relax$>}
\State $<$command$>$
\end{algorithmic}
\AENDSKIP
The \textbf{For} loop (like all other loops) ends after the following command (a block
also counts as a single command).
\ASTART
\begin{verbatim}
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\State $prod:=1$;
\For{i:=1}{10}
\Begin
\State $sum:=sum+i$;
\State $prod:=prod*i$;
\End
\End.
\end{algorithmic}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Begin
\State $sum:=0$;
\State $prod:=1$;
\For{i:=1}{10}
\Begin
\State $sum:=sum+i$;
\State $prod:=prod*i$;
\End
\End.
\Statex
\end{algorithmic}
\AEND
\subsubsection{The \textbf{while} loop}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\While{<expression>}
<command>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\While{<$\relax$expression$\relax$>}
\State $<$command$>$
\end{algorithmic}
\AEND
\subsubsection{The \textbf{repeat}\dots\ \textbf{until} block}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Repeat
<body>
\Until{<expression>}
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Repeat
\State $<$body$>$
\Until{<$\relax$expression$\relax$>}
\end{algorithmic}
\AEND
\subsubsection{The \textbf{if} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\If{<expression>}
<command>
[
\Else
<command>
]
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\If{<$\relax$expression$\relax$>}
\State $<$command$>$
\Statex \hskip-\algorithmicindent\hskip-\algorithmicindent[
\Else
\State $<$command$>$
\Statex \hskip-\algorithmicindent\hskip-\algorithmicindent]
\end{algorithmic}
\AENDSKIP
Every \verb:\Else: matches the nearest \verb:\If:.
\subsubsection{The \textbf{procedure} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Procedure <some text>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Procedure $<$some text$>$
\end{algorithmic}
\AENDSKIP
\verb:\Procedure: just writes the ``procedure'' word on a new
line. You will probably put a \verb:\Begin:\dots\ \verb:\End:
block after it.
\subsubsection{The \textbf{function} command}
\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\Function<some text>
\end{verbatim}
\ACONTINUE
\begin{algorithmic}[1]
\Function $<$some text$>$
\end{algorithmic}
\AENDSKIP
Just like \textbf{Procedure}.
\subsection{The \textbf{algc} layout}
Sorry, the \textbf{algc} layout is unfinished.
The commands defined are:
\begin{itemize}
\item\verb:\{:\dots\ \verb:\}: block
\item\verb:\For: with 3 params
\item\verb:\If: with 1 param
\item\verb:\Else: with no params
\item\verb:\While: with 1 param
\item\verb:\Do: with no params
\item\verb:\Function: with 3 params
\item\verb:\Return: with no params
\end{itemize}
\section{Custom algorithmic blocks}\label{algorithmicx}
\alglanguage{default}
\subsection{Blocks and loops}
Most of the environments defined in the standard layouts (and most probably
the ones you will define) fall into two categories:
\begin{description}
\item[Blocks] are the environments which contain an arbitrary number of
commands or nested blocks. Each block has a name, begins with a starting command
and ends with an ending command. The commands in a block are
indented by \verb:\algorithmicindent: (or another amount).
If your algorithm ends without closing all blocks, the \texttt{algorithmicx}\ package gives
you a nice error. So be good, and close them all!
Blocks are all the environments defined in the \verb:algpseudocode:
package, the \verb:\Begin: \dots \verb:\End: block in the
\verb:algpascal: package, and some other ones.
\item[Loops] (Let us call them loops\dots) The loops are environments
that include only one command, loop or block; a loop is closed
automatically after this command. So loops have no ending commands. If
your algorithm (or a block) ends before the single command of a loop,
then this is considered an empty command, and the loop is closed. Feel
free to leave open loops at the end of blocks!
Loops are most of the environments in the \verb:algpascal: and
\verb:algc: packages.
\end{description}
For some rare constructions you can create mixtures of the two
environments (see section \ref{setblock}).
Each block and loop may be continued with another one (like the \verb:If:
with \verb:Else:).
\subsection{Defining blocks}\label{defblocks}
There are several commands to define blocks. The difference is in what is defined
besides the block itself. The macro \verb:\algblock: defines a new block with a starting and
an ending command.
\begin{verbatim}
\algblock[<block>]{<start>}{<end>}
\end{verbatim}
The defined commands have no parameters, and the text displayed by them is
\verb:\textbf{<start>}: and \verb:\textbf{<end>}:. You can change these texts later
(\ref{custom text}).
With \verb:\algblockdefx: you can give the text to be output by the starting
and ending command and the number of parameters taken by these commands. In the text,
refer to the parameter number $n$ with \#$n$. Observe that the text
is given in the same form as when you define or redefine macros; and really, this is what happens.
\begin{verbatim}
\algblockdefx[<block>]{<start>}{<end>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
This defines a new block called \verb:<block>:, \verb:<start>: opens the block,
\verb:<end>: closes the block,
\verb:<start>: displays \verb:<start text>:, and has \verb:<startparamcount>: parameters,
\verb:<end>: displays \verb:<end text>:, and has \verb:<endparamcount>: parameters.
For both \verb:<start>: and \verb:<end>:, if
\verb:<default value>: is given, then the first parameter is optional, and its default value
is \verb:<default value>:.
If you want to display different text (and to have a different number of parameters)
for \verb:<end>: at the end of different blocks, then use
the \verb:\algblockx: macro. Note that it is not possible to display different starting texts,
since it is not possible to start different blocks with the same command. The \verb:<start text>:
defined with \verb:\algblockx: has the same behavior as if defined with \verb:\algblockdefx:. All ending commands
not defined with \verb:\algblockx: will display the same text, and the ones defined with this
macro will display the different texts you specified.
\begin{verbatim}
\algblockx[<block>]{<start>}{<end>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
If in the above definitions the \verb:<block>: is missing, then the name of the starting command
is used as the block name. If a block with the given name
already exists, these macros don't define a new block; instead, the existing
block is used. If \verb:<start>: or \verb:<end>: is empty, then
the definition does not define a new starting/ending command for the block, and then the
respective text must be missing from the definition. You may have several starting and ending commands
for one block. If the block name is missing, then a starting command must be given.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algblock[Name]{Start}{End}
\algblockdefx[NAME]{START}{END}%
[2][Unknown]{Start #1(#2)}%
{Ending}
\algblockdefx[NAME]{}{OTHEREND}%
[1]{Until (#1)}
\begin{algorithmic}[1]
\Start
\Start
\START[One]{x}
\END
\START{0}
\OTHEREND{\texttt{True}}
\End
\Start
\End
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algblock[Name]{Start}{End}
\algblockdefx[NAME]{START}{END}%
[2][Unknown]{Start #1(#2)}%
{Ending}
\algblockdefx[NAME]{}{OTHEREND}%
[1]{Until (#1)}
\begin{algorithmic}[1]
\Start
\Start
\START[One]{x}
\END
\START{0}
\OTHEREND{\texttt{True}}
\End
\Start
\End
\End
\Statex
\end{algorithmic}
}
\end{minipage}
\subsection{Defining loops}
The loop defining macros are similar to the block defining macros. A loop has no ending command
and ends after the first statement, block or loop that follows it.
Since loops have no ending command, a macro \verb:\algloopx: would not make much sense.
The loop defining macros are:
\begin{verbatim}
\algloop[<loop>]{<start>}
\algloopdefx[<loop>]{<start>}
[<startparamcount>][<default value>]{<start text>}
\end{verbatim}
Both create a loop named \verb:<loop>: with the starting command \verb:<start>:.
The second also sets the number of parameters, and the text displayed by the starting command.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algloop{For}
\algloopdefx{If}[1]{\textbf{If} #1 \textbf{then}}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\For
\Begin
\If{$a < b$}
\For
\Begin
\End
\Begin
\End
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algloop{For}
\algloopdefx{If}%
[1]{\textbf{If} #1 \textbf{then}}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\For
\Begin
\If{$a < b$}
\For
\Begin
\End
\Begin
\End
\End
\Statex
\end{algorithmic}
}
\end{minipage}
\subsection{Continuing blocks and loops}
For each block/loop you may define commands that close the block or loop and open another
block or loop. A good example is the \textbf{if}~\dots~\textbf{then}~\dots~\textbf{else}
construct. The new block or loop can be closed or continued like any other block or loop.
To create a continuing block use one of the following:
\begin{verbatim}
\algcblock[<new block>]{<old block>}{<continue>}{<end>}
\algcblockdefx[<new block>]{<old block>}{<continue>}{<end>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\algcblockx[<new block>]{<old block>}{<continue>}{<end>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
All three macros define a new block named \verb:<new block>:. If \verb:<new block>: is not given,
then \verb:<continue>: is used as the new block name. It is not allowed to have both
\verb:<new block>: missing and \verb:<continue>: empty. The \verb:<continue>: command ends the
\verb:<old block>: block/loop and opens the \verb:<new block>: block. Since \verb:<continue>: may
end different blocks and loops, it can have different text
at the end of the different blocks/loops. If the \verb:<continue>: command doesn't find an
\verb:<old block>: to close, then an error is reported.
Create continuing loops with the following:
\begin{verbatim}
\algcloop[<new loop>]{<old block>}{<continue>}
\algcloopdefx[<new loop>]{<old block>}{<continue>}
[<continueparamcount>][<default value>]{<continue text>}
\algcloopx[<new loop>]{<old block>}{<continue>}
[<continueparamcount>][<default value>]{<continue text>}
\end{verbatim}
These macros create a continuing loop, the \verb:<continue>: closes the \verb:<old block>:
block/loop, and opens a \verb:<new loop>: loop.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algblock{If}{EndIf}
\algcblock[If]{If}{ElsIf}{EndIf}
\algcblock{If}{Else}{EndIf}
\algcblockdefx[Strange]{If}{Eeee}{Oooo}
[1]{\textbf{Eeee} "#1"}
{\textbf{Wuuuups\dots}}
\begin{algorithmic}[1]
\If
\If
\ElsIf
\ElsIf
\If
\ElsIf
\Else
\EndIf
\EndIf
\If
\EndIf
\Eeee{Creep}
\Oooo
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algblock{If}{EndIf}
\algcblock[If]{If}{ElsIf}{EndIf}
\algcblock{If}{Else}{EndIf}
\algcblockdefx[Strange]{If}{Eeee}{Oooo}
[1]{\textbf{Eeee} "#1"}
{\textbf{Wuuuups\dots}}
\begin{algorithmic}[1]
\If
\If
\ElsIf
\ElsIf
\If
\ElsIf
\Else
\EndIf
\EndIf
\If
\EndIf
\Eeee{Creep}
\Oooo
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algloop{If}
\algcloop{If}{Else}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\If
\Begin
\End
\Else
\If
\Begin
\End
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algloop{If}
\algcloop{If}{Else}
\algblock{Begin}{End}
\begin{algorithmic}[1]
\If
\Begin
\End
\Else
\If
\Begin
\End
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
\subsection{Even more customisation}\label{setblock}
With the following macros you can give the indentation used by the new block (or loop),
and the number of statements after which the ``block'' is automatically closed. This value is $\infty$
for blocks, 1 for loops, and 0 for statements. There is a special value, 65535, meaning that the
defined ``block'' does not end automatically, but if it is enclosed in a block, then the ending
command of the block closes this ``block'' as well.
\begin{verbatim}
\algsetblock[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
\algsetblockdefx[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\algsetblockx[<block>]{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\algsetcblock[<new block>]{<old block>}{<continue>}{<end>}
{<lifetime>}{<indent>}
\algsetcblockdefx[<new block>]{<old block>}{<continue>}{<end>}
{<lifetime>}{<indent>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\algsetcblockx[<new block>]{<old block>}{<continue>}{<end>}
{<lifetime>}{<indent>}
[<continueparamcount>][<default value>]{<continue text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
The \verb:<lifetime>: is the number of statements after which the block is closed. An empty
\verb:<lifetime>: field means $\infty$. The \verb:<indent>: gives the indentation of the block.
Leave this field empty for the default indentation. The rest of the parameters have the same
function as in the previous macros.
\bigskip\noindent\begin{minipage}[b]{0.5\linewidth}
\begin{verbatim}
\algsetblock[Name]{Start}{Stop}{3}{1cm}
\algsetcblock[CName]{Name}{CStart}{CStop}{2}{2cm}
\begin{algorithmic}[1]
\Start
\State 1
\State 2
\State 3
\State 4
\Start
\State 1
\Stop
\State 2
\Start
\State 1
\CStart
\State 1
\State 2
\State 3
\Start
\State 1
\CStart
\State 1
\CStop
\end{algorithmic}
\end{verbatim}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
{
\algsetblock[Name]{Start}{Stop}{3}{1cm}
\algsetcblock[CName]{Name}{CStart}{CStop}{2}{2cm}
\begin{algorithmic}[1]
\Start
\State 1
\State 2
\State 3
\State 4
\Start
\State 1
\Stop
\State 2
\Start
\State 1
\CStart
\State 1
\State 2
\State 3
\Start
\State 1
\CStart
\State 1
\CStop
\Statex
\end{algorithmic}
}
\end{minipage}\bigskip
The created environments behave as follows:
\begin{itemize}
\item It starts with \verb:\Start:. The nested environments are
indented by 1 cm.
\item If it is followed by at least 3 environments (statements), then it closes
automatically after the third one.
\item If you put a \verb:\Stop: before the automatic closure, then this
\verb:\Stop: closes the environment. \verb:\CStart: closes a block called \verb:Name:
and opens a new one called \verb:CName: with an indentation of 2 cm.
\item \verb:CName: can be closed with \verb:\CStop:, or it is closed automatically after
2 environments.
\end{itemize}
\subsection{Parameters, custom text}\label{custom text}
With \verb:\algrenewtext: you can change the number of parameters and the text displayed by the
commands. With \verb:\algnotext: you can make the whole output line disappear, but
it works only for ending commands; for beginning commands you would get an incorrect output.
\begin{verbatim}
\algrenewtext[<block>]{<command>}
[<paramcount>][<default value>]{<text>}
\algnotext[<block>]{<ending command>}
\end{verbatim}
If \verb:<block>: is missing, then the default text is changed, and if \verb:<block>: is given,
then the text displayed at the end of \verb:<block>: is changed.
To make a command output the default text at the end of a block (say, you have changed the text
for this block), use \verb:\algdefaulttext:.
\begin{verbatim}
\algdefaulttext[<block>]{<command>}
\end{verbatim}
If the \verb:<block>: is missing, then the default text itself will be set to the default value
(this is \verb:\textbf{<command>}:).
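\noindent For example, the following (a minimal sketch) hides the \textbf{end while} line of the \textbf{algpseudocode} layout:
\begin{verbatim}
\algnotext{EndWhile}
\begin{algorithmic}[1]
\While{$x > 0$}
\State $x \gets x - 1$
\EndWhile
\end{algorithmic}
\end{verbatim}
The \verb:\EndWhile: command must still be given; only its output line disappears.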
\subsection{The ONE defining macro}
All block and loop defining macros call the same macro. You may use this macro to gain
finer control over what is defined. This macro is \verb:\algdef:.
\begin{verbatim}
\algdef{<flags>}...
\end{verbatim}
Depending on the flags the macro can have many forms.
\begin{center}
\begin{tabular}{|c|l|}
\hline
\textbf{Flag}&\textbf{Meaning}\\
\hline
s&starting command, without text\\
S&starting command with text\\
c&continuing command, without text\\
C&continuing command, with default text\\
xC&continuing command, with block specific text\\
\hline
e&ending command, without text\\
E&ending command, with default text\\
xE&ending command, with block specific text\\
N&ending command, with default ``no text''\\
xN&ending command, with no text for this block\\
\hline
b&block(default)\\
l&loop\\
L&loop closes after the given number of statements\\
\hline
i&indentation specified\\
\hline
\end{tabular}
\end{center}
The \verb:<new block>: may be given for any combination of flags, and it is not allowed to have
\verb:<new block>: missing and \verb:<start>: missing/empty.
For c, C, xC an old block is expected. For s, S, c, C, xC the \verb:<start>: must be given.
For e, E, xE, N, xN the \verb:<end>: must be given. For L the \verb:<lifetime>: must be given.
For i the \verb:<indent>: must be given.
For S, C, xC the starting text and related infos must be given. For E, xE the ending text must be given.
For each combination of flags give only the needed parameters, in the following order:
\begin{verbatim}
\algdef{<flags>}[<new block>]{<old block>}{<start>}{<end>}
{<lifetime>}{<indent>}
[<startparamcount>][<default value>]{<start text>}
[<endparamcount>][<default value>]{<end text>}
\end{verbatim}
The block and loop defining macros call \verb:\algdef: with the following flags:
\begin{center}
\begin{tabular}{|l|l|}
\hline
\textbf{Macro}&\textbf{Meaning}\\
\hline
\verb:\algblock:&\verb:\algdef{se}:\\
\hline
\verb:\algcblock:&\verb:\algdef{ce}:\\
\hline
\verb:\algloop:&\verb:\algdef{sl}:\\
\hline
\verb:\algcloop:&\verb:\algdef{cl}:\\
\hline
\verb:\algsetblock:&\verb:\algdef{seLi}:\\
\hline
\verb:\algsetcblock:&\verb:\algdef{ceLi}:\\
\hline
\verb:\algblockx:&\verb:\algdef{SxE}:\\
\hline
\verb:\algblockdefx:&\verb:\algdef{SE}:\\
\hline
\verb:\algcblockx:&\verb:\algdef{CxE}:\\
\hline
\verb:\algcblockdefx:&\verb:\algdef{CE}:\\
\hline
\verb:\algsetblockx:&\verb:\algdef{SxELi}:\\
\hline
\verb:\algsetblockdefx:&\verb:\algdef{SELi}:\\
\hline
\verb:\algsetcblockx:&\verb:\algdef{CxELi}:\\
\hline
\verb:\algsetcblockdefx:&\verb:\algdef{CELi}:\\
\hline
\verb:\algloopdefx:&\verb:\algdef{Sl}:\\
\hline
\verb:\algcloopx:&\verb:\algdef{Cxl}:\\
\hline
\verb:\algcloopdefx:&\verb:\algdef{Cl}:\\
\hline
\end{tabular}
\end{center}
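\noindent For example, based on the table above, the following pairs of definitions are equivalent:
\begin{verbatim}
\algblock{Begin}{End}
\algdef{se}{Begin}{End}

\algblockdefx[NAME]{START}{END}[1]{start #1}{end}
\algdef{SE}[NAME]{START}{END}[1]{start #1}{end}
\end{verbatim}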
\vfill
\section{Examples}
\subsection{A full example using \textbf{algpseudocode}}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\begin{document}
\begin{algorithm}
\caption{The Bellman-Kalaba algorithm}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\Statex
\Procedure {FindPathBK}{$v$, $u$, $p$}
\If {$v = u$}
\State \textbf{Write} $v$
\Else
\State $w \leftarrow v$
\While {$w \not= u$}
\State \textbf{Write} $w$
\State $w \leftarrow p(w)$
\EndWhile
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}[h]
\caption{The Bellman-Kalaba algorithm}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\Statex
\Procedure {FindPathBK}{$v$, $u$, $p$}
\If {$v = u$}
\State \textbf{Write} $v$
\Else
\State $w \leftarrow v$
\While {$w \not= u$}
\State \textbf{Write} $w$
\State $w \leftarrow p(w)$
\EndWhile
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\eject
\subsection{Breaking up an algorithm}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\begin{document}
\begin{algorithm}
\caption{Part 1}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State \Comment{For some reason we need to break here!}
\algstore{bkbreak}
\end{algorithmic}
\end{algorithm}
And we need to put some additional text between\dots
\begin{algorithm}[h]
\caption{Part 2}
\begin{algorithmic}[1]
\algrestore{bkbreak}
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}[h]
\caption{Part 1}
\begin{algorithmic}[1]
\Procedure {BellmanKalaba}{$G$, $u$, $l$, $p$}
\ForAll {$v \in V(G)$}
\State $l(v) \leftarrow \infty$
\EndFor
\State $l(u) \leftarrow 0$
\Repeat
\For {$i \leftarrow 1, n$}
\State $min \leftarrow l(v_i)$
\For {$j \leftarrow 1, n$}
\If {$min > e(v_i, v_j) + l(v_j)$}
\State $min \leftarrow e(v_i, v_j) + l(v_j)$
\State \Comment{For some reason we need to break here!}
\algstore{bkbreak}
\end{algorithmic}
\end{algorithm}
And we need to put some additional text between\dots
\begin{algorithm}[h]
\caption{Part 2}
\begin{algorithmic}[1]
\algrestore{bkbreak}
\State $p(i) \leftarrow v_j$
\EndIf
\EndFor
\State $l'(i) \leftarrow min$
\EndFor
\State $changed \leftarrow l \not= l'$
\State $l \leftarrow l'$
\Until{$\neg changed$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\eject
\subsection{Using multiple layouts}
\begin{verbatim}
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{algpascal}
\begin{document}
\alglanguage{pseudocode}
\begin{algorithm}
\caption{A small pseudocode}
\begin{algorithmic}[1]
\State $s \gets 0$
\State $p \gets 0$
\For{$i \gets 1,\, 10$}
\State $s \gets s + i$
\State $p \gets p + s$
\EndFor
\end{algorithmic}
\end{algorithm}
\alglanguage{pascal}
\begin{algorithm}
\caption{The pascal version}
\begin{algorithmic}[1]
\State $s := 0$
\State $p := 0$
\For{i = 1}{10}
\Begin
\State $s := s + i$
\State $p := p + s$
\End
\end{algorithmic}
\end{algorithm}
\end{document}
\end{verbatim}
\eject
\alglanguage{pseudocode}
\begin{algorithm}
\caption{A small pseudocode}
\begin{algorithmic}[1]
\State $s \gets 0$
\State $p \gets 0$
\For{$i \gets 1,\, 10$}
\State $s \gets s + i$
\State $p \gets p + s$
\EndFor
\end{algorithmic}
\end{algorithm}
\alglanguage{pascal}
\begin{algorithm}
\caption{The pascal version}
\begin{algorithmic}[1]
\State $s := 0$
\State $p := 0$
\For{i = 1}{10}
\Begin
\State $s := s + i$
\State $p := p + s$
\End
\end{algorithmic}
\end{algorithm}
\eject
\section{Bugs}
If you have a question or find a bug you can contact me on:
\medskip
\textbf{[email protected]}
\medskip
\noindent If possible, please create a small \LaTeX{} example related to your problem.
\end{document}
\section{Learning the Best Probing Strategy}
\label{sec:algorithm}
\textsf{ada-GPM}{} is so hard that no polynomial-time algorithm with any finite approximation factor can be devised unless $P = NP$; hence an efficient algorithm that provides a good guarantee on solution quality in the general case is unlikely to be achievable. This section proposes our machine learning based framework to tackle the \textsf{ada-GPM}{} problem. Our approach considers the case where, in addition to the probed network, there is a reference network with similar characteristics that can be used to derive a good strategy.
Designing such a machine learning framework is challenging for three main reasons: \textbf{1)} what should be selected as learning samples, e.g., (incomplete) subgraphs or (gray) nodes, and how should they be generated? (Subsec.~\ref{subsec:build_data}); \textbf{2)} which features of the incomplete subnetwork are useful for learning? (Subsec.~\ref{subsec:features}); and \textbf{3)} how should labels be assigned to learning samples to indicate the long-term benefit of selecting a node, i.e., to account for future probes? (Subsec.~\ref{subsec:features}).
\begin{figure}[!ht]
\vspace{-0.1in}
\centering
\includegraphics[width=0.6\linewidth]{figures/learning_2}
\vspace{-0.1in}
\caption{Learning Framework}
\label{fig:linear_model}
\vspace{-0.15in}
\end{figure}
\textbf{Overview.} The general framework, depicted in Figure~\ref{fig:linear_model}, contains four steps: \textbf{1)} graph sampling, which generates many subnetworks from $G^r$, where each subnetwork is a sampled incomplete network with black, gray and white nodes; each candidate gray node in each sampled subnetwork creates a data point in our training data; \textbf{2)} data labeling, which labels each gray node with its long-term probing benefit; \textbf{3)} training a model to learn the probing benefit of nodes from the features; and \textbf{4)} probing the targeted network guided by the trained machine learning model.
\subsection{Building Training Dataset.}
\label{subsec:build_data}
Let $G^r = (V^r, E^r)$ be the reference network, where $V^r$ is the set of $n$ nodes and $E^r$ is the set of $m$ edges. Let $\mathcal{G} = \{G'_{1}, G'_{2}, \dots, G'_{K}\}$ be a collection of subnetworks sampled from $G^r$. The size of each sampled subgraph $G'_i$ is randomly drawn between $0.5\%$ and $10\%$ of the reference graph $G^r$ following a power-law distribution. Given a subnetwork size, the sample can be generated using different mechanisms, e.g., Breadth-First-Search, Depth-First-Search or Random Walk \cite{Maiya10}.
We use $\mathcal{G}$ to construct training data in which each data point is a feature vector representing a candidate gray node. For each sample $G'_{i}$, $1 \le i \le K$, in $\mathcal{G}$, let $V_{G'_i}^p$ be the set of gray nodes in $G'_i$; we compute all the features for each node $u \in V_{G'_i}^p$ to form a data point. As such, each sample $G'_{i}$ creates $|V_{G'_i}^p|$ training data points. To assign a label to each data point, we use our proposed \textsf{Tada-Probe}{} algorithm, its heuristic improvement, or the ILP algorithm (presented in Subsec.~\ref{subsec:tada}).
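For concreteness, the construction of the training set can be sketched in pseudocode as follows (our notation; the sampling and labeling subroutines are the ones described above):
\begin{algorithmic}[1]
\For{$i \gets 1, K$}
\State draw a target size between $0.5\%$ and $10\%$ of $|V^r|$ from a power-law distribution
\State sample $G'_i$ of that size from $G^r$, e.g., by BFS, DFS or Random Walk
\ForAll{$u \in V_{G'_i}^p$}
\State compute the feature vector of $u$ in $G'_i$ (Subsec.~\ref{subsec:features})
\State label $u$ with its long-term probing benefit via \textsf{Tada-Probe}{}, its heuristic improvement, or the ILP (Subsec.~\ref{subsec:tada})
\EndFor
\EndFor
\end{algorithmic}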
\subsection{Features for Learning.}
\label{subsec:features}
We select a rich set of intrinsic node features that depend only on the incomplete subnetwork and are embedded in our learning model. Table~\ref{tbl:node_factors} shows the complete list of node features that we use in our machine learning model.
\renewcommand{\arraystretch}{1.2}
\setlength\tabcolsep{4pt}
\begin{table}[hbt]\scriptsize
\vspace{-0.1in}
\centering
\caption{Set of features for learning}
\vspace{-0.1in}
\begin{tabular}{p{1cm}p{6.5cm}}
\addlinespace
\toprule
\textbf{Feature} & \textbf{Description} \\
\midrule
$BC$ & Betweenness centrality score \cite{Newman10} of $u$ in $G'$ \\
$CC$ & Closeness centrality score \cite{Newman10} of $u$ in $G'$ \\
$EIG$ & Eigenvector centrality score \cite{Newman10} of $u$ in $G'$\\
$PR$ & Pagerank centrality score \cite{Newman10} of $u$ in $G'$ \\
$Katz$ & Katz centrality score \cite{Newman10} of $u$ in $G'$ \\
$CLC$ & Clustering coefficient score of $u$ in $G'$ \\
$DEG$ & Degree of $u$ in $G'$ \\
$BNum$ & Number of black nodes in $G'$ \\
$GNum$ & Number of gray nodes in $G'$ \\
$BDeg$ & Total degree of black nodes in $G'$ \\
$BEdg$ & Number of edges between black nodes in $G'$ \\
\bottomrule
\end{tabular}
\label{tbl:node_factors}
\vspace{-0.2in}
\end{table}
\subsection{A $\frac{1}{r+1}$-Approximation Algorithm for \textsf{Tada-GPM}{}.}
\label{subsec:tada}
We first propose a $\frac{1}{r+1}$-approximate strategy for \textsf{Tada-GPM}{} to probe a sampled subnetwork of the reference network, where $r$, called the \textit{radius}, is the largest distance from a node in the optimal solution to the initially observed network. This algorithm assigns the labels for the training data.
An intuitive strategy, called Naive Greedy, is to select the node with the highest number of connections to unseen nodes. Unfortunately, this strategy can be shown to perform arbitrarily badly by a simple example: a fully probed node has a connection to a degree-2 node, which is a bridge to a huge component that is not reachable from the other nodes, and many other connections to higher-degree nodes. Naive Greedy will never select the degree-2 node and thus never reaches the huge component.
Our algorithm is inspired by a recent theoretical result for solving the \textit{Connected Maximum Coverage} (\textsf{Connected Max-Cover}{}) problem in \cite{Vandin11}. \textsf{Connected Max-Cover}{} assumes a set of elements $\mathcal{I}$, a graph $G = (V,E)$ and a budget $k$. Each node $v \in V$ is associated with a subset $P_v \subseteq \mathcal{I}$. The problem asks for a \textit{connected} subgraph $g$ of $G$ with at most $k$ nodes that maximizes $|\cup_{v \in g} P_v|$. The algorithm proposed in \cite{Vandin11} sequentially selects nodes with the highest ratio of newly covered elements to the length of the shortest path from the observed nodes and is proved to obtain an $\frac{e-1}{(2e-1)r}$-approximation factor.
\vspace{-0.1in}
\setlength{\textfloatsep}{3pt}
\begin{algorithm} \small
\caption{\small \textsf{Tada-Probe}{} Approximation Algorithm}
\label{alg:agpm}
\textbf{Input}: The reference network $G = (V,E)$, a sampled subnetwork $G' = (V',E')$ and a budget $k$. \\
\textbf{Output}: Augmented graph of $G'$.
\begin{algorithmic}[1]
\State Collapse all fully observed nodes to a single root node $R$
\State $i = 0$
\While{$i < k$}
\State $v_{max} \leftarrow \arg\max_{v \in V \backslash V^f, |P_{V^f}(v)| \leq k-i} \frac{|O(v) \backslash V'|}{|P_{V^f}(v)|}$
\State Probe all the nodes $v \in P_{V^f}(v_{max})$
\State Update $V', V^f, V^p, V^u$ and $E'$ accordingly
\State $i = i + |P_{V^f}(v_{max})|$
\EndWhile
\State \textbf{return} $G' = (V', E')$
\end{algorithmic}
\end{algorithm}
\vspace{-0.1in}
Each node in a network can be viewed as being associated with the set of its neighbors. We wish to select $k$ connected nodes to maximize the size of the union of the $k$ associated sets. However, unlike \textsf{Connected Max-Cover}{}, in which any connected subgraph is a feasible solution, \textsf{Tada-GPM}{} requires the $k$ selected nodes to be connected to the fully observed nodes $V^f$. Thus, we analogously impose on the returned solution the additional constraint of connectivity to a fixed set of nodes, which adds a layer of difficulty. Interestingly, we show that rooting at the observed nodes and greedily adding nodes in the same manner as \cite{Vandin11} gives a $\frac{1}{r+1}$-approximate solution. Additionally, our analysis leads to a better approximation result for \textsf{Connected Max-Cover}{} since $\frac{e-1}{(2e-1)r} < \frac{1}{2.58\cdot r} < \frac{1}{r+1}$.
Let $O(v)$ denote the set of nodes that $v$ is \textit{connected to}, i.e., $O(v) = \{u \mid (v,u) \in E\}$, and let $P_{V^f}(v)$ be the set of nodes on the shortest path from the nodes in $V^f$ to $v$. For a set of nodes $S$, we write $f(S)$ for the number of nodes newly discovered by probing $S$; hence $f(S)$ is our objective function. Our approximation algorithm, named \textit{Topology-aware Adaptive Probing} (\textsf{Tada-Probe}{}), is described in Alg.~\ref{alg:agpm}.
The algorithm starts by collapsing all fully observed nodes in $G'$ into a single root node $R$, which serves as the starting point. It iterates until the entire budget of $k$ probes has been spent (Line~3). In each iteration, it selects a node $v_{max} \in V \backslash V^f$ within distance $k-i$ having the maximum ratio of the number of unobserved nodes $|O(v)\backslash V'|$ to the length of the shortest path from the nodes in $V^f$ to $v$ (Line~4). Afterwards, all the nodes on the shortest path from $V^f$ to $v_{max}$ are probed (Line~5) and the incomplete graph is updated accordingly (Line~6).
The approximation guarantee of \textsf{Tada-Probe}{} is stated in the following theorem.
\begin{Theorem}
\textsf{Tada-Probe}{} returns a $\frac{1}{r+1}$-approximate probing strategy for the \textsf{Tada-GPM}{} problem, where $r$ is the radius of the optimal solution.
\end{Theorem}
\begin{proof}
Let $\hat S$ denote the solution returned by \textsf{Tada-Probe}{} and let $S^* = \{v^*_1,v^*_2,\dots,v^*_k\}$ be an optimal solution, which results in the maximum number of newly discovered nodes, denoted by $OPT$. We assume that both $\hat S$ and $S^*$ contain exactly $k$ nodes, since adding more nodes never gives worse solutions. We call the number of additional unobserved nodes discovered by $S'$ on top of those discovered by $S$, denoted by $\Delta_{S}(S')$, the \textit{marginal benefit} of $S'$ with respect to $S$. For a single node $v$, $\Delta_{S}(v) = \Delta_{S}(\{v\})$. In addition, the ratio of the marginal benefit to the distance from the set $S$ to a node $v$, called the \textit{benefit ratio}, is denoted by $\delta_{S}(v) = \frac{\Delta_{S}(v)}{|P_{S}(v)|}$.
Since in each iteration of \textsf{Tada-Probe}{} we add all the nodes along the shortest path connecting $V^f$ to $v_{max}$, suppose $t \leq k$ iterations are performed in total. In iteration $i \geq 1$, node $v^i_{max}$ is selected to be probed, and we denote by $S^i$ the set of nodes selected up to and including iteration $i$.
Due to the greedy selection, we have, $\forall i \geq 1, \forall \hat v \in S^i\backslash S^{i-1}, \forall v^* \in S^*$,
\vspace{-0.1in}
\begin{align}
\delta_{S^{i-1}}(v^i_{max}) \geq \delta_{S^{i-1}}(v^*)
\end{align}
\vspace{-0.2in}
\noindent Thus, we obtain,
\vspace{-0.2in}
\begin{align}
|P_{S^{i-1}}(v^i_{max})|\cdot \delta_{S^{i-1}}(v^i_{max}) \geq \sum_{j = |S^{i-1}|+1}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
\vspace{-0.25in}
\noindent or, equivalently,
\vspace{-0.2in}
\begin{align}
\label{eq:iter}
\Delta_{S^{i-1}}(v^i_{max}) \geq \sum_{j = |S^{i-1}|+1}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
\vspace{-0.15in}
Adding Eq.~\ref{eq:iter} over all iterations gives,
\vspace{-0.1in}
\begin{align}
\label{eq:iter_2}
\sum_{i = 1}^{t}\Delta_{S^{i-1}}(v^i_{max}) \geq \sum_{i = 1}^{t}\sum_{j = |S^{i-1}|+1}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
\vspace{-0.15in}
\noindent The left hand side is exactly $f(\hat S)$, the sum of the marginal benefits over all iterations. The right hand side is the sum of benefit ratios over all the nodes in the optimal solution $S^*$ with respect to the sets $S^{i-1}$, $1 \leq i \leq t$, which are subsets of $\hat S$. Thus, $\forall i,j$,
\vspace{-0.1in}
\begin{align}
\delta_{S^{i-1}}(v^*_j) \geq \delta_{\hat S}(v^*_j) = \frac{\Delta_{\hat S}(v^*_j)}{|P_{\hat S}(v^*_j)|} \geq \frac{\Delta_{\hat S}(v^*_j)}{r}
\end{align}
\vspace{-0.15in}
\noindent Then, the right hand side is,
\vspace{-0.1in}
\begin{align}
\label{eq:theo1_bound}
\sum_{i = 1}^{t}\sum_{j = |S^{i-1}|+1}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j) \geq \sum_{i = 1}^{t}\sum_{j = |S^{i-1}|+1}^{|S^{i}|} \frac{\Delta_{\hat S}(v^*_j)}{r}
\end{align}
\vspace{-0.1in}
\noindent Notice that $\Delta_{\hat S}(v^*_j)$ is the marginal benefit of node $v^*_j$ with respect to the set $\hat S$; hence the summation itself becomes,
\vspace{-0.15in}
\begin{align}
\sum_{i = 1}^{t}\sum_{j = |S^{i-1}|+1}^{|S^{i}|} \Delta_{\hat S}(v^*_j) = \sum_{j = 1}^{k} \Delta_{\hat S}(v^*_j) & = f(S^*) - f(\hat S) \nonumber \\
& = OPT - f(\hat S) \nonumber
\end{align}
\vspace{-0.3in}
\noindent Thus, Eq.~\ref{eq:iter_2} is reduced to,
\vspace{-0.1in}
\begin{align}
f(\hat S) \geq \frac{OPT - f(\hat S)}{r}
\end{align}
\vspace{-0.2in}
\noindent Rearranging the above equation, we get,
\vspace{-0.05in}
\begin{align}
f(\hat S) \geq \frac{OPT}{r+1}
\end{align}
\vspace{-0.2in}
\noindent which completes our proof.
\end{proof}
\vspace{-0.05in}
\subsubsection{Improved Heuristic.}
Despite its $\frac{1}{r+1}$-approximation guarantee, the \textsf{Tada-Probe}{} algorithm only considers the gain of the ending node of a shortest path and completely ignores the on-the-way benefit. That is, the newly observed nodes discovered when probing the intermediate nodes on the shortest paths are neglected when making decisions. Thus, we can improve \textsf{Tada-Probe}{} by counting all the newly observed nodes \textit{along the connecting paths}, which are no longer necessarily shortest paths, and applying the selection criterion of taking the path with the largest benefit ratio. Since the selected path has a benefit ratio at least as high as that obtained by considering only the ending nodes, the $\frac{1}{r+1}$-approximation factor is preserved.
Following that idea, we propose a Dijkstra-based algorithm to select the path with the maximum benefit ratio. We assign to each node $u$ a benefit ratio $\delta(u)$, a distance measure $d(u)$ and a benefit value $\Delta(u)$. Our algorithm iteratively selects the node $u$ with the highest benefit ratio and propagates the distance and benefit to its neighbors: if a neighbor node $v$ observes that by going through $u$ its benefit ratio gets higher, $v$ updates its variables to have $u$ as its direct predecessor. Our algorithm finds the path with the highest benefit ratio.
Note that extra care needs to be taken in our algorithm to avoid having \textit{loops} in the computed paths. Loops normally do not appear, since closing a loop only increases the distance by one while keeping the benefit of the path unchanged. However, in extreme cases where a path passes through a node with an exceptionally high number of connections to unobserved nodes, loops may happen. To avoid loops, we check whether updating a path would close a loop by storing the predecessor of each node and traversing back until reaching a fully observed node.
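A minimal sketch of this path selection in pseudocode follows ($R$ is the collapsed root of the fully observed nodes; $pred$, $Q$ and the loop check are our own bookkeeping):
\begin{algorithmic}[1]
\State $d(R) \gets 0$; $\Delta(R) \gets 0$; for all other nodes $u$: $d(u) \gets \infty$, $\delta(u) \gets -\infty$
\State insert $R$ into a max-priority queue $Q$ keyed by $\delta$
\While{$Q$ is not empty}
\State extract the node $u$ with the highest benefit ratio $\delta(u)$ from $Q$
\ForAll{neighbors $v$ of $u$}
\If{$\frac{\Delta(u) + |O(v) \backslash V'|}{d(u) + 1} > \delta(v)$ and setting $pred(v) \gets u$ closes no loop}
\State $d(v) \gets d(u) + 1$; $\Delta(v) \gets \Delta(u) + |O(v) \backslash V'|$
\State $\delta(v) \gets \Delta(v) / d(v)$; $pred(v) \gets u$; insert or update $v$ in $Q$
\EndIf
\EndFor
\EndWhile
\State \textbf{return} the path traced back through $pred$ from the node with the largest $\delta$
\end{algorithmic}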
\vspace{-0.09in}
\subsubsection{Optimal ILP Algorithm.}
To study the optimal solution of our \textsf{Tada-GPM}{} problem when the topology is available, we present an Integer Linear Programming (ILP) formulation. We can then use a convenient off-the-shelf solver, e.g., Cplex or Gurobi, to find an optimal solution. Unfortunately, Integer Linear Programming is not polynomially solvable and is thus extremely time-consuming.
As a preliminary step, we again collapse all fully probed nodes into a single node $r$ with connections from $r$ to the partially observed nodes. Assume that there are $n$ nodes including $r$. For each $u \in V$, we define $y_u \in \{0,1\}$ such that,
\vspace{-0.05in}
\begin{align}
y_u = \left\{ \begin{array}{ll}
1 & \text{ if node } u \text{ is observed,}\\
0 & \text{ otherwise}.
\end{array}
\right. \nonumber
\end{align}
\vspace{-0.15in}
\noindent Since at most $k$ nodes are selected adaptively, we can view the solution as a tree with at most $k$ layers. Thus, we define $x_{uj} \in \{0,1\}, \forall u \in V, j = 0..k$ such that,
\vspace{-0.05in}
\begin{align}
x_{uj} = \left \{ \begin{array}{ll}
1 & \text{ if node } u \text{ is selected at layer }j \text{ or earlier},\\
0 & \text{ otherwise}.
\end{array}
\right. \nonumber
\end{align}
\vspace{-0.15in}
\noindent The \textsf{Tada-GPM}{} problem selects at most $k$ nodes, i.e., $\sum_{u \in V} x_{uk} \leq k$, to maximize the number of newly observed nodes, i.e., $\sum_{u \in V} y_{u}$. A node is observed if at least one of its neighbors is selected, meaning $y_u \leq \sum_{v \in N(u)} x_{vk}$, where $N(u)$ denotes the set of $u$'s neighbors. Since $r$ is the initially fully observed node, we have $x_{r0} = 1$. Furthermore, $u$ can be selected at layer $j$ only if at least one of its neighbors has been probed earlier and thus, $x_{uj} \leq x_{u(j-1)} + \sum_{v \in N(u)} x_{v(j-1)}$.
Our formulation is summarized as follows,
\vspace{-0.05in}
\begin{align}
\label{eq:agpm_ip}
\max \quad & \sum_{u \in V} y_{u} - |N(r)| - 1
\end{align}
\vspace{-0.3in}
\begin{align}
\text{ s.t.}\quad \quad & x_{r0} = 1, x_{u0} = 0, \qquad u \in V, u \neq r \nonumber \\
& \sum_{u \in V} x_{uk} \leq k \nonumber \\
& y_u \leq \sum_{v \in N(u)} x_{vk}, \qquad \forall u \in V, \nonumber\\
& x_{uj} \leq x_{u(j+1)}, \qquad \forall u \in V, j = 0..k-1, \nonumber\\
& x_{uj} \leq x_{u(j-1)} + \sum_{v \in N(u)} x_{v(j-1)}, \text{ } \forall u \in V, j = 1..k, \nonumber\\
& x_{uj}, y_{u} \in \{0, 1\}, \forall u \in V, j = 0..k. \nonumber
\end{align}
\vspace{-0.2in}
From the solution of the above ILP program, we obtain the solution for our \textsf{Tada-GPM}{} instance by simply selecting the nodes $u$ with $x_{uk} = 1$. Note that the layering scheme in our formulation guarantees both the connectivity of the returned solution and the inclusion of the root node $r$; thus, the returned solution is feasible and optimal.
\begin{Theorem}
The solution of the ILP program in Eq.~\ref{eq:agpm_ip} infers the optimal solution of our \textsf{Tada-GPM}{}.
\end{Theorem}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{figures/compare_alg}
\vspace{-0.1in}
\caption{Performance of different algorithms}
\label{fig:compare_alg}
\end{figure}
\vspace{-0.09in}
\subsubsection{Empirical Evaluation.}
Here, we compare the probing performance, in terms of the number of newly probed nodes, delivered by the different algorithms on a Facebook ego network\footnote{http://snap.stanford.edu/data/egonets-Facebook.html} with 347 nodes and 5029 edges. The results are presented in Fig.~\ref{fig:compare_alg}. The figure shows that our heuristic improvement very often matches the optimal performance of the ILP algorithm, while \textsf{Tada-Probe}{} is just below the former two methods. The Naive Greedy algorithm performs badly, as it has no guarantee on solution quality.
\subsection{Training Models.}
We consider two classes of well-studied machine learning models. First, a linear regression model is applied to learn a linear combination of the features characterized by coefficients. These coefficients capture a linear dependence of the labels on the features. The output of the training phase of the linear regression model is a function $f_{Lin}(.)$ which is used to estimate the gain $w_{u}^o$ of probing a node $u$ in subgraph $G'$. Note that we learn $f_{Lin}(.)$ from the sampled graphs $G'_i$ of the reference graph $G^r$ and use $f_{Lin}(.)$ to probe the incomplete $G'$.
Second, we apply logistic regression to our problem as follows. Let $w_u^o$, $w_v^o$ be the gains of probing nodes $u$ and $v$, respectively. Given a pair of nodes $<u,v>$ in $V_{G'_i}^p$, our logistic model $f_{Log}(.)$ learns to predict whether $w_u^o$ is larger than $w_v^o$. Thus, for each $G'_{i}$ in $\mathcal{G}$, we compute the node features of each node $u \in V_{G'_i}^p$. We generate $\binom{|V_{G'_{i}}^p|}{2}$ pairs of nodes for each subgraph $G'_{i}$ and then concatenate the features of the two nodes in a pair to form a single data point. Each data point $<u,v>$ is labeled with a binary value ($1$ or $0$):
\vspace{-0.05in}
\begin{align}
l = \left \{ \begin{array}{ll}
1 & \mbox{if $w_u^o \geq w_v^o$};\\
0 & \mbox{if $w_u^o < w_v^o$}.\end{array} \right.
\end{align}
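In this pairwise setting (a sketch of one standard parameterization; the concrete choice is an implementation detail), the model estimates
\begin{align}
P(l = 1 \mid u, v) = \sigma\left(\theta^{T}[x_u; x_v]\right), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}, \nonumber
\end{align}
where $x_u$ and $x_v$ are the feature vectors of $u$ and $v$ (Table~\ref{tbl:node_factors}) and $[x_u; x_v]$ denotes their concatenation. At probing time, gray nodes can then be ranked by these pairwise comparisons under $f_{Log}(.)$.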
\section{Hardness and Inapproximability}
\label{sec:hard}
We provide the hardness results and proofs of both the \textsf{ada-GPM}{} and \textsf{batch-GPM}{} problems under various assumptions based on real-world scenarios. We prove:
\begin{itemize}
\item With only local information of the partially observed subnetwork, called the local view (as in physical networks or the Internet), the problems are not only NP-hard but also \textit{inapproximable} within any finite factor.
\item If, in addition to the local subnetwork information, every observed node also comes with its global degree in the underlying network (as in online social networks), the studied problems are still \textit{inapproximable} within any finite factor.
\item If the underlying complete network topology is revealed (the network completion setting), we show $(1-1/e)$-inapproximability for both problems.
\end{itemize}
The weaker NP-hardness of the two problems of interest under any of the above conditions is easy to prove by simple reductions from the Maximum Coverage problem \cite{Nemhauser78}. Our stronger inapproximability results, which also imply NP-hardness, are shown in the following.
\subsection{Inapproximability under Local View}
\begin{Theorem}
\label{theo:hardness}
The \textsf{ada-GPM}{} and \textsf{batch-GPM}{} problems, under the assumption that only a local view of the partially observed network is available, are inapproximable within any finite factor. That is, no polynomial time algorithm achieves a finite approximation factor for these problems.
\end{Theorem}
\begin{proof}
To prove Theorem~\ref{theo:hardness}, we construct classes of problem instances for which no approximation algorithm with a finite factor exists.
\begin{figure}[!ht]
\vspace{-0.25in}
\centering
\subfloat[Without global degrees]{
\includegraphics[width=0.4\linewidth]{figures/hard_nd.pdf}
\label{fig:hard_nodegree}
}
\subfloat[With global degrees]{
\includegraphics[width=0.4\linewidth]{figures/hard_d.pdf}
\label{fig:hard_degree}
}
\vspace{-0.05in}
\caption{Hardness illustration on global node degrees.}
\vspace{-0.1in}
\end{figure}
We construct a class of instances of the probing problems as illustrated in Figure~\ref{fig:hard_nodegree}. Each instance in this class has a single fully probed node $b_1$ (black) and $n$ observed nodes (gray), each of which has an edge from $b_1$; one of the observed nodes, namely $g^*$, which varies between instances, has $m$ connections to $m$ unknown nodes (white). Thus, the partially observed graph contains $n+1$ nodes, one fully probed and $n$ observed (and hence selectable), while the underlying graph has in total $n+m+1$ nodes. Each instance of the family has a different $g^*$ to which the $m$ unknown nodes are connected. We now prove that on this class, no algorithm can guarantee a finite-factor approximate solution for the two problems.
First, we observe that for any $k \geq 1$, the optimal solution, which probes the node with connections to the unknown nodes, has the optimal value of $m$ newly explored nodes, denoted by $OPT = m$. We examine the two possible kinds of algorithms, i.e., deterministic and randomized.
\begin{itemize}
\item Consider a deterministic algorithm $\mathcal{A}$. Since $\mathcal{A}$ is unaware of the connections from gray to unknown nodes, given a budget $1 \leq k \ll n$, the list or sequence of nodes that $\mathcal{A}$ selects is exactly the same across the instances in the class. Thus, there are instances for which $g^*$ is not in the fixed list/sequence of nodes selected by $\mathcal{A}$. In such cases, the number of unknown nodes explored by $\mathcal{A}$ is 0. Compared to $OPT = m$, $\mathcal{A}$ is therefore not a finite factor approximation algorithm.
\item Consider a randomized algorithm $\mathcal{B}$. Similarly to the deterministic algorithm $\mathcal{A}$, $\mathcal{B}$ does not know the connections from the partially observed nodes to the white ones. Thus, $\mathcal{B}$ essentially selects $k$ nodes at random out of the $n$ observed nodes. However, this randomized scheme is not guaranteed to select $g^*$ as one of its nodes, and thus, in many cases, the number of unknown nodes discovered is 0, which rules $\mathcal{B}$ out as a finite factor approximation algorithm. On average, $\mathcal{B}$ has a $\frac{k}{n}$ chance of selecting $g^*$, which leads to the optimal value $OPT = m$. Hence, the expected objective value is $\frac{km}{n}$ and the ratio to the optimal value is $\frac{k}{n}$. Since $k \ll n$, the ratio is $O(\frac{1}{n})$, which is not bounded by any finite factor even in the average case.
\end{itemize}
In both the deterministic and randomized cases, there is no finite-factor approximation algorithm for \textsf{ada-GPM}{} or \textsf{batch-GPM}{}.
\end{proof}
\subsection{Inapproximability under Node Degree Distribution}
\begin{Theorem}
\label{theo:hardness_d}
\textsf{ada-GPM}{} and \textsf{batch-GPM}{} problems, under the assumption that the node degree distribution is also given, are inapproximable within any finite factor.
\end{Theorem}
\begin{proof}
To prove Theorem~\ref{theo:hardness_d}, we again construct classes of problem instances on which no approximation algorithm can achieve a finite factor.
We assume that, besides the observed subnetwork, the global degree of every probed or observed node $v$ is also provided, as is the case in social networks, e.g., Facebook and LinkedIn. We prove that even with global node degrees, a finite-factor approximation algorithm does not exist. The proof is similar to the first scenario but uses a different class of problem instances, as illustrated in Figure~\ref{fig:hard_degree}. Here, we add another layer of unknown nodes, indexed from $w^1_1$ to $w^1_n$, after the partially observed node layer. The bottom unknown layer stays the same, with nodes indexed from $w^2_1$ to $w^2_m$. The optimal objective value now is $m + k - 1$, while for a deterministic algorithm $\mathcal{A}$ the worst-case value is $k$, which makes the approximation ratio $\frac{k}{m+k-1} = O(\frac{1}{m})$. For a randomized algorithm $\mathcal{B}$, the same ratio is obtained in both the worst and average cases.
Thus, in both scenarios, no approximation algorithm with a finite factor can be derived. The proof is complete.
\end{proof}
\subsection{Inapproximability under Known Topology}
\begin{Theorem}
\textsf{ada-GPM}{} and \textsf{batch-GPM}{} problems, under the assumption that the complete underlying network topology is available, are inapproximable within a factor better than $(1-1/e)$.
\end{Theorem}
\begin{proof}
The proof relies on the well-known $(1-1/e)$-inapproximability result for \textit{Maximum Coverage} (\textsf{Max-Cover}{}) \cite{Feige98}. \textsf{Max-Cover}{} is a special case of both \textsf{ada-GPM}{} and \textsf{batch-GPM}{} in which the gray nodes represent subsets and a single layer of white nodes represents the elements of the \textsf{Max-Cover}{} instance. Thus, the theorem follows.
\end{proof}
\section{Approximation Algorithms}
Since our problems are inapproximable within any finite factor without the network topology, our hope for an efficient algorithm with a strong quality guarantee rests on the case where the topology is available. In particular, we show that a standard greedy algorithm achieves a $(1-1/e)$-approximation guarantee for \textsf{batch-GPM}{}. For \textsf{ada-GPM}{}, we propose a $\frac{1}{r+1}$-approximation algorithm where $r$ is the radius of the optimal solution.
\subsection{Greedy Algorithm for \textsf{batch-GPM}{}}
The standard greedy algorithm \cite{Nemhauser78}, which iteratively selects $k$ out of the $|V^p|$ candidate nodes (assuming $k \leq |V^p|$), each time taking the node covering the largest number of not-yet-covered white nodes, achieves the $(1-1/e)$ approximation factor. This mirrors the greedy algorithm for the \textsf{Max-Cover}{} problem \cite{Nemhauser78}. Both problems belong to the class of maximization of \textit{monotone submodular} functions \cite{Nemhauser78}; hence the following theorem (see the sketch after the theorem).
\begin{Theorem}
The greedy algorithm returns a $(1-1/e)$-approximate solution for \textsf{batch-GPM}{}.
\end{Theorem}
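For concreteness, the selection rule can be written down in a few lines. The following is a minimal Python sketch of the greedy selection in the known-topology setting, where the neighborhood of each candidate node is available as a set; the function and variable names are ours, for illustration only.
\begin{verbatim}
def greedy_batch_gpm(gray_nodes, adj, observed, k):
    """Select k gray nodes maximizing the number of newly observed nodes.

    adj maps each node to its set of neighbors in the underlying graph G;
    observed is the current node set V' of the partially observed graph.
    """
    covered = set(observed)
    selected = []
    for _ in range(k):
        # Pick the node whose probe reveals the most uncovered nodes.
        best = max((v for v in gray_nodes if v not in selected),
                   key=lambda v: len(adj[v] - covered), default=None)
        if best is None or not adj[best] - covered:
            break                       # no remaining marginal gain
        selected.append(best)
        covered |= adj[best]
    return selected
\end{verbatim}
Each iteration costs $O(|V^p|)$ set operations; since the objective is monotone submodular, a lazy-evaluation priority queue can be used to speed this up without affecting the guarantee.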
\subsection{A $\frac{1}{r+1}$-Approximation Algorithm for \textsf{ada-GPM}{}}
Our algorithm is inspired by a recent theoretical result for solving the \textit{Connected Maximum Coverage} (\textsf{Connected Max-Cover}{}) problem \cite{Vandin11}. \textsf{Connected Max-Cover}{} assumes a set of elements $\mathcal{I}$, a graph $G = (V,E)$ and a budget $k$. Each node $v \in V$ is associated with a subset $P_v \subseteq \mathcal{I}$. The problem asks for a \textit{connected} subgraph $g$ of $G$ with at most $k$ nodes that maximizes $|\cup_{v \in g} P_v|$. The algorithm proposed in \cite{Vandin11} sequentially selects the node with the highest ratio of newly covered elements to the length of the shortest path from the observed nodes, and is proved to obtain an $\frac{e-1}{(2e-1)r}$-approximation factor. In contrast, with a tighter analysis, we achieve a $\frac{1}{r+1}$-approximation factor, which is significantly greater than $\frac{e-1}{(2e-1)r} \leq \frac{1}{2.58\cdot r}$.
In our \textsf{ada-GPM}{} problem with an underlying topology, each node can be viewed as being associated with the set of nodes connected to it. We aim to select $k$ connected nodes that maximize the union of the $k$ associated sets. However, unlike \textsf{Connected Max-Cover}{}, in which any connected subgraph is a feasible solution, \textsf{ada-GPM}{} requires the $k$ selected nodes to be connected to the observed nodes. Thus, we effectively impose an additional constraint of connectivity to a fixed set of nodes on the returned solution, which adds a layer of difficulty. Interestingly, we show that rooting at the observed nodes and greedily adding nodes in the same manner as \cite{Vandin11} gives a $\frac{1}{r+1}$-approximate solution.
Let $O(v)$ denote the set of nodes that $v$ is \textit{connected to}, i.e., $O(v) = \{u \mid (v,u) \in E \}$, and $P_{V^f}(v)$ the set of nodes on the shortest path from nodes in $V^f$ to $v$. For a set of nodes $S$, let $f(S)$ be the number of newly discovered nodes after probing $S$; $f(S)$ is our objective function. The details of our approximation algorithm, named \textit{Topology-aware Adaptive Probing} (\textsf{Tada-Probe}{}), are described in Alg.~\ref{alg:agpm}.
\setlength{\textfloatsep}{3pt}
\begin{algorithm}
\caption{\textsf{Tada-Probe}{} Approximation Algorithm}
\label{alg:agpm}
\KwIn{The underlying graph $G = (V,E)$, observed subgraph $G' = (V',E')$ and a budget $k$.}
\KwOut{Augmented graph of $G'$.}
Collapse all fully observed nodes to a single node $r$\\
$i = 0$\\
\While{$i < k$}{
$v_{max} \leftarrow \operatorname*{arg\,max}_{v \in V \backslash V^f, |P_{V^f}(v)| \leq k-i} \frac{|O(v) \backslash V'|}{|P_{V^f}(v)|}$ \\
Probe all the nodes $v \in P_{V^f}(v_{max})$ \\ \tcp{$V^f \leftarrow V^f \cup P_{V^f}(v_{max})$}
Update $V'$ and $E'$ accordingly \\
$i = i + |P_{V^f}(v_{max})|$ \\
}
\Return $G' = (V', E')$\\
\end{algorithm}
The algorithm starts by collapsing all fully observed nodes in $G'$ into a single root node $r$, which serves as the starting point. It iterates until all $k$ allotted nodes have been selected, probed, and added to the observed graph $G'$. At each iteration, it selects a node $v_{max} \in V \backslash V^f$ within distance $k-i$ having the maximum ratio of the number of unobserved nodes $|O(v)\backslash V'|$ to the length of the shortest path from nodes in $V^f$ to $v$ (Line 3). Afterwards, all the nodes on the shortest path from $V^f$ to $v_{max}$ are probed (Line 4) and the sets of observed nodes $V'$ and edges $E'$ are updated accordingly (Line 5): all nodes adjacent to some $v \in P_{V^f}(v_{max})$ are added to $V'$ together with their corresponding edges. A sketch of one iteration follows.
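The following Python sketch illustrates one iteration of \textsf{Tada-Probe}{} (Lines 3--5), assuming shortest paths from the set of fully probed nodes are computed by BFS over the known topology given as adjacency sets; all names and data structures are ours, and this is not the reference implementation.
\begin{verbatim}
from collections import deque

def shortest_paths_from(V_f, adj):
    """BFS from the set of fully probed nodes over the known topology."""
    dist, pred = {u: 0 for u in V_f}, {}
    queue = deque(V_f)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v], pred[v] = dist[u] + 1, u
                queue.append(v)
    return dist, pred

def tada_probe_step(V_f, observed, adj, budget_left):
    dist, pred = shortest_paths_from(V_f, adj)
    best, best_ratio = None, -1.0
    for v, d in dist.items():
        if v in V_f or d > budget_left:
            continue
        ratio = len(adj.get(v, set()) - observed) / d  # |O(v)\V'|/|P(v)|
        if ratio > best_ratio:
            best, best_ratio = v, ratio
    path = []                 # recover P_{V^f}(v_max), root side first
    while best is not None and best not in V_f:
        path.append(best)
        best = pred.get(best)
    return list(reversed(path))
\end{verbatim}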
We now show the approximation guarantee of the \textsf{Tada-Probe}{} algorithm, stated in the following theorem.
\begin{Theorem}
\textsf{Tada-Probe}{} returns a $\frac{1}{r+1}$-approximate solution for the \textsf{ada-GPM}{} problem, where $r$ is the radius of the optimal solution.
\end{Theorem}
\begin{proof}
Let $\hat S$ denote the solution returned by \textsf{Tada-Probe}{} and $S^* = \{v^*_1,v^*_2,\dots,v^*_k\}$ an optimal solution, which results in the maximum number of newly discovered nodes, denoted by $OPT$. We assume that both $\hat S$ and $S^*$ contain exactly $k$ nodes, since adding more nodes never gives a worse solution. We call $\Delta_{S}(S')$, the number of additional unobserved nodes discovered by $S'$, the \textit{marginal benefit} of $S'$ with respect to $S$; for a single node $v$, $\Delta_{S}(v) = \Delta_{S}(\{v\})$. In addition, the ratio of the marginal benefit to the distance from the set $S$ to a node $v$, called the \textit{benefit ratio}, is denoted by $\delta_{S}(v) = \frac{\Delta_{S}(v)}{|P_{S}(v)|}$.
Since in each iteration of the greedy algorithm we add all the nodes along the shortest path connecting the observed nodes to $v_{max}$, we assume that $t \leq k$ iterations are performed. In iteration $i \geq 1$, node $v^i_{max}$ is selected to probe, and we denote by $S^i$ the set of nodes that have been selected and fully probed up to and including that iteration.
Due to the greedy selection, we have, $\forall i \geq 1, \forall v^* \in S^*$,
\begin{align}
\delta_{S^{i-1}}(v^i_{max}) \geq \delta_{S^{i-1}}(v^*)
\end{align}
Thus, we obtain,
\begin{align}
|P_{S^{i-1}}(v^i_{max})|\cdot \delta_{S^{i-1}}(v^i_{max}) \geq \sum_{j = |S^{i-1}|}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
or, equivalently,
\begin{align}
\label{eq:iter}
\Delta_{S^{i-1}}(v^i_{max}) \geq \sum_{j = |S^{i-1}|}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
Adding Eq.~\ref{eq:iter} over all iterations gives,
\begin{align}
\label{eq:iter_2}
\sum_{i = 1}^{t}\Delta_{S^{i-1}}(v^i_{max}) \geq \sum_{i = 1}^{t}\sum_{j = |S^{i-1}|}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j)
\end{align}
The left-hand side is exactly $f(\hat S)$, the sum of the marginal benefits over all iterations. The right-hand side is the sum of benefit ratios over all the nodes in the optimal solution $S^*$ with respect to the sets $S^{i-1}$, $1 \leq i \leq t$, which are subsets of $\hat S$. Thus, $\forall i,j$,
\begin{align}
\delta_{S^{i-1}}(v^*_j) \geq \delta_{\hat S}(v^*_j) = \frac{\Delta_{\hat S}(v^*_j)}{|P_{\hat S}(v^*_j)|} \geq \frac{\Delta_{\hat S}(v^*_j)}{r}
\end{align}
Then, the right hand side is,
\begin{align}
\label{eq:theo1_bound}
\sum_{i = 1}^{t}\sum_{j = |S^{i-1}|}^{|S^{i}|} \delta_{S^{i-1}}(v^*_j) \geq \sum_{i = 1}^{t}\sum_{j = |S^{i-1}|}^{|S^{i}|} \frac{\Delta_{\hat S}(v^*_j)}{r}
\end{align}
Notice that $\Delta_{\hat S}(v^*_j)$ is the marginal benefit of node $v^*_j$ with respect to the set $\hat S$; hence, by submodularity of $f$, the summation satisfies,
\begin{align}
\sum_{i = 1}^{t}\sum_{j = |S^{i-1}|}^{|S^{i}|} \Delta_{\hat S}(v^*_j) = \sum_{j = 1}^{k} \Delta_{\hat S}(v^*_j) & \geq f(S^*) - f(\hat S) \nonumber \\
& = OPT - f(\hat S) \nonumber
\end{align}
Thus, Eq.~\ref{eq:iter_2} is reduced to,
\begin{align}
f(\hat S) \geq \frac{OPT - f(\hat S)}{r}
\end{align}
Rearranging the above inequality, we get,
\begin{align}
f(\hat S) \geq \frac{OPT}{r+1}
\end{align}
which completes our proof.
\end{proof}
\subsubsection{Heuristic Improvement}
Despite its $\frac{1}{r+1}$-approximation guarantee, the \textsf{Tada-Probe}{} algorithm only considers the benefit of the ending node of each shortest path and completely ignores the on-the-way benefit. That is, the newly observed nodes discovered when probing the intermediate nodes on the shortest paths are neglected when making decisions. Thus, we can improve \textsf{Tada-Probe}{} by counting all the newly observed nodes \textit{along the connecting paths}, which are not necessarily shortest paths, and applying the selection criterion of taking the path with the largest benefit ratio. Since the selected path has a benefit ratio at least as high as that obtained by considering only the ending node, the $\frac{1}{r+1}$-approximation factor of \textsf{Tada-Probe}{} is preserved.
Following that idea, we propose a Dijkstra-based algorithm to select the path with the maximum benefit ratio. We assign to each node $u$ a benefit ratio $\delta(u)$, a distance measure $d(u)$ and a benefit value $\Delta(u)$. Our algorithm iteratively selects the node $u$ with the highest benefit ratio and propagates the distance and benefit to its neighbors: if a neighbor $v$ observes that going through $u$ increases $v$'s benefit ratio, then $v$ updates its variables to take $u$ as its predecessor. At the end, the algorithm has found the path with the highest benefit ratio to each node $v \in V\backslash V'$.
Note that extra care needs to be taken in our algorithm to avoid \textit{loops} in the computed paths. Loops normally do not appear, since closing a loop only increases the distance by one while keeping the benefit of the path unchanged. However, in extreme cases, a path may pass through a node with an exceptionally high number of connections to unobserved nodes, which pays off the additional distance needed to reach it and come back to a predecessor; then a loop can occur. To avoid loops, we check whether updating a path would close a loop by storing the predecessor of each node and traversing back to the root node $r$, as in the sketch below.
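A sketch of this Dijkstra-based search, assuming adjacency sets over the known topology, is given below; the loop check follows the predecessor chain back towards the root, and the gains of path nodes are simply summed, ignoring possible overlaps, as a heuristic. All names are ours.
\begin{verbatim}
import heapq

def best_benefit_path(root, adj, observed, budget):
    """Label-correcting search for paths with the largest benefit ratio."""
    gain = lambda u: len(adj.get(u, set()) - observed)
    delta = {root: 0.0}            # best known benefit ratio per node
    dist, benefit, pred = {root: 0}, {root: 0}, {root: None}
    heap = [(0.0, root)]           # max-heap emulated with negated ratios
    best_node, best_ratio = None, -1.0
    while heap:
        neg_ratio, u = heapq.heappop(heap)
        if -neg_ratio < delta[u]:
            continue               # stale heap entry
        if u != root and delta[u] > best_ratio:
            best_node, best_ratio = u, delta[u]
        if dist[u] + 1 > budget:
            continue
        for v in adj.get(u, ()):
            new_benefit = benefit[u] + gain(v)   # on-the-way benefit
            new_ratio = new_benefit / (dist[u] + 1)
            if new_ratio <= delta.get(v, -1.0):
                continue
            w = u                  # loop check: walk back to the root
            while w is not None and w != v:
                w = pred[w]
            if w == v:
                continue           # taking u as predecessor closes a loop
            delta[v], dist[v] = new_ratio, dist[u] + 1
            benefit[v], pred[v] = new_benefit, u
            heapq.heappush(heap, (-new_ratio, v))
    return best_node, pred
\end{verbatim}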
\subsection{Optimal ILP Algorithm}
To study the optimal solution of our \textsf{ada-GPM}{} problem when the topology is available, we present an Integer Linear Programming (ILP) formulation, so that a convenient off-the-shelf solver, e.g., CPLEX or Gurobi, can be used to find an optimal solution. The drawback is that Integer Programming is not polynomially solvable, and solving the program can thus be extremely time-consuming.
As a preprocessing step, we again collapse all fully probed nodes into a single node $r$ with connections from $r$ to the partially observed nodes. Assuming there are $n$ nodes including $r$, we define for each $u \in V$ a variable $y_u \in \{0,1\}$ such that,
\begin{align}
y_u = \left\{ \begin{array}{ll}
1 & \text{ if node } u \text{ is observed,}\\
0 & \text{ otherwise}.
\end{array}
\right. \nonumber
\end{align}
Since at most $k$ nodes are selected adaptively, we can view the solution as a tree with at most $k$ layers. Thus, we define $x_{uj} \in \{0,1\}, \forall u \in V, j = 1..k$ such that,
\begin{align}
x_{uj} = \left \{ \begin{array}{ll}
1 & \text{ if node } u \text{ is selected at layer }j \text{ or earlier},\\
0 & \text{ otherwise}.
\end{array}
\right. \nonumber
\end{align}
The \textsf{ada-GPM}{} problem is to select at most $k$ nodes, i.e., $\sum_{u \in V} x_{uk} \leq k$, so as to maximize the number of newly observed nodes, i.e., $\sum_{u \in V} y_{u}$. A node is observed if at least one of its neighbors is selected, i.e., $y_u \leq \sum_{v \in N(u)} x_{vk}$, where $N(u)$ denotes the set of $u$'s neighbors. Since $r$ is the initially fully observed node, we have $x_{r0} = 1$. Furthermore, $u$ can be selected at layer $j$ only if at least one of its neighbors has been probed earlier, and thus $x_{uj} \leq \sum_{v \in N(u)} x_{v(j-1)}$.
Our formulation is summarized as follows,
\begin{align}
\label{eq:agpm_ip}
\max \quad & \sum_{u \in V} y_{u} - |N(r)| - 1\\
\text{ s.t.}\quad \quad & x_{r0} = 1 \\
& x_{u0} = 0, \qquad u \in V, u \neq r\\
& \sum_{u \in V} x_{uk} \leq k \\
& y_u \leq \sum_{v \in N(u)} x_{vk}, \qquad \forall u \in V,\\
& x_{uj} \leq x_{u(j+1)}, \qquad \forall u \in V, j = 0..k-1,\\
& x_{uj} \leq x_{u(j-1)} + \sum_{v \in N(u)} x_{v(j-1)}, \text{ } \forall u \in V, j = 1..k,\\
\label{eq:agpm_ip_l}
& x_{uj}, y_{u} \in \{0, 1\}, \forall u \in V, j = 0..k.
\end{align}
From the solution of the above ILP program, we obtain the solution of our \textsf{ada-GPM}{} instance by simply selecting the nodes $u$ with $x_{uk} = 1$. Note that the layering scheme in our formulation guarantees both that the returned solution is connected and that it contains the root node $r$; thus, the returned solution is feasible and optimal. A sketch of this formulation in an off-the-shelf modeling layer is given after the theorem.
\begin{Theorem}
The solution of the ILP program in Eq.~\ref{eq:agpm_ip}-\ref{eq:agpm_ip_l} yields the optimal solution of our \textsf{ada-GPM}{} problem.
\end{Theorem}
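For illustration, the program in Eq.~\ref{eq:agpm_ip}-\ref{eq:agpm_ip_l} can be transcribed almost verbatim into the open-source PuLP modeling library (CPLEX or Gurobi can be plugged in as back-ends). The graph encoding, a dictionary of neighbor sets, and the function name are ours; the constant term $|N(r)|+1$ of the objective is dropped since it does not affect the maximizer.
\begin{verbatim}
import pulp

def solve_ada_gpm_ilp(V, N, r, k):
    """V: node list, N: dict u -> set of neighbors, r: collapsed root."""
    prob = pulp.LpProblem("ada_GPM", pulp.LpMaximize)
    x = {(u, j): pulp.LpVariable(f"x_{u}_{j}", cat="Binary")
         for u in V for j in range(k + 1)}
    y = {u: pulp.LpVariable(f"y_{u}", cat="Binary") for u in V}

    prob += pulp.lpSum(y[u] for u in V)         # objective (constant dropped)
    prob += x[(r, 0)] == 1                      # root is probed initially
    for u in V:
        if u != r:
            prob += x[(u, 0)] == 0
        prob += y[u] <= pulp.lpSum(x[(v, k)] for v in N[u])
        for j in range(k):
            prob += x[(u, j)] <= x[(u, j + 1)]  # layers are nested
        for j in range(1, k + 1):
            prob += (x[(u, j)] <= x[(u, j - 1)]
                     + pulp.lpSum(x[(v, j - 1)] for v in N[u]))
    prob += pulp.lpSum(x[(u, k)] for u in V) <= k   # budget constraint
    prob.solve()
    return [u for u in V if u != r and x[(u, k)].value() > 0.5]
\end{verbatim}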
\section{Introduction}
For many real-world networks, the complete topology is almost intractable to acquire; thus, most decisions have to be made based on incomplete networks. The impossibility of obtaining the complete network may stem from various sources: 1) the extreme size of the networks, e.g., Facebook and Twitter with billions of users and connections, or the Internet spanning the whole planet; 2) privacy or security concerns, e.g., in online social networks we may not be able to see users' connections due to their privacy settings, which protect them from unwelcome guests; 3) networks being hidden or undercover, e.g., terrorist networks, in which only a small fraction is exposed and the rest remains anonymous.
To support decision making processes based on local view and expand our observations of the networks, we investigate a network exploring problem, called \textit{Graph Probing Maximization (\textsf{GPM}{})}. In \textsf{GPM}{}, an agent is provided with an incomplete network $G'$ which is a subnetwork of an underlying real-world network $G \supsetneq G'$ and wants to explore $G$ swiftly through node probing. Once a node $u \in G'$ is probed, all neighbors $v \in G$ of $u$ will be observed and can be probed in the following steps. Given a budget $k$, the agent wishes to identify $k$ nodes from $G'$ to \emph{probe} to maximize the number of newly observed nodes.
Real-world applications of \textsf{GPM}{} include exploring terrorist networks to help plan the dismantling of the network. Given an incomplete adversarial network, each suspect node can be ``probed'', e.g., by obtaining court orders for communication records, to reveal further suspects. In cybersecurity on online social networks (OSNs), intelligent attackers can gather users' information by sending friend requests to users \cite{Ng16}. Understanding the attackers' strategies is critical in coming up with effective hardening policies. Another example is viral marketing: from a partial observation of the network, a good probing strategy for new customers can lead to the exploration of potential product sales.
While several heuristics have been proposed for \textsf{GPM}{} \cite{avr14y,Soundarajan15,Hanneke09,Masrour15}, they share two main drawbacks. First, they all consider selecting nodes in one batch. We argue that this strategy is ineffective, as the information gained from probing nodes is not used in making the subsequent selections, in line with recent findings \cite{Golovin10,Seeman13}. Secondly, they are metric-based methods which use a single measure to make decisions. However, real-world networks have diverse characteristics, such as different power-law degree distributions and a wide range of clustering coefficients. Thus, the proposed heuristics may be effective for particular classes of networks, but perform poorly on others.
In this paper, we first formulate the Graph Probing Maximization problem and theoretically prove the strong inapproximability result that the optimal strategy based on a local incomplete network cannot be approximated within any \textit{finite} factor. That is, no polynomial-time algorithm can approximate the optimal strategy within any finite multiplicative error. On the bright side, we design a novel machine learning framework that takes local information, e.g., node-centric indicators, as input and predicts the best sequence of $k$ node probes to maximize the observed augmented network. We consider a common scenario in which a reference network with similar characteristics is available, e.g., past exposed terrorist networks when investigating an emerging one. Our framework learns a probing strategy by simulating many subnetwork scenarios from the reference graph and learning the best strategy from those simulated samples.
The main difficulty in our machine learning framework is finding the best probing strategy in the subnetwork scenarios sampled from the reference network, i.e., characterizing the long-term potential gain (over future probes) of a node available for probing. We term this subproblem \textit{Topology-aware \textsf{GPM}{}} (\textsf{Tada-GPM}{}), since both the subnetwork scenario and the underlying reference network are available. We propose a $\frac{1}{r+1}$-approximation algorithm for this problem, where $r$ is the \textit{radius} of the optimal solution; here, the radius of a solution is defined as the largest distance from a selected node to the subnetwork. Our algorithm looks far ahead at the future gain of selecting a node and thus provides a nontrivial approximation guarantee. We further propose an effective heuristic improvement and study the optimal strategy via Integer Linear Programming.
Compared with metric-based methods, whose performance is inconsistent, our learning framework can easily adapt to networks with different traits. As a result, our experiments on real-world networks from diverse disciplines confirm the superiority of our methods, which consistently outperform the state-of-the-art methods by around 20\%.
Our contributions can be summarized as follows:
\begin{itemize}
\item We formulate the Graph Probing Maximization (\textsf{GPM}{}) problem and show that none of the existing metric-based methods consistently works well across different networks. Moreover, we rigorously prove a strong hardness result for the \textsf{GPM}{} problem: it is inapproximable within any finite factor.
\item We propose a novel machine learning framework which looks into the future potential benefit of probing a node to make the best decision.
\item We experimentally show that our new approach significantly and consistently improves the performance of the probing task compared to the current state-of-the-art methods on a large set of diverse real-world networks.
\end{itemize}
\textbf{Related works.}
Our work is connected to the early network crawling literature \cite{Cho98,Chakrabarti02,Ester04,Chakrabarti99}. Website crawlers collect public data and aim at finding the least-effort strategy to gain as much information as possible. The common method to gather relevant and usable information is to follow hyperlinks and expand the domain.
Later, with the creation and explosive growth of OSNs, attention largely shifted to harvesting public information on these networks \cite{Chau07,Mislove07,Wwak10}. Chau et al. \cite{Chau07} were able to crawl approximately 11 million auction users on eBay with a parallel crawler. The record of successful crawling belongs to Kwak et al. \cite{Wwak10}, who gathered 41.7 million public user profiles, 1.47 billion social relations and 106 million tweets. However, these crawlers are limited to public user information due to the privacy settings on OSNs that protect private data from unwelcome users.
More recently, a new crawling technique uses socialbots \cite{Boshmaf12,Elishar12,Elyashar13,Fire14,Paradise15,Ng16} to befriend users and thereby gain access to their private information. Boshmaf et al. \cite{Boshmaf12} proposed building a large-scale socialbot system to infiltrate Facebook. The outcomes of their work are many-fold: they were able to infiltrate Facebook with a success rate of 80\%, they showed the possibility of privacy breaches, and they found that security defense systems are not effective enough to prevent or stop socialbots. The works in \cite{Elishar12,Elyashar13} focus on targeting a specific organization on online social networks by using socialbots to befriend the organization's employees. As a result, they succeeded in discovering hidden nodes/edges and achieved acceptance rates of 50\% to 70\%.
Graph sampling and its applications have been widely studied in the literature. For instance, Kim and Leskovec \cite{Kim11} study the problem of inferring the unobserved parts of a network. They address the network completion problem: given a network with missing nodes and edges, how can one complete the missing part? Maiya and Berger-Wolf \cite{Maiya10} propose a sampling method that can effectively be used to infer and approximate community affiliation in large networks.
Most similar to our work, Soundarajan et al. propose the MaxOutProbe probing method \cite{Soundarajan15}. MaxOutProbe estimates node degrees to decide which nodes should be probed in a partially observed network. This model shows better performance than probing approaches based on node centralities (selecting nodes with high degree or low local clustering). However, through experiments, we observe that MaxOutProbe's performance is still worse than that of methods ranking nodes by PageRank or betweenness.
The probing process can be seen as a diffusion of deception in the network. Thus, it is related to the vast literature in cascading processes in the network \cite{Nguyen10, Dinh12,Nguyen11over, Dinh14}.
\textbf{Organization}: The rest of this paper is divided into five main sections. Section~\ref{sec:model} presents the studied problem and the hardness results. We propose our approximation algorithm and machine learning model in Section~\ref{sec:algorithm}. Our comprehensive experiments are presented in Section~\ref{sec:exps} and followed by conclusion in Section~\ref{sec:con}.
\section{Problem Definitions and Hardness}
\label{sec:model}
We abstract the underlying network using an undirected graph $G = (V,E)$ where $V$ and $E$ are the sets of nodes and edges. $G$ is not completely observed, instead a subgraph $G' = (V',E')$ of $G$ is seen with $V' \subseteq V$, $E' \subseteq E$. Nodes in $G$ can be divided into three disjoint sets: $V^f, V^p$ and $V^u$ as illustrated in Fig.~\ref{fig:probing_process}.
\begin{figure}[!ht]
\vspace{-0.2in}
\centering
\includegraphics[width=0.4\linewidth]{figures/model}
\vspace{-0.1in}
\caption{Incomplete view $G'$ (shaded region) of an underlying network $G$. Nodes in $G'$ are partitioned into two disjoint subsets: \textbf{black} nodes in $V^f$, which are fully observed, and \textbf{gray} nodes in $V^p$, which are only partially observed. Nodes outside of $G'$ are colored \textbf{white}.}
\label{fig:probing_process}
\vspace{-0.1in}
\end{figure}
\begin{itemize}
\item \underline{Black}/Fully observed nodes: $V^f$ contains fully observed (probed) nodes, meaning that all of their connections are revealed. That is, if $u \in V^f$ and $(u,v) \in E$, then $(u,v) \in E'$.
\item \underline{Gray}/Partially observed nodes: $V^p$ contains partially observed nodes $u$ that are adjacent to at least one fully probed node in $V^f$ and satisfy $u \notin V^f$. Only the connections between $u \in V^p$ and nodes in $V^f$ are observed, while those to unobserved nodes remain hidden. Therefore, the nodes $u \in V^p$ are the only candidates for discovering unobserved nodes. Note that $V' = V^f \cup V^p$.
\item \underline{White}/Unobserved nodes: $V^u = V \setminus V'$ consists of unobserved nodes. The nodes in $V^u$ have no connection to any node in $V^f$ but may be connected with nodes in $V^p$.
\end{itemize}
\emph{Node probing}: At each step, we select a candidate gray node $u \in V^p$ to \emph{probe}. Once probed, all the neighbors of $u$ in $G$ and the corresponding edges are revealed. That is, $u$ becomes a fully observed black node, and each white neighbor $v \in V^u$ of $u$ becomes gray and is also available to probe in the subsequent steps. We call the resulting graph after probing the \textit{augmented graph} and use the same notation $G'$ when the context is clear. A sketch of this primitive is given below.
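The following small Python sketch of the probing primitive uses set names following the $V^f$/$V^p$/$V^u$ partition and an assumed oracle adjacency \texttt{adj\_G} for the underlying graph; it is illustrative only.
\begin{verbatim}
def probe(u, V_f, V_p, adj_G, E_obs):
    """Probe gray node u: u turns black; its white neighbors turn gray."""
    assert u in V_p, "only partially observed (gray) nodes can be probed"
    V_p.remove(u)
    V_f.add(u)
    for v in adj_G[u]:                  # all of u's edges are revealed
        E_obs.add(frozenset((u, v)))    # undirected observed edge
        if v not in V_f and v not in V_p:
            V_p.add(v)                  # white node becomes gray
\end{verbatim}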
The main goal of \textsf{ada-GPM}{} is to increase the size of $G'$ as much as possible.
\emph{Probing budget $k$}: In addition to the subgraph $G'$, a budget $k$ is given as the number of nodes we can probe. This budget may represent the effort/cost that can be spent on probing. Given this budget, our probing problem becomes selecting at most $k$ nodes to probe so as to maximize the size of the augmented $G'$; equivalently, we want to maximize the number of newly observed nodes.
We call our problem \textit{Graph Probing Maximization} (\textsf{GPM}{}). There is a crucial consideration at this point: \textit{should we select the $k$ nodes at once, or should we distribute the allowed budget over multiple steps?} The answer to this question leads to two different versions: \textit{non-adaptive} and \textit{adaptive}. We focus on the adaptive problem.
\begin{Definition} [Adaptive \textsf{GPM}{} (\textsf{ada-GPM}{})]
Given an incomplete subnetwork $G'$ of $G$ and a budget $k$, the Adaptive Graph Probing Maximization problem asks for $k$ partially observed nodes, probed in $k$ consecutive steps, that maximize the number of newly observed nodes in $G'$ at the end.
\end{Definition}
In our \textsf{ada-GPM}{} problem, at each step a node is selected after observing the outcomes of all previous probes. This is in contrast to the non-adaptive version, which makes all $k$ selections from the initial $V^p$ at once. The adaptive probing manner is thus intuitively more effective than its non-adaptive counterpart. However, it is also considerably more challenging due to the vastly expanded search space.
\renewcommand{\arraystretch}{1.2}
\setlength\tabcolsep{5pt}
\begin{table}[hbt] \scriptsize
\centering
\caption{Highest performance (Perf.) metric-based methods.}
\begin{tabular}{cc|cc|cc}
\toprule
\multicolumn{2}{c|}{\textbf{GnuTella}} & \multicolumn{2}{c|}{\textbf{Collaboration}} & \multicolumn{2}{c}{\textbf{Road}} \\
\hline
\textbf{Top 5} & \textbf{Perf.} & \textbf{Top 5} & \textbf{Perf.} & \textbf{Top 5} & \textbf{Perf.}\\
\midrule
CLC & 2471 & BC & 2937 & CLC & 358 \\
BC & 2341 & PR & 2108 & DEG & 346 \\
CC & 1999 & CC & 2085 & BC & 346 \\
PR & 1994 & CLC & 2061 & PR & 342 \\
DEG & 1958 & DEG & 2048 & CC & 326 \\
\bottomrule
\end{tabular}
\label{tbl:top5bnc}
\vspace{-0.1in}
\end{table}
\subsection{Hardness and Inapproximability.}
\subsubsection{Empirical Observations.}
We show the inconsistency in probing performance of metric-based methods, i.e., clustering coefficient (CLC), betweenness centrality (BC), closeness centrality (CC), PageRank (PR), local degree (DEG), MaxOutProbe \cite{Soundarajan15} and random (RAND), through experiments on 3 real-world networks, i.e., Gnutella, Co-authorship and Road networks (see Sec.~\ref{sec:exps} for a detailed description). Our results are shown in Table~\ref{tbl:top5bnc}. From the table, we see that the performance of metric-based methods varies significantly across networks, and none of them is consistently better than the others. For example, the clustering-coefficient-based method exhibits the best results on Gnutella but performs poorly on the Collaboration network. On the road network, all methods seem comparable in performance.
\subsubsection{Inapproximability Result.}
Here, we provide the hardness result for the \textsf{ada-GPM}{} problem. Our stronger result of inapproximability is shown in the following.
\begin{Theorem}
\label{theo:hardness}
The \textsf{ada-GPM}{} problem on a partially observed network cannot be approximated within any finite factor. Here, the inapproximability is with respect to the optimal solution obtained when the underlying network is known.
\end{Theorem}
To see this, we construct classes of problem instances on which no approximation algorithm can achieve a finite factor.
\begin{figure}[!ht]
\centering
\vspace{-0.05in}
\includegraphics[width=0.34\linewidth]{figures/hard_nd.pdf}
\vspace{-0.05in}
\caption{Hardness illustration.}
\label{fig:hard_nodegree}
\vspace{-0.1in}
\end{figure}
We construct a class of instances of the probing problems as illustrated in Figure~\ref{fig:hard_nodegree}. Each instance in this class has a single fully probed black node $b_1$ and $n$ observed gray nodes, each with an edge from $b_1$; one of the observed nodes, namely $g^*$, which varies between instances, has $m$ connections to $m$ unknown white nodes. Thus, the partially observed graph contains $n+1$ nodes, one fully probed and $n$ observed nodes which are selectable, while the underlying graph has $n+m+1$ nodes in total. Each instance of the family has a different $g^*$ to which the $m$ unknown nodes are connected. We now show that on this class, no algorithm can guarantee a finite-factor approximate solution.
First, we observe that for any $k \geq 1$, the optimal solution, which probes the node with connections to unknown nodes, has the optimal value of $m$ newly explored nodes, denoted by $OPT = m$. Since no algorithm is aware of the number of hidden connections that each gray node has, it cannot know that $g^*$ leads to $m$ unobserved nodes. Thus, the chance that an algorithm selects $g^*$ is small, and the algorithm can perform arbitrarily badly. Our complete proof is presented in the supplementary material.
\section{Experiments}
\label{sec:exps}
In this section, we perform experiments on real-world networks to evaluate performance of the proposed methods.
\renewcommand{\arraystretch}{1.2}
\setlength\tabcolsep{2pt}
\begin{table}[hbt] \small
\centering
\caption{Statistics for the networks used in our experiments. ACC stands for Average Clustering Coefficient. Bold and underlined networks are used for training. }
\vspace{-0.2in}
\begin{tabular}{llllc}
\addlinespace
\toprule
\textbf{Name} & \textbf{Network Type} & \#\textbf{Node} & \#\textbf{Edges} & \textbf{ACC} \\
\midrule
Roadnw-CA & Road & $21k$ & $21k$ & $7 \times 10^{-5}$\\
Roadnw-OL & Road & $6k$ & $7k$ & $0.01$\\
\textbf{\underline{Roadnw-TG}} & Road & \textbf{$18k$} & \textbf{$23k$} & \textbf{$0.018$}\\
GnuTella04 & p2p (GnuTella) & $11k$ & $40k$ & $0.006$ \\
GnuTella05 & p2p & $9k$ & $32k$ & $0.007$ \\
\textbf{\underline{GnuTella09}} & p2p & \textbf{$8k$} & $26k$ & $0.009$\\
Ca-GrQc & Collaboration (CA) & $5k$ & $14k$ & $0.529$\\
Ca-HepPh & Collaboration & $12k$ & $118k$ & $0.611$\\
Ca-HepTh & Collaboration & $10k$ & $26k$ & $0.471$\\
Ca-CondMat & Collaboration & $23k$ & $93k$ & $0.633$\\
\textbf{\underline{Ca-AstroPh}} & Collaboration & $18k$ & $198k$ & $0.630$\\
\bottomrule
\end{tabular}
\label{tbl:dataset}
\vspace{-0.1in}
\end{table}
\begin{figure*}[!ht]
\centering
\subfloat[GnuTella Network]{
\includegraphics[width=0.3\linewidth]{figures/set2_tella.pdf}
}
\subfloat[Collaboration Network]{
\includegraphics[width=0.3\linewidth]{figures/set2_ca.pdf}
}
\subfloat[Road Network]{
\includegraphics[width=0.3\linewidth]{figures/set2_road.pdf}
}
\vspace{-0.1in}
\caption{Performance Comparison of Base Case Machine Learning ($h=1$) with Metric-based and Heuristic Approaches.}
\label{fig:set2}
\vspace{-0.25in}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[GnuTella Network]{
\includegraphics[width=0.3\linewidth]{figures/set3_tella.pdf}
}
\subfloat[Collaboration Network]{
\includegraphics[width=0.3\linewidth]{figures/set3_ca.pdf}
}
\subfloat[Road Network]{
\includegraphics[width=0.3\linewidth]{figures/set3_road.pdf}
}
\vspace{-0.1in}
\caption{Performance of Machine Learning Method with Further Looking Ahead and Other Probing Methods.}
\label{fig:set3}
\vspace{-0.25in}
\end{figure*}
\textbf{Datasets.}
Table~\ref{tbl:dataset} describes three types of real-world networks used in our experiments. The road networks \cite{li05, brinkhoff02} include edges connecting different points of interest (gas stations, restaurants) in cities.
The second type comprises several snapshots of the GnuTella peer-to-peer network, with nodes representing hosts in the GnuTella topology and edges representing connections between hosts.
The third type consists of collaboration networks covering scientific collaborations between paper authors, in which nodes represent scientists and edges represent collaborations.
The last type of network models metabolic pathways, which are linked series of chemical reactions occurring in cells.
\renewcommand{\arraystretch}{1.2}
\setlength\tabcolsep{2pt}
\begin{table}[hbt] \small
\vspace{-0.09in}
\centering
\caption{Number of new explored nodes at budget $k = 300$ from all implemented probing methods in all datasets.}
\begin{tabular}{lllc}
\addlinespace
\toprule
\backslashbox[1mm]{\textbf{Methods}}{\textbf{Dataset}} & \textbf{GnuTella Net.} & \textbf{CA Net.} & \textbf{Road Net.} \\
\midrule
CLC & \textbf{\textcolor{red}{2471}} & 3098 & \textbf{\textcolor{red}{358}} \\
BC & 2341 & \textbf{\textcolor{red}{5052}} & 346 \\
DEG & 1958 & 3471 & 346\\
CC & 1999 & 3547 & 326 \\
PR & 1994 & 3639 & 342 \\
RAND & 2381 & 2911 & 329 \\
MaxOutProbe & 1820 & 902 & 28\\
\bottomrule
\end{tabular}
\label{tbl:bnc}
\end{table}
\textbf{Sampling Methods.}
We adopt Breadth-First Search sampling~\cite{Maiya10} to generate subgraphs for our experiments. For our regression models, for each network marked for training in Table~\ref{tbl:dataset}, we generate samples whose number of nodes varies from $0.5\%$ to $10\%$ of the number of nodes in $G$. The size distribution of the subgraphs follows a power law with exponent $\gamma=-1/4$; a sketch of the sampler is given below.
The size of the subgraphs used in validation is kept at roughly $5\%$ of the network size.
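The following is a minimal Python sketch of the BFS sampler, under the assumption that a connected node sample of a prescribed size is grown from a random seed; the exact sampler used in our code may differ in details.
\begin{verbatim}
import random
from collections import deque

def bfs_sample(adj, target_size):
    """Grow a connected node sample of ~target_size by BFS from a seed."""
    start = random.choice(list(adj))
    seen, queue = {start}, deque([start])
    while queue and len(seen) < target_size:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
                if len(seen) >= target_size:
                    break
    return seen
\end{verbatim}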
\textbf{Probing Methods.}
We compare the performance of our linear regression (\textsf{LinReg}{}) and logistic regression (\textsf{LogReg}{}) probing methods with MaxOutProbe \cite{Soundarajan15}, RAND (probing a random gray node), and the centrality-based methods DEG, BC, CC, PR and CLC (see Table~\ref{tbl:node_factors}), which probe the node with the highest centrality value at each step. Each method has a different objective function for selecting nodes to probe. Sharing the same idea, all of these methods rank the nodes in the set $V^p$ of $G'$ and probe the highest-ranked nodes first.
\textbf{Performance Metrics.} For each probing method, we conduct probing with budgets $k \in \{1, 100, 200, 300\}$. During the probing process, we compare the methods using the number of \emph{newly explored nodes}, i.e., the increase in the number of nodes in $G'$ after $k$ probes. For each network used in validation, we report the average results over 50 subgraphs. For obvious reasons, we do not use the training network for validation.
We use the statistical programming language R to implement MaxOutProbe, and C++ with the igraph framework to implement \textsf{LinReg}{}, \textsf{LogReg}{} and the metric-based probing methods. All of our experiments are run on a Linux server with a 2.30GHz Intel(R) Xeon(R) CPU and 64GB of memory.
\subsection{Comparison between Machine Learning Methods and Metric-based Probing Methods.}
Table \ref{tbl:bnc} presents the probing performance of metric-based methods in each group of networks, with the best one highlighted. The reported result is the number of explored nodes at the end of the probing process ($k = 300$).
In Fig. \ref{fig:set2}, we take the top 3 metric-based methods for each type of network and compare their performance with \textsf{LinReg}{} and \textsf{LogReg}{}. Here, both machine learning methods are trained using the benefit of a node only 1 step ahead ($h = 1$, i.e., setting $k = 1$ in Alg.~\ref{alg:agpm}). Their probing performance outperforms the metric-based methods on the GnuTella network by $20\%$ on average. They match the performance of the other methods on the Road network and are $7\%$ worse than \textsf{BC} on the Collaboration network. The experiment with \textsf{MaxOutProbe} on the Road network suffers from \textsf{MaxOutProbe}'s non-adaptive behavior: $|V^p|$ in the Road network samples is smaller than the maximal budget $k$; moreover, due to the very low density of the Road network, after probing $|V^p|$ nodes in the sampled networks, \textsf{MaxOutProbe} explores only a few new nodes in the underlying network. These factors lead to the very poor performance of \textsf{MaxOutProbe}, which also performs poorly compared with the adaptive implementations of the metric-based methods: it is $67\%$ and $36\%$ worse than \textsf{LinReg}{} on the Collaboration and GnuTella networks, respectively.
\subsection{Benefits of Looking Into Future Gain.}
For \textsf{LinReg}{} and \textsf{LogReg}{}, we use functions trained with different labels (marked as $h = 1$, $h = 2$, $h = 3$, $h = 4$, indicating that the regression functions are trained with the benefit of a node 1, 2, 3 or 4 steps ahead). In Fig. \ref{fig:set3}, we evaluate \textsf{LinReg}{} and \textsf{LogReg}{} against \textsf{RAND} and the best metric-based method for each type of network reported in Table~\ref{tbl:bnc}. We use \textsf{DEG} as the baseline for our performance comparison. Specifically, we take the ratio of the number of newly explored nodes of each probing method to that of \textsf{DEG} at the end of the probing process. We omit the results of \textsf{MaxOutProbe} due to its poor performance, observed in the previous subsection.
\textsf{LinReg}{} shows consistent improvement in probing performance as $h$ grows from 1 to 3, since it ranks the candidate nodes in $V^p$ based on their predicted gain an increasing number of hops away from them. Overall, \textsf{LinReg}{} performs better than \textsf{LogReg}{}. With $h = 3$, \textsf{LinReg}{} and \textsf{LogReg}{} outperform \textsf{DEG} by $12\%$ to $15\%$ on the Collaboration and GnuTella networks; on the Road network, the gain is $1\%$ to $5\%$. \textsf{LinReg}{} with $h = 3$ outperforms the best metric-based method on the GnuTella network by $11.6\%$; the improvements on the Collaboration and Road networks are $7.5\%$ and $2.2\%$, respectively. With $h = 4$, the performance of \textsf{LinReg}{} and \textsf{LogReg}{} starts decreasing compared with $h = 1$, $h = 2$ and $h = 3$. This indicates that looking ahead at the benefit of selecting a node pays off only within a certain number of hops.
Among the metric-based methods, while \textsf{BC} consistently performs well, the best method varies across networks. This indicates that the underlying network structure impacts the performance of metric-based methods, and it is hard to determine which node centrality is best for which type of network. Interestingly, the performance of random probing matches or even outperforms \textsf{BC}, \textsf{DEG} and \textsf{CC} on the GnuTella networks. This is because the GnuTella networks have a low average clustering coefficient, which makes the BFS-based samples of these networks star-structured. Consequently, metric-based methods tend to assign the same score to candidate nodes in the sampled network, which helps random probing perform better than metric-based methods on this type of network.
\section{Conclusion}
\label{sec:con}
This paper studies the Graph Probing Maximization problem, which serves as a fundamental component in many decision-making problems. We first prove that the problem is not only NP-hard but also cannot be approximated within any finite factor. We then propose a novel machine learning framework to adaptively learn the best probing strategy for any individual network. The superior performance of our method over metric-based algorithms is shown by a set of comprehensive experiments on many real-world networks.
\bibliographystyle{ieeetr}
\section{Introduction}
In 1905
Henri Poincar\'{e} first suggested that accelerated
masses in a relativistic field should produce gravitational waves \cite{CGS16}.
The idea was magisterially pursued by
Einstein via his celebrated theory of general relativity. In 1918 he published his famous
quadrupole formula resulting from
the calculation of the effect of gravitational waves
\cite{Eis18}.
A century later, the LIGO Scientific Collaboration and Virgo Collaboration published a paper about the gravitational radiation they had detected in September 2015 \cite{LV15}. Ever since, scientists believe they have entered a new era of astronomy, whereby the universe will be studied through `its sound' \cite{LV16a,LV16b,BHB16,YYP16,OMK16}. Here, Gravitational Sound (GS) signals are scrutinized with advanced techniques.
In the signal processing field, the problem of
finding a sparse approximation for a signal consists in
expressing the signal as a superposition of
as few elementary
components as possible, without significantly affecting
the quality of the reconstruction. In
signal processing applications
the approximation is carried out on a signal partition,
i.e.,
by dividing the signal into small pieces and
constructing the approximation
for each of those pieces of data. Traditional
techniques would carry out
the task using an orthogonal basis. However,
enormous improvements in sparsity can be achieved
using an adequate over-complete `dictionary' and
an appropriate mathematical method.
For the most part, these methods are based on minimization
of the $l_1$-norm \cite{CDS01} or are
greedy pursuit strategies
\cite{MZ93, PRK93, Nat95, RNL02, ARNS04,
Tro04, DTD06, NT09}, the latter being much more effective in practice.
Sparse signal representation of sound signals is a
valuable tool for a number of auditory tasks
\cite{SL06,NHS12}.
Moreover, the emerging theory of compressive sensing
\cite{Don06,CW08,Bar11} has
enhanced the concept of sparsity by asserting that
the number of
measurements needed for accurate representation of
a signal informational content
decreases if the sparsity of the representation improves.
Hence, when some GS tones made
with the observed Gravitational Wave (GW)
were released,
we felt motivated to produce a sparse approximation of
those clips.
We simply analyze the GS tones from a processing
viewpoint, regardless of how and why they have been
generated. We consider a) a short tone made with the
chirp {\tt{gw151226}} that has been detected,
b) the simulated theoretical GS,
{\tt{iota\_20\_10000\_4\_4\_90\_h}}, and
c) the {\tt{Black\_Hole\_Billiards}} ring tone, which is a
more complex signal produced by superposition with an
ad hoc independent percussive sound.
The ensuing results are certainly interesting. If, in the future,
GS signals are to be generated at large scale (as astronomical images
have been produced \cite{hubweb,esoweb}), it is
important to have tools for all kinds of processing of
those signals.
{\it {The central goal of
this Communication is to present evidence of the
significant gain in sparsity achieved if a GS signal
is approximated with high quality outside the orthogonal basis framework}}.
For demonstration purposes we have made available
the MATLAB routines for implementation of the method.
\label{Intro}
\section{Some Preliminary Considerations}
The traditional frequency decomposition of a signal
given by $N$ sample points, $f(i),\,i=1,\ldots,N$,
involves the Fourier expansion
$ f(i) =\frac{1}{\sqrt{N}} \sum_{n=1}^M c(n) e^{\imath \frac{2\pi(i-1)(n-1)}{M}}, \quad i=1,\ldots,N. $
The values $|c(n)|,\, n=1,\ldots,M=N$ are called
the discrete Fourier spectrum of the signal, and can be
evaluated in a very effective manner via the
Fast Fourier Transform (FFT).
For $M>N$, even though the coefficients in the above expansion can still be calculated via FFT by zero padding, they are no longer unique. Finding a sparse solution is the goal of sparse approximation techniques; a toy illustration of this redundant setting is given below.
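The following Python/numpy snippet, with our own scaling convention, computes a zero-padded set of coefficients and verifies exact recovery; any coefficient vector whose inverse transform agrees on the first $N$ samples would serve equally well, which is precisely what makes sparse selection meaningful.
\begin{verbatim}
import numpy as np

N, M = 64, 256                       # signal length and redundant size
f = np.random.randn(N)
c = np.fft.fft(f, n=M) / np.sqrt(N)  # coefficients via zero-padded FFT
# Reconstruction restricted to the first N samples of the inverse:
f_rec = np.fft.ifft(c * np.sqrt(N), n=M)[:N].real
assert np.allclose(f, f_rec)
\end{verbatim}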
The problem of the sparse approximation of
a signal, outside the orthogonal basis setting,
consists in using elements of a redundant set,
called a {\it dictionary},
for constructing
an approximation involving a number of elementary
components which is significantly smaller than the signal
dimension. For signals whose structure varies with time,
sparsity performs better when the
approximation is carried out on a signal partition.
In order to give precise definitions we
introduce at this point the notational usual conventions:
$\mathbb{R}$ and $\mathbb{C}$
represent the sets of real and complex
numbers, respectively.
Boldface fonts are used to indicate Euclidean vectors
and standard mathematical fonts to
indicate components, e.g., $\mathbf{d} \in \mathbb{C}^N$ is a vector of
$N$ components
$d(i) \in \mathbb{C}\,, i=1,\ldots,N$.
The operation
$\langle \cdot,\cdot \rangle$ indicates the Euclidean inner
product and $\| \cdot \|$ the induced norm, i.e.
$\| \mathbf{d} \|^2= \langle \mathbf{d}, \mathbf{d} \rangle$, with the usual
inner product definition: For $\mathbf{d} \in \mathbb{C}^N$
and $\mathbf{f} \in \mathbb{C}^N$
$
\langle \mathbf{f}, \mathbf{d} \rangle = \sum_{i=1}^N f(i) d^\ast\!(i),
$
where $d^\ast\!(i)$ stands for the complex conjugate of
$d(i)$.
A partition of a signal $\mathbf{f} \in \mathbb{R}^N$
is represented as a set of disjoint pieces,
$\mathbf{f}_q \in \mathbb{R}^{N_b},\,
q=1,\ldots,Q$, henceforth to be called `blocks',
which, without loss of generality, are assumed to
be all of the same size and such that $Q N_b =N$.
Denoting by
$\hat{\operatorname{J}}$ the concatenation operator, the
signal $\mathbf{f} \in \mathbb{R}^N$ is `assembled' from the blocks as
$\mathbf{f}=\hat{\operatorname{J}}_{q=1}^Q \mathbf{f}_q$. This operation implies that
the first $N_b$ components of the vector $\mathbf{f}$ are given
by the vector $\mathbf{f}_1$, the next $N_b$ components by the
vector $\mathbf{f}_2$, and so on.
A {\em{dictionary}} for $\mathbb{R}^{N_b}$ is
an {\em{over-complete}} set of (normalized to unity)
elements
$\mathcal{D}=\{\mathbf{d}_n \in \mathbb{R}^{N_b}\,; \| \mathbf{d}_n\|=1\}_{n=1}^M,$
which are called {\em{atoms}}.
\section{Sparse Signal Approximation}
Given a signal partition $\mathbf{f}_q \in \mathbb{R}^{N_b},\, q=1,\ldots,Q$
and a dictionary $\mathcal{D}$, the $k_q$-term approximation
for each block is given by an atomic decomposition
of the form
\begin{equation}
\label{atoq}
\mathbf{f}^{k_q}_q= \sum_{n=1}^{k_q}
c^{k_q}(n) \mathbf{d}_{\ell^{q}_n},
\quad q=1,\ldots, Q.
\end{equation}
The approximation to the whole signal is then
obtained simply by joining the approximation for
the blocks as
$\mathbf{f}^K= \hat{\operatorname{J}}_{q=1}^Q \mathbf{f}^{k_q}_q,$
where $K= \sum_{q=1}^Q k_q$.
\subsection{The Method}
The problem of finding the minimum number
$K$ of terms
such that $\|\mathbf{f} - \mathbf{f}^K\| <\rho$, for a
given tolerance parameter $\rho$,
is an NP-hard problem \cite{Nat95}.
In practical applications, one looks
for tractable sparse solutions. For this purpose
we consider the
Optimized Hierarchical Blockwise (OHBW) version \cite{LRN16} of the
Optimized Orthogonal Matching Pursuit (OOMP) \cite{RNL02}
approach. This entails that, in addition to
selecting the dictionary atoms for the approximation
of each block, the blocks are ranked for their sequential
stepwise approximation. As a consequence, the approach
is optimized in the sense of
minimizing, at each iteration step, the norm of
the total residual error $\|\mathbf{f} - \mathbf{f}^K\|$ \cite{LRN16}.
As will be illustrated in Sec.~\ref{NE}, when approximating
a signal with pronounced amplitude variations
the sparsity result achieved by this
strategy is remarkably superior to that
arising when the approximation of
each block is completed at once, i.e., when the
ranking of blocks is omitted.
The OHBW-OOMP method is implemented using the
steps indicated below.
{\bf{OHBW-OOMP Algorithm}}
\begin{itemize}
\item[1)]For $q=1,\ldots,Q$
initialize the algorithm by setting:
$\mathbf{r}_q^0=\mathbf{f}_q$, $\mathbf{f}_q^0=0$, $\Gamma_q= \emptyset$,
$k_q=0$, and
selecting the `potential' first atom for
the atomic decomposition of every
block $q$ as the one corresponding to the index
$\ell_{1}^q$ such that
\begin{equation}
\ell_{1}^q=\operatorname*{arg\,max}_{n=1,\ldots,M}
\left |\langle \mathbf{d}_n,\mathbf{r}_{q}^{k_q}\rangle \right|^2,
\quad q=1,\ldots,Q.
\end{equation}
Assign $\mathbf{w}_1^q=\mathbf{b}_1^q=\mathbf{d}_{\ell_{1}^q}$.
\item[2)]Use the
OHBW criterion for
selecting the block to
upgrade the atomic decomposition
by adding one atom
\begin{equation}
\label{hbwoomp}
q^\star=
\operatorname*{arg\,max}_{q=1,\ldots,Q}
\frac{|\langle \mathbf{w}_{k_q+1}^q, \mathbf{f}_{q}
\rangle|^2}{\|\mathbf{w}_{k_q+1}^q\|^2}.
\end{equation}
If $k_{{q^\star}}>0$ upgrade vectors
$\{\mathbf{b}_n^{k_{q^\star},{q^\star}}\}_{n=1}^{k_{q^\star}}$ for block ${q^\star}$ as
\begin{equation}
\begin{split}
\label{BW}
\mathbf{b}_{n}^{{k_{q^\star}}+1,{q^\star}}&= \mathbf{b}_{n}^{{k_{q^\star}},{q^\star}} - \mathbf{b}_{k_{q^\star}+1}^{{k_{q^\star}}+1,{q^\star}}\langle \mathbf{d}_{\ell_{{k_{q^\star}}+1}}^{{q^\star}}, \mathbf{b}_{n}^{k_{q^\star},{q^\star}}\rangle,\quad n=1,\ldots,k_{q^\star},\\
\mathbf{b}_{k_{q^\star}+1}^{k_{q^\star}+1,{q^\star}}&= \frac{\mathbf{w}_{k_{q^\star}+1}^{q^\star}}{\| \mathbf{w}_{k_{q^\star}+1}^{q^\star}\|^2}.
\end{split}
\end{equation}
\item[3)]
Calculate
\begin{eqnarray}
\mathbf{r}_{{q^\star}}^{k_{q^\star}+1} &=& \mathbf{r}_{{q^\star}}^{k_{q^\star}} - \langle \mathbf{w}_{k_{q^\star}+1}^{{q^\star}}, \mathbf{f}_{{q^\star}} \rangle \frac{\mathbf{w}_{k_{q^\star}+1}^{{q^\star}}}{\| \mathbf{w}_{k_{q^\star}+1}^{{q^\star}}\|^2},\nonumber \\
\mathbf{f}_{{q^\star}}^{k_{q^\star}+1} &=& \mathbf{f}_{{q^\star}}^{k_{q^\star}} +
\langle \mathbf{w}_{k_{q^\star}+1}^{{q^\star}}, \mathbf{f}_{{q^\star}} \rangle \frac{\mathbf{w}_{k_{q^\star}+1}^{{q^\star}}}{\| \mathbf{w}_{k_{q^\star}+1}^{{q^\star}}\|^2}.
\end{eqnarray}
Upgrade the set $\Gamma_{{q^\star}} \leftarrow \Gamma_{{q^\star}} \cup
\ell_{k_{q^\star}+1}$ and increase $k_{q^\star}\leftarrow k_{q^\star} +1$.
\item[4)]
Select a new potential atom for the
atomic decomposition of block ${q^\star}$, using
the OOMP criterion,
i.e., choose $\ell_{k_{q^\star}+1}^{q^\star}$ such that
\begin{equation}
\label{oomp}
\ell_{k_{q^\star}+1}^{q^\star}=\operatorname*{arg\,max}_{\substack{n=1,\ldots,M\\ n\notin \Gamma_{q^\star}}}
\frac{|\langle \mathbf{d}_n,\mathbf{r}_{{q^\star}}^{k_{q^\star}}
\rangle|^2}{1 - \sum_{i=1}^{k_{q^\star}}
|\langle \mathbf{d}_n ,\til{\mathbf{w}}_i^{q^\star}\rangle|^2},
\quad \text{with} \quad \til{\mathbf{w}}_i^{{q^\star}}= \frac{\mathbf{w}_i^{{q^\star}}}{\|\mathbf{w}_i^{{q^\star}}\|},
\end{equation}
\item[5)]
Compute the corresponding new vector $\mathbf{w}_{k_{q^\star}+1}^{{q^\star}}$ as
\begin{equation}
\begin{split}
\label{GS}
\mathbf{w}_{k_{q^\star}+1}^{q^\star}= \mathbf{d}_{\ell_{k_{q^\star}+1}}^{q^\star} - \sum_{n=1}^{k_{q^\star}} \frac{\mathbf{w}_n^{q^\star}}
{\|\mathbf{w}_n^{q^\star}\|^2} \langle \mathbf{w}_n^{q^\star}, \mathbf{d}_{\ell_{k_{q^\star}+1}}^{q^\star}\rangle.
\end{split}
\end{equation}
including, for numerical accuracy, the
re-orthogonalizing step:
\begin{equation}
\label{RGS}
\mathbf{w}_{k_{q^\star}+1}^{q^\star} \leftarrow \mathbf{w}_{k_{q^\star}+1}^{q^\star}- \sum_{n=1}^{k_{q^\star}} \frac{\mathbf{w}_{n}^{q^\star}}{\|\mathbf{w}_n^{q^\star}\|^2}
\langle \mathbf{w}_{n}^{q^\star} , \mathbf{w}_{k_{q^\star}+1}^{q^\star}\rangle.
\end{equation}
\item[6)]Check if, for a given
$K$ and $\rho$, either the condition $\sum_{q=1}^Q k_q=K+1$
or $\| \mathbf{f} - \mathbf{f}^K\| < \rho$ has been met. If
that is the case, for $q=1,\ldots,Q$ compute the coefficients
$c^{k_q}(n) = \langle \mathbf{b}_n^{k_q}, \mathbf{f}_q \rangle,\, n=1,\ldots, k_q$.
Otherwise repeat steps 2) - 5).
\end{itemize}
{\bf{Remark 1:}} For all the values of $q$,
the OOMP criterion \eqref{oomp} in the
algorithm above ensures that, fixing the set
of previously
selected atoms, the atom corresponding to the
index given by \eqref{oomp} minimizes the local
residual norm $\|\mathbf{f}_q -\mathbf{f}_q^{k_q}\|$ \cite{RNL02}.
Moreover, the OHBW-OOMP criterion \eqref{hbwoomp},
for choosing the block to upgrade the approximation,
ensures the
minimization of the total residual norm \cite{LRN16}.
Let us recall that the
OOMP approach optimizes the Orthogonal Matching Pursuit
(OMP) one \cite{PRK93}. The latter is also
an optimization of the plain Matching Pursuit (MP)
method \cite{MZ93} (see the discussion in \cite{RNL02}). A compact sketch of the per-block OOMP selection is given below.
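The selection criterion \eqref{oomp}, the orthogonalization \eqref{GS} and the re-orthogonalization \eqref{RGS} can be condensed into a few lines for a single block. The Python/numpy sketch below is a compact illustration only, not the released MATLAB implementation; the HBW ranking across blocks is omitted for brevity.
\begin{verbatim}
import numpy as np

def oomp_block(f, D, k, tol=1e-12):
    """Greedy OOMP selection of k atoms for one block f; D is N_b x M
    with unit-norm columns (atoms)."""
    Q = np.zeros((len(f), 0))            # orthonormalized selected atoms
    r, idx = f.astype(float).copy(), []
    for _ in range(k):
        num = np.abs(D.T @ r) ** 2
        den = np.maximum(1.0 - np.sum((Q.T @ D) ** 2, axis=0), tol)
        score = num / den                # the OOMP selection criterion
        score[idx] = -np.inf             # never reselect an atom
        n = int(np.argmax(score))
        w = D[:, n] - Q @ (Q.T @ D[:, n])
        w /= np.linalg.norm(w)
        w -= Q @ (Q.T @ w)               # re-orthogonalization step
        w /= np.linalg.norm(w)
        Q = np.column_stack([Q, w])
        r -= w * (w @ r)                 # update the block residual
        idx.append(n)
    # Coefficients of the k-term atomic decomposition:
    c = np.linalg.lstsq(D[:, idx], f, rcond=None)[0]
    return idx, c
\end{verbatim}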
\subsection{The Dictionary}
The degree of success in achieving high sparsity
using a dictionary approach depends on both
the suitability of the mathematical method for
finding a tractable sparse solution and the dictionary
itself.
As in the case of melodic music \cite{LRN16,RNA16}, we found
the trigonometric dictionary $\mathcal{D}_T$,
which is the union of the dictionaries $\mathcal{D}_{C}$ and
$\mathcal{D}_{S}$ given below, to be an
appropriate dictionary for approximating these
GS signals.
\begin{eqnarray}
\mathcal{D}_{C}&=&\{w_c(n)
\cos{\frac{{\pi(2i-1)(n-1)}}{2M}},i=1,\ldots,N_b\}_{n=1}^{M}\nonumber\\
\mathcal{D}_{S}&=&\{w_s(n)\sin{\frac{{\pi(2i-1)n}}{2M}},i=1,\ldots,N_b\}_{n=1}^{M}.\nonumber
\end{eqnarray}
In the above sets $w_c(n)$ and $w_s(n),\, n=1,\ldots,M$
are normalization factors.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=9cm]{pato.eps}
\caption{\small{Prototype atoms $\mathbf{p}_1, \mathbf{p}_2$ and
$\mathbf{p}_3$, which generate the
dictionaries $\mathcal{D}_{P1}$, $\mathcal{D}_{P2}$
and $\mathcal{D}_{P3}$ by sequential
translations of one point. Each prototype is shown in a
different color. \label{pato}}}
\end{center}
\end{figure}
We also found that
sparsity may benefit from the inclusion of
a dictionary which is constructed
by translation of the prototype atoms, $\mathbf{p}_1, \mathbf{p}_2$ and
$\mathbf{p}_3$
in Fig.~\ref{pato}. Denoting by
$\mathcal{D}_{P_1}$, $\mathcal{D}_{P_2}$
and $\mathcal{D}_{P_3}$ the
dictionaries arising by translations of the atoms
$\mathbf{p}_1$, $\mathbf{p}_2$, and $\mathbf{p}_3$, respectively,
the dictionary $\mathcal{D}_{P}$ is
built as
$\mathcal{D}_{P}= \mathcal{D}_{P_1} \cup \mathcal{D}_{P_2}
\cup \mathcal{D}_{P_3}$.
The whole mixed dictionary is then
$\mathcal{D}_M = \mathcal{D}_{T} \cup
\mathcal{D}_{P}$, with
$\mathcal{D}_{T}= \mathcal{D}_{C} \cup \mathcal{D}_{S}$.
Interestingly enough, the dictionary $\mathcal{D}_M$
happens to be a sub-dictionary of a larger dictionary
proposed in \cite{RNB13} for producing sparse
representations of astronomical images; the difference
is that, in the present case, sparsity does not improve
significantly when the dictionary is enlarged further.
From a computational viewpoint,
the particularity of the sub-dictionaries $\mathcal{D}_{C}$
and $\mathcal{D}_{S}$ is that
the inner products with
all their elements can be evaluated via FFT. This
possibility reduces the complexity
of the numerical calculations when the partition
unit $N_b$ is large \cite{LRN16, RNA16}.
Also, the inner products
with the atoms of the dictionaries
$\mathcal{D}_{P_2}$ and $\mathcal{D}_{P_3}$
can be efficiently implemented,
all at once, via a convolution operation.\\
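As an illustration of the FFT evaluation for $\mathcal{D}_C$, the
following Python sketch computes the inner products of a block with
all $M$ cosine atoms (up to the normalization factors $w_c(n)$)
through a single length-$2M$ FFT of the zero-padded block; this
padding trick is one possible realization, not necessarily the one
used in the routine of \cite{paperpage}.

\begin{verbatim}
import numpy as np

def cos_dict_inner_products(f, M):
    """<f, d_n> = sum_{i=1}^{N_b} f_i cos(pi (2i-1)(n-1) / (2M)),
    for n = 1,...,M, via one length-2M FFT of the zero-padded block."""
    g = np.zeros(2 * M)
    g[1:len(f) + 1] = f                  # place f_i at position i
    G = np.fft.fft(g)[:M]
    n = np.arange(M)                     # this is n-1 in 1-based indexing
    return (np.exp(1j * np.pi * n / (2 * M)) * G).real

# sanity check against the explicit dictionary matrix
M, Nb = 512, 128
rng = np.random.default_rng(0)
f = rng.standard_normal(Nb)
i = np.arange(1, Nb + 1)[:, None]
n = np.arange(1, M + 1)[None, :]
D = np.cos(np.pi * (2 * i - 1) * (n - 1) / (2 * M))
assert np.allclose(D.T @ f, cos_dict_inner_products(f, M))
\end{verbatim}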
{\bf{Note:}}
The MATLAB routine implementing the OHBW-OOMP approach,
dedicated to the dictionary introduced
in this section, has been
made available on \cite{paperpage}.
\subsection{The Processing}
\label{NE}
We now process the three signals considered here:
\begin{itemize}
\item[a)]The audio representation of the
detected {\tt{gw151226}} chirp \cite{LigoData}.
\item[b)]
The tone of the theoretical gravitational
wave {\tt{iota\_20\_10000\_4\_4\_90\_h}} \cite{MITData}.
\item[c)]The {\tt{Black\_Hole\_Billiards}} ring tone
\cite{LigoData}.
\end{itemize}
The quality of an approximation is measured by the
Signal to Noise Ratio (SNR) which is defined as
\begin{equation}
\text{SNR}=10 \log_{10} \frac{\| \mathbf{f}\|^2}{\|\mathbf{f} - \mathbf{f}^K\|^2}=
10 \log_{10}\frac{\sum_{q=1}^{Q}\sum_{i=1}^{N_b} |f_q(i)|^2}
{\sum_{q=1}^{Q}\sum_{i=1}^{N_b} |f_q(i) -f^{k_q}_q(i)|^2}.
\end{equation}
The sparsity of the whole representation is measured by
the Sparsity Ratio (SR) defined as
$\displaystyle{\text{SR}= \frac{N}{K}}$, where $K$ is the total
number of coefficients in the signal representation as
defined above.
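In Python, given a per-block approximation and the number of
coefficients retained in each block, the two figures of merit can be
computed as follows (a minimal sketch):

\begin{verbatim}
import numpy as np

def snr_db(f, f_approx):
    """SNR in dB between a signal and its approximation."""
    return 10.0 * np.log10(np.sum(f**2) / np.sum((f - f_approx)**2))

def sparsity_ratio(N, k_per_block):
    """SR = N / K, with K the total number of coefficients."""
    return N / float(sum(k_per_block))
\end{verbatim}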
\subsubsection*{Audio representation of the chirp
{\tt{gw151226}}}
This clip, made from the detected short
chirp {\tt{gw151226}},
is plotted in the left graph of
Fig.~\ref{gwc}. The graph on the right is its
classic spectrogram.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{clip1.eps}
\includegraphics[width=8cm]{spec_clip1.eps}
\caption{\small{The graph on the left represents the
clip {\tt{gw151226}}.
The central line
is the difference between the approximation, up to
SNR=50dB, and the signal. The right graph is the
classic spectrogram of the clip on the left. \label{gwc}}}
\end{center}
\end{figure}
When an orthogonal basis is used to approximate
these signals, the best
sparsity result is achieved with the
Discrete Cosine Transform (DCT). Hence,
we first approximate this clip, up to SNR=50dB,
by nonlinear thresholding of the DCT coefficients.
The best SR (SR=28.7) is obtained for $N_b=N=65536$,
i.e., by processing the signal as
a single block. In contrast, when approximating the clip
with the trigonometric dictionary $\mathcal{D}_{T}$,
the best result is obtained for $N_b=2048$,
achieving a much higher SR: approximating
each block independently with the OOMP approach gives SR=209.4, and
ranking the blocks with the OHBW-OOMP approach gives SR=263.2.
Let us stress that this implies a gain in
sparsity of $817\%$ with respect to the DCT
approach at the same value of SNR.
The central dark line
in the left graph of Fig.~\ref{gwc} represents the
difference between the signal and its approximation, up to
SNR=50dB. For this chirp the inclusion of the
dictionary $\mathcal{D}_{P}$ would not improve sparsity.
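A minimal sketch of the nonlinear thresholding used above is given
below; it relies on the orthonormal DCT, so that Parseval's relation
yields the residual energy directly, and it retains the fewest
largest-magnitude coefficients achieving the target SNR. The stopping
rule shown here is an assumption, kept as simple as possible.

\begin{verbatim}
import numpy as np
from scipy.fft import dct, idct

def dct_threshold_to_snr(f, target_db):
    """Keep the fewest largest-magnitude DCT coefficients of f that
    reach the target SNR; with the orthonormal DCT, Parseval's
    relation gives the residual energy without reconstructing."""
    c = dct(f, norm='ortho')
    order = np.argsort(np.abs(c))[::-1]
    energy = np.sum(c**2)
    resid = energy - np.cumsum(c[order]**2)      # decreasing
    thr = energy * 10.0**(-target_db / 10.0)
    K = int(np.searchsorted(-resid, -thr)) + 1   # smallest K with resid <= thr
    kept = np.zeros_like(c)
    kept[order[:K]] = c[order[:K]]
    return idct(kept, norm='ortho'), K
\end{verbatim}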
\subsubsection*{Theoretical Gravitational Wave Sound}
This is the {\tt{iota\_20\_10000\_4\_4\_90\_h}}
gravitational wave,
which belongs to the family of Extreme Mass Ratio Inspirals
\cite{Hug00,Hug01,GHK02,HDF05, DH06} available on
\cite{MITData}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{clip2.eps}
\includegraphics[width=8cm]{spec_clip2.eps}
\caption{\small{The graph on the left represents the
{\tt{iota\_20\_10000\_4\_4\_90\_h}}
tone.
The central line
is the difference between the approximation, up to
SNR=50dB, and the signal. The right graph is the
spectrogram of the clip on the left. \label{iota}}}
\end{center}
\end{figure}
It consists of $N=458752$ data points plotted
in the left graph of Fig.~\ref{iota}. The graph on the
right is its classic spectrogram. In this case the best
SR result (SR=5.1), produced by
nonlinear thresholding of
the DCT coefficients for approximating the signal up to
SNR=50dB, is obtained with $N_b=16384$. A much smaller
value of $N_b$ ($N_b=2048$) is required to achieve the
best SR result (SR=10.8)
with the OHBW-OOMP method and the trigonometric
dictionary. With the mixed dictionary $\mathcal{D}_M$ there is
a further improvement: SR=11.9.
The central dark line in the left graph of Fig.~\ref{iota} represents the
difference between the signal and its approximation, up to
SNR=50dB. For this signal the gain in SR with respect to
the DCT approximation is $136\%$. Since the amplitude of the
signal does not vary much along time, the SR
obtained by approximating each block at once, with OOMP, does not
significantly differ from the values obtained applying the
OHBW-OOMP strategy.
\subsubsection*{The {\tt{Black\_Hole\_Billiards}} ring tone}
In order to stress the relevance of the technique
for representing features
of more complex signals using a very reduced set of points,
we consider here the
{\tt{Black\_Hole\_Billiards}} ring tone available on
\cite{LigoData}. This clip was created by
Milde Science Communication by superimposing
a sound of percussive nature (the billiards sound) on
a GW chirp. It consists of $N=262144$ samples
plotted in the left graph of Fig.~\ref{bhb}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{clip3.eps}
\includegraphics[width=8cm]{spec_clip3.eps}
\caption{\small{The graph on the left represents the
{\tt{Black\_Hole\_Billiards}} clip. Credit:
Milde Science Communication.
The central dark line
is the difference between the approximation, up to
SNR=40dB, and the signal. The right graph is the
spectrogram of the clip on the left. \label{bhb}}}
\end{center}
\end{figure}
The graph on the right is its classic spectrogram.
When the signal is processed block by block with the DCT,
the best sparsity result for SNR=40dB
is SR=4.2, obtained
with $N_b=16384$. However,
with $N_b=2048$ the OHBW version of the selection of DCT
coefficients improves on the standard DCT result,
attaining SR=6.2.
For an approximation of the same quality (SNR=40 dB)
the SR rendered by the OHBW-OOMP method
with $N_b=512$ and the
trigonometric dictionary $\mathcal{D}_T$ is SR= 12.1.
With the mixed dictionary $\mathcal{D}_M$ this value increases to
SR=13.7.
The central dark line in the left graph of Fig.~\ref{bhb}
represents the
difference between the signal and its approximation, up to
SNR=40dB. It is worth commenting that, if, with the same
dictionary, the approximation were carried out without
ranking the blocks, i.e., approximating each block
independently up to the same SNR, the value of SR would be
only 6.7. This example highlights the importance of
adopting the OHBW strategy for constructing the signal
approximation when the signal amplitude varies significantly along the domain of definition.
\subsection{The Role of Local Sparsity}
The SR is a global measure of sparsity indicating the
number of elementary
components contained in the whole signal.
An interesting description of the signal variation
is rendered by a local measure of sparsity. For this
we consider the local sparsity ratio
$sr(q)= \frac{N_b}{k_q},\,q=1,\ldots,Q$, where,
as defined above,
$k_q$ is the number of coefficients in the
decomposition of the $q$-th block and $N_b$ is the size of
the block.
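A two-line helper for computing $sr(q)$, together with the block
centers used for plotting, could read:

\begin{verbatim}
import numpy as np

def local_sparsity(k_per_block, N_b):
    """sr(q) = N_b / k_q, and the block centers for the horizontal axis."""
    k = np.asarray(k_per_block, dtype=float)
    return N_b / k, N_b * (np.arange(len(k)) + 0.5)
\end{verbatim}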
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{lsr1.eps}
\includegraphics[width=8cm]{lsr2.eps}
\caption{\small{The dark line in the left graph
joins the inverse local sparsity
values for the clip {\tt{gw151226}}.
The right graph has the same description but for the
{\tt{iota\_20\_10000\_4\_4\_90\_h}} clip. \label{lsr}}}
\end{center}
\end{figure}
For illustration's convenience, the dark
line in both graphs of Fig.~\ref{lsr}
depicts the inverse of this local measure. This line
joins the values
$1/sr(q),\, q=1,\ldots,Q$.
Each of these values is located on the horizontal axis at
the center of the corresponding block and provides
much information about the signal. Certainly,
simply from the observation of the dark line
in the left
graph of Fig.~\ref{lsr} (joining 32 points of
inverse local sparsity ratio) one can realize
that the number of internal components in
the clip {\tt{gw151226}} is roughly
constant along the audible part of the signal,
with a significantly higher value only at the
very end of this part.
In the case
of the {\tt{iota\_20\_10000\_4\_4\_90\_h}}
clip (right graph in the
same figure), the line joining the 224 points of
the inverse local sparsity ratio indicates a clear
drop of sparsity
towards the end of the signal, where the rapid rise of the
tone occurs (c.f. spectrogram in Fig.~\ref{iota}).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{lsr3.eps}
\includegraphics[width=8cm]{lsr33.eps}
\caption{\small{The dark line in the left
graph joins the inverse local sparsity ratio
values for the {\tt{Black\_Hole\_Billiards}} ring tone.
The lines in the right graph discriminate the
inverse local sparsity ratio produced with atoms
in the dictionary $\mathcal{D}_P$ (blue), in the dictionary $\mathcal{D}_T$
(red) and in the whole dictionary $\mathcal{D}_M$ (black).
\label{lsrb}}}
\end{center}
\end{figure}
Since the {\tt{Black\_Hole\_Billiards}} ring tone is
a more complex signal, due to the superposition of
the artificial sound,
the information given by the local
sparsity ratio is richer than in the previous
cases.
Notice for instance that the dark
line in the left graph of Fig.~\ref{lsrb} clearly indicates
the onsets of the percussive
part of the clip which has been superimposed on the
GS chirp. Moreover this line,
joining 512 points of inverse local sparsity ratio,
also roughly follows the envelope of the signal variation.
The graph on the right
discriminates the local sparsity measure corresponding
to atoms in the trigonometric component of the
dictionary, and those in the
dictionary $\mathcal{D}_P$. From bottom to top the first
line (blue) represents the inverse local sparsity values
corresponding to atoms in $\mathcal{D}_P$ and the next line (red)
to atoms in $\mathcal{D}_T$. The top line (black) corresponds
to atoms in the whole mixed dictionary $\mathcal{D}_M$ and is included to facilitate
the visual comparison. In this clip $20\%$ of the atoms
are from the dictionary $\mathcal{D}_P$ and, as indicated by the blue line in the right graph of Fig.~\ref{lsrb}, a significant
contribution of those atoms occurs within the blocks
where the rapid rise of the GS tone takes place
(c.f. spectrogram in Fig.~\ref{bhb}).
\section{Conclusions} We have here advanced an
effective technique for
the numerical representation of Gravitational Sound
clips produced by the
Laser Interferometer Gravitational-Wave Observatory (LIGO)
and the Massachusetts Institute of Technology (MIT).
Our technique is inscribed
within the particular context of sparse representation
and data reduction.
We laid out a detailed procedure to this effect
and were able to show that these types of signals can be
approximated with high quality using
{\em{significantly fewer elementary components}}
than those required
within the standard orthogonal basis framework.
\subsection*{Acknowledgments} Thanks are due to
LIGO, MIT and Milde Science Communications
for making available the GS tones we have used
in this paper. We are particularly grateful to
Prof. S. A. Hughes and Prof. B. Schutz,
for giving us information on
the generation of those signals.
\section{Introduction}
Short-distance current correlators in QCD can be analyzed using
perturbation theory, while they can be directly calculated in lattice
QCD. By matching them, one may determine the parameters in the Standard
Model. The charm quark mass is a good example: it can be
extracted from the short-distance regime by means of the moment method
first proposed by the HPQCD-Karlsruhe collaboration \cite{Allison:2008xk}. The method
has also been used for the determination of the bottom quark mass by the
same group, and the precision has been improved \cite{Chakraborty:2014aca}. More recently,
we utilized the same method but with a different lattice formulation,
to determine the charm quark mass \cite{Nakayama:2016atf}.
We use the lattice ensembles generated by the JLQCD collaboration with
the Mobius domain-wall fermion for 2+1 flavors of dynamical quarks. The
lattices are relatively fine, {\it i.e.} $a=0.080$--$0.044$ fm,
which allows us to control the discretization effects.
In this talk, we mainly discuss a test of this method using
experimental data, as well as the main sources of systematic uncertainty,
while leaving the full description of this work in \cite{Nakayama:2016atf}.
The same set of lattice ensembles has also been used for studies of the
heavy-light decay constant \cite{BFahy} and semileptonic decay form factors \cite{Kaneko}.
For the vector channel, the current correlator
can be related to the $e^+e^-$ cross section, or the $R$ ratio, using
the optical theorem. By comparing lattice results with
phenomenological analysis obtained from experimental data, we may
validate the lattice calculation. We demonstrate that lattice data
are consistent with experiments after taking the continuum limit.
For the determination of the charm quark mass, we use the pseudo-scalar
channel, as it provides a more sensitive probe. Among other sources of
systematic uncertainty, including those of discretization effects and
finite volume effect and so on, it turned out that the perturbative
error is the dominant source. We attempt to conservatively estimate the
effect of perturbative error.
\section{Moments of correlators}
We calculate the correlators of the pseudo-scalar current
$j_5 = i\bar{\psi_c}\gamma_5\psi_c$
and vector current
$j_k = \bar{\psi_c}\gamma_k\psi_c$
composed of charm quark field $\psi_c$:
\begin{eqnarray}
\label{eq:G^PS}
G^{PS}(t) & = & a^6 \sum_{\vector{x}} (am_c)^2
\langle 0| j_5 (\vector{x},t)j_5 (0,0) |0\rangle,
\\
\label{eq:G^V}
G^{V}(t) & = & \frac{a^6}{3}\sum_{k=1}^3 \sum_{\vector{x}}
Z_V^2
\langle 0| j_k (\vector{x},t)j_k (0,0) |0\rangle,
\end{eqnarray}
with a renormalization constant for the vector current $Z_V$.
We then construct the temporal moments
\beqn{
G_n & = & \sum_t \left(\frac{t}{a}\right)^n G(t),
}
\begin{comment}
\begin{eqnarray}
\label{eq:momentPS}
G_n^{PS} & = & \sum_t \left(\frac{t}{a}\right)^n G^{PS}(t),
\\
\label{eq:momentV}
G_n^V & = & \sum_t \left(\frac{t}{a}\right)^n G^V(t),
\end{eqnarray}
\end{comment}
for each channel with an even number $n\geq 4$.
Since the charmonium correlators $G(t)$ are exponentially suppressed in the long-distance regime, the moments are sensitive to the region of $t\sim n/M$, depending on the charmonium ($\eta_c$ or $J/\psi$) mass $M$.
The moments are related to the vacuum polarization functions $\Pi ^V (q^2)$ and $\Pi ^{PS} (q^2)$, defined by
\begin{eqnarray}
(q^\mu q^\nu-q^2g^{\mu\nu})\Pi^V(q^2)
& = &
i\int d^4x\, e^{iqx}
\langle 0| j^\mu(x) j^\nu(0)|0\rangle,
\\
q^2 \Pi^{PS} (q^2)
& = &
i \int d^4x\, e^{iqx}
\langle 0| j_5(x)j_5(0)|0\rangle.
\end{eqnarray}
through the derivatives with respect to $q^2$:
\be{
a^{2k}G_{2k+2} ^V = \frac{12\pi^2 Q_f ^2}{k!}\left(\frac{\p}{\p q^2}\right)^k \left(\Pi^V(q^2)\right)|_{q^2 = 0}.
}
The vector channel can be related to the experimentally observed $e^+e^-$ cross section, {\it i.e.} the $R$-ratio $R(s)\equiv\sigma_{e^+e^-\to c\bar{c}}(s)/\sigma_{e^+e^-\to\mu^+\mu^-}(s)$ using the optical theorem:
\be{
\frac{12\pi^2 Q_f ^2}{k!}\left(\frac{\p}{\p q^2}\right)^k \left(\Pi^V(q^2)\right)|_{q^2 = Q_0 ^2}
\equiv
\int_{s_0}^{\infty} ds \frac{1}{(s-Q_0 ^2)^{k+1}} R(s).
}
Here $Q_0$ is an arbitrary number, often set to $Q_0 = 0$.
We use this relation between the lattice calculation and experimental data as a consistency check of the lattice calculation.
The temporal moments for sufficiently small $n$ can be calculated perturbatively since they are defined in the short-distance regime.
The vacuum polarization functions are represented with a dimensionless parameter
$z \equiv q^2/(2m_c(\mu))^2$ as
\be{
\Pi(q^2) = \frac{3}{16\pi^2}\sum_{k=-1} ^{\infty}C_kz^k,
}
and the coefficients $C_k$ are perturbatively calculated up to $O(\alpha_s ^3)$ in the $\overline{\m{MS}}$ scheme \cite{Maier:2007yn,Maier:2009fz,Kiyo:2009gb},
\begin{comment}
\begin{eqnarray}
C_k &=& C_k^{(0)} + \frac{\alpha_s(\mu)}{\pi}
\left( C_k^{(10)}+ C_k^{(11)}l_{m}\right)
\nonumber\\
& & + \left( \frac{\alpha_s(\mu)}{\pi}\right) ^2
\left( C_k^{(20)} + C_k^{(21)}l_{m} + C_k^{(22)}l_{m}^2 \right)
\nonumber\\
& & + \left( \frac{\alpha_s(\mu)}{\pi}\right)^3
\left( C_k^{(30)} + C_k^{(31)}l_{m} + C_k^{(32)}l_{m}^2 +
C_k^{(33)}l_{m} ^3 \right) + ...,
\end{eqnarray}
\end{comment}
and written in terms of $l_m \equiv \m{log}(m^2 _c(\mu)/\mu^2)$ and $\alpha_s (\mu)$. Since we use this perturbative expansion to extract the charm quark mass and the strong coupling constant, the uncertainty of $O(\alpha_s ^4)$ remains.
In practice, we redefine the moments to reduce the uncertainty from the scale setting as well as that from the leading discretization effect:
\beqnn{
\label{eq:reduced_moment}
R_n ^{PS}
&=
\displaystyle
\frac{am_{\eta_c}}{2a\tilde{m}_c}
\left(\frac{G_n ^{PS}}{{G_n ^{PS} }^{(0)}}\right)^{1/(n-4)}
&
\mbox{for}\;\; n\geq 6.\\
\label{eq:reduced_moment_V}
R_n^V
&=
\displaystyle
\frac{am_{J/\psi}}{2a\tilde{m}_c}
\left(\frac{G_n^V}{G_n^{V(0)}}\right)^{1/(n-2)}
&
\mbox{for}\;\; n\geq 4.
}
with the pole mass of the domain-wall fermion $\tilde{m}_c$ and the tree level moment $G_n ^{(0)}$. We will use these reduced moments to test the consistency with experimental data, and to determine the quark mass and strong coupling constant.
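As an illustration, the temporal moments and the reduced moment
$R_n^{PS}$ of Eq.~(\ref{eq:reduced_moment}) can be assembled from a
sampled correlator as in the following Python sketch (lattice units;
the folding of the periodic time direction, which the actual analysis
must handle, is omitted for brevity):

\begin{verbatim}
import numpy as np

def temporal_moment(G, n):
    """G_n = sum_t (t/a)^n G(t), with the correlator sampled at
    t/a = 0, 1, ..., len(G)-1 (no folding of the periodic direction)."""
    t = np.arange(len(G), dtype=float)
    return np.sum(t**n * G)

def reduced_moment_ps(G, G_tree, n, am_eta_c, am_c_pole):
    """R_n^PS for even n >= 6, everything in lattice units."""
    ratio = temporal_moment(G, n) / temporal_moment(G_tree, n)
    return (am_eta_c / (2.0 * am_c_pole)) * ratio**(1.0 / (n - 4))
\end{verbatim}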
\section{Consistency with experimental data}
Before discussing the extraction of the charm quark mass, we validate the lattice calculation using the vector channel together with the experimental data available for the $R$-ratio.
Our lattice ensembles are generated with $2 + 1$ flavors of Moebius domain-wall fermions at lattice spacings $a$ = 0.080, 0.055, and 0.044 fm. The spatial size $L/a$ is 32, 48, and 64, respectively, and the temporal size $T/a$ is twice as long as $L/a$. Three different values of the bare charm quark mass are taken to calculate the charmonium correlators, and they are interpolated to the physical point such that the mass of the spin-averaged 1S states is reproduced. The details of the ensembles are given in \cite{BFahy}. The
renormalization constant $Z_V$ is determined non-perturbatively from the light hadron correlators as 0.955(9), 0.964(6), and 0.970(5) for $\beta$ = 4.17, 4.35, and 4.47, respectively \cite{Tomii:2016xiv}.
We extrapolate the data for $R_n ^V$ to the continuum limit using an ansatz
\begin{equation}
R_n^V = R_n^V(0)
\left( 1 + c_1(am_c) ^2 \right) \times
\left( 1 + f_1\frac{m_u + m_d + m_s}{m_c} \right),
\label{fittingfunc}
\end{equation}
with three free parameters $R_n^V(0)$, $c_1$, and $f_1$.
Higher-order terms in $a$ and $m_l$ are confirmed to be
insignificant by the data.
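A minimal sketch of this three-parameter fit, written with synthetic
stand-in numbers in place of the actual lattice data (which involve
the full set of ensembles and their correlations), could read:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ansatz(x, R0, c1, f1):
    """The extrapolation ansatz of the text: R_n^V(0) times the
    discretization and light-quark-mass correction factors."""
    am_c, ml_over_mc = x
    return R0 * (1 + c1 * am_c**2) * (1 + f1 * ml_over_mc)

# synthetic stand-in numbers; the real fit uses the lattice data points
rng = np.random.default_rng(1)
am_c = np.repeat([0.44, 0.30, 0.24], 2)   # one entry per ensemble point
ml_over_mc = np.tile([0.06, 0.11], 3)     # (m_u + m_d + m_s)/m_c
err = np.full(6, 0.002)
Rn = ansatz((am_c, ml_over_mc), 1.10, -0.15, 0.05) \
     + err * rng.standard_normal(6)
popt, pcov = curve_fit(ansatz, (am_c, ml_over_mc), Rn, sigma=err,
                       absolute_sigma=True, p0=(1.0, 0.0, 0.0))
R_continuum, dR = popt[0], np.sqrt(pcov[0, 0])
\end{verbatim}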
We consider five different sources of uncertainty:
the statistical error, the finite volume effect, the discretization error, the uncertainty of the renormalization constant, and the dynamical charm quark correction.
Since we use $2+1$-flavor ensembles in the lattice calculation, the dynamical charm quark effect is included using perturbation theory \cite{Maier:2007yn,Maier:2009fz,Kiyo:2009gb}.
The result is shown in Figure \ref{fig:vectorex}.
The ``experimental data'' are taken from \cite{Dehnadi:2011gc,Kuhn:2007vp}; they are obtained by integrating the experimentally observed $R(s)$ with appropriate weight functions.
The lattice results show only a mild $a$ dependence for $n$ = 6 and 8, and their continuum limit
is consistent with the corresponding ``experimental data''.
The dominant source of error is the renormalization constant, and the combined error is about 1\%,
which is about the same size as that of the phenomenological estimate.
This agreement gives confidence in the validity of our lattice calculation.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=6.5cm, angle=-90]{Rn_exper_upto8.eps}
\caption{
Reduced moments for the vector
current $R_n^V$ ($n$ = 6 (pluses) and 8 (squares))
and their continuum extrapolation.
Data are plotted after correcting for the finite light quark mass
effects by multiplying $1/(1+f_1(m_u+m_d+m_s)/m_c)$
and for the missing charm quark
loop effect
$r_n^V(n_f=4)/r_n^V(n_f=3)$.
Phenomenological estimates of the corresponding quantities
are plotted on the left:
Dehnadi {\it et al.} \cite{Dehnadi:2011gc} (filled circle),
Kuhn {\it et al.} \cite{Kuhn:2007vp} (open circle).
}
\label{fig:vectorex}
\end{center}
\end{figure}
\section{Charm quark mass extraction}
We use the reduced moments $R_n$ of the pseudo-scalar channel to determine the charm quark mass.
The continuum extrapolation of $R_n$ is shown in Figure \ref{fig:R_n extrap} with statistical errors.
We assume the extrapolation form to be the same as that for $R_n ^V$, Eq.~(\ref{fittingfunc}), with free parameters $R_n (0)$, $c_1$, and $f_1$, and use the perturbative factor $r_n(n_f=4)/r_n(n_f=3)$ to correct for the charm sea quark contribution. The extrapolated lattice data are sufficiently precise since they show only a small dependence on the lattice spacing $a$.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=6.5cm,angle=-90]{Rn_uds_sqare.eps}
\caption{
Continuum extrapolation of $R_n(a)$.
Data points correspond to $R_6$, $R_8$, $R_{10}$, $R_{12}$, and $R_{14}$
from top to bottom.
We plot the mean of the extrapolations with and without the coarsest lattice as the extrapolated value,
and take their deviation as an estimate of the $O(a^4)$ error.
}
\label{fig:R_n extrap}
\end{center}
\end{figure}
Now we consider the systematic error from the perturbative expansion of the reduced moments $r_n$, which are known up to $O(\alpha _s ^3)$ \cite{Maier:2007yn,Maier:2009fz,Kiyo:2009gb}, so that the leading uncertainty is of order $\alpha _s ^4$.
Such an error from unknown higher-order terms can be estimated from the residual $\mu$ dependence of the perturbative result, since the physical quantity should be independent of the renormalization scale $\mu$.
We choose the range $\mu = 2$--$4$ GeV to estimate this source of error.
Below the lower limit the perturbative result varies rapidly, which suggests that the perturbative expansion is no longer reliable.
In the moment method, the combination $r_n(\alpha_s(\mu),m_c(\mu))/m_c(\mu)$ has to be $\mu$ independent.
We generalize this procedure by allowing separate scales for $\alpha_s (\mu)$ and $m_c(\mu)$. Namely, we use the perturbative expansion written in terms of $\alpha_s(\mu_\alpha)$ and $m_c(\mu_m)$ with $\mu_\alpha \neq \mu_m$ \cite{Dehnadi:2015fra}. We estimate the truncation error using the range $\mu_\alpha \in \mu_m \pm 1$ GeV with 2 GeV $\leq \m{min}\{\mu_\alpha,\mu_m\}$ and $\m{max}\{\mu_\alpha,\mu_m\}\leq$ 4 GeV.
By allowing the possibility of $\mu _\alpha \neq \mu _m$, the estimated error becomes twice as large.
We adopt this choice to be conservative.
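Schematically, the scan over the two scales can be organized as in the
sketch below, where \texttt{extract\_mc} stands for a hypothetical
routine that solves the moment equations at a given pair of scales;
the grid step is arbitrary:

\begin{verbatim}
import itertools
import numpy as np

def truncation_spread(extract_mc, mu_min=2.0, mu_max=4.0, step=0.25):
    """Spread of m_c over the scale window described in the text:
    both scales in [2, 4] GeV and |mu_alpha - mu_m| <= 1 GeV.
    extract_mc(mu_alpha, mu_m) is a hypothetical routine solving
    the moment equations at the given pair of scales."""
    grid = np.arange(mu_min, mu_max + 1e-9, step)
    vals = np.array([extract_mc(ma, mm)
                     for ma, mm in itertools.product(grid, grid)
                     if abs(ma - mm) <= 1.0])
    return 0.5 * (vals.max() - vals.min())
\end{verbatim}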
The contribution from the gluon condensate, which appears in the operator product expansion of $r_n$, is another source of error. It can be written as
\be{
g_{2k} ^{GG} = \frac{\langle(\alpha_s/\pi)G_{\mu\nu} ^2\rangle}{2m_c(\mu)}\left(a_l + \frac{\alpha_s}{\pi}c_l\right),
}
where the coefficients $a_l$ and $c_l$ are known up to $O(\alpha_s ^2)$ \cite{Broadhurst:1994qj}.
The gluon condensate $\langle(\alpha_s/\pi)G_{\mu\nu} ^2\rangle$
is not well determined phenomenologically, {\it e.g.} $\langle(\alpha_s/\pi)G_{\mu\nu} ^2\rangle = 0.006 \pm 0.0012\m{\ GeV}^4$ from a $\tau$ decay analysis \cite{Geshkenbein:2001mn}.
In our analysis, we treat $\langle(\alpha_s/\pi)G_{\mu\nu} ^2\rangle$ as a free parameter and determine it from the lattice data together with $m_c(\mu)$ and $\alpha_s(\mu)$.
In the definition of the moments there appears the meson mass $m_{\eta_c}$, which is an input parameter.
Because our lattice calculation does not contain the electromagnetic and disconnected-diagram effects, we need to modify the mass of the $\eta_c$ to take them into account. The electromagnetic effect is expected to reduce the meson mass by 2.6(1.3) MeV \cite{Davies:2009tsa}, and the disconnected contribution also reduces the mass, by 2.4(8) MeV, according to a lattice study \cite{Follana:2006rc}. We therefore use the modified $m_{\eta_c}$ as an input, $m_{\eta_c} ^{\m{exp}}= 2983.6(0.7) + 2.4(0.8)_\m{Disc.} + 2.6(1.3)_\m{EM}$ MeV.
\begin{table}
\begin{center}
\begin{tabular}{c|c|cccccccc}
\hline
& & pert &$t_0 ^{1/2}$& stat & $O(a^4)$ & vol & $m_{\eta_c}^{\m{exp}}$ & disc & EM\\
\hline
$m_c(\mu)$ [GeV]&1.0033(96)\ \ \ \ & (77)\ \ &(49) & (4) & (30) & (4) & (3) & (4) & (6)\\
\hline
\end{tabular}
\vspace{0.5pt}
\begin{tabular}{c|c|cccccccc}
\hline
$\alpha_s(\mu)$\ \ \ \ \ \ \ \ \ \ \ \ & 0.2528(127)\ \ & (120) &(32) & (2) &\ (26) &\ \ (1) &\ (0) &\ \ (0) &\ \ (1)\\
\hline
\end{tabular}
\vspace{0.5pt}
\begin{tabular}{c|c|cccccccc}
\hline
$\frac{<(\alpha / \pi)G^2>}{m^4}$\ \ \ \ \ \ & $-$0.0006(78)\ \ & (68) &(29) & (3) &\ (22) &\ \ (3) &\ (2) &\ \ (3) &\ \ (5)\\
\hline
\end{tabular}
\caption{
Numerical results for $m_c(\mu)$ (top panel), $\alpha_s(\mu)$ (mid
panel) and $\frac{<(\alpha_s/\pi)G^2>}{m^4}$ (bottom panel)
at $\mu$ = 3~GeV.
The results are listed for choices of three input
quantities out of $R_8$, $R_{10}$ and $R_6/R_8$.
In addition to the central values with combined errors, the
breakdown of the error is presented.
They are the estimated errors from the truncation of perturbative
expansion, the input value of $t_0 ^{1/2}$, statistical, discretization error of $O(a^4)$ (or
$O(\alpha_sa^2)$),
finite volume, experimental data for $m_{\eta_c}^{\m{exp}}$,
disconnected contribution, electromagnetic effect, in the order
given.
The total error is estimated by adding the individual errors in
quadrature.
}
\label{nogluin2}
\end{center}
\end{table}
We include all of these error estimates,
namely the statistical error, the discretization effect of $O(a^4)$, the finite volume effect, the experimental value of $m_{\eta_c} ^\m{exp}$, and the disconnected and electromagnetic effects. Table \ref{nogluin2} lists the results for the charm quark mass $m_c(\mu)$ and the strong coupling $\alpha_s(\mu)$, as well as the gluon condensate $\langle(\alpha_s/\pi)G_{\mu\nu} ^2\rangle$, in the $\overline{\m{MS}}$ scheme at $\mu = 3$ GeV.
Figure \ref{fig:noingraph} shows the constraints on $m_c$ and $\alpha_s$ from the moments and their ratio. Since each moment puts a different constraint on these parameters, the charm quark mass $m_c(\mu)$ and the coupling constant $\alpha_s (\mu)$ can be determined.
Roughly speaking, the individual moments are more sensitive to $m_c (\mu)$, while the ratio $R_6/R_8$
is mainly sensitive to $\alpha_s(\mu)$.
In the final result, the dominant source of error is the truncation of the perturbative expansion, for all quantities.
The next largest are the discretization effect of $O(a^4)$ and the uncertainty of the lattice scale determined with the Wilson flow, $t_0 ^{1/2}$.
This means that, in order to achieve a more precise determination with this method,
yet another order of the perturbative expansion is needed.
\begin{figure}[tbp]
\centering
\includegraphics[width=8.9 cm, angle=0]{NoInGraph_PP.eps}
\caption{
Constraints on $m_c(\mu)$ and $\alpha_s(\mu)$
from the moments
$R_6$ (dotted curve), $R_8$ (dashed curve),
$R_{10}$ (long dashed curve), and $R_6/R_8$ (solid curve).
For each curve, the band represents the error due to the
truncation of perturbative expansion.
}
\label{fig:noingraph}
\end{figure}
\vspace{17pt}
The lattice QCD simulation has been performed on Blue Gene/Q supercomputer at the High Energy Accelerator Research Organization (KEK) under the Large Scale Simulation Program (Nos. 13/14-4, 14/15-10, 15/16-09). This work is supported in part by the Grant-in-Aid of the Japanese Ministry of Education (No. 25800147, 26247043, 26400259).
\input{refs.dat}
\end{document}
\section{Introduction}
Solving the electronic structure problem for molecules, materials, and interfaces is of fundamental importance to a large number of disciplines including physics, chemistry, and materials science.
Since the early development of quantum mechanics, it has been noted, by Dirac among others, that ``...approximate, practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation" \cite{dirac}. Historically, this has meant invoking approximate forms of the underlying interactions (e.g. mean field, tight binding,
etc.), or relying on phenomenological fits to a limited number of either experimental observations or theoretical results (e.g. force fields) \cite{Cherukara2016,Riera2016,Jaramillo-Botero2014,VanBeest1990,Ponder2003,Hornak2006,cole2007}. The development of feature-based models is not new in the scientific literature. Indeed, prior even to the acceptance of the atomic hypothesis, van der Waals argued for an equation of state based on two physical features \cite{Waals1873}. Machine learning (i.e. fitting parameters within a model) has been used in physics and chemistry since the dawn of the computer age. The term machine learning is new; the approach is not.
More recently, high-level \textit{ab initio} calculations have been used to train artificial neural networks\xspace to fit high-dimensional interaction models \cite{Li2013,Behler2007,Morawietz2013,Behler2008,Dolgirev2016,Artrith2016}, and to make informed predictions about material properties \cite{Tian2017,Rupp2012}. These approaches have proven to be quite powerful, yielding models trained for specific atomic species or based upon hand-selected geometric features \cite{Faber2017,Montavon2013,Lopez-Bezanilla2014}. Hand-selected features are arguably a significant limitation of such approaches, with the outcomes dependent upon the choice of input representation and the inclusion of all relevant features. This limitation is well known in the fields of handwriting recognition and image classification, where the performance of the traditional hand-selected feature approach has stagnated \cite{Jia2014}.
\begin{figure*}
\includegraphics[width=0.95\textwidth]{schematic_and_net2.pdf}
\caption{In this work, we use the machinery of deep learning to learn the mapping between potential and energy, bypassing the need to numerically solve the Schr\"odinger equation, and the need for computing wavefunctions. The architecture we used (shown here) consisted primarily of convolutional layers capable of extracting relevant features of the input potentials. Two fully-connected layers at the end serve as a decision layer, mapping the automatically extracted features to the desired output quantity. No manual feature-selection is necessary; this is a ``featureless-learning" approach.\label{schematic}}
\end{figure*}
Such feature-based approaches are also being used in materials discovery \cite{Curtarolo2003,Hautier2010,Saad2012} to assist materials scientists in efficiently targeting their search at promising material candidates. Unsupervised learning techniques have been used to identify phases in many-body atomic configurations \cite{Wang2016}. In previous work, an artificial neural network\xspace was shown to interpolate the mapping of position to wavefunction for a specific electrostatic potential \cite{Monterola2001,Shirvany2008, Mirzaei2010}, but the fit was not transferable, a limitation also present in other applications of artificial neural networks\xspace to partial differential equations\xspace \cite{VanMilligen1995,Carleo2016}. By transferable, we mean that a model trained on a particular form of partial differential equation\xspace will accurately and reliably predict results for examples of the same form (in our case, different confining potentials).
Machine learning can also be used to accelerate or bypass some of the heavy machinery of the \textit{ab initio} method itself. In \cite{Snyder2012}, the authors replaced the kinetic energy functional within density functional theory with a machine-learned one, and in \cite{Brockherde2016} and \cite{Yao2016}, the authors ``learned'' the mappings from potential to electron density, and charge density to kinetic energy, respectively.
Here, we use a fundamentally different approach inspired by the successful application of deep convolutional neural networks to problems in computer vision \cite{Lecun1998,Simard2003,ciresan2011flexibles,Szegedy2014} and computational games \cite{Silver2016,Mnih2013}. Rather than seeking an appropriate input representation to capture the relevant physical attributes of a system, we train a highly flexible model on an enormous collection of ground-truth examples. In doing so, the deep neural network learns \textit{both the features (in weight space) and the mapping} required to produce the desired output. This approach does not depend on the appropriate selection of input representations and features; we provide the same data to both the deep neural network and the numerical method. As such, we call this ``featureless learning''. Such an approach may offer a more scalable and parallizable approach to large-scale electronic structure problems than existing methods can offer.
In this Letter, we demonstrate the success of a \textit{featureless} machine learning approach, a convolutional deep neural network\xspace, at learning the mapping between a confining electrostatic potential and quantities such as the ground state energy, kinetic energy, and first excited-state of a bound electron. The excellent performance of our model suggests deep learning as an important new direction for treating multi-electron systems in materials.
It is known that a sufficiently large artificial neural network can approximate any continuous mapping \cite{Funahashi1989,Castro2000} but the cost of optimizing such a network can be prohibitive. Convolutional neural networks make computation feasible by exploiting the spatial structure of input data \cite{Krizhevsky2012}, similar to how the neurons in the visual cortex function \cite{Hubel1968}. When multiple convolutional layers are included, the network is called a deep convolutional neural network, forming a hierarchy of feature detection \cite{Bengio:2009}. This makes them particularly well suited to data rooted in physical origin \cite{Mehta2014,Lin2016}, since many physical systems also display a structural hierarchy. Applications of such a network structure in the field of electronic structure, however, are few (although recent work focused on training against a geometric matrix representation looks particularly promising \cite{Schutt2017}).
\section{Methods}
\subsection{Training set: choice of potentials}
Developing a deep learning model involves both the design of the network architecture and the acquisition of training data. The latter is the most important aspect of a machine learning model, as it defines the transferability of the resulting model. We investigated four classes of potentials: simple harmonic oscillators (SHO), ``infinite" wells (IW, i.e. ``particle in a box''), double-well inverted Gaussians (DIG), and random potentials (RND). Each potential can be thought of as a grayscale image: a grid of floating-point numbers.
\subsection{Numerical solver}
\begin{figure}
\includegraphics[width=0.85\columnwidth]{3d_wfn_eeee.pdf}
\caption{Wavefunctions (probability density) $|\psi_0|^2$ and the corresponding potentials $V(r)$ for two random\xspace potentials. \label{wavefunctions}}
\end{figure}
We implemented a standard finite-difference \cite{Press2007} method to solve the eigenvalue problem
\wcexclude{
\begin{equation}
\hat H\psi\equiv (\hat T + \hat V)\psi = \varepsilon\psi
\end{equation}
}
for each potential $V$ we created. The potentials were generated with a dynamic range and length scale suitable to produce ground-state energies within a physically relevant range. With the random\xspace potentials, special care was taken to ensure that some training examples produced non-trivial wavefunctions (\figref{wavefunctions}). Atomic units are used, such that $\hbar = m_\mathrm{e} = 1$. The potentials are represented on a square domain from $-20$ to $20$ a.u., discretized on a $256\times 256$ grid. As the simple harmonic oscillator\xspace potentials have an analytic solution, we used this as reference with which to validate the accuracy of the solver. The median absolute error between the analytic and the calculated energies for all simple harmonic oscillator\xspace potentials was $0.12$ mHa.
We discuss the generation of all potentials further in the Appendices.
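For reference, a minimal sketch of such a finite-difference solver is
given below; the five-point Laplacian, sparse-matrix assembly, and
grid size shown here are illustrative choices rather than a
description of our exact implementation. It reproduces the analytic
simple harmonic oscillator\xspace ground-state energy.

\begin{verbatim}
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import eigsh

def lowest_states(V, L=40.0, k=2):
    """Lowest k eigenpairs of H = -Laplacian/2 + V on an n x n grid
    spanning [-L/2, L/2]^2, in atomic units (hbar = m_e = 1)."""
    n = V.shape[0]
    h = L / (n - 1)
    lap1d = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    T = -0.5 * (kron(lap1d, eye(n)) + kron(eye(n), lap1d))
    eps, psi = eigsh(T + diags(V.ravel()), k=k, which='SA')
    return eps, psi.reshape(n, n, k)

# check against the analytic SHO result: k_x = k_y = 1 gives eps_0 = 1
n = 128
x = np.linspace(-20.0, 20.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')
eps, _ = lowest_states(0.5 * (X**2 + Y**2))
print(eps[0])   # ~1.0 Ha
\end{verbatim}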
The simple harmonic oscillator\xspace presents the simplest case for a convolutional neural network\xspace as there is an analytic solution dependent on two simple parameters ($k_x$ and $k_y$) which uniquely define the ground-state energy of a single electron ($\varepsilon_0=\frac{\hbar}{2}(\sqrt{k_x} + \sqrt{k_y})$). Furthermore, these parameters represent a very physical and visible quantity: the curvature of the potential in the two primary axes. Although these parameters are not provided to the neural network explicitly, the fact that a simple mapping exists means that the convolutional neural network\xspace need only learn it to accurately predict energies.
A similar situation exists for the infinite well\xspace. Like the simple harmonic oscillator\xspace, the ground state energy depends only on the width of the well in the two dimensions ($\varepsilon_0 = \frac{1}{2}\pi^2 \hbar^2 (L_x^{-2} +L_y^{-2}) $). It would be no surprise if even a modest network architecture is able to accurately ``discover" this mapping. An untrained human, given a ruler, sufficient examples, and an abundance of time would likely succeed in determining this mapping.
The double-well inverted Gaussian\xspace dataset is more complex in two respects. First, the potential, generated by summing a pair of 2D-Gaussians, depends on significantly more parameters; the depth, width, and aspect ratio of each Gaussian, as well as the relative positions of the wells will impact the ground state energy. Furthermore, there is no known analytical solution for a single electron in a potential well of this nature. There is, however, still a concise function which describes the underlying potential, and while this is not directly accessible to the convolutional neural network\xspace, one must wonder if the existence of such simplifies the task of the convolutional neural network\xspace. Gaussian confining potentials appear in works relating to quantum dots \cite{Gharaati2010,Gomez2008}.
The random\xspace dataset presents the ultimate challenge. Each random\xspace potential is generated by a multi-step process with randomness introduced at numerous steps along the way. There is no closed-form equation to represent the potentials, and certainly not the eigenenergies. A convolutional neural network\xspace tasked with learning the solution to the Schr\"odinger equation through these examples would have to base its predictions on many individual features, truly ``learning'' the mapping of potential to energy. One might question our omission of the Coulomb potential as an additional canonical example. The singular nature of the Coulomb potential is difficult to represent within a finite dynamic range, and, more importantly, the electronic structure methods that we would ultimately seek to reproduce already have frameworks in place to deal with these singularities (e.g. pseudopotentials).
\subsection{Deep neural network}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{loss_vs_epoch_w_inset.pdf}
\caption{The training loss curve for each model we trained.
Since the training loss is based upon the training datasets, it does not necessarily indicate how well the model generalizes to new examples. The convergence seen here indicates that 1000 epochs is an adequate stopping point; further training would produce further reduction in loss, however 1000 epochs provides sufficient evidence that the method performs well on the most interesting (i.e. random) potentials. In the inset, we see that two non-reducing convolution layers is a consistent balance of training time and low error. \label{lossdecrease}}
\end{figure}
We chose to use a simple, yet deep neural network architecture (shown in \figref{schematic}) composed of a number of repeated units of convolutional layers, with sizes chosen for a balance of speed and accuracy (inset of \figref{lossdecrease}).
We use two different types of convolutional layers, which we call ``reducing'' and ``non-reducing''.
The 7 reducing layers operate with filter (kernel) sizes of $3\times 3$ pixels. Each reducing layer operates with 64 filters and a stride of $2\times 2$, effectively reducing the image resolution by a factor of two at each step. In between each pair of these reducing convolutional layers, we have inserted two convolutional layers (for a total of 12) which operate with $16$ filters of size $4\times4$. These filters have unit stride, and therefore preserve the resolution of the image. The purpose of these layers is to add additional trainable parameters to the network. All convolutional layers have ReLU activation.
The final convolutional layer is fed into a fully-connected layer of width 1024, also with ReLU activation. This layer feeds into a final fully-connected layer with a single output. This output is the output value of the DNN. It is used to compute the mean-squared error between the true label and the predicted label, also known as the loss.
We used the AdaDelta \cite{Zeiler2012} optimization scheme with a global learning rate of 0.001 to minimize this loss function (\figref{lossdecrease}), monitoring its value as training proceeded. We found that after 1000 epochs (1000 times through all the training examples), the loss no longer decreased significantly.
We built a custom TensorFlow \cite{GoogleResearch2015} implementation in order to make use of 4 graphical processing units (GPUs) in parallel. We placed a complete copy of the neural network on each of the 4 GPUs, so that each can compute a forward and back-propagation iteration on one full batch of images. Thus our effective batch size was 1000 images per iteration (250 per GPU). After each iteration, the GPUs share their independently computed gradients with the optimizer and the optimizer moves the parameters in the direction that minimizes the loss function. Unless otherwise specified, all training datasets consisted of 200,000 training examples and training was run for 1000 epochs. All reported errors are based on evaluating the trained model on validation datasets consisting of 50,000 potentials not accessible to the network during the training process.
\section{Results}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{results_01_e.pdf}
\caption{Histograms of the true vs. predicted energies for each example in the test set indicate the performance of the various models. The insets show the distribution of error away from the diagonal line representing perfect predictions. A $1\ \text{mHa}^2$ square bin is used for the main histograms, and a 1 mHa bin size for the inset histograms. During training, the neural network was not exposed to the examples on which these plots are based. The higher error at high energies in (d) is due to fewer training examples being present in the dataset at these energies. The histogram shown in (d) is for the further-trained model, described in the text. \label{results01}}
\end{figure*}
\figref{results01}(a-d) displays the results for the simple harmonic oscillator\xspace, infinite well\xspace, double-well inverted Gaussian\xspace, and random\xspace potentials. The simple harmonic oscillator\xspace, being one of the simplest potentials, performed extremely well. The trained model was able to predict the ground state energies with a median absolute error (MAE\xspace) of 1.51 mHa\xspace.
The infinite well\xspace potentials performed moderately well with a MAE\xspace of 5.04 mHa\xspace. This is notably poorer than for the simple harmonic oscillator\xspace potentials, despite their similarity in being analytically dependent upon two simple parameters. This is likely due to the sharp discontinuity associated with the infinite well\xspace potentials, combined with the sparsity of information present in the binary-valued potentials.
The model trained on the double-well inverted Gaussian\xspace potentials performed moderately well with a MAE\xspace of 2.70 mHa\xspace and the random\xspace potentials performed quite well with a MAE\xspace of 2.13 mHa. We noticed, however, that the loss was not completely converged at 1000 epochs, so we provided an additional 200,000 training examples to the network and allowed it to train for an additional 1000 epochs. With this added training, the model performed exceptionally well, with a MAE\xspace of 1.49 mHa\xspace, below the threshold of chemical accuracy (1 kcal/mol, 1.6 mHa). In \figref{results01}(d), it is evident that the model performs more poorly at high energies, a result of the relative absence of high-energy training examples in the dataset. Given the great diversity in this latter set of potentials, it is impressive that the convolutional neural network\xspace was able to learn how to predict the energy with such a high degree of accuracy.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{results_02_e.pdf}
\caption{Histograms of the true vs. predicted energies for the model trained on the (a) kinetic energy, and (b) excited-state energy of the double-well inverted Gaussian\xspace. \label{results02}}
\end{figure}
\exclude{
\begin{table
\caption{\label{resultsTable}}
\begin{ruledtabular}
\begin{tabular}{lcc}
Model & Train Examples & MAE\xspace \\
\hline
simple harmonic oscillator\xspace & 200k & \1.51 mHa\xspace \\
infinite well\xspace & 200k & \5.04 mHa\xspace \\
double-well inverted Gaussian\xspace & 200k & \2.70 mHa\xspace \\
random\xspace & 200k & \1.49 mHa\xspace \\
random\xspace & 1M & \\MAErndcrsho \\
random\xspace, eval. on double-well inverted Gaussian\xspace & 1M & \10.93 mHa\xspace \\
random\xspace, $\langle\hat T \rangle$ & 1M & \2.98 mHa\xspace \\
\end{tabular}
\end{ruledtabular}
\end{table}
}
Now that we have a trained model that performs well on the random\xspace test set, we investigated its transferability to another class of potentials. The model trained on the random\xspace dataset is able to predict the ground-state energy of the double-well inverted Gaussian\xspace potentials with a MAE\xspace of 2.94 mHa\xspace. We can see in \figref{results02}(c) that the model fails at high energies, an expected result given that the model was not exposed to many examples in this energy regime during training on the overall lower-energy random\xspace dataset. This moderately good performance is not entirely surprising; the production of the random\xspace potentials includes an element of Gaussian blurring, so the neural network would have been exposed to features similar to what it would see in the double-well inverted Gaussian\xspace dataset. However, this moderate performance is testament to the transferability of convolutional neural network\xspace models. Furthermore, we trained a model on an equal mixture of all four classes of potentials. It performs moderately with a MAE of 5.90 mHa\xspace. This error could be reduced through further tuning of the network architecture allowing it to better capture the higher variation in the dataset.
The total energy is just one of the many quantities associated with these one-electron systems. To demonstrate the applicability of deep neural network\xspace to other quantities, we trained a model on the first excited-state energy $\varepsilon_1$ of the double-well inverted Gaussian\xspace potentials. The model achieved a MAE\xspace of 10.93 mHa\xspace. We now have two models capable of predicting the ground-state, and first excited-state energies separately, demonstrating that a neural network can learn quantities other than the ground-state energy.
The ground-state and first excited-state are both eigenvalues of the Hamiltonian. Therefore, we investigated the training of a model on the expectation value of the kinetic energy, $\langle \hat T \rangle = \langle \psi_0 | \hat T | \psi_0 \rangle $, under the ground state wavefunction $\psi_0$ that we computed numerically for the random\xspace potentials. Since $\hat H$ and $\hat T$ do not
commute, the prediction of $\langle \hat T \rangle$ can no longer be summarized as an eigenvalue problem. The trained model predicts the kinetic energy value with a MAE\xspace of 2.98 mHa\xspace. While the spread of testing examples in \figref{results02}(a) suggests the model performs more poorly, the absolute error is still small.
\section{Conclusions}
\exclude{
Given the wide variety of machine learning methods which exist, one might question the choice of convolutional neural networks\xspace over other ``simpler'' approaches. Indeed, it is only in recent years that the widespread availability of accelerator hardware and software such as GPUs has made it possible to use this computationally heavy approach for practical problems. Convolutional neural networks seem like a good choice for a number of important reasons:
\begin{enumerate}
\item Convolutional operations make use of the spatial structure of the data. Physical quantities and phenomena (e.g. the electrostatic potential, electron density, and N-body wavefunction) are amenable to convolutional neural network\xspace.
\item The computational requirements for convolutional neural network\xspace are easily parallelized. The training workload for more involved problems can be distributed across large computing platforms \cite{GoogleResearch2015}.
\item The recent successes of convolutional neural network\xspace mean there continues to be a strong community drive to develop efficient and scalable implementations. This has resulted in rapid development of scalable implementations which make use of modern hardware architectures (including multi-GPU, multi-node distributed systems).
\item The diverse options in deep neural network\xspace architectures allow for a flexible ``basis'' for more complicated problems. Other machine learning methods require feature selection. deep neural network\xspace have consistently proven their ability to extract meaningful relationships from structured data without extensive intervention.
\end{enumerate}
}
We note that many other machine learning algorithms exist and have traditionally seen great success, such as kernel ridge regression \cite{Brockherde2016,Arsenault2014,Li2016,Suzuki2016,Faber2017,Lopez-Bezanilla2014} and random forests \cite{Faber2017,Ward2015}. Like these algorithms, convolutional deep neural networks have the ability to ``learn'' relevant features and form a non-linear input-to-output mapping without prior formulation of an input representation \cite{Kearnes2016,Schutt2017}. In our tests, these methods perform more poorly and scale such that a large number of training examples is infeasible. We have included a comparison of these alternative machine learning methods in the Appendices, justifying our decision of using a deep convolutional neural network. One notable limitation of our approach is that the efficient training and evaluation of the deep neural network\xspace requires uniformity in the input size. Future work will focus on an approach that would allow transferability to variable input sizes.
Additionally, an electrostatic potential defined on a finite grid can be rotated in integer multiples of \ang{90}, without a change to the electrostatic energies. Convolutional deep neural networks do not natively capture such rotational invariance. Clearly, this is a problem in any application of deep neural networks (e.g. image classification, etc.), and various techniques are used to compensate for the desired invariance. The common approach is to train the network on an augmented dataset consisting both of the original training set and rotated copies of the training data \cite{Dieleman2015}. In this way, the network learns a rotationally invariant set of features.
As a demonstration of this technique, we tuned our model trained on the random potentials by training it further on an augmented dataset of rotated random potentials. We then tested our model on the original testing dataset, as well as on a rotated copy of the test set. The median absolute error in both cases was less than 1.6 mHa. The median absolute difference in predicted energy between the rotated and unaltered test sets was, however, larger, at 1.7 mHa. This approach to training the deep neural network is not strictly rotationally invariant; however, the numerical error experienced due to a rotation was of the same order as the error of the method itself. Recent proposals to modify the network architecture itself to make it rotationally invariant are promising, as the additional training cost incurred with using an augmented dataset could be avoided \cite{Worrall2016,Dieleman2016}.
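Such an augmentation is straightforward to implement; assuming the
potentials are stored as an array of shape $(N, 256, 256)$, a minimal
sketch is:

\begin{verbatim}
import numpy as np

def augment_with_rotations(potentials, energies):
    """Quadruple a training set of shape (N, 256, 256) with the
    90-degree rotations, which leave the eigenenergies unchanged."""
    rotated = [np.rot90(potentials, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated), np.tile(energies, 4)
\end{verbatim}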
In summary, convolutional deep neural networks\xspace are promising candidates for application to electronic structure calculations as they are designed for data which has a spatial encoding of information.
\exclude{For this case, even though our convolutional neural network\xspace produces a highly accurate result, and does so much faster than our likely less-than-optimal finite-difference numerical solver, the time-to-solution is sufficiently small in absolute terms that the application of a convolutional neural network\xspace is not revolutionary.}
As the number of electrons in a system increases, the computational complexity grows polynomially. Accurate electronic structure methods (e.g. coupled cluster) exhibit a scaling with respect to the number of particles of $N^7$ and even the popular Kohn-Sham formalism of density functional theory scales as $N^3$ \cite{Kohn1995,Kucharski1992}. The evaluation of a convolutional neural network exhibits no such scaling, and while the training process for more complicated systems would be more expensive, this is a one-time cost.
In this work, we have taken a simple problem (one electron in a confining potential), and demonstrated that a convolutional neural network can automatically extract features and learn the mapping between $V(r)$ and the ground-state energy $\varepsilon_0$ as well as the kinetic energy $\langle\hat T\rangle$, and first excited-state energy $\varepsilon_1$. Although our focus here has been on a particular type of problem, namely an electron in a confining 2D well, the concepts here are directly applicable to many problems in physics and engineering. Ultimately, we have demonstrated the ability of a deep neural network\xspace to learn, through example alone, how to rapidly approximate the solution to a set of partial differential equations. A generalizable, transferable deep learning approach to solving partial differential equations would impact all fields of theoretical physics and mathematics.
\wcexclude{
\section{Acknowledgements}
The authors would like to acknowledge fruitful discussions with P. Bunker, P. Darancet, D. Klug, and D. Prendergast. K.M. and I.T. acknowledge funding from NSERC and SOSCIP. Compute resources were provided by SOSCIP, Compute Canada, NRC, and an NVIDIA Faculty Hardware Grant.
}
| proofpile-arXiv_065-7582 | {
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
\section{Introduction}
\label{sec|intro}
Supermassive black holes, with masses $\mbhe\sim 10^6 $--$10^9 \msune$, have been identified at the centres of all local galaxies observed with high enough sensitivity \citep[see, e.g.,][for reviews]{FerrareseFord,ShankarReview,KormendyHo,GrahamReview15,2013ApJ...764..184M}. A surprising finding that has puzzled astrophysicists for the last forty years or so is that the masses of these black holes appear to be tightly linked to the global properties of their hosts, such as stellar mass or velocity dispersion, defined on scales up to a thousand times the sphere of influence of the central black hole. The origin of these correlations is still hotly debated, though there is general agreement that understanding this origin will shed light on the more general and still unsolved problem of the formation of galaxies.
Supermassive black holes are thought to have formed in a highly star-forming, gas-rich phase at early cosmological epochs. Central ``seed'' black holes are thought to gradually grow via mainly gas accretion, eventually becoming massive enough to shine as quasars or Seyfert galaxies and trigger powerful winds and/or jets capable of removing gas and quenching star formation in the host galaxy. This feedback from active black holes has become a key ingredient in almost all galaxy evolution models \citep[e.g.,][]{Granato04,Ciras05,Vittorini05,Croton06,Hopkins06,Lapi06,Shankar06,Monaco07,Guo11,Barausse12, dubois12a,dubois16,FF15,Bower16}. At later times, both the host galaxy and its black hole may further increase their mass (and size) via mergers with other galaxies/black holes, which could contribute up to $\sim 80\%$ of their final mass \citep[e.g.,][]{DeLucia07,Malbon07,oser10,Shankar10,oser12,Gonzalez11,Shankar13,dubois13,dubois16,Rod16,welker17}. Additional mechanisms, besides mergers, can also contribute to the growth of the stellar bulge and feeding of the central black hole, most notably disc instabilities \citep[e.g.,][]{Bower06,Bournaud11a,DiMatteo12,Barausse12,dubois12b}.
Recently, \citet[][see also \citealt{Bernardi07}, \citealt{gultekin}, \citealt{morabito} and \citealt{Remco15}]{Shankar16BH} showed that the local sample of galaxies with dynamical mass measurements of supermassive black holes is biased. Local galaxies hosting supermassive black holes, irrespective of their exact morphological type or the aperture within which the velocity dispersion is measured, typically present velocity dispersions that are substantially larger than those of a very large and unbiased sample from the Sloan Digital Sky Survey (SDSS) with similar stellar masses. One of the main reasons for this bias can be traced back to the observationally imposed requirement that the black-hole gravitational sphere of influence must be resolved for the black-hole mass to be reliably estimated. Via dedicated Monte Carlo simulations and accurate analysis of the residuals around the observed black-hole scaling relations, \citet{Shankar16BH} found the velocity dispersion to be a more fundamental quantity than stellar mass or effective radius. Indeed, the observed black-hole scaling
relation involving the stellar mass was found to be much more biased than the one involving velocity dispersion (up to an order of magnitude in normalisation), and its apparent tightness could be entirely ascribed to a selection effect.
\citet{Shankar16BH} also suggested that a selection bias more prominent in stellar mass than in velocity dispersion may explain several discrepancies often reported in the literature, \textit{i.e.} the fact that the observed relation between black-hole and stellar mass predicts a local black-hole mass density two to three times higher than inferred from the relation between black-hole mass and velocity dispersion \citep[e.g.,][]{Graham07BHMF,Tundo07,SWM}. \citet{shankar_new} further extended the comparison between the set of local galaxies with dynamically measured black-hole masses and SDSS galaxies. They found evidence that even the correlation between black-hole mass and S\'{e}rsic index, recently claimed to be even tighter than the one with velocity dispersion \citep{Savo16n}, is severely biased, with the correlation to velocity dispersion remaining more fundamental. The bias in the local scaling relations could also have profound implications for the background of gravitational waves expected from binary supermassive black holes, which could be a factor of a few lower than what current pulsar timing arrays can effectively detect \citep[][]{Sesana16}.
The aim of the present work is to revisit the local scaling relations between the masses of supermassive black holes and host-galaxy properties, namely velocity dispersion and stellar mass, in the context of a comprehensive semi-analytic model of galaxy formation and evolution \citep[][]{Barausse12}. This model also evolves supermassive black holes self-consistently from high-redshift ``seeds'', and accounts for black-hole mergers and for the feedback from active galactic nuclei (AGNs). After briefly reviewing the model in \sect\ref{sec|model}, we discuss (Sections \ref{sec:normalisation} and \ref{sec:dispersion}) the slope, normalisation and scatter of the black-hole scaling relations with and without the aforementioned selection effect on the resolvability of the black-hole sphere of influence.
In Section \ref{sec:residuals} we study the correlations between the residuals from fitted scaling relations, and show that they are useful for constraining theoretical models such as ours as well as
the hydrodynamic cosmological simulation Horizon-AGN \citep{dubois,dubois16,2016MNRAS.460.2979V}. Finally, we will discuss our results in \sect\ref{sec|discu} and summarise our conclusions in \sect\ref{sec|conclu}.
\begin{figure*}
\center{\includegraphics[width=14truecm]{Figure1.eps}
\caption{Left: Velocity dispersion as a function of total stellar mass for SDSS galaxies with $P(E+S0)>0.8$ (long-dashed red line), with its 1$\sigma$ dispersion (grey), compared with the prediction of the light-seed model for bulge-dominated/elliptical galaxies, i.e. galaxies with $B/T>0.7$ (solid line, with dotted lines marking the 70\% confidence region). The model's predictions for heavy seeds are very similar. Right: Stellar mass function of SDSS galaxies based on S\'{e}rsic plus Exponential fits to the observed surface brightness (shaded area, accounting for the uncertainties due to the stellar population modelling, fitting, and assumptions about dust in the galaxies, c.f. \citealt{Bernardi13,Bernardi17}). Solid and dotted lines show the prediction of the model with light seeds and its (Poissonian) $1\sigma$ uncertainties. The predictions for heavy seeds are very similar.
\label{fig|ScalingsMstar}}}
\end{figure*}
\section{Model}
\label{sec|model}
The full description of the semi-analytic model adopted as a reference in this work can be found in \citet{Barausse12}, with later updates
of the prescriptions for the black-hole spin and nuclear star cluster evolution described respectively in \citet{Sesana14} and \citet{Antonini15,Antonini15b}. Here, we briefly summarise the key points about the growth of the central supermassive black holes, which are the main focus of this paper.
The model is built on top of dark-matter merger trees generated via extended Press-Schechter algorithms \citep[e.g.,][]{Press74,Parki08} tuned to reproduce the results of N-body simulations~\citep{Parki08}. Galaxies form in each halo via the interplay and balance of gas cooling, star formation and (supernova) feedback. Dark matter haloes are also initially seeded with either \emph{light} black holes of $M_{\rm seed}\sim200\, \msune$ (to be interpreted \textit{e.g.} as the remnants of PopIII stars), or with \emph{heavy} black holes of mass $M_{\rm seed}\sim 10^5\, \msune$, which may arise for instance from protogalactic disc instabilities. The seeding of haloes is assumed to happen at early epochs $z>15$, with halo occupation fractions depending on the specific seeding model \citep[see][for details]{Barausse12,Klein16}.
In our model, seed black holes initially grow via (mainly) gas accretion from a gas reservoir, which
is in turn assumed to form at a rate proportional to the bulge star formation rate \citep[e.g.,][]{Granato04,Lapi06}. As a result, the feeding of this reservoir and the ensuing black-hole accretion events typically happen after star formation bursts triggered by major galactic mergers and disc instabilities.
In both their radiatively efficient (``quasar'') and inefficient (``radio'') accretion modes, the black holes also exert a feedback on the host galaxies,
thus reducing their (hot and cold) gas content and quenching star formation. As discussed by a number of groups \citep[][]{Granato04,Ciras05}, AGN feedback prescriptions such as these tend to induce a correlation between black hole mass and velocity dispersion of the bulge component. Also accounted for by the model is black-hole growth via black-hole mergers, following the coalescence of the host galaxies. This mechanism becomes particularly important for high black-hole masses at recent epochs.
The model is calibrated against a set of observables, such as the local stellar and black-hole mass functions, the local gas fraction, the star-formation history, the AGN luminosity function,
the local morphological fractions, and the correlations between black holes and galaxies and between black holes and nuclear star clusters~\citep[c.f. ][]{Barausse12,Sesana14,Antonini15,Antonini15b}. In more detail, as we will show in the following (c.f. Fig.~\ref{fig|ScalingsLight}), the
model's default calibration attempts to match the observed \mbh-\sis\ relation without accounting for any observational bias (on morphological type or on the resolvability of the black-hole influence sphere).
\section{Results}
\label{sec|results}
We will now compare the predictions of our model with observations, focusing on the normalisation of the scaling relations and on the role played by selection biases; the dispersion around the scaling relations; and the correlations between the residuals of the data from the scaling relations.
\begin{figure*}
\center{\includegraphics[width=14truecm]{Figure2.eps}
\caption{Black-hole mass as a function of velocity dispersion (left panels) and total stellar mass (right panels) as predicted by our model with light seeds (the results for heavy seeds are qualitatively similar). Results are shown for the full outputs of the model (labelled as ``Intrinsic'', top panels), and for the subsample of black holes with a gravitational sphere of influence above 0.1'' (labelled as ``Observed'', middle and bottom panels; see text for details). The solid and dotted blue lines in all panels represent the medians and 70\% confidence region of the distributions. The dashed magenta lines in the left panels are the median model predictions when assigning velocity dispersions to galaxies from the observed SDSS $\sigma_{\rm HL}-\mstare$ relation (long-dashed red line in the left panel of \figu\ref{fig|ScalingsMstar}). Long-dashed red lines are the intrinsic scaling relations as inferred from Monte Carlo simulations by \citet{Shankar16BH}. Blue diamonds are data collected and updated by \citet{Savo15} on local galaxies with dynamical measurements of supermassive black holes. Note that observational biases can increase the normalisation and reduce the scatter in the intrinsic scaling relations.
\label{fig|ScalingsLight}}}
\end{figure*}
\subsection{The normalisation of the scaling relations and the role of selection bias}
\label{sec:normalisation}
The left panel of \figu\ref{fig|ScalingsMstar} shows the relation between velocity dispersion and stellar mass for
early-type galaxies in the SDSS. Here `early-type' means that the probability of being elliptical or lenticular, $p$(E+S0), according to the automatic morphological classification of \citet{Huertas11}, exceeds 0.8. We restrict to this specific SDSS subsample as velocity dispersions in late-type galaxies are not spatially resolved, though the correlation does not depend on the exact cut in $p$(E+S0). For consistency with the data of \citet{Savo15} to which we will compare, we follow \citet{Shankar16BH} and correct the velocity dispersions $\sigma_{\rm HL}$, as in \citet{Cappellari06}, to a common aperture of 0.595 kpc \citep[\textit{i.e.} the one adopted by the Hyperleda data base,][]{Paturel03}. Henceforth, unless stated otherwise, we will always define velocity dispersions \sis\ at the aperture of Hyperleda. Stellar masses \mstar\ are from \citet{Bernardi13}. They are the product of luminosity $L$ and mass-to-light ratio $\mstare/L$; the $L$ values are from \citet{2015MNRAS.446.3943M}, based on S\'{e}rsic+Exponential fits to the light profiles.
The black solid line marks the median velocity dispersion-stellar mass relation as predicted by the model (for bulge-dominated/elliptical galaxies only) and black dotted lines show the 15th and 85th percentiles of the predicted distribution (at fixed stellar mass). Central velocity dispersions in the model are computed as $\sigma=A \sqrt{G M_{\rm b}/{r_{\rm b}}} [1+(V_{\rm b}/\sigma)^2]$,
where $M_{\rm b}$ is the bulge dynamical mass, $r_{\rm b}$ is the scale radius of the Hernquist profile \citep[which the model adopts to describe the bulge, see][]{Barausse12}, $A\approx 0.4$ accounts for the anisotropy of the distribution function of the bulge stellar population \citep[c.f.][\figu2, lower panel]{Baes02},
and the ratio $V_{\rm b}/\sigma$ accounts for the contribution of the bulge rotation and is modeled based on observations \citep[c.f.][for details]{Sesana14}.
As can be seen, the predicted correlation is similar to the observed one, although slightly flatter.
\begin{figure*}
\center{\includegraphics[width=14truecm]{Figure3.eps}
\caption{Same as \figu\ref{fig|ScalingsLight}, but for galaxies with a bulge-to-total ratio $B/T>0.7$ (and for light seeds; the results are similar for heavy seeds). The results are broadly similar to \figu\ref{fig|ScalingsLight}, though the dispersion of the model's intrinsic predictions is substantially lower than for the full galaxy sample. Note that the observational dataset is also restricted to early-type galaxies only, to match the sample of simulated galaxies.
\label{fig|ScalingsETGs}}}
\end{figure*}
For completeness, the right panel of \figu\ref{fig|ScalingsMstar} compares the stellar mass function predicted by the model with the observed one \citep{Bernardi13,Bernardi17}.
While the model lies slightly above the data at the highest masses, it lies below over the range $2\times 10^{10} \lesssim \mstare/\msune \lesssim 10^{11}$. This is not a major issue in the present context, since the model is consistent with the empirical galaxy scaling relations, and most notably with the \sis-\mstar\ relation shown in the left panel.
Having checked that our model reproduces the dynamical scaling relations of early-type galaxies, we now study the scaling relations with the central supermassive black hole. \figu\ref{fig|ScalingsLight} compares the model (with no morphological selection) with the whole sample (i.e., spirals as well as ellipticals and lenticulars) of \citet[][blue diamonds]{Savo15}. The left and right panels show the scaling of black hole mass with velocity dispersion and total stellar mass, respectively. In each panel, the solid and dotted lines show the median and the region containing 70\% of the model objects in a given bin. The top panels show the full black-hole sample, while the middle and bottom panels only show the subset for which the sphere of influence exceeds the typical (HST) resolution limit,
\begin{equation}
r_{\rm infl}\equiv k\frac{G\mbhe}{\sigma^2}\,, \qquad \frac{r_{\rm infl}}{d_{\rm Ang}}>0.1''\,
\label{eq|rinfl}
\end{equation}
($d_{\rm Ang}$ being the angular-diameter distance).
We use the parameter $k$ to take into account different galaxy mass profiles: $k\sim 4$ for the Hernquist profiles assumed by the model \citep[see][for details]{Barausse12}, but $k\sim 10$ (or even larger) is possible if a core is present.
On the other hand, strong lensing and accurate dynamical modelling have shown that the mass profiles of intermediate-mass, early-type galaxies are consistent with nearly isothermal profiles down to (at least) tenths of the effective radius \citep[e.g.,][and references therein]{Cappellari15}. These have $k\sim 1$. To bracket these uncertainties, we show results for both $k=10$ and 1.
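To give a concrete sense of this criterion (the numbers below are fiducial values chosen purely for illustration, not drawn from the sample), a black hole with $\mbhe=10^8\,\msune$ in a galaxy with $\sigma=200\,{\rm km\,s^{-1}}$ at $d_{\rm Ang}=20$ Mpc has, for $k=4$,
\begin{equation}
r_{\rm infl}\approx 4\times\frac{4.3\times10^{-3}\,{\rm pc}\times 10^{8}}{200^2}\approx 43~{\rm pc}\,,\qquad \frac{r_{\rm infl}}{d_{\rm Ang}}\approx 0.4''\,,
\end{equation}
where we have used $G\approx 4.3\times 10^{-3}\,{\rm pc}\,\msune^{-1}\,({\rm km\,s^{-1}})^2$, so it comfortably passes the cut. By contrast, $\mbhe=10^6\,\msune$ with $\sigma=100\,{\rm km\,s^{-1}}$ at the same distance gives $r_{\rm infl}\approx 1.7$ pc, i.e. $\approx 0.02''$, and is removed from the ``observed'' subsample.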
To match the data as closely as possible, we draw the angular diameter distances $d_{\rm Ang}$ from an empirical probability distribution function,
which we construct from the distances in the \citet{Savo15} sample, and which peaks around $\sim 15$--20 Mpc. Using a distribution that is uniform in comoving volume, as done in \citet{Shankar16BH}, yields comparable results.
The top panels of \figu\ref{fig|ScalingsLight} suggest that the model predicts intrinsic scaling relations that lie slightly below the data, especially for \mbh-\mstar\ (top right), but which are broadly consistent with the intrinsic relations suggested by \citet[][long dashed red lines]{Shankar16BH}. The match to the \mbh-\sis\ relation improves (slightly) if we assign velocity dispersions by using the SDSS $\sigma$-$\mstare$ relation, e.g. via the analytic fits provided by
\citet[][dashed magenta line, left panels]{Sesana16}.\footnote{Note that it makes sense to assign \sis\
from the model-predicted \mstar\ via the
observed $\sigma$-$\mstare$ relation, rather
than \mstar\ from the model-predicted \sis, because masses
are more ``primitive'' quantities than velocity dispersions for
a semi-analytic galaxy formation model such as ours.}
At the same time, the model substantially underpredicts the \mbh-\mstar\ relation by up to an order of magnitude (top right panel), suggesting that there is some internal inconsistency in the data. In other words, a model like ours, tuned to match the local velocity dispersion-stellar mass and black-hole mass-velocity dispersion relations, tends to severely underpredict the \mbh-\mstar\ relation. This is in line with the results of \citet[][]{Shankar16BH}. Calibrating the model to match the observed \mbh-\mstar\ relation would instead overestimate the observed \mbh-\sis\ relation. Such an overestimate has indeed been seen in two (very different) cosmological hydrodynamic simulations \citep{Sija15,2016MNRAS.460.2979V}.\footnote{\citet{2016MNRAS.460.2979V} mention resolution, which is at best 1 kpc in their simulation, as one reason why their results do not match the \mbh-\sis\ relation. However, their predictions for \mbh\ are larger than the observations even at large \sis\ (c.f. their \figu7), while they are in good agreement with the \mbh-\mstar\ relation.}
When the selection effect on the sphere of influence of the black hole is applied to the model (middle and bottom panels of \figu\ref{fig|ScalingsLight}), the median normalisations of the predicted scaling relations increase (especially for the \mbh-\mstar\ relation), because a substantial fraction of low-mass black holes are excluded. This is because, for a given angular aperture, \eq\ref{eq|rinfl} preferentially removes objects with the smallest gravitational spheres of influence; these tend to be the lowest-mass black holes.
Therefore, this effect tends to select the ``upper end'' (in black-hole mass) of the intrinsic distributions shown in the top panels of \figu\ref{fig|ScalingsLight}. This also induces an overall flattening of the scaling relations, which is again more obvious in the \mbh-\mstar\ plane: selection hardly matters for the most massive galaxies, but it causes a factor $\lesssim 10$ increase in the median observed \mbh\ at lower masses. Selection-biased models are flatter in the \mbh-\sis\ plane as well.\footnote{As a result of this flatter slope, the model's prediction (after applying the selection bias)
lies above the (few) data with $\sigma\sim 10^2$ km/s in the sample of \citet{Savo15}. Note however that other samples, such as that of
\citet{2011Natur.469..374K}, contain black holes with masses up to $\sim 10^8 M_\odot$ at $\sigma\sim 10^2$ km/s, which is in better agreement with our model.}
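The size of this shift can be anticipated with a simple toy estimate (a Gaussian truncation exercise, not a fit to the model outputs): if the intrinsic scatter in $\log\mbhe$ at fixed \mstar\ is, say, $0.5$ dex and the selection retains only the upper half of the distribution, the median of the surviving objects moves to the 75th percentile of the original one,
\begin{equation}
\Delta\langle\log\mbhe\rangle \approx 0.67\times 0.5~{\rm dex}\approx 0.34~{\rm dex}\,,
\end{equation}
i.e. a factor $\approx 2$ increase in the median \mbh, with a corresponding reduction of the observed scatter; stronger truncations at low masses produce correspondingly larger shifts.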
We conclude that, to agree with resolution-biased observed scaling relations, models must predict intrinsic scaling relations that are significantly \emph{steeper} than the observed relations.
We have tried to obtain steeper intrinsic scaling relations in our model by changing the AGN feedback, but we have found this to be insufficient to achieve better agreement with the data at low masses and velocity dispersions. In fact, the results and conclusions of this paper are robust to changes in the AGN feedback strength as well as to changes in the black-hole accretion prescriptions.
\figu\ref{fig|ScalingsLight} also clearly shows that in addition to changing the normalisation and slope, selection dramatically decreases the dispersion around the median relations, especially at lower masses.
Note that these ``corrections'' do not depend on the exact choice of $k$, since they are present for both $k=10$ and $k=1$.
Similar comments apply to \figu\ref{fig|ScalingsETGs}, which compares model-predicted galaxies with bulge-to-total ratios of $B/T>0.7$ with the E+S0 galaxies of \citet{Savo15}, in the same format as \figu\ref{fig|ScalingsLight}. The intrinsic distributions of the model-predicted galaxies (top panels) are narrower than in the full samples (c.f. top panels of \figu\ref{fig|ScalingsLight}), and the median scaling relations have higher normalisations, in better agreement with the data and in line with the findings of \citet{Barausse12}. Although the selection effect is smaller for this specific subsample of galaxies, the model-predicted intrinsic \mbh-$\sigma$ relation is offset slightly above the data, while the predictions for the intrinsic \mbh-\mstar\ relation are slightly below the data.
The effect of including the selection bias on the resolvability of the black-hole sphere of influence (middle and bottom panels) is less important than in \figu\ref{fig|ScalingsLight}, because the small systems for which the sphere of influence is not resolvable tend to live in late-type galaxies in our semi-analytic model. Nevertheless, the selection bias still tends to make the correlations higher in normalisation, slightly flatter, and slightly tighter.
\begin{figure}
\center{\includegraphics[width=8truecm]{Figure4.eps}
\caption{Median black-hole mass as a function of total host-galaxy stellar mass (solid lines) as predicted by the model, for the black holes
with bolometric luminosity $\log L/{\rm erg\, s^{-1}}>42$. The dotted (long-dashed) lines mark the 90\% (99\%) confidence regions
at given stellar mass, as predicted by our model.
Blue squares are data from \citet{ReinesVolonteri15}. The results are for the light-seed model, but the heavy-seed one gives very similar results.
\label{fig|ScalingsReines}}
}
\end{figure}
\subsection{The dispersion around the scaling relations: a comparison with observations}
\label{sec:dispersion}
It is important to emphasise that, whatever the exact black-hole seeding recipe adopted in the model, the predicted local (intrinsic) scaling relations reported in \figu\ref{fig|ScalingsLight} for all galaxy types show very large dispersions, especially for galaxy stellar masses $\mstare \lesssim 3\times 10^{11}\, \msune$, with a distribution spanning two orders of magnitude or more in black-hole mass at fixed stellar mass or velocity dispersion. This is because the model tends to retain a significant number of low-mass black holes, which did not accrete much gas over their history (because they live in spirals or satellite galaxies) and which therefore remain closer to their seed masses \citep[e.g.,][]{Volonteri05,Barausse12}. However, for more massive galaxies the dispersion is smaller, with almost no galaxies with $\mstare \gtrsim 5\times 10^{11}\, \msune$ having black holes with $\mbhe \lesssim 10^7\, \msune$.
One interesting way to probe the existence of very low-mass black holes in relatively low-mass galaxies could be to compare with the scaling relations in active galaxies, which are not limited by the spatial resolution issues that heavily affect dynamical measurements of black holes. To this purpose, \figu\ref{fig|ScalingsReines} compares the predictions of our model with the recent sample of 262 broad-line AGNs collected by \citet{ReinesVolonteri15}. In more detail, the observations are represented by blue squares, while the lines represent the median, the 90\% confidence region (i.e. the 5th and 95th percentiles) and the 99\% confidence region (i.e. the $0.5$th and $99.5$th) of the (model-predicted) \mbh-\mstar\ relation, by assuming a light black-hole seed scenario (the heavy-seed scenario gives very similar results) and considering only systems with bolometric luminosity $\log (L/{\rm erg\, s^{-1}})>42$ \citep[roughly the minimum luminosity probed by][]{ReinesVolonteri15}.
The model's distribution of active black holes has been built by randomly drawing Eddington ratios from a Schechter distribution that extends up to the Eddington limit, in agreement with a number of observations \citep[][]{Kauffmann09,Aird12,Bongiorno12,Schulze15,Jones16}. We have verified that our predicted luminosity function at $z=0$, computed by assuming an average duty cycle of active black holes of 10\% consistent with the results from local surveys \citep[e.g.,][and references therein]{Goulding09,Shankar13,Pardo16}, agrees with the (obscuration-corrected) bolometric luminosity functions of \citet{Hop07} and \citet{SWM}.
First, let us note, as emphasised by \citet{Shankar16BH}, that a lower limit of $L\gtrsim 10^{42}\, \ergse$ should still allow black holes down to a mass of $\mbhe \sim 10^4\, \msune$ to be detected, at least if a non-negligible fraction of these black holes are still accreting at the Eddington limit. Such low-mass black holes do not seem to exist in the \citet[][see also \citealt{Baldassare15}]{ReinesVolonteri15} sample (and in our model, at least in sufficiently large numbers and with high enough Eddington ratios to warrant detection). Even assuming lower virial factors $f_{\rm vir}$ than those adopted by \citet{ReinesVolonteri15} in deriving black hole masses from their measured FWHMs, as suggested by some groups \citep[e.g.,][and references therein]{Shankar16BH,Yong16}, would not alter these conclusions.
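This detectability threshold follows directly from the Eddington luminosity (we quote round numbers for illustration),
\begin{equation}
L_{\rm Edd}\simeq 1.3\times 10^{38}\left(\frac{\mbhe}{\msune}\right)\,\ergse\,,
\end{equation}
which for $\mbhe=10^4\,\msune$ gives $L_{\rm Edd}\simeq 1.3\times 10^{42}\,\ergse$, i.e. just above the adopted luminosity cut for an object radiating near the Eddington limit.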
Second, it is clear that the observational sample lies, on average, below the model median predictions, which were tuned to reproduce the data on inactive local galaxies with a significantly higher normalisation. While the predicted 90\% and 99\% confidence regions for the active black-hole population encompass the data of \citet{ReinesVolonteri15}, the model also predicts the existence of a large number of active higher-mass black holes (above the median), which are not observed. Therefore, either the sample of \citet{ReinesVolonteri15} is biased toward low-luminosity active systems, or our model should be normalised to lower values by a factor $\gtrsim 3$.
The large scatter in our model means that, if the normalisation is decreased, then our model would predict a large tail of very low-mass black holes. Since these are not observed, this would have important consequences for constraining models of the seeds of the supermassive black-hole population. On the other hand, decreasing the normalisation of the \mbh-\mstar\ relation is by no means straightforward. While it could be achieved by decreasing black-hole accretion, this would imply a proportional reduction in AGN luminosity, unless a higher radiative efficiency and/or duty cycle are also assumed. The present calibration of our model already predicts rather high radiative efficiencies/black-hole spins \citep{Sesana14}. Duty cycles of active black holes could instead be constrained by comparing with independent AGN clustering measurements \citep[][and references therein]{Gatti16}. We plan to explore some of these important interrelated issues in future work.
\begin{figure*}
\center{\includegraphics[width=17truecm]{Figure5.eps}
\caption{Correlation between the residuals of the \mbh-$\mstare$ relations and those of the $\sigma$-$\mstare$ relation, at fixed stellar mass (left panel), and between the residuals of the \mbh-$\sigma$ relation and those of the $\mstare$-$\sigma$ relation, at fixed velocity dispersion (right panel). Red circles and green triangles show the ellipticals and lenticulars in the \citet{Savo15} sample. The solid blue and long-dashed magenta lines mark the best fits (for the model's predictions and the observations, respectively), while the dotted lines show the 1$\sigma$ uncertainty on the slope. Also reported are the best fits and the Pearson correlation coefficient $r$. The grey bands show the residuals extracted from the Monte Carlo simulations from \citet{Shankar16BH}, with selection bias on the black-hole gravitational sphere of influence. Note that the correlation of the residuals at fixed stellar mass (left panel) is very strong in the data, but essentially absent in the model.
These results are for the light-seed model (the heavy-seed one yields similar results), selecting only early-type galaxies ($B/T>0.7$) for which the black-hole sphere of influence is resolvable.
\label{fig|Residuals}}
}
\end{figure*}
\subsection{Constraints from correlations between residuals}
\label{sec:residuals}
We now compare our model's predictions for the \emph{residuals} of the black hole-galaxy scaling relations to the data. Such correlations are an efficient way of going beyond pairwise correlations between the variables themselves \citep[][]{Bernardi05,ShethBernardi12}. For example, measurements of the \mbh-\sis\ and \mbh-\mstar\ correlations alone do not provide insight about whether \sis\ is more important than \mstar\ in determining \mbh. This is because the \mbh-\sis\ and \mbh-\mstar\ correlations do not encode complete information about the joint distribution of \mbh, \sis\ and \mstar. Correlations between the residuals encode some of this extra information.
\begin{figure*}
\center{\includegraphics[width=17truecm]{Figure6.eps}
\caption{\label{fig|Residuals_horizon}
Same as \figu\ref{fig|Residuals}, but for the hydrodynamic cosmological simulation Horizon-AGN \citep{dubois,dubois16}.
However, unlike in \figu\ref{fig|Residuals}, no selection bias on the resolvability of the black-hole sphere of influence has been applied. Doing so leads to slightly weaker correlations and slightly lower slopes for the fits to the residuals.
Note that the velocity dispersions in the simulation are measured within the effective radius of the galaxy, and are not corrected to the Hyperleda aperture.
}}
\end{figure*}
\begin{figure}
\center{\includegraphics[width=8truecm]{Figure7.eps}
\caption{\label{fig|gamma_horizon}
Total stellar mass as a function of velocity dispersion for SDSS galaxies with $P(E+S0)>0.8$ (shaded region), and for the Horizon-AGN simulation \citep[the solid line represents the mean, and the dotted lines represent the 70\% confidence region;][]{dubois,dubois16}. As in the observational sample, we select early-type galaxies from the simulations by only considering systems with $V_c/\sigma<0.7$, where $V_c$ is the rotational velocity. Applying a different cut does not change this figure significantly. Here, for both the SDSS and Horizon-AGN data, $\sigma_e$ is computed within the effective radius. Given the resolution of Horizon-AGN, this quantity is more reliably estimated than the more central velocity dispersion $\sigma_{\rm HL}$ adopted in the previous figures.
}}
\end{figure}
To this purpose, the left and right hand panels of \figu\ref{fig|Residuals} show
$\Delta(\mbhe|\mstare)$ vs $\Delta(\sise|\mstare)$ and
$\Delta(\mbhe|\sise)$ vs $\Delta(\mstare|\sise)$, where
\begin{equation}
\Delta(Y|X)\equiv\log Y-\langle \log Y|\log X \rangle \,
\label{eq|resid}
\end{equation}
is the residual in the $Y$ variable (at fixed $X$) from the log-log-linear fit of $Y(X)$ vs $X$, i.e. $\langle \log Y|\log X \rangle$.
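As a schematic illustration of why these residuals are informative (a toy limit, not a fit to the data), suppose the black-hole mass were set by velocity dispersion alone, $\mbhe\propto\sise^{5}$, with negligible intrinsic scatter. Then, at fixed stellar mass,
\begin{equation}
\Delta(\mbhe|\mstare)=5\,\Delta(\sise|\mstare)\,,
\end{equation}
so the residuals in the left panel of \figu\ref{fig|Residuals} would align along a slope of 5 with a Pearson coefficient close to unity, while $\Delta(\mbhe|\sise)$ would vanish and the right panel would show no correlation. Any secondary dependence on \mstar\ weakens the former correlation and strengthens the latter.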
The magenta long-dashed and dotted lines in each panel show the best fit and 1$\sigma$ uncertainties on the correlations between residuals in the \citet{Savo15} dataset: red circles and green triangles represent ellipticals and lenticulars. We obtained the magenta lines by running 200 iterations following the steps outlined in \citet{shankar_new}, which include errors in both variables. At each iteration we eliminate three random objects from the original sample. From the full ensemble of realizations, we measure the mean slope and its 1$\sigma$ uncertainty.
The blue solid and dotted lines show a similar analysis in our semi-analytic model. However, in this case, we randomly produce 30 mock samples of $\sim 75$ galaxies each, with $B/T>0.7$ and resolvable black-hole spheres of influence. From the full ensemble of mock realizations, we then extract the mean slope and Pearson coefficient.
The correlations in \figu\ref{fig|Residuals} show that, in the data, the black-hole mass correlates more strongly with velocity dispersion (mean Pearson coefficient $r=0.72$) than with stellar mass ($r=0.56$). The model instead predicts just the opposite, with almost zero correlation with velocity dispersion (mean $r=0.05$), but with a correlation with stellar mass consistent with the data, though still rather weak (mean $r=0.28$). It is nevertheless important to realise that even an intrinsically weak correlation with velocity dispersion at fixed stellar mass does not necessarily imply that the \emph{total} correlation with velocity dispersion is small. In fact, following Appendix B in \citet{shankar_new}, the total dependence of the black-hole mass on velocity dispersion can be summarised as $\mbhe\propto\sigma^{\beta}\mstare^{\alpha}\propto\sigma^{\beta+\alpha\,\gamma}$, where $\gamma$ comes from $\mstare\propto\sigma^{\gamma}$ (where we have ignored any other explicit dependence on, e.g., S\'{e}rsic index or galaxy size). Since the model predicts $\gamma\approx 4$, and the residuals in \figu\ref{fig|Residuals} (solid blue lines) yield $\beta\sim 0.5$ and $\alpha\sim 0.6$,
one obtains a total dependence of $\mbhe\propto\sigma^{\beta+\alpha\gamma}\approx\sigma^{0.5+0.6\times 4}\approx\sigma^{3}$, consistent with \figu\ref{fig|ScalingsETGs} (middle left panel).
Also note that a slope $\gamma\approx 4$ for the \mstar-\sis\ relation is in itself already in tension with the ($V_{\max}$-corrected) SDSS observations, which rather
prefer a slope $\gamma\approx2.2$--2.5, at least for $\mstare\gtrsim2\times10^{10} M_\odot$ (c.f. also \figu\ref{fig|gamma_horizon} below).
For completeness, \figu\ref{fig|Residuals} also shows results from the Monte Carlo simulations performed by \citet[][grey bands]{Shankar16BH}, which assume an intrinsic correlation of the type $\mbhe \propto \sise^{4.5} \mstare^{0.5}$ and include the selection bias on $r_{\rm infl}$. These models reproduce the observed trends.
Several comments are in order.
First, our results are robust against the efficiency of the AGN feedback. Indeed, in our reference model we fixed the AGN feedback efficiency to obtain a stellar mass function as close as possible to the observations at the high-mass end (c.f. \figu\ref{fig|ScalingsMstar}, right panel). By slightly decreasing this efficiency (e.g. by a factor $\sim 3$), the agreement with the observed stellar mass function in \figu\ref{fig|ScalingsMstar} slightly worsens in the high-mass end, while the \mbh-\sis\ relation steepens\footnote{Note that, in our model, increasing the AGN feedback efficiency \emph{decreases} the slope of the \mbh-\sis\ relation, mainly because AGN feedback is more effective at clearing the gas in more massive galactic hosts, thus inhibiting the growth of especially the more massive black holes.} ($\mbhe\propto \sise^{3.4}$ after including the effect of the selection bias), but not enough to fully match the observations \citep[for which slopes $\sim 4.5-5$ are usually quoted in the literature, e.g.][]{Graham11,KormendyHo}. More importantly, the correlations of the residuals remain almost unchanged ($r=0.1$ when fixing \mstar, and $r=0.3$ when fixing \sis). Second, we have checked that these correlations are not improved by considering a different choice of $k$ in \eq\ref{eq|rinfl};
by assuming no bias on the resolvability of the black-hole sphere of influence;
by considering all galaxies rather than just bulge-dominated ones;
by using the bulge mass instead of the total stellar mass;
or by using the SDSS $\sigma$-$\mstare$ relation \citep[e.g., by the fits of][]{Sesana16} to compute velocity dispersions.
Third, the model's residuals shown in \figu\ref{fig|Residuals} do not account for measurement errors (i.e. we compute the residuals for the model's exact predictions, without folding in any observational uncertainties). We have verified that including these errors can yield a steeper slope $\beta \sim 2$ for the residuals at fixed \mstar, but this does not strengthen the correlations of the residuals shown above (essentially because the error on $\beta$ also grows when $\beta$ grows).
To check if our results are due to the particular implementation of black-hole accretion and AGN feedback in our semi-analytic model, and/or lack of sufficient ``coupling'' between velocity dispersion and AGN feedback, we have performed a similar analysis of the Horizon-AGN simulation \citep{dubois,dubois16}. This is a hydrodynamic cosmological simulation of a box with size $100$ Mpc$/h$, run with the adaptive mesh refinement code RAMSES~\citep{ramses}, with $10^9$ dark-matter particles and a minimum mesh size of 1 kpc. The simulation includes gas cooling, star formation, feedback from stars and AGNs (see~\citealp{dubois} for the details of the numerical modelling and~\citealp{2016MNRAS.460.2979V} for the discussion of the correlations between galaxies and black holes in Horizon-AGN). Galaxies are extracted with a galaxy finder running on star particles. Since the simulation assumes a Salpeter \citep{SalpeterIMF} IMF, we have reduced galaxy stellar masses by 0.25 dex \citep[e.g.,][]{Bernardi10} to match the Chabrier IMF~\citep{Chabrier03} adopted in this work. The velocity dispersion is measured from its components along each direction of the cylindrical coordinates oriented along the galaxy's spin axis (i.e. $\sigma_r$, $\sigma_t$ and $\sigma_z$ for the radial-, tangential-, and $z$-component respectively), thus $\sigma^2=(\sigma_r^2+\sigma_t^2+\sigma_z^2)/3$. The velocity dispersion of each galaxy is measured using only star particles within the effective radius of the galaxy. The results are shown in \figus\ref{fig|Residuals_horizon} and \ref{fig|gamma_horizon}. They are in qualitative agreement with those obtained with our semi-analytic model. In more detail, Horizon-AGN also predicts $\gamma\approx 4$, in tension with the data, and a very weak correlation between the residuals when fixing \mstar. Also note that in \figu\ref{fig|Residuals_horizon} we have not applied any selection bias (unlike in the case of the semi-analytic model in \figu\ref{fig|Residuals}). Restricting to systems with resolvable spheres of influence actually makes the correlations in the residuals of the Horizon-AGN simulation even weaker.
We plan to consider different (and possibly stronger) models of AGN feedback in future work. For now, the discrepancies highlighted in \figus\ref{fig|Residuals} and \ref{fig|Residuals_horizon} clearly show that correlations between the residuals of the \mbh-\sis\ and \mbh-\mstar\ scaling relations are more powerful than the scaling relations themselves at constraining models for the co-evolution of black holes and their host galaxies.
\section{Discussion}
\label{sec|discu}
Concerning the normalisation of the black-hole scaling relations, at face value our model clearly fails to reproduce the observed \mbh-\sis\ and \mbh-\mstar\ relations at the same time.
Without invoking any selection effect, one could attempt to improve the simultaneous match to both the observed scaling relations by fine-tuning some of the key parameters in the model, namely the one controlling gas accretion onto the black hole and/or the energetic feedback from the central active nucleus. For example, increasing the efficiency of AGN feedback could in principle decrease the stellar masses of the host galaxies at fixed black hole mass, thus possibly improving the match to the \mbh-\mstar\ relation (top right panel of \figu\ref{fig|ScalingsLight}). However, this would then spoil the good match to the velocity dispersion-stellar mass relation.
Similarly, one could increase the accretion onto the central black hole at fixed star formation rate and final stellar mass of the host galaxy, with the aim of improving the match to the \mbh-\mstar\ relation. This however would proportionally increase the \mbh-\sis\ relation above the data, when velocity dispersions are chosen to faithfully track those from the observed SDSS $\sigma-\mstare$ relation (magenta lines in the left panels of \figu\ref{fig|ScalingsLight}).
Concerning the dispersion around the scaling relations, our semi-analytic model, which self-consistently evolves black holes from seeds at high redshifts,
naturally predicts very broad distributions in black-hole mass at fixed velocity dispersion or stellar mass, at relatively low masses $\mstare \lesssim 3\times 10^{11}\, \msune$.
Overall, these effects are in line with the results of the Monte Carlo simulations presented in \citet{Shankar16BH}, though the intrinsic scatters
assumed there, especially in the \mbh-\sis\ relation, were always below $0.3$ dex. This was needed to avoid overly flat slopes in the ``observed'', biased relations.
Indeed, here our predicted slope of the \mbh-\sis\ relation, inclusive of observational bias (e.g., middle and bottom left panels of \figu\ref{fig|ScalingsLight}), is approximately $\mbhe\propto\sigma^{\beta}$ with $\beta\sim 3$--$3.5$, significantly flatter than the slopes $\beta \gtrsim 4.5$--$5$ usually quoted in the literature \citep[e.g.,][]{Graham11,KormendyHo}.
Conversely, a key point of our present work is that, irrespective of the chosen masses for the seed black holes, the model predicts relatively tight scaling relations at high stellar masses $\mstare \gtrsim 5\times 10^{11}\, \msune$. In this respect, our model does not support the conjecture put forward by \citet{Batcheldor07}, according to which the \mbh-\sis\ relation is only an upper limit of a more or less uniform distribution of black holes. This is also in line with the Monte Carlo tests performed by \citet[][their \figu11]{Shankar16BH}, in which a very broad distribution of black holes extending to the lowest masses in all types of galaxies is highly disfavoured.
Nevertheless, \figu\ref{fig|ScalingsReines} shows that the present data on active galaxies may still be consistent with our model,
once the proper flux limits and Eddington ratio distributions are accounted for.
Therefore, an intrinsically broad distribution for the black-hole mass in relatively small galaxies with mass $\mstare \lesssim 3\times 10^{11}\, \msune$, extending down to black holes of mass $\mbhe \sim 10^5\, \msune$ or so, cannot be excluded by present data, though it is highly disfavoured in more massive galaxies.
Models with such broad distributions, however, still tend to produce scaling relations flatter than observed, once the bias on the black-hole sphere of influence and on galaxy morphology is folded in the model predictions, as shown in \figus\ref{fig|ScalingsLight} and \ref{fig|ScalingsETGs}.
Finally, we stress again that our semi-analytic model fails to reproduce the observations when it comes to correlations between the \emph{residuals} of the scaling relations. In more detail, the model is consistent with the data for
the correlation between the residuals in the \mbh-$\sigma$ relation and those in the $\mstare$-$\sigma$ relation, at fixed velocity dispersion. However, the model predicts almost no correlation between the residuals of the \mbh-$\mstare$ relations and those of the $\sigma$-$\mstare$ relation, at fixed stellar mass, while the data hint at a rather strong correlation. We have verified that these results are robust against changing the parameters of the model (namely the AGN feedback efficiency and the parameter regulating black-hole accretion). Moreover, we have shown that the same weak correlation between the residuals at fixed stellar mass is also obtained in the hydrodynamic cosmological simulation Horizon-AGN~\citep{dubois,dubois16}, which includes thermal quasar-mode feedback and jet-structured radio-mode AGN feedback (i.e., processes that are expected to induce a stronger coupling between black-hole mass and velocity dispersion).
Another noteworthy point is that both our model and Horizon-AGN tend to produce slopes for the \sis-\mstar\ relation that are significantly steeper than the data, although in Horizon-AGN numerical resolution effects may bias the measurements of the velocity dispersion in lower-mass galaxies \citep[see discussion in][]{dubois16}.
Our interpretation is that the weak correlation of the residuals at fixed stellar mass may be at least partly due to current AGN feedback models being possibly too weak to capture the full effect of the black hole on the \emph{stellar} velocity dispersion.
While this is expected in a semi-analytic model such as ours, where the AGN feedback is typically assumed to simply eject gas from the nuclear region~\citep{Granato04,Barausse12}, it is more surprising in the Horizon-AGN simulation, which captures the back-reaction of the ejected gas onto the stellar and dark-matter dynamics~\citep{peirani16}.
Nonetheless, in hydrodynamic cosmological simulations such as Horizon-AGN, the spatial resolution is 1 kpc at best and may limit our capability to properly capture the interaction of AGN winds with gas and its impact on the dynamics within galactic bulges.
Moreover, the Horizon-AGN simulation currently employs a rather crude model of AGN feedback, mostly based on simple gas heating,
to mimic the so-called ``quasar-mode'' feedback. More realistic models of AGN feedback, also inclusive of
momentum-driven winds and stronger coupling with the surrounding interstellar medium \citep[e.g.,][]{bieri17}, might possibly be more effective at improving the comparison with
the black hole-galaxy scaling relations and their residuals. We should also mention that another possibility that has been put forward in the literature is
that AGN feedback may \emph{not} be the main cause of the scaling relations, which might be ascribed instead to a common gas supply for the galaxy and the black hole, regulated by gravitational torques~\citep{2017MNRAS.464.2840A}.
\section{Conclusions}
\label{sec|conclu}
We have compared the predictions of a comprehensive semi-analytic model of galaxy formation, which self-consistently evolves supermassive black holes from high-redshift seeds by accounting for gas accretion, mergers and AGN feedback,
with the observed scaling relations of the masses of supermassive black holes with stellar mass and velocity dispersion. Our main conclusions are:
\begin{enumerate}
\item At $\mstare \gtrsim 5\times 10^{11}\, \msune$, the dispersion in black-hole mass at fixed stellar mass is $\lesssim1$~dex -- very few black holes with masses $\mbhe \lesssim 10^{7}\, \msune$ are predicted in such massive galaxies. However, for galaxies having $\mstare \lesssim 3\times 10^{11}\, \msune$, the distribution of \mbh\ at fixed $\sigma$ or \mstar\ is broad.
\item Observational selection effects, associated with resolving the black-hole sphere of influence and/or with selecting bulge-dominated/elliptical galaxies, tighten the \mbh-\sis\ and \mbh-\mstar\ scaling relations, bringing them into better agreement with the observations.
\item No evident variation in AGN feedback and/or black-hole accretion efficiencies can provide a \emph{simultaneous} match to both scaling relations. This supports previous work suggesting an internal inconsistency between the observed \mbh-\sis\ and \mbh-\mstar\ relations.
\item Galaxy-evolution models (our semi-analytic one as well as the Horizon-AGN simulation) predict almost \emph{no} correlation between the residuals of the \mbh-$\mstare$ relations and those of the $\sigma$-$\mstare$ relation, at fixed stellar mass.
Since the data hint at a rather strong correlation, this calls for revamped AGN feedback recipes in the next generation of cosmological galaxy-evolution models, or for a re-assessment
of the importance of gravitational torques in regulating the black hole-galaxy co-evolution~\citep{2017MNRAS.464.2840A}.
\end{enumerate}
\section*{Acknowledgments}
We thank M. Volonteri for insightful conversations. EB acknowledges support from the H2020-MSCA-RISE-2015 Grant No. StronGrHEP-690904 and from the APACHE grant (ANR-16-CE31-0001) of the French Agence Nationale de la Recherche.
This work has made use of the Horizon Cluster, hosted by the Institut d'Astrophysique de Paris. We thank Stephane Rouberol for running this cluster so smoothly.
\bibliographystyle{mn2e_Daly}
\input{MbhSigmaModel.bbl}
\label{lastpage}
\end{document}
\section{Introduction}
\label{sec:introduction}
Low scale Supersymmetry (SUSY) is motivated by its ability to address two major flaws of the Standard Model (SM): the gauge hierarchy and Dark Matter (DM) problems.
In the SM, the hierarchy problem stems from the fact that a very unnatural Fine-Tuning (FT) is required to keep the Higgs mass at a value consistent with current data. SUSY provides an elegant solution to this. However, SUSY must be broken at a high scale, hence some FT is reintroduced at some level. In the Minimal Supersymmetric Standard Model (MSSM), with universal soft SUSY breaking terms, a heavy spectrum is required to give large radiative corrections to the SM-like Higgs mass and account for the value of 125 GeV recently measured at the Large Hadron Collider (LHC). Thus naturalness becomes seriously challenged in the MSSM by well-established experimental constraints.
Also, the compelling hints of DM existence are a serious indication of new physics Beyond the SM (BSM). Due to $R$-parity conservation, the Lightest SUSY Particle (LSP) in the MSSM, the lightest neutralino, is stable and thus is a good candidate for DM. However, the constraints from LHC data (from the Higgs boson properties as well as the null results of
searches for additional Higgs and SUSY states)
combined with cosmological relic density and DM direct detection data rule out all of the MSSM
parameter space except a very narrow region of it \cite{Abdallah:2015hza}.
Quite apart from the aforementioned two problems of the SM, it should be recalled that non-vanishing neutrino masses presently provide some of the most important evidence for BSM physics. Massive neutrinos are not present in the SM. However, a simple extension of it, based on the gauge
group $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_{B-L}$, can account for current experimental results of light neutrino
masses and their large mixing \cite{Khalil:2006yi,Basso:2008iv,Basso:2009gg,Basso:2010yz,Basso:2010as,Majee:2010ar,Li:2010rb,Perez:2009mu,Emam:2007dy,Khalil:2012gs,Khalil:2013in}. Within the $B-L$ Supersymmetric Standard Model (BLSSM), the SUSY version of such a scenario, which inherits the same beneficial features of the MSSM in connection with SUSY dynamics,
it has been emphasised that the scale of $B-L$ symmetry breaking is related to the SUSY breaking one and both
occur in the TeV region \cite{Khalil:2016lgy,Khalil:2007dr,FileviezPerez:2010ek,CamargoMolina:2012hv,Kikuchi:2008xu,Fonseca:2011vn}. Therefore, several testable signals of the BLSSM are predicted for the current experiments at the LHC \cite{Elsayed:2011de,Basso:2012tr,O'Leary:2011yq,Basso:2012ew,Elsayed:2012ec,Khalil:2015naa,Abdallah:2014fra,Basso:2012gz,Abdallah:2015hma,Abdallah:2015uba,Hammad:2016trm,Hammad:2015eca}.
In addition, the BLSSM provides new candidates for DM different from those of the MSSM. In particular, there are two kinds of neutralinos, corresponding to the gaugino of $U(1)_{B-L}$ and the $B-L$ Higgsinos. Also a right-handed sneutrino, in a particular region of parameter space, may be a plausible candidate for DM.
We also consider the scenario where the extra $B-L$ neutralinos can be cold DM states. We then examine the thermal relic abundance of these particles and discuss the constraints imposed on the BLSSM parameter space from the negative results of their direct detection. We argue that, unlike the MSSM, the BLSSM offers a significant region of parameter space satisfying all available experimental constraints. This may be at the expense of high FT, if the $Z'$ is quite heavy and the soft SUSY breaking terms are universal. Nevertheless, for what we will eventually verify to be a small increase in FT with respect to the MSSM, we will gain in the BLSSM a more varied DM sector and much better compliance with relic and (in)direct detection data.
In the build-up to this DM phenomenology, we analyse the naturalness
problem in the BLSSM and compare its performance in this respect against that of the MSSM. In the latter, the weak scale ($M_Z$) depends on the soft SUSY breaking terms through the Renormalisation Group Equations (RGEs) and the Electro-Weak (EW) minimisation conditions, which can be expressed as
\begin{eqnarray}
\frac{1}{2} M_Z^2= \frac{m_{H_d}^2 - m_{H_u}^2 \tan^2 \beta}{\tan^2 \beta -1} - \mu^2 .
\label{EWmin}
\end{eqnarray}
Therefore, a possible measure of FT is defined as \cite{Barbieri:1998uv}
\begin{eqnarray}
\Delta(M_Z^2, a) = \left \vert \frac{a}{M_Z^2} \frac{\partial M_Z^2}{\partial a} \right\vert,
\end{eqnarray}
where $a$ stands for the Grand Unification Theory
(GUT) scale parameters (e.g., $m_0, m_{1/2}, A_0$, etc.) or low scale parameters (e.g., $M_1,M_2,M_3, m_{\tilde{q}}, m_{\tilde{\ell}}$, etc.). In order for SUSY to stabilise the weak scale, the dimensionless measure $\Delta \equiv {\rm Max} \left( \Delta(M_Z^2, a)\right)$ should be less than ${\cal O}(100)$. However, as the scale of SUSY breaking is increased, the EW one becomes highly fine-tuned.
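For instance, at tree level and holding the remaining parameters in Eq.~(\ref{EWmin}) fixed, the sensitivity of $M_Z$ to the $\mu$ parameter provides a simple worked example of this measure:
\begin{eqnarray}
\Delta(M_Z^2, \mu) = \left\vert \frac{\mu}{M_Z^2} \frac{\partial M_Z^2}{\partial \mu} \right\vert = \frac{4\mu^2}{M_Z^2}\,,
\end{eqnarray}
so that an illustrative value $\mu\sim 1$ TeV already implies $\Delta\sim 480$.
As intimated, in the BLSSM, both the weak and $B-L$ scales are related to soft SUSY breaking terms and, in addition to Eq.~(\ref{EWmin}), which is slightly modified by the presence of the gauge mixing $\tilde g$, we also have, in the same limit $\tilde g \simeq 0$,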
\begin{eqnarray}
\frac{1}{2} M_{Z^\prime}^2= \frac{m_{\eta_1}^2 \tan^2 \beta' - m_{\eta_2}^2 }{1-\tan^2 \beta'} - \mu'~^2 ,
\label{BLmin}
\end{eqnarray}
where $\eta_{1,2}$ are scalar bosons, with $\langle \eta_{1,2} \rangle = v'_{1,2}$ that break the $B-L$ symmetry spontaneously, and $\tan \beta' = v'_1/v'_2$.
The bound on $M_{Z'}$, due to negative searches at LEP, is given by $M_{Z'} /g_{BL} > 6$ TeV \cite{Cacciapaglia:2006pk}.
Furthermore, LHC constraints from the Drell-Yan (DY) process also exist, which force the $B-L$ $Z'$ mass to be in the
few TeV region.
This indicates that $m_{\eta_{1,2}}$ and $\mu'$ are of order TeV. Therefore, in the scenario of universal soft SUSY breaking terms of the BLSSM, a heavy $M_{Z'}$ implies higher soft terms, hence the estimation of the FT is expected to be worse than in the MSSM. At this point, it is worth mentioning that the $Z'$ gauge boson in the BLSSM can have a large decay width, thus potentially evading LEP and LHC constraints, which are based on the assumption of a narrow decay width, hence on $Z'$ decays into SM particles and additional neutrinos only. While this has been proven to be possible in
a non-unified version of the BLSSM, wherein the aforementioned limits can be relaxed and $M_{Z'}$ can be of order one TeV \cite{Abdallah:2015hma,Abdallah:2015uba}, it remains to be seen whether a similar phenomenology can occur in the unified version of it which we are going to deal with here.
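In the absence of such a large width, and to quantify the scales involved (with an illustrative value of the gauge coupling), the LEP bound alone implies
\begin{eqnarray}
M_{Z'} \gtrsim 6~{\rm TeV}\times g_{BL}\simeq 3.3~{\rm TeV} \quad (g_{BL}\simeq 0.55)\,, \qquad v'=M_{Z'}/g_{BL}\gtrsim 6~{\rm TeV}\,,
\end{eqnarray}
where $M_{Z'}\simeq g_{BL} v'$ (see Section 2), so that, through Eq.~(\ref{BLmin}), the soft masses $m_{\eta_{1,2}}$ and $\mu'$ are generically pushed into the (multi-)TeV range.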
The paper is organised as follows. In Section 2 we briefly review the BLSSM with
a particular emphasis on the $B-L$ minimisation conditions, which relate the mass of the neutral gauge boson $Z'$ to the soft SUSY breaking terms, and on the extended neutralino sector. Section 3 is devoted to the study of the RGEs of the BLSSM matter content as well as of the gauge and Yukawa couplings. The collider and DM constraints are addressed in Section 4.
In Section 5 we investigate the FT measures in the BLSSM versus the MSSM case. Section 6 presents our numerical results.
Finally, our remarks and conclusions are given in Section 7.
\section{The $B-L$ Supersymmetric Standard Model}
\label{sec:BLSSM}
In this section, we briefly review the BLSSM with an emphasis on its salient features with respect to the MSSM. Even though its gauge group seems like a simple extension of the MSSM gauge group with a gauged $U(1)_{B-L}$ (hereafter, $B-L$ symmetry), it significantly enriches the particle content, which drastically changes the low scale phenomena. First of all, the anomaly cancellation in the BLSSM requires three singlet fields and the right-handed neutrino fields are the most natural candidates to be included in the BLSSM framework. In this context, also the SUSY seesaw mechanisms, through which non-zero neutrino masses and mixing can be achieved consistently with current experimental indications \cite{Wendell:2010md}, can be implemented. Besides, $R$-parity, which is assumed in the MSSM to avoid fast proton decay, can be linked to the $U(1)_{B-L}$ gauge group and it can be preserved if the $B-L$ symmetry is broken spontaneously \cite{Aulakh:1999cd}, as is the case in the BLSSM studied here.
Spontaneous breaking of the $B-L$ symmetry can be realised in a similar way to the Higgs mechanism. That is, one can introduce two scalar fields, denoted as $\eta_{1,2}$. These fields should carry non-zero $B-L$ charges to break the $B-L$ symmetry and they are preferably singlet under the MSSM gauge group so as not to spoil EW Symmetry Breaking (EWSB). Thus, the Superpotential in the BLSSM can be written as
\begin{eqnarray}
W &=&\mu H_{u}H_{d}+Y_{u}^{ij}Q_{i}H_{u}u^{c}_{j}+Y_{d}^{ij}Q_{i}H_{d}d^{c}_{j}+Y_{e}^{ij}L_{i}H_{d}e^{c}_{j} \nonumber\\
&+&Y_{\nu}^{ij}L_{i}H_{u}N^{c}_{j} + Y^{ij}_{N}N^{c}_{i}N^{c}_{j}\eta_{1}+\mu^{\prime}\eta_{1}\eta_{2},
\label{superpotential}
\end{eqnarray}
where the first line represents the MSSM Superpotential using the standard notation for (s)particles while the second line includes the terms associated with the right-handed neutrinos, $N_{i}^{c}$'s, plus the singlet Higgs fields $\eta_{1}$ and $\eta_{2}$. The $B-L$ symmetry requires $\eta_{1}$ and $\eta_{2}$ to carry $-2$ and $+2$ charges under
$B-L$ transformations, respectively. The presence of the $N_{i}^{c}$ terms makes it possible to have Yukawa interaction terms for the neutrinos, denoted by $Y_{\nu}$. Finally, $\mu'$ stands for the bilinear mixing term between the singlet Higgs fields.
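Note that the terms in the second line realise a (SUSY) type-I seesaw once $\eta_1$ develops a VEV: the right-handed neutrinos acquire masses $M_N\sim Y_N v'_1$ and the light neutrino masses follow, up to ${\cal O}(1)$ factors from the VEV normalisation, as
\begin{eqnarray}
m_\nu \simeq \frac{(Y_\nu v_u)^2}{2 M_N}\,.
\end{eqnarray}
Purely as an illustration, $Y_\nu\sim 10^{-6}$ with $M_N\sim 1$ TeV yields $m_\nu\sim{\cal O}(10^{-2})$ eV, in the ballpark indicated by oscillation data.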
In addition to the right-handed neutrinos and the singlet Higgs fields, the BLSSM also introduces a gauge field ($B'$) and its gaugino ($\tilde{B}'$) associated with the gauged $B-L$ symmetry, so that the appropriate Soft SUSY-Breaking (SSB) Lagrangian can be written as
\begin{eqnarray}
-\mathcal{L}_{\rm SSB}^{{\rm BLSSM}}&=& -\mathcal{L}^{{\rm MSSM}}_{\rm SSB} +m^{2}_{\tilde{N}^{c}}|\tilde{N}^{c}|^{2} + m_{\eta_{1}}^{2}|\eta_{1}|^{2}+ m_{\eta_{2}}^{2}|\eta_{2}|^{2} +A_{\nu}\tilde{L}H_{u}\tilde{N}^{c} +A_{N}\tilde{N}^{c}\tilde{N}^{c}\eta_{1}\nonumber\\
&+& \frac{1}{2}M_{B^{\prime}}\tilde{B}^{\prime}\tilde{B}^{\prime} + M_{BB'} \tilde{B}\tilde{B}^{\prime} +B(\mu^{\prime}\eta_{1}\eta_{2}+ {\rm h.c.}).
\label{SSBLag}
\end{eqnarray}
Note that, in contrast to its non-SUSY version, the BLSSM does not allow mixing between the doublet and singlet Higgs fields through the Superpotential and SSB Lagrangian. Therefore, the scalar potential for these can be written separately and their mass matrices can be diagonalised independently. The scalar potential for the singlet Higgs fields can be derived as
\begin{equation}
V(\eta_{1} , \eta_{2})=\mu^{\prime 2}_{1}|\eta_{1}|^{2}+\mu^{\prime 2}_{2}|\eta_{2}|^{2}-\mu^{\prime}_{3}(\eta_{1} \eta_{2} + {\rm h.c.}) +\frac{1}{2}g_{BL}^{2}(|\eta_{1}|^{2}-|\eta_{2}|^{2})^{2}
\label{singpotential}
\end{equation}
and the minimisation of this potential yields Eq.~(\ref{BLmin}). Despite the non-mixing Superpotential and SSB Lagrangian, one can implement mixing between the doublet and singlet Higgs fields via the gauge kinetic mixing term $-\chi B_{\mu\nu}^{B-L}B^{Y,\mu\nu}$, where $B^a_{\mu\nu}$ is the field strength tensor of a $U(1)$ gauge field, with $a = Y,~B-L$, the hypercharge and $B-L$ charge, respectively. In the presence of gauge kinetic mixing, the covariant derivative takes a non-canonical form \cite{O'Leary:2011yq} which couples the singlet Higgs fields to the doublet ones at tree-level. Even though the corresponding coupling $\tilde{g}$ is set to zero at the GUT scale, it can be generated at the low scale through the RGEs \cite{Holdom:1985ag}. In this basis, one finds
\begin{equation}
M_Z^2\,\simeq\,\frac{1}{4} (g_1^2 +g_2^2) v^2, ~~ ~~~~~~~ M_{Z'}^2\, \simeq\, g_{BL}^2 v'^2 + \frac{1}{4} \tilde{g}^2 v^2 ,
\end{equation}
where $v=\sqrt{v^2_u+v^2_d}\simeq 246$ GeV and $v'=\sqrt{v'^2_1+v'^2_{2}}$ with the
Vacuum Expectation Values (VEVs) of the
Higgs fields given by $\langle{\rm Re} H_{u,d}^0\rangle=v_{u,d}/\sqrt{2}$ and $\langle{\rm Re} ~\eta_{1,2}\rangle=v'_{1,2}/\sqrt{2}$.
It is worth mentioning that the mixing angle between $Z$ and $Z'$ is given by
\begin{equation}
\tan 2 \theta'\, \simeq\, \frac{2 \tilde{g}\sqrt{g_1^2+g_2^2}}{\tilde{g}^2 + 16 (\frac{v'}{v})^2 g_{BL}^2 -g_2^2 -g_1^2}.
\end{equation}
The minimisation conditions of the BLSSM scalar potential at tree-level lead to the following relations \cite{O'Leary:2011yq}:
\begin{eqnarray}
v_1' \left( m^2_{\eta_1} + \vert \mu' \vert^2 + \frac{1}{4} \tilde{g}g_{BL} (v_d^2 -v_u^2) + \frac{1}{2} g^2_{BL} (v'^2_1 - v'^2_2) \right) - v'_2 B\mu' &=& 0 ,\\
v_2' \left( m^2_{\eta_2} + \vert \mu' \vert^2 + \frac{1}{4} \tilde{g}g_{BL} (v_u^2 -v_d^2) + \frac{1}{2} g^2_{BL} (v'^2_2 - v'^2_1) \right) - v'_1 B\mu'
&=& 0 .
\end{eqnarray}
From these equations, one can determine $\vert \mu'\vert^2$ and $B \mu'$ in terms of other soft SUSY breaking terms. Note that, with $\tilde{g}=0$, the expression of $\vert \mu'\vert^2$ takes the form of Eq. (\ref{BLmin}).
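For concreteness, the $\tilde{g}=0$ case can be solved explicitly as a cross-check: dividing the first condition by $v'_1$ and the second by $v'_2$, then eliminating $B\mu'$ and using $M_{Z'}^2 \simeq g_{BL}^2 v'^2$, one finds (with the convention $\tan\beta' \equiv v'_1/v'_2$, to be compared with the one adopted in Eq.~(\ref{BLmin}))
\begin{equation}
\vert \mu'\vert^2 = \frac{m^2_{\eta_2} - m^2_{\eta_1}\tan^2\beta'}{\tan^2\beta' - 1} - \frac{M_{Z'}^2}{2}\,, \qquad
B\mu' = \frac{\sin 2\beta'}{2}\left( m^2_{\eta_1} + m^2_{\eta_2} + 2\,\vert \mu'\vert^2 \right),
\end{equation}
which makes explicit the statement used above that a multi-TeV $M_{Z'}$ forces $\vert\mu'\vert$ and/or $m_{\eta_{1,2}}$ to the TeV scale.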
It should be noted here that the SUSY $B-L$ extension does not affect the chargino mass matrix, that is, the latter will be exactly the same as that of the MSSM. This is not the case for the neutralino mass matrix though, which is extended to a $7 \times 7$ matrix as a consequence of the existence of the additional neutral states $\tilde{B'}, \tilde{\eta}_1$ and $ \tilde{\eta}_2$. Thus, the neutralino mass matrix is given by
\begin{eqnarray}
&&{\cal M}_7({\tilde B},~{\tilde W}^3,~{\tilde
H}^0_1,~{\tilde H}^0_2,~{\tilde B'},~{\tilde{\eta}_1},~{\tilde{\eta}_2}) \equiv \left(\begin{array}{cc}
{\cal M}_4 & {\cal O}\\
{\cal O}^T & {\cal M}_3\\
\end{array}\right),
\end{eqnarray}%
where ${\cal M}_4$ is the MSSM-type neutralino mass matrix and
${\cal M}_{3}$ is the additional $3\times 3$ neutralino mass matrix,
which is given by%
\begin{equation}%
{\cal M}_3 = \left(\begin{array}{ccc}
M_{B'} & -g_{_{BL}}v'_1 & g_{_{BL}}v'_2 \\
-g_{_{BL}}v'_1 & 0 & -\mu' \\
g_{_{BL}}v'_2 & -\mu' & 0\\
\end{array}\right).
\label{mass-matrix.1} \end{equation}
In addition, the off-diagonal matrix ${\cal O}$ is given by
\begin{equation}%
{\cal O} = \left(\begin{array}{ccc}
M_{BB'} &~~~0~~~& 0 \\
0 & 0 & 0 \\
-\frac{1}{2}\tilde{g}v_d &~~~0~~~& 0\\
\frac{1}{2}\tilde{g}v_u&~~~0~~~&0\\
\end{array}\right).
\label{mass-matrix.2} \end{equation}
(Note that the off-diagonal matrix elements vanish identically if $\tilde{g}=0$ and $M_{BB'} = 0$.) One can then diagonalise the real symmetric matrix ${\cal M}_{7}$ with
an orthogonal mixing matrix $V$ such that
\begin{equation} V{\cal
M}_7V^{T}={\rm diag}(m_{\tilde\chi^0_k}),~~k=1,\dots,7.\label{general} \end{equation} In
these conditions, the LSP has the following decomposition
\begin{equation} \label{cm_neuComp}
\tilde\chi^0_1=V_{11}{\tilde B}+V_{12}{\tilde
W}^3+V_{13}{\tilde H}^0_d+V_{14}{\tilde
H}^0_u+V_{15}{\tilde B'}+V_{16}{\tilde{\eta}_1}+V_{17}{\tilde{\eta_2}}.
\end{equation}
If the LSP is then considered as a candidate for DM, each species in the above equation, if dominant,
leads to its own phenomenology that can possibly be distinguished in direct detection experiments. For example, achieving
the correct relic density with Bino-like DM is challenging, since its abundance is usually so high over the fundamental parameter space that one needs to identify several annihilation and/or coannihilation channels to reduce its density down to the
Wilkinson Microwave Anisotropy Probe
(WMAP) \cite{Hinshaw:2012aka} or Planck \cite{Ade:2015xua} measurements. Since this DM state interacts through the hypercharge, its scattering with nuclei has a very low cross section. Conversely, the largest cross section in DM scattering is obtained when DM is Higgsino-like, since it interacts with the quarks through the Yukawa interactions. Since the BLSSM states mix significantly into the neutralino sector, they may also drastically change the DM phenomenology. In contrast to the Bino, the $\tilde{B}'$-ino interaction strength is set by the $B-L$ gauge coupling. Despite the severe mass bound on the $Z'$, there is no specific bound on $m_{\tilde{B}'}$, so that it can be even as low as 100 GeV \cite{Khalil:2015wua}. In this context, one can expect the LSP neutralino to be mostly formed by $\tilde{B}'$, with a cross section in its scattering with nuclei that can be very large, in contrast to the Bino case. In addition to $\tilde{B}'$, the LSP neutralino can be formed by the singlet Higgsinos (also dubbed Bileptinos due to their $L=\pm2$ lepton charge). In this case, it is challenging for their abundance
to be compatible with the experimental results. The reduction through the coannihilation channels involving SUSY particles arises from the gauge kinetic mixing, which is restricted to be moderate. If the Bileptino mass is nearly degenerate with that of the $\tilde{B}'$ state, they can significantly coannihilate. Also, a singlet Higgsino yields a low cross section in DM scattering experiments.
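As a concrete illustration of how the LSP mass and composition follow from the matrices above, the following self-contained Python sketch assembles ${\cal M}_7$ for illustrative (not fitted) input values and diagonalises it; since the matrix is real and symmetric, \texttt{numpy.linalg.eigh} provides the mixing matrix, whose columns give the components entering Eq.~(\ref{cm_neuComp}) (physical Majorana masses being the absolute values of the eigenvalues, up to the sign conventions of spectrum generators).
\begin{verbatim}
import numpy as np

# Illustrative inputs in GeV (not a fitted benchmark point)
M1, M2, mu = 500.0, 1000.0, 800.0            # MSSM gaugino/Higgsino masses
MBp, MBBp, mup = 400.0, 0.0, 1200.0          # B'-ino, mixed gaugino and mu' masses
g1, g2, gBL, gt = 0.36, 0.65, 0.55, -0.144   # gt = gauge kinetic mixing coupling
v, tanb = 246.0, 10.0
beta = np.arctan(tanb)
vu, vd = v*np.sin(beta), v*np.cos(beta)
vp1, vp2 = 3000.0, 2900.0                    # singlet VEVs, tan(beta') close to 1

# MSSM-like 4x4 block, basis (B, W3, H0_d, H0_u)
M4 = np.array([[M1, 0., -0.5*g1*vd,  0.5*g1*vu],
               [0., M2,  0.5*g2*vd, -0.5*g2*vu],
               [-0.5*g1*vd,  0.5*g2*vd, 0., -mu],
               [ 0.5*g1*vu, -0.5*g2*vu, -mu, 0.]])
# extra 3x3 block, basis (B', eta1, eta2), and the off-diagonal block O of the text
M3 = np.array([[MBp, -gBL*vp1, gBL*vp2],
               [-gBL*vp1, 0., -mup],
               [ gBL*vp2, -mup, 0.]])
O  = np.array([[MBBp, 0., 0.],
               [0.,   0., 0.],
               [-0.5*gt*vd, 0., 0.],
               [ 0.5*gt*vu, 0., 0.]])

M7 = np.block([[M4, O], [O.T, M3]])
w, V = np.linalg.eigh(M7)        # real symmetric matrix -> orthogonal V
i = np.argmin(np.abs(w))         # lightest physical mass = min |eigenvalue|
print("LSP mass ~ %.1f GeV" % abs(w[i]))
print("composition V_1j^2:", np.round(V[:, i]**2, 3))
\end{verbatim}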
Besides the neutralinos, one can also consider the sneutrino as a DM candidate when it is the LSP, of course.
In this case, the extended sector of the BLSSM involves twelve states coming from the Superpartners of the left- and the right-handed neutrinos. In a Charge and Parity (CP)-conserving framework the states entering the sneutrino mixing matrix can be expressed by separating their scalar and pseudo-scalar components
\begin{eqnarray} \label{SneutrinoCP}
\tilde{\nu}_i = \frac{\sigma_L{}_i + i \phi_L{}_i}{\sqrt{2}} \,,\quad
\tilde{N}_i = \frac{\sigma_R{}_i + i \phi_R{}_i}{\sqrt{2}}.
\end{eqnarray}
The breaking of $B-L$ generates an effective mass term through $Y^{ij}_{N}N^{c}_{i}N^{c}_{j}\eta_{1}$, causing a mass splitting between the CP-even and CP-odd
sectors. Therefore, in terms of Eq. (\ref{SneutrinoCP}), the corresponding $12\times12$ mass matrix is reduced to two different $6\times6$ blocks
\begin{eqnarray}
{\cal M}^{2\,\sigma}(\sigma_L, \sigma_R) \equiv \left(\begin{array}{cc}
{\cal M}^{2\,\sigma}_{LL} & {\cal M}^{2\,\sigma}_{LR}\\
{\cal M}^{2\,\sigma}_{LR}{}^T & {\cal M}^{2\,\sigma}_{RR}\\
\end{array}\right)\,,\quad
{\cal M}^{2\,\phi}(\phi_L, \phi_R) \equiv \left(\begin{array}{cc}
{\cal M}^{2\,\phi}_{LL} & {\cal M}^{2\,\phi}_{LR}\\
{\cal M}^{2\,\phi}_{LR}{}^T & {\cal M}^{2\,\phi}_{RR}\\
\end{array}\right).
\end{eqnarray}
Such differences between the CP-even and CP-odd sectors do not involve the left components, with ${\cal M}^{2\,\sigma}_{LL}$ and ${\cal M}^{2\,\phi}_{LL}$
described by the common form ${\cal M}^{2}_{LL}$
\begin{eqnarray}
{\cal M}^{2}_{LL}{}^{i,j} \equiv \frac{\delta^{i,j}}{8} \left( \left( g_1^2 + g_2^2 + \tilde{g}\left(g_{BL} + \tilde{g}\right) \right) \delta_{H} + \left(g_{BL} + \tilde{g}\right)\delta_{\eta} \right)
+ \frac{1}{2} v_u^2 \left( Y_{\nu}^T Y_{\nu}\right)^{i,j} + m_l^2{}^{i,j}, \nonumber \\
\end{eqnarray}
where we have introduced $\delta_{\eta} = v'^2_{1} - v'^2_{2}$ and $\delta_{H} = v^2_d - v^2_u$.
For the submatrices ${\cal M}^{2\,\sigma}_{RR}$ and ${\cal M}^{2\,\phi}_{RR}$ we have instead
\begin{eqnarray}
{\cal M}^2_{RR}{}^{i,j} &\equiv& - \frac{\delta^{i,j}}{8}\,g_{BL} \left(\tilde{g} \delta_H + 2 g_{BL} \delta_{\eta} \right) + \frac{1}{2}\,v_u^2 \left(Y_{\nu} Y_{\nu}^T\right)^{i,j}
+ m_{\tilde{N}}^2{}^{i,j} + 2\,v'^2_{1}\, \left(Y_N^2\right)^{i,j} \nonumber \\ &\mp& \sqrt{2} \left( v'_{2}\, \mu' Y_{N}^{i,j} - v'_{1}\, A_{N}^{i,j} \right)
\end{eqnarray}
while the left-right sneutrino mixing is ruled by the matrices
\begin{eqnarray}
{\cal M}^2_{LR}{}^{i,j} &\equiv& \frac{1}{2}\left( - \sqrt{2}\,v_{d} \mu Y_{\nu}^{i,j} + v_u\,\sqrt{2}\, A_{\nu}^{i,j} \pm 2 v_u\,v'_{1}\, \left(Y_{N}Y_{\nu} \right)^{i,j} \right),
\end{eqnarray}
with upper (lower) signs corresponding to the CP-even (odd) case. The parameter $Y_{\nu}$ and the corresponding trilinear term $A_{\nu}$ determine the mixing between
the left and right components. In our setup, $Y_{\nu}$ is negligible and can safely be set to zero already at the GUT scale, as is also the case for the boundary condition of $A_{\nu}$.
The resulting $12\times12$ sneutrino mass matrix consequently does not mix the left- and right-handed components, so that a sneutrino state
is completely determined by assigning its CP value and the chirality of its Supersymmetric partner.
\section{Renormalisation Group Equations}
\label{sec:rge}
The presence of an extra Abelian gauge group introduces a distinctive feature, the gauge kinetic mixing, through a renormalisable and gauge invariant operator $\chi B^{\mu\nu}B'_{\mu\nu}$ of the two Abelian field strengths.
Moreover, off-diagonal soft breaking terms for the Abelian gaugino masses are also allowed.
This effect is completely novel with respect to the MSSM or other Supersymmetric models in which only a single $U(1)$ factor is considered.
Even if the kinetic mixing is required to vanish at a given scale, the RGE evolution inevitably reintroduces it along the running, unless a particular field content and charge assignment are enforced. If the two Abelian gauge factors emerge from the breaking of a simple gauge group, the kinetic mixing is absent at that scale. For this reason, arguing that the BLSSM could be embedded into a wider GUT scenario (the matter content of the BLSSM, which includes three generations of right-handed neutrinos, nicely fits into the 16-D spinorial representation of $SO(10)$), we require the vanishing of the kinetic mixing at the GUT scale. As we stated above, we nevertheless end up with a non-zero kinetic mixing at low scales affecting the $Z'$ interactions as well as the Higgs and the neutralino sectors \cite{O'Leary:2011yq}. \\
Instead of working with a non-canonical kinetic Lagrangian in which the kinetic mixing $\chi$ appears, it is more practical to introduce a non-diagonal gauge covariant derivative with a diagonal kinetic Lagrangian. The two approaches are related by a gauge field redefinition and are completely equivalent. In this basis the covariant derivative of the Abelian fields takes the form $\mathcal D_\mu = \partial_\mu - i Q^T G A_\mu$, where $Q$ is the vector of the Abelian charges, $A$ is the vector of the Abelian gauge fields and $G$ is the Abelian gauge coupling matrix with non-zero off-diagonal elements. The matrix $G$ can be recast into a triangular form with an orthogonal transformation $G \rightarrow G O^T$ \cite{Coriano:2015sea}. With this parametrisation, the three independent parameters of $G$ are explicitly manifest and correspond to the Abelian couplings, $g_1$, $g_{BL}$ and $\tilde g$, describing, respectively, the hypercharge interactions, the extra $B-L$ ones and the gauge kinetic mixing. Differently from the MSSM case, the Abelian gaugino mass term is replaced by a symmetric matrix with a non-zero mixed mass term $M_{BB'}$ between the $B$ and $B'$ gauginos. Coherently with our high energy unified embedding, we choose $M_{BB'} = 0$ at the GUT scale. Notice that the Abelian gaugino mass matrix $M$ is affected by the same rotation $O$: in the basis in which $G$ is triangular, $M$ transforms through $M \rightarrow O M O^T$. \\
We have performed a RGE study of the BLSSM assuming gauge coupling unification and mSUGRA boundary conditions at the GUT scale.
The two-loop RGEs have been computed with SARAH \cite{Staub:2013tta} and fed into SPheno \cite{Porod:2003um} which has been used for the spectrum computation and for the numerical analysis of the model. Here we show the one-loop $\beta$ functions of the gauge couplings highlighting the appearance of the kinetic mixing contributions
\begin{eqnarray}
\label{eq:RGEgauge}
\beta^{(1)}_{g_1} &=& \frac{33}{5} g_1^3, \nonumber \\
\beta^{(1)}_{g_{BL}} &=& \frac{3}{5} g_{BL} \left( 15 g_{BL}^2 + 4 \sqrt{10} g_{BL} \, \tilde g + 11 \tilde g^2 \right), \nonumber \\
\beta^{(1)}_{\tilde g} &=& \frac{3}{5} \tilde g \left( 15 g_{BL}^2 + 4 \sqrt{10} g_{BL} \, \tilde g + 11 \tilde g^2 \right) + \frac{12 \sqrt{10}}{5} g_1^2 g_{BL}, \nonumber \\
\beta^{(1)}_{g_2} &=& g_2^3, \nonumber \\
\beta^{(1)}_{g_3} &=& -3 g_3^3,
\end{eqnarray}
where we have adopted the GUT normalisations $\sqrt{3/5}$ and $\sqrt{3/2}$, respectively, for the $U(1)_Y$ and $U(1)_{B-L}$ gauge groups.
At one-loop level the expressions of the $\beta$ functions of $g_1$, $g_2$ and $g_3$ are the same as those of the MSSM with differences appearing at two-loop order only.
Notice that the term responsible for the reintroduction of a non-vanishing mixing coupling $\tilde g$ along the RGE running, even if absent at some given scale, is the last term in $\beta^{(1)}_{\tilde g}$. We recall again that the kinetic mixing is a peculiar feature of Abelian extensions of the SM and their Supersymmetric versions, admissible only between two or more $U(1)$ gauge groups.
Assuming gauge coupling unification at the GUT scale, the RGE analysis provides the results $\tilde g \simeq -0.144$ and $g_{BL} \simeq 0.55$ with $M_{\textrm{GUT}} \simeq 10^{16}$ GeV, which are controlled by the leading one-loop $\beta$ functions given in Eq.~(\ref{eq:RGEgauge}). The spread of points around these central values, less than 1\% for $g_{BL}$ and 5\% for $\tilde g$, is only due to higher-order corrections, namely two-loop running and threshold corrections.
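This mechanism can be reproduced with a few lines of code. The sketch below integrates the one-loop system of Eq.~(\ref{eq:RGEgauge}) from the GUT scale down to the TeV scale with $\tilde g(M_{\rm GUT})=0$; the unified coupling and the scales are illustrative, and two-loop and threshold effects are ignored, so the low-scale numbers are only indicative of the trend (a negative $\tilde g$ of roughly the quoted size is generated radiatively).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, g):
    # One-loop running of Eq. (eq:RGEgauge); t = ln(mu), g = (g1, gBL, gtilde, g2, g3)
    g1, gBL, gt, g2, g3 = g
    k = 1.0 / (16.0 * np.pi**2)
    mix = 15*gBL**2 + 4*np.sqrt(10)*gBL*gt + 11*gt**2
    return [k*(33/5)*g1**3,
            k*(3/5)*gBL*mix,
            k*((3/5)*gt*mix + (12*np.sqrt(10)/5)*g1**2*gBL),
            k*g2**3,
            k*(-3)*g3**3]

gGUT = 0.72                                   # illustrative unified coupling
t_gut, t_tev = np.log(1e16), np.log(1e3)      # GUT scale -> 1 TeV
sol = solve_ivp(beta, (t_gut, t_tev), [gGUT, gGUT, 0.0, gGUT, gGUT],
                rtol=1e-8, atol=1e-10)
g1, gBL, gt, g2, g3 = sol.y[:, -1]
print("at 1 TeV: g1=%.3f  gBL=%.3f  gtilde=%.3f  g2=%.3f  g3=%.3f"
      % (g1, gBL, gt, g2, g3))
\end{verbatim}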
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{figures/PlotGauginoMass.png}
\caption{Gaugino masses at the SUSY scale as a function of the GUT $m_{1/2}$ mass. Here, both gauge coupling and soft mass unification have been assumed. \label{fig:gauginomass}}
\end{figure}
The running of the gaugino masses is directly linked to that of the gauge couplings. In the Abelian sector and at one-loop, the Abelian gaugino mass matrix $M$ evolves with
\begin{eqnarray}
\beta_M = M G^T Q^2 G + G^T Q^2 G M = M G^{-1} \beta_G + G^{-1} \beta_G M,
\end{eqnarray}
where $Q = \sum_p Q_p Q_p^T$, with $Q_p$ the vector of the Abelian charges of the $p$ particle.
Exploiting the structure of the $\beta$ functions of the gaugino masses, a simple relation is obtained, $M_i/m_{1/2} = g_i^2/g_{\rm GUT}^2$, for non-Abelian masses at one-loop order. In the Abelian sector, due to the presence of the mixing, the previous equation is replaced by a matrix relation. Indeed, from the product $G M^{-1} G^T$, which remains constant along the RGE evolution, one finds the Abelian gaugino mass matrix $M/m_{1/2} = G^T G / g_{\rm GUT}^2$. We show in Fig.~\ref{fig:gauginomass} the dependence of the gaugino masses as a function of the GUT gaugino mass $m_{1/2}$. The hierarchy is obviously controlled by the size of the gauge couplings at low scale.
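The matrix relation quoted above follows in one line: with universal boundary conditions $G(M_{\rm GUT}) = g_{\rm GUT}\,{\mathbb 1}$ and $M(M_{\rm GUT}) = m_{1/2}\,{\mathbb 1}$, the invariance of $G M^{-1} G^T$ implies, at any scale,
\begin{equation}
G M^{-1} G^T = \frac{g_{\rm GUT}^2}{m_{1/2}}\,{\mathbb 1}
\quad\Longrightarrow\quad
M = \frac{m_{1/2}}{g_{\rm GUT}^2}\, G^T G\,,
\end{equation}
which is the matrix generalisation of the familiar one-loop relation $M_i = m_{1/2}\, g_i^2/g_{\rm GUT}^2$ of the non-Abelian sector.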
The one-loop $\beta$ functions of the soft masses of the scalar fields $H_u, H_d$ and $\eta_1, \eta_2$ are given by
\begin{eqnarray}
\beta_{m_{H_u}^2} &=& - \frac{6}{5} \left( g_1^2 (M_1^{2} + \tilde M^2 ) + \tilde g^2 ({M'_1}^{2} + \tilde M^2 ) + 2 g_1 \tilde g ( M_1 + M'_1) \tilde M \right) - 6 g_2^2 M_{W}^2 \nonumber \\
&-& 3 (g_1^2 + \tilde g^2) \sigma_1 - \frac{3 \sqrt{10}}{4} g_{BL} \tilde g \sigma_2 +6 \left( m_{H_u}^2 + m_{q_{33}}^2 + m_{u_{33}}^2 \right) Y_t^2 + 6 T_t^2, \\
\beta_{m_{H_d}^2} &=& - \frac{6}{5} \left( g_1^2 (M_1^{2} + \tilde M^2 ) + \tilde g^2 ({M'_1}^{2} + \tilde M^2 ) + 2 g_1 \tilde g ( M_1 + M'_1) \tilde M \right) - 6 g_2^2 M_{W}^2 \nonumber \\
&+& 3 (g_1^2 + \tilde g^2) \sigma_1 + \frac{3 \sqrt{10}}{4} g_{BL} \tilde g \sigma_2 +6 \left( m_{H_d}^2 + m_{q_{33}}^2 + m_{d_{33}}^2 \right) Y_b^2 + 6 T_b^2, \\
\beta_{m_{\eta_1}^2} &=& -12 {g_{BL}}^{2} ({M'_1}^{2} + \tilde M^2 ) + 4 m_{\eta_1}^2 \textrm{tr}(Y_N^2) + 4 \textrm{tr}(T_{Y_N}^2) + 8 \textrm{tr}(m_{\nu_R}^2 Y_N^2) \nonumber \\
&+& 3 \sqrt{\frac{2}{5}} g_{BL} \tilde g \sigma_1 + \frac{3}{2} {g_{BL}}^2 \sigma_2, \\
\beta_{m_{\eta_2}^2} &=& -12 {g_{BL}}^{2} ({M'_1}^{2} + \tilde M^2 ) - 3 \sqrt{\frac{2}{5}} g_{BL} \tilde g \sigma_1 - \frac{3}{2} {g_{BL}}^2 \sigma_2,
\end{eqnarray}
where, for the sake of simplicity, we have neglected all the Yukawa couplings but the top- and bottom-quark ones, $Y_t$ and $Y_b$, and the heavy-neutrino ones, $Y_N$. We have also assumed real parameters. The gaugino masses $M_1, M'_1$ and $\tilde M$ are obtained from $M_B, M_{B'}$ and $M_{BB'}$ through the transformation $OMO^T$.
The coefficients $\sigma_{1,2}$ are defined as
\begin{eqnarray}
\sigma_1 &=& m_{H_d}^2 - m_{H_u}^2 - \textrm{tr}( m_d^2) - \textrm{tr}( m_e^2) + \textrm{tr}( m_l^2) - \textrm{tr}( m_q^2) + 2 \textrm{tr}( m_u^2), \nonumber \\
\sigma_2 &=& 2 m_{\eta_1}^2 - 2 m_{\eta_2}^2 + \textrm{tr}( m_d^2) - \textrm{tr}( m_e^2) + 2 \textrm{tr}( m_l^2) - 2 \textrm{tr}( m_q^2) + \textrm{tr}( m_u^2) - \textrm{tr}( m_{\nu_R}^2)
\end{eqnarray}
and are found to be RGE invariant combinations of the soft SUSY masses. Assuming unification conditions at the GUT scale, $\sigma_{1,2}$ remain zero along all the RGE evolution.
Since $\beta_{m_{\eta_2}^2}$ is characterised only by negative contributions proportional to the Abelian gaugino masses, the corresponding soft mass $m_{\eta_2}^2$ will increase and remain positive during the run from the GUT to the EW scale. The same feature is shared by $m_{H_d}^2$, except for some particular values of the gaugino and soft scalar masses at the GUT scale for which the $Y_b$ Yukawa coupling contribution (of the $b$-quark) to $\beta_{m_{H_d}^2}$ is not negligible. The spontaneous breaking of the EW and $B-L$ symmetries, requiring negative $m^2_{H_u}$ and $m^2_{\eta_1}$, can be realised radiatively, which is a nice feature of both the MSSM and the BLSSM. Namely, even though there is no spontaneous symmetry breaking at a high scale, the large top-quark Yukawa coupling $Y_t$ and its trilinear soft term $A_t$ can drive $m^2_{H_u}$ negative through its RGE evolution, which triggers spontaneous EWSB. Similarly, a sufficiently large neutrino Yukawa coupling $Y_N$ and the corresponding trilinear soft term $A_N$ turn $m^2_{\eta_1}$ negative in its RGE evolution and break the $B-L$ symmetry spontaneously.
In general, only one of the three components of the diagonal $Y_N$ matrix is required to be large in order to realise the spontaneous breaking of the extra Abelian symmetry, thus providing one heavy and two possibly lighter heavy-neutrino states. Notice also that the elements of the low scale values of the $Y_N$ matrix cannot be taken arbitrarily large, otherwise a Landau pole is hit before the GUT scale. A close inspection of the one-loop $\beta$ function of the heavy-neutrino Yukawa coupling
\begin{eqnarray}
\beta_{Y_N} = 8 Y_N Y_N^* Y_N + 2 \textrm{tr} (Y_N Y_N^*) Y_N - \frac{9}{2} {g_{BL}}^2 Y_N,
\end{eqnarray}
where we have neglected the tiny contribution of the light-neutrino Yukawa coupling $Y_\nu$, indeed shows that $Y_N \gtrsim 0.5$ spoils the perturbativity of the model at or below the GUT scale.
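A crude numerical cross-check of this statement can be made by keeping a single diagonal entry $y \equiv (Y_N)_{33}$, for which the $\beta$ function above reduces to $16\pi^2\, dy/dt = 10\,y^3 - \frac{9}{2} g_{BL}^2\, y$, and freezing $g_{BL}$ at its low-scale value. The sketch below is our toy simplification (matrix structure, the running of $g_{BL}$ and two-loop terms are neglected, and $y < \sqrt{4\pi}$ is used as the perturbativity criterion), meant only to illustrate how such a bound is obtained; in this crude approximation the blow-up sets in for initial values somewhat above the quoted $0.5$, the difference coming from the neglected effects.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

gBL = 0.55                      # frozen at its low-scale value (toy choice)

def beta_y(t, y):               # t = ln(mu)
    return (10.0*y**3 - 4.5*gBL**2*y) / (16.0*np.pi**2)

def landau(t, y):               # stop once y exceeds sqrt(4 pi)
    return y[0] - np.sqrt(4.0*np.pi)
landau.terminal = True

for y0 in (0.3, 0.5, 0.7):
    sol = solve_ivp(beta_y, (np.log(1e3), np.log(1e16)), [y0],
                    events=landau, rtol=1e-8)
    if sol.status == 1:
        print("y(1 TeV)=%.1f: blows up near mu ~ %.1e GeV"
              % (y0, np.exp(sol.t[-1])))
    else:
        print("y(1 TeV)=%.1f: perturbative up to the GUT scale, y(GUT)=%.2f"
              % (y0, sol.y[0, -1]))
\end{verbatim}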
\section{Collider and Dark Matter Constraints}
\label{sec:colliderdm}
To investigate the viability of the BLSSM parameter space, with mSUGRA boundary conditions, we have challenged its potential signatures against two sets of experimental constraints.
To the first set belong different bounds coming from collider probes which have been used in building the scan procedure. These form a varied set of requirements
affecting our choice of the $Z'$ benchmark mass as well as the character of the acceptable low-scale particle spectrum.
As already stated, stringent constraints come from LEP2 data via EW Precision Observables (EWPOs) and from
Run 2 of the LHC through a signal-to-background analysis using Poisson statistics to extract a 95\% Confidence Level (CL) bound in the di-lepton channel. The CL has been extracted at the LHC with $\sqrt{s} = 13$ TeV and $\mathcal L = 13.3$ fb$^{-1}$, updating the analysis presented in \cite{Accomando:2016sge}. We have taken into account the $Z'$ signal and its interference with the SM background and included efficiency and acceptance for both the electron and muon channels as described in \cite{Khachatryan:2016zqb}.
Such studies affect the extended gauge sector $\left(\tilde{g}, g_{BL}, M_{Z'}\right)$ in a way that safely allows us to select the value
$M_{Z'} = 4$ TeV for all magnitudes of gauge couplings and $Z'$ total width (in the range 30--45 GeV) met in the RGE evolution. Notice that the BLSSM supplied with unification conditions at the GUT scale provides a very narrow $Z'$ width, with a $\Gamma_{Z'}/M_{Z'}$ ratio reaching $\sim 1\%$ at most. Thus, this is unlike the results of \cite{Abdallah:2015hma,Abdallah:2015uba}, which were indeed obtained without any universality conditions.
Such a $Z'$ mass value completes the independent parameters that feed our scan and which in turn provides a BLSSM low-energy spectrum.
It is at this stage that we can enforce the exclusion bounds coming from LEP, Tevatron and LHC, linked to the negative searches for scalar degrees of freedom, as well as the correct reproduction of the measured Higgs signal strength around $125$ GeV.
More precisely, from our scan it is possible to extract the masses and the Branching Ratios (BRs) of all the (neutral and charged) scalars plus their effective couplings to SM fermions and bosons. This information is then processed into {\texttt HiggsBounds} \cite{Bechtle:2008jh,Bechtle:2011sb,Bechtle:2013wla,Bechtle:2015pma} which, considering all
the available collider searches, expresses whether a parameter point has been excluded at $95\%$ CL or not.
This analysis establishes a first solid sieve, removing a considerable number of points among those with successful EW and $U(1)_{B-L}$ symmetry breaking
obtained from the scan over the GUT parameters.
Over such points, the compatibility fit of the generated Higgs signal strengths with the ones measured at LHC is taken into account by {\texttt HiggsSignals}
\cite{Bechtle:2013xfa}, which provides the corresponding $\chi^2$. By asking for a $2\sigma$ interval around the minimum $\chi^2$ generated, we obtain a further constraint over the
parameter space investigated.
The second set of bounds that we considered emerges from the probe of DM signatures which are a common and natural product of many SUSY models.
Among these, the BLSSM stands out for both theoretical and phenomenological reasons that make the study of its DM aspects particularly worthwhile.
The presence of a gauged $B-L$ symmetry, broken by the scalar fields $\eta_1$ and $\eta_2$, which are charged under $B-L$ \cite{FileviezPerez:2010ek}, provides a local origin
to the discrete $R-$symmetry that is usually imposed ad hoc to prevent fast proton decay. Consequently, the BLSSM embeds the stability of the LSP, and hence of the produced DM density, in its gauge structure.
From the phenomenological side, the BLSSM, like the MSSM, has the neutralino as a possible cold DM candidate. The presence of additional neutral degrees of freedom drastically changes its properties with respect to the corresponding MSSM state, which is mostly Bino-like in GUT constrained models, possibly
giving the necessary degrees of freedom to accommodate the measured DM evidence. Moreover, the BLSSM also envisages a scalar LSP in its spectrum, generated by the superpartners of the
six Majorana neutrinos, which may also be the origin of a cold DM relic.
For every possible low energy spectrum obtained, the LSP provided by the BLSSM will participate in the early thermodynamical evolution of the universe.
After an initial regime of thermal equilibrium with the SM particles, decoupling takes place once the DM annihilation rate becomes slower than the Universe
expansion.
This process results in a relic density that survives until the present day.
Consequently, a crucial test of the cosmological viability of the BLSSM is enforced by requiring the relic abundance generated not to overclose the Universe
by exceeding the measured current value of the DM relic density
\begin{equation} \label{PLANCK}
\Omega h^2 = 0.1187 \pm 0.0017({\rm stat}) \pm 0.0120({\rm syst})
\end{equation}
as measured by the Planck Collaboration \cite{Ade:2015xua}.
The requirement to reproduce the measured relic density would finally highlight the region of the parameter space where
the model is able to solve the DM puzzle.
The computation of the DM abundance is achieved by solving the evolution numerically with
{\texttt MicrOMEGAs} \cite{Belanger:2006is,Belanger:2013oya}, which collects the amplitudes for all the annihilation, as well as coannihilation, processes.
Another source of constraints, which cannot be neglected due to the recent increase in precision reached by the LUX collaboration \cite{Akerib:2016lao,Akerib:2016vxi},
is linked to the
direct searches intended to detect DM signatures coming from DM scatterings with nuclei.
We have tested the BLSSM spectrum against the challenging upper limit on the Spin Independent (SI) component of the LSP-nucleus scattering.
The zeptobarn order of magnitude, reached by the recent upgrade of the DM-nucleus cross section bound, has an interesting interplay with the parameter
space analysed, testing the ability of the BLSSM to survive such stringent exclusions.
The DM scenarios provided represent a peculiar signature of the model, with characteristic degrees of freedom playing a key role
in drawing a rich DM texture.
As already stated, the BLSSM has two candidates for cold DM as it is possible to have, other than the neutralino, also a heavy stable sneutrino.
The extended neutral sector, a consequence of the inclusion of an extra $B-L$ gauge factor, enlarges the neutralino components with three new states (two coming from the Bileptinos and one from the BLino), as seen in Eq. (\ref{cm_neuComp}).
To study the behaviour of the neutralinos we may consider the following classification
\begin{center}
\begin{tabular}{ll}
$V_{11}^2 > 0.5$ & Bino-like,\\
$V_{12}^2 > 0.5$ & Wino-like,\\
$V_{13}^2 + V_{14}^2 > 0.5 $& Higgsino-like,\\
$V_{15}^2 > 0.5 $& BLino-like,\\
$V_{16}^2 + V_{17}^2 > 0.5$ & Bileptino-like,\\
Neither of the previous cases & Mixed.
\end{tabular}
\end{center}
In this scheme the nature of the neutralino is identified with the interaction eigenstate that makes up for more than half of its content.
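In code, this classification is a simple lookup over the squared entries of the LSP row of $V$, ordered as in Eq.~(\ref{cm_neuComp}); a minimal sketch (the example vector is made up for illustration):
\begin{verbatim}
def classify_neutralino(V1):
    # V1 = (V_11, ..., V_17); returns the dominant interaction eigenstate
    c2 = [x * x for x in V1]
    species = {"Bino": c2[0], "Wino": c2[1],
               "Higgsino": c2[2] + c2[3], "BLino": c2[4],
               "Bileptino": c2[5] + c2[6]}
    name, frac = max(species.items(), key=lambda kv: kv[1])
    return name if frac > 0.5 else "Mixed"

# a state that is ~60% BLino:
print(classify_neutralino([0.5, 0.0, 0.2, 0.2, 0.7746, 0.2, 0.1]))
\end{verbatim}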
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.4]{figures/PlotHistNeuSneu.png}
\includegraphics[scale=0.4]{figures/PlotHistNeuComp.png}
\caption{(a) The normalised distribution of the neutralino and sneutrino types found in our scan.
(b) The normalised distribution of the different types of LSP found in our scan. The histograms are stacked.}
\label{composition}
\end{figure}
For all the points generated in our scan, in agreement with the constraints from Higgs searches,
the LSP is, in the majority of cases, a fermionic DM candidate with mass below 2 TeV, see Fig.~\ref{composition}(a).
The sneutrino will instead be a subdominant option over our entire set of points.
It is interesting to explore the composition of the sneutrino LSP written in terms of CP eigenstates and left-right parts.
This is relevant to appreciate the chances to survive the direct detection probes of DM, with a left-handed sneutrino having a dangerously enhanced
scattering rate against nuclei \cite{Falk:1994es} due to $Z$ mediation.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.4]{figures/PlotHistSNeuComp.png}
\caption{Composition of the lightest sneutrino for the set of points in agreement with the constraints from {\texttt HiggsBounds} and {\texttt HiggsSignals}.
The histogram is stacked, with normalised heights.}
\label{fig:SneuHist}
\end{figure}
Fig.~\ref{fig:SneuHist} indicates that the lightest sneutrino can sizeably be left-handed only above $\sim$ 2 TeV. The complementary region is the only one
where the sneutrino can compete against the neutralino as a possible LSP, clarifying why the sneutrino LSP met in our constrained BLSSM will always be a \emph{right}-handed sneutrino.
Following the previous classification, a Bino-like neutralino is the most common LSP in the BLSSM but, as typical
features of the model, states of BLino and Bileptino nature are also often met, see Fig.~\ref{composition}(b).
Notably, no Higgsino-like neutralinos are found, while the Wino option is extremely rare, requiring very tuned conditions over the parameter space to be
produced in a sizeable amount. Given our uniform treatment of the boundary conditions, we will not consider this case further.
\section{Fine-Tuning Measures}
\label{sec:fine_tuning}
In this section we introduce measures of FT in order to compare the BLSSM and the MSSM with respect to naturalness. FT is not a physical observable, but rather an indication of an unknown mechanism which is missing in the model under consideration. Its quantitative value can then be interpreted as a gauge of how strongly the missing mechanism affects the low scale results. In this context, when the FT is small, the model may be regarded as capturing most of the relevant BSM physics.
There are many alternatives for a quantitative measure of FT \cite{Anderson:1994dz,Anderson:1994tr,Anderson:1995cp,Anderson:1996ew,Ciafaloni:1996zh,Chan:1997bi,Barbieri:1998uv,Giusti:1998gz,Casas:2003jx,Casas:2004uu,Casas:2004gh,Casas:2006bd,Kitano:2005wc,Athron:2007ry,Baer:2012up}, which are commonly based on the change in the $Z$-boson mass. Its measure (denoted by $\Delta$) equals the largest of these changes defined as \cite{Ellis:1985yc,Barbieri:1987fn}
\begin{equation}
\Delta={\rm Max} \left| \frac{\partial \ln v^2}{\partial \ln a_i}\right| = {\rm Max} \left| \frac{a_i}{v^2} \frac{\partial v^2}{\partial a_i } \right| = {\rm Max} \left| \frac{a_i}{M_Z ^2} \frac{\partial M_Z ^2}{\partial a_i} \right|.
\label{eq:BGFT}
\end{equation}
When viewing a parameter space, a particular point has a low FT if the $Z$ mass does not change much when deviating from its position. A natural model will, therefore, possess large regions of viable parameter space with low FT values, and having this feature makes a particular model a more attractive prospect. Our goal here is to find allowed regions of parameter space for the BLSSM with a similar (or better) level of FT with respect to the MSSM, so that the two models may be of comparable naturalness.
In this paper, we apply this same measure in two different scenarios (high- and low-scale parameters) for both the MSSM and the BLSSM. We will proceed by explaining the procedure for the two models. Firstly, we minimise the Supersymmetric Higgs potential (after EWSB) with respect to the EW VEVs, at loop level. These minimisation conditions are called tadpole equations and may be solved to find a relation between the $Z$-mass and the SUSY-scale parameters. At this point, we have two choices: to use these SUSY-scale parameters or to relate them to high-scale (GUT) ones and use those. We adopt a fundamentally different treatment of the loop contributions to the FT in the two cases. For the GUT-FT, we treat the loop corrections as dependent on the EW VEV, as done in \cite{Ross:2017kjc}, which will eventually reduce the FT value by up to a factor of $\sim$ 2. For the SUSY-scale parameters, we treat them as independent of the EW VEV, as done in \cite{Baer:2012up}, so that the FT is unaffected between tree and loop level.
We begin first by discussing the high and low scale scenarios for the MSSM, and proceed to extend this discussion to the BLSSM.
For the GUT-FT in the MSSM, our high-scale parameters are: the unification masses for scalars ($m_0$) and gauginos ($m_{1/2}$), the universal trilinear coupling ($A_0$), the $\mu$ parameter and the quadratic soft SUSY term ($B\mu$),
\begin{equation}
a_i = \left\lbrace m_0 , ~ m_{1/2},~ A_0,~ \mu ,~ B \mu \right\rbrace.
\end{equation}
The GUT-FT will compare the naturalness at high scale, but two models with similar measures here may have large differences at the SUSY-scale. To test whether the BLSSM and MSSM have a similar FT at both GUT and SUSY-scale, we will consider a low-scale FT. To do this, we begin with the relation for the $Z$-mass and SUSY-scale parameters,
\begin{equation}
\frac{1}{2} M_Z^2= \frac{(m_{H_d}^2 + \Sigma_d ) - (m_{H_u}^2 + \Sigma_u) \tan^2 \beta}{\tan^2 \beta -1} - \mu^2 ,
\label{eq:min_pot_MSSM}
\end{equation}
where
\begin{equation}
\Sigma_{u,d} = \frac{\partial \Delta V}{\partial v_{u,d} ^2}.
\end{equation}
Unlike in the GUT-FT case, we treat the loop corrections as independent of the EW VEV, as in \cite{Baer:2012up}. If we substitute this expression into Eq.~(\ref{eq:BGFT}) and use the low-scale parameters $a_i = \large{\lbrace}m_{H_d} ^2$, $m_{H_u} ^2$, $\mu ^2$, $\Sigma_u$, $\Sigma _d \large{\rbrace}$, one will find \cite{Baer:2012up}
\begin{equation}
\Delta_{\rm SUSY}\equiv {\rm Max}(C_{i})/(M_{Z}^{2}/2)~,
\label{FT}
\end{equation}
where
\begin{equation}
\hspace{0.3cm} C_{i}=\left\lbrace \begin{array}{lllll} C_{H_{u}} &= \left| m_{H_{u}}^{2} \dfrac{\tan ^2 {\beta}}{(\tan ^2 {\beta} -1)} \right|~, ~~~~~ C_{H_{d}} = \left| m_{H_{d}}^{2} \dfrac{1}{(\tan ^2 {\beta} -1)} \right|,
\\ & & &\\
C_{\mu} &= \left| \mu^{2} \right|, ~~~~~ C_{\Sigma_u}= \left| \Sigma_u \dfrac{\tan ^2 {\beta}}{(\tan ^2 {\beta} -1)} \right|, ~~~~~C_{\Sigma_{d}} = \left| \Sigma_d \dfrac{1}{(\tan ^2 {\beta} -1)} \right|.
\end{array}\right.
\label{eq:FT_MSSM_C}
\end{equation}
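For illustration, evaluating Eq.~(\ref{FT}) is elementary once the SUSY-scale inputs are known. A minimal sketch with made-up input values (loop tadpoles set to zero for simplicity):
\begin{verbatim}
def delta_susy(mHu2, mHd2, mu2, tanb, Su=0.0, Sd=0.0, MZ=91.1876):
    # inputs in GeV^2 (except tanb); returns (Delta_SUSY, dominant term)
    t2 = tanb**2
    C = {"C_Hu": abs(mHu2 * t2 / (t2 - 1.0)),
         "C_Hd": abs(mHd2 / (t2 - 1.0)),
         "C_mu": abs(mu2),
         "C_Su": abs(Su * t2 / (t2 - 1.0)),
         "C_Sd": abs(Sd / (t2 - 1.0))}
    worst = max(C, key=C.get)
    return C[worst] / (MZ**2 / 2.0), worst

# e.g. m_Hu^2 = -(600 GeV)^2, m_Hd^2 = (1 TeV)^2, mu = 800 GeV, tan(beta) = 10
print(delta_susy(-600.0**2, 1000.0**2, 800.0**2, 10.0))   # -> (~154, 'C_mu')
\end{verbatim}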
We now turn to the BLSSM. For the GUT-FT, we follow the same universal parameters as the MSSM, but with two additional terms, relating to the $\mu '$ parameter and the corresponding quadratic soft SUSY term, $B \mu '$, so that all of our high scale terms are:
\begin{equation}
a_i = \left\lbrace m_0 , ~ m_{1/2},~ A_0,~ \mu ,~ B \mu ,~ \mu',~ B \mu' \right\rbrace.
\end{equation}
We may also follow our previous procedure to find a SUSY-scale FT (SUSY-FT) for the BLSSM. By minimising the scalar potential, we find (at loop level),
\begin{equation}
\frac{M_Z^2}{2}=\frac{1}{X}\left( \frac{ m_{H_d}^2 + \Sigma _{d} }{ \left(\tan ^2(\beta
)-1\right)}-\frac{ (m_{H_u}^2 + \Sigma _u) \tan ^2(\beta )}{
\left(\tan ^2(\beta )-1\right)} + \frac{\tilde{g} M_{Z'}^2 Y}{4 g_{BL}
}- \mu ^2 \right), \label{eq:blssm_mz}
\end{equation}
where
\begin{equation}
X= 1 + \frac{\tilde{g}^{2}}{(g_{1}^{2}+g_{2}^{2})}+\frac{\tilde{g}^{3}Y}{2g_{BL}(g_{1}^{2}+g_{2}^{2})},
\end{equation}
and
\begin{equation}
Y= \frac{\cos(2\beta ')}{\cos (2\beta)} = \frac{\left(\tan^2 {\beta} +1\right) \left(1-\tan^2 {\beta '} \right)}{\left(1-\tan ^2 {\beta } \right) \left(\tan ^2 {\beta '}
+1\right) }.
\end{equation}
In the limit of no gauge kinetic mixing ($\tilde{g}\rightarrow 0$), this equation reproduces the MSSM minimised potential
of Eq. (\ref{eq:min_pot_MSSM}).
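Explicitly, for $\tilde g \to 0$ one has $X \to 1$ and the $Z'$ term drops out, so that
\begin{equation}
\frac{M_Z^2}{2} \;\to\; \frac{(m_{H_d}^2 + \Sigma_d) - (m_{H_u}^2 + \Sigma_u)\tan^2\beta}{\tan^2\beta - 1} - \mu^2\,,
\end{equation}
as it should.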
Our SUSY-FT parameters for the BLSSM are thus
\begin{equation}
\hspace{0.3cm} C_{i}=\left\lbrace \begin{array}{llll} C_{H_{u}} &= \left| \dfrac{m_{H_{u}}^{2}}{X} \dfrac{\tan ^2 {\beta}}{(\tan ^2 {\beta} -1)} \right|~, C_{H_{d}} = \left| \dfrac{m_{H_{d}}^{2}}{X} \dfrac{1}{(\tan ^2 {\beta} -1)} \right|, C_{\Sigma_{d}} = \left| \dfrac{\Sigma_d}{X} \dfrac{1}{(\tan ^2 {\beta} -1)} \right|
\\ & & &\\
C_{\Sigma_{u}} &= \left| \dfrac{\Sigma_u}{X} \dfrac{\tan ^2 {\beta}}{(\tan ^2 {\beta} -1)} \right|,
~C_{\mu} = \left| \dfrac{\mu^{2}}{X} \right|,
~C_{Z'} = \left| M_{Z'}^{2}\dfrac{\tilde{g}Y}{4 g_{BL} X} \right| .
\end{array}\right.
\label{FTC}
\end{equation}
These equations resemble those of the MSSM SUSY-FT, but now with a factor of $1/X$. In addition, we have a contribution from the $Z'$ mass and BLSSM loop factors. Considering the heavy mass bound on $M_{Z'}$, its contribution could be expected to be much larger than the other terms in Eq.~(\ref{eq:blssm_mz}), which would worsen the required FT at the low scale. However, a significantly large $M_{Z'}$ severely constrains the VEVs of the singlet Higgs fields, so that $\tan\beta' \sim 1$ \cite{O'Leary:2011yq} and, hence, $Y$ yields a very stringent suppression in $C_{Z'}$. Note that, even though the trilinear $A$-terms are not included in determining the FT, their effects are accounted for in the SSB masses in Eq.~(\ref{FTC}), whose values also include the loop corrections.
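The suppression is easy to quantify: writing $\cos 2\beta' = (1 - \tan^2\beta')/(1 + \tan^2\beta')$ and setting $\tan\beta' = 1 + \epsilon$,
\begin{equation}
Y = \frac{\cos 2\beta'}{\cos 2\beta} \simeq -\frac{\epsilon}{\cos 2\beta} + {\cal O}(\epsilon^2)\,,
\end{equation}
so $C_{Z'}$ vanishes linearly as $\tan\beta' \to 1$, despite the explicit $M_{Z'}^2$ factor.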
Indeed, if the required FT measure is quantified in terms of the GUT scale parameters, as done for the MSSM in \cite{Ellis:1985yc}, such as $m_{0},m_{1/2},A_{0},\mu, B\mu, \mu',B\mu'$, one can investigate which sector contributes most to the required FT. Fig. \ref{fig:histogram_BGFT} displays the FT contributions of the fundamental parameters of the MSSM and BLSSM. The dominating term in both cases is the $\mu$ term, which is fixed (along with $B \mu$) by requiring EWSB. The next largest contribution to the FT measure arises from the gaugino sector, whose masses are parametrised via $m_{1/2}$. This can be understood from the heavy gluino mass bound \cite{gluino} and the large gluino loop contribution needed to realise the 125 GeV Higgs boson. The BLSSM sector also contributes to the FT through $\mu'$ and $B\mu'$. There is a very small dependence on $A_0$, as discussed previously, and approximately no dependence on $m_0$ or $B \mu$ in either case.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{figures/MSSMGUTFTvsParam.png}
\includegraphics[scale=0.5]{figures/BLSSMGUTFTvsParam.png}
\caption{GUT-FT histogram for the MSSM (left) and BLSSM (right), showing contributions of the GUT-parameters.}
\label{fig:histogram_BGFT}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{figures/MSSMSUSYFTvsParam.png} \includegraphics[scale=0.5]{figures/BLSSMSUSYFTvsParam.png}\caption{SUSY-FT histogram for the MSSM (left) and BLSSM (right), showing contribution of SUSY parameters. }
\label{fig:EW-FT_histogram}
\end{figure}
Fig. \ref{fig:EW-FT_histogram} investigates which of the low-scale parameters are most responsible for the largest FT. Both the MSSM and the BLSSM are dominated by the FT in $\mu$, with a small contribution from $m_{H_u}$ and also a slight dependence on $M_{Z'}$ for the BLSSM. Considering this, what distinguishes the FT between the BLSSM and the MSSM is a combination of the size of the factor $X$ and the largeness of $\mu$ in the two models. The latter will not be identical, as there is an additional term $\frac{\tilde{g} M_{Z'}^2 Y}{4g_{BL}}$ in the BLSSM minimisation equation.
\section{Results}
\label{sec:results}
We will now compare the FT obtained in the BLSSM and MSSM scenarios, for our two FT measures. We will begin by explaining the interval ranges of our data, then we will discuss the SUSY-scale and GUT-scale FTs and which parameters are most responsible for their values. This will be done for both the BLSSM and MSSM, though the same parameters in both models are usually responsible for the largeness of FT. Then we will compare the GUT-FT and SUSY-FT for both the BLSSM and MSSM in the plane ($m_0$, $m_{1/2}$), as is commonly done.
The scan used to obtain these data has been performed with SPheno, with all points being passed through {\texttt HiggsBounds} and {\texttt HiggsSignals}. We have scanned over the range $[0,5]$ TeV in both $m_0$ and $m_{1/2}$, $\tan \beta$ in $[0 , 60]$ and $A_0$ in $[-15, 15]$ TeV, which are common universal parameters for both the MSSM and the BLSSM, while for the BLSSM we also required $\tan \beta'$ in the interval $[0,2]$
with neutrino Yukawa couplings $Y^{(1,1)}$, $Y^{(2,2)}$, $Y^{(3,3)}$ in $[0,1]$. The $M_{Z'}$ value has been fixed to 4 TeV as discussed in Section~\ref{sec:colliderdm}. We will now compare the FT for both the MSSM and BLSSM, using both low- and high-scale parameters.
We begin by presenting a measure of how the SUSY-FT varies with $\mu$ in the BLSSM, displayed in Fig. \ref{fig:ft_susy_mu}. The FT measure is equal to the maximum contribution from any of the SUSY parameters, but here we see all data points centred on the curve. The tightness of our data shows that the other parameters ($m_{H_{u}}$, $m_{H_{d}}$, $\Sigma _u$, $\Sigma _d$) are only very rarely responsible for the FT. This behaviour is expected, as one can see from the histogram plot of SUSY parameters, see Fig. \ref{fig:EW-FT_histogram}. The corresponding plot for the MSSM is almost identical, as expected from the MSSM version of the histogram discussed in Section \ref{sec:fine_tuning}, whereby the $\mu$ parameter dominates the FT, and so it is not shown.
Now, we turn our attention to considering loop contributions in the SUSY-scale FT. By treating the loop factors as independent parameters which contribute to FT, we may observe their contributions. Fig. \ref{fig:ft_susy_loop} presents the contribution to FT from $\Sigma _u$ and $\Sigma _d$ whilst varying $\mu$. Immediately, one can compare the typical FT values with that of the overall FT as in Fig. \ref{fig:ft_susy_mu} and see that the loop contributions will never be the dominant contribution for the FT. There is some growth with $\mu$, but for any given value, the contribution from $\mu$ itself is $10$ times larger. Since only the maximum contribution of any $C_i$ parameter is taken, we find that treating the tadpole loop contributions as independent of the VEV causes the one-loop FT to look much the same as at tree-level. Once again, this behaviour is mimicked in the MSSM, where the VEV independent tadpole loop corrections are also dwarfed by $\mu$'s FT.
Before we turn to our final comparison of FT, we discuss the dominant parameters in the GUT-FT sector. Fig. \ref{fig:ft_gut_m12} shows how the GUT-FT depends on $m_{1/2}$. There is a proportionality with $m_{1/2}$, favouring lower values for a better FT, but the points are not tightly constrained, unlike in the SUSY-FT case. The upward spread of points indicates that parameters other than $m_{1/2}$ can also dominate the FT. This is expected from the histogram in Fig. \ref{fig:histogram_BGFT}, where no single parameter always determines the FT, but rather a more even mix.
Finally, we will consider how the FT changes in the plane of ($m_0$, $m_{1/2}$). These two choices of parameters are selected since the universal scalar and gaugino masses are the two most important parameters.
We colour the points with their FT values in four intervals, namely: red for FT > 5000, green for 1000 < FT < 5000, orange for 500 < FT < 1000 and blue (the least finely-tuned points) for FT < 500. The same set of points is used to compare the GUT-FT and the SUSY-FT (there is only a recolouring of these data points between left and right hand side) for the BLSSM and MSSM.
The overall picture is similar for all four cases and it is immediately clear that the FT is comparable between the BLSSM and the MSSM. There is a difference in the distribution of points between the MSSM and the BLSSM, where there seem to be no viable points until $m_0 \sim 1$ TeV in the latter. This is due to the requirement of a $Z'$ mass consistent with current constraints (see Section~\ref{sec:colliderdm}). Moreover, due to the tadpole equation given in Eq.~(\ref{BLmin}), relating $M_{Z'}$ to the soft masses $m_{\eta_{1,2}}$, which are functions of $m_0$, a larger $M_{Z'}$ leads to a larger $m_0$.
All four graphs have a similar FT distribution, where a low $m_{1/2}$ is favoured and which manifests an approximate independence of $m_0$. Indeed, $m_{1/2}$ is mostly responsible for the FT rather than $m_0$ (see Fig.~\ref{fig:histogram_BGFT}). Since there is little dependence on $m_0$, we expect to see an increasing FT as $m_{1/2}$ increases, as can be seen in all four cases.
When comparing the BLSSM and MSSM GUT-FT, the two pictures are very similar, with a slightly better FT in the MSSM, though the least finely-tuned (blue) points appear up to about the same mass of $m_{1/2} \approx 2$ TeV in both.
The same holds when comparing the SUSY-FT between the BLSSM and the MSSM, where the pictures (up to the distribution of points) are very similar, with a slight dependence on $m_0$, larger values being favoured.
Lastly, we compare the GUT-FT and SUSY-FT for each of the models. In the BLSSM we find a more concentrated region of less fine-tuned points at higher $m_0$. Both measures show a strong dependence on $m_{1/2}$. In the MSSM, we again find this dependence, but not the increase in density of less-finely tuned points as in the BLSSM.
To conclude the discussion on FT, we find that the overall FT is very comparable between the BLSSM and MSSM. Though the GUT-parameter measure is similar in both pictures, with the MSSM as slightly less finely tuned, the BLSSM has a larger density of less-finely-tuned points when considering SUSY-parameters.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.56]{figures/BLSSMSUSY-mu.png}
\caption{SUSY-FT vs $\mu$. The very tight spread of points indicates $\mu$ is the dominant parameter responsible for SUSY-FT. }
\label{fig:ft_susy_mu}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.56]{figures/BLSSMLoopTadpoles.png}
\caption{SUSY-FT for the one-loop tadpole corrections $C_{\Sigma_{u}}$ and $C_{\Sigma_{d}}$ for given values of $\mu$. Their contribution is never dominant and so loop corrections do not affect the SUSY-FT. }
\label{fig:ft_susy_loop}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.56]{figures/BLSSMGUT-m12.png}
\caption{GUT-FT plotted against $m_{1/2}$. There is a strong dependence of the GUT-FT on the $m_{1/2}$ parameter, although the wide upward spread indicates other parameters may also give the dominant FT contribution. }
\label{fig:ft_gut_m12}
\end{figure}
\clearpage
\begin{figure}[ht!]
\newcommand{0.58}{0.58}
\subfigure[BLSSM GUT-FT.]{\includegraphics[scale=0.58]{figures/BLSSMGUT-loop-m0m12.png} \label{fig:ftbg_blssm_m0_m12}}
\subfigure[BLSSM SUSY-FT. ]{\includegraphics[scale=0.58]{figures/BLSSMSUSY-m0m12.png}\label{fig:ftew_blssm_m0_m12}}
\subfigure[MSSM GUT-FT.]{\includegraphics[scale=0.58]{figures/MSSMGUT-loop-m0m12.png}\label{fig:ftbg_mssm_m0_m12}}
\subfigure[MSSM SUSY-FT. ]{\includegraphics[scale=0.58]{figures/MSSMSUSY-m0m12.png}\label{fig:ftew_mssm_m0_m12}}
\caption{Fine-tuning in the plane of unification of scalar and gaugino masses for the BLSSM and MSSM, for both GUT parameters ($\Delta$) and EW parameters ($\Delta_{\rm EW}$). The FT is indicated by the colour of the dots: blue for FT $<$ 500; orange for 500 $<$ FT $<$ 1000; green for 1000 $<$ FT $<$ 5000; and red for FT $>$ 5000.}
\label{fig:all_m0_m12}
\end{figure}
We now turn to considering the DM sectors of both models. We will see that once cosmological and direct detection bounds are imposed on the DM candidates, the BLSSM parameter space is far less constrained than the MSSM one, although at the cost of an increased GUT-FT.
For each generated spectrum, the LSP must comply with the cosmological and direct detection bounds of Section~\ref{sec:colliderdm}. The relic density in respect to the
mass of the LSP ($M_{\rm DM}$) is plotted in Fig.~\ref{fig:BLSSMvsMSSM-DM}(a).
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.4]{figures/BLSSM-DM.png}
\includegraphics[scale=0.4]{figures/MSSM-DM.png}
\caption{(a) Relic density vs LSP mass for the BLSSM.
(b) Relic density vs LSP mass for the MSSM. In both plots the horizontal lines identify the $2\sigma$ region around the current central value of $\Omega h^2$. }
\label{fig:BLSSMvsMSSM-DM}
\end{figure}
The relic is overabundant for a large part of the points surviving the screening from collider constraints.
Even without specifying initial conditions, such as those
triggering a favourable coannihilation, our scan reveals multiple extended areas with relic densities close to zero.
Interestingly, the BLSSM successfully accommodates values within the allowed interval in Eq.~(\ref{PLANCK}), with all LSP species.
The corresponding distributions in Fig.~\ref{fig:BLSSMvsMSSM-DM}(a) have recognisable shapes, which point to different areas where a given LSP is more likely to cross
the experimentally allowed interval. Neutralinos may be found mostly, but not entirely, at large $M_{\rm DM}$ values. Sneutrinos appear in a cloud, with low relic density values around the centre of our mass span. The sneutrino option stands out as a very promising one, compensating its low rate of production as an LSP with a milder value of the relic with respect to the neutralino.
The extended particle spectrum of the BLSSM yields a more varied nature
of the LSP, with more numerous combinations of DM annihilation diagrams, and can play a significant role in dramatically changing the response of the model to the cosmological data, in comparison to the much constrained MSSM.
This is well manifested by the relic density computed in the MSSM, as shown in Fig.~\ref{fig:BLSSMvsMSSM-DM}(b). From here, it is obvious how the BLSSM offers a variety of solutions, precluded to the MSSM, that saturate the relic abundance compatible with the constraints, whether taken at $2\sigma$ from the central value measured by experiment or as an absolute
upper limit. In the former case, different DM incarnations (Bino-, BLino-, Bileptino-like and mixed neutralino,
alongside the sneutrino) can comply with experimental evidence over a
$M_{\rm DM}$ interval which extends up to 2 TeV or so, while in the MSSM case solutions can only be found for much lighter LSP
masses and are limited to one nature (the usual Bino-like neutralino).
Together with the limit on the cosmological relic produced at decoupling by the candidate DM particle, we challenge the constrained BLSSM against the negative
search for Weakly Interacting Massive Particle (WIMP) nuclear recoils by the LUX experiment.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.45]{figures/SIWIMP.png}
\caption{Spin-independent WIMP-nucleus scattering cross section generated in our scan against the upper bounds from 2016 run of the LUX experiment.}
\label{LUX2016}
\end{figure}
The 2016 results of the LUX collaboration have seen the upper bound on the cross section decrease by a factor of four over three years of exposure.
Such constraining analyses are still ongoing and in the coming years will either threaten or confirm the WIMP hypothesis.
From Fig.~\ref{LUX2016} we notice that, over the parameter space investigated, the BLSSM largely survives such tight limits. It is also worth stressing that the LUX bounds have started touching the BLSSM parameter space, so the next improvements of direct DM searches are going to test the BLSSM very closely.
\section{Conclusions}
\label{sec:conclusions}
While several studies of the SUSY version of the $B-L$ model, BLSSM for short, exist for its low energy phenomenology, predicting distinctive experimental signatures,
very little had been said about the theoretical degree of FT required in this scenario in order to produce them or else to escape current experimental constraints coming from
EWPOs, collider and cosmological data. We have done so in the first part of this paper, by adopting a suitable
FT measure amongst those available in literature and expressed it in terms of the low energy spectra of the
MSSM and BLSSM as well as of the (high-scale) universal parameters of the two models. The latter,
for the MSSM, include: masses for scalars and gauginos, trilinear coupling, Higgsino mass and the quadratic soft SUSY term. In the BLSSM, we have all of these parameters plus two additional ones, the BLino mass and another quadratic soft SUSY term. The low and high energy spectra in the two SUSY scenarios can be related by RGEs,
that we have computed numerically at two-loop level.
We have found that the level of FT required in the BLSSM is somewhat higher than in the MSSM when computed at the GUT scale in the presence of all available experimental constraints except those connected to DM searches.
This is primarily driven by the requirement of a large $Z'$ mass, of order 4 TeV or higher, which in turn corresponds to somewhat different acceptable values for the scalar and fermionic unification masses, partially reflected in different low energy spectra potentially accessible at the LHC. However, when the FT is computed at the SUSY scale, the pull now originating from all available experimental constraints, chiefly the DM ones, destabilises the MSSM more than the BLSSM: the latter appears more natural, reflecting its much lower level of tension with the data.
Furthermore, we have examined the response to the relic density constraints of the non-minimal SUSY scenario,
wherein the extra $B-L$
neutralinos (three extra neutral fermions: $U(1)_{B-L}$ gaugino
$\widetilde B'$ and two extra Higgsinos
$\widetilde{\eta}$) can be cold DM candidates.
As well known, taking the lightest neutralino in
the MSSM as the sole possible DM candidate implies severe constraints on the parameter space of this scenario.
Indeed, in the case of universal soft-breaking terms, the MSSM is
almost ruled out by combining collider, astrophysics and rare
decay constraints. Therefore, it is important
to explore very well motivated extensions of the MSSM, such as the BLSSM, that provide new DM candidates that may account
for the relic density with no conflict with other phenomenological
constraints.
After an extensive study in this direction, we have concluded that the extended particle spectrum of the BLSSM, in turn translating into a more varied nature of the LSP as well as a
more numerous combination of DM annihilation diagrams, can play a significant role in dramatically changing the
ability of SUSY to adapt to cosmological data, in comparison to the much constrained MSSM. In fact, the BLSSM offers a variety of solutions to the relic abundance constraint, whether taken at $2\sigma$ from the central value measured by experiment or as an absolute
upper limit, unavailable in the MSSM: alongside the usual Bino-like neutralino (and possibly the sneutrino), also BLino- and Bileptino-like as well as mixed neutralinos
can comply with experimental evidence over a
$M_{\rm DM}$ interval which extends up to 2 TeV or so, while in the MSSM case solutions can only be found for much lighter LSP masses ($\sim 500$ GeV) and are limited to one nature (the aforementioned standard Bino-like neutralino).
\section*{Acknowledgements}
SM is supported in part through the NExT Institute.
The work of LDR has been supported by the ``Angelo Della Riccia'' foundation and the STFC/COFUND Rutherford International Fellowship scheme.
The work of CM is supported by the ``Angelo Della Riccia'' foundation and by the Centre of Excellence project No TK133 ``Dark Side of the Universe''.
The work of SK is partially supported by the STDF project 13858. All authors acknowledge support
from the grant H2020-MSCA-RISE-2014 n. 645722 (NonMinimalHiggs).
\newpage
\bibliographystyle{JHEP}
\providecommand{\href}[2]{#2}\begingroup\raggedright
\section{Introduction}
The problem of minimizing the total communication rate between users in a distributed system for achieving \emph{omniscience}, i.e., learning all pieces of knowledge distributed among the users, is a fundamental problem in information theory \cite{CAZDLS:2016}. An example of such problems is \emph{cooperative data exchange} (CDE) \cite{RCS:2007}. The CDE problem considers a group of $n$ users where each user has a subset of packets in a ground set $X$ of size $k$, and wants the rest of the packets in $X$. To exchange all their packets, the users broadcast (uncoded or coded) packets over a shared lossless channel, and the problem is to find the minimum sum-rate (i.e., the total transmission rate by all users) such that each user learns all packets in $X$.
Originally, CDE was proposed in \cite{RCS:2007} for a broadcast network, and was later generalized for arbitrary networks in~\cite{CW:2014,GL:2012}. Several solutions for CDE were proposed in~\cite{SSBR:2010, SSBR2:2010, MPRGR:2016}. Several extensions of CDE were also studied, e.g., in~\cite{CW:2011,YSZ:2014,YS:2014,HS:2015,HS1:2016,CH:2016}. Moreover, it was shown in~\cite{CW:2014}, and more recently in~\cite{HS:2015} and~\cite{HS1:2016}, that a solution to CDE and CDE with erasure/error correction can be characterized in closed-form when packets are randomly distributed among users.
In real-world applications, e.g., when users have different priorities or are in different locations, different groups of users may have different objectives regarding achieving omniscience in different rounds of transmissions. Addressing such scenarios, two extensions of CDE have been studied in the literature: (i) successive omniscience (SO) \cite{CAZDLS:2016}, and (ii) CDE with priority (CDEP) \cite{HS2:2016}. In SO, in the first round of transmissions, a given subset of users achieve \emph{local omniscience}, i.e., they learn all the packets they collectively have, and in the second round of transmissions, all users in the system achieve omniscience. In CDEP, in the first round of transmissions, a given subset of users achieve \emph{global omniscience}, i.e., they learn all the packets in $X$, and in the second round of transmissions, the rest of the users achieve omniscience.
In this work, we consider extensions of SO and CDEP scenarios, referred to as the \emph{successive local omniscience} (SLO), and \emph{successive global omniscience} (SGO), where the users are divided into $\ell$ ($1\leq \ell\leq n$) nested sub-groups. In the $l$th ($1\leq l\leq \ell$) round of transmissions, the $l$th smallest sub-group of users need to achieve local or global omniscience in SLO or SGO, respectively. The problem is to find the minimum sum-rate for each round, subject to minimizing the sum-rate for the previous round.
\subsection{Our Contributions}
We use a multi-objective linear program (MOLP) with $O(2^{n}\cdot \ell)$ constraints and $n\ell$ variables to solve the SLO and SGO problems for any arbitrary problem instance. For any instance where the packets are randomly distributed among users, we identify a system of $n\ell$ linear equations with $n\ell$ variables whose solution characterizes the minimum sum-rate for each round with high probability as $k$ tends to infinity. Moreover, for the special case of two nested groups, we derive closed-form expressions for the minimum sum-rate for each round which hold with high probability as $k$ tends to infinity.
\section{Problem Setup}\label{sec:PS}
Consider a group of $n$ users $N=\{1,\dots,n\}$ and a set of $k$ packets $X=\{x_1,x_2,\dots,x_k\}$. Initially, each user $i\in N$ holds a subset $X_i$ of packets in $X$. Assume, without loss of generality, that $X=\cup_{i\in N} X_i$. Suppose that the index set of packets available at each user is known by all other users. Assume that each packet can be partitioned into an arbitrary (but the same for all packets) number of chunks of equal size. The ultimate objective of all users is to achieve \emph{omniscience}, i.e., to learn all the packets in $X$, via broadcasting (coded or uncoded) chunks over an erasure/error-free channel. This scenario is known as the \emph{cooperative data exchange} (CDE). In CDE, the problem is to find the minimum \emph{sum-rate}, i.e., the total transmission rate by all users, where the \emph{transmission rate} of each user is the total number of chunks being transmitted by the user, normalized by the number of chunks per packet.
In this work, we consider two generalizations of CDE where the transmissions are divided into multiple rounds, and different groups of users have different objectives in each round: (i) \emph{successive local omniscience} (SLO), and (ii) \emph{successive global omniscience} (SGO). Fix an arbitrary integer $1\leq \ell\leq n$, and arbitrary integers $1< n_1<n_2<\dots <n_{\ell}=n$. Suppose that the set $N$ of $n$ users is divided into $\ell$ nested sub-groups $\emptyset\neq N_1\subsetneq N_2\subsetneq\dots\subsetneq N_{\ell}=N$, where $N_{l}\triangleq\{1,\dots,n_l\}$, $\forall 1\leq l\leq \ell$. Let $X^{(l)}$ be the set of all packets that all users in $N_l$ collectively hold, i.e., $X^{(l)}\triangleq\cup_{i\in N_l} X_i$, $\forall 1\leq l\leq \ell$. (Note that $X^{(\ell)}=X$.) Let $\overline{X}^{(l)}_i$ be the set of all packets in $X^{(l)}$ that user $i$ does not hold, i.e., $\overline{X}^{(l)}_i\triangleq X^{(l)} \setminus X_i$, $\forall i\in N_l$, $\forall 1\leq l\leq \ell$. (Note that $\overline{X}^{(\ell)}_i=X\setminus X_i$, $\forall i\in N$.) For the ease of notation, let $\overline{X}_i\triangleq \overline{X}^{(\ell)}_i$, $\forall i\in N$.
In SLO, in the $l$th ($1\leq l\leq \ell$) round of transmissions, the objective of all users in $N_l$ is to achieve \emph{local omniscience}, i.e., to learn all packets in $X^{(l)}$, without any help from the users in $N\setminus N_l$. (Each user $i\in N_l$ needs to learn all packets in $\overline{X}^{(l)}_i$.) In SGO, in the $l$th ($1\leq l\leq \ell$) round of transmissions, the objective of all users in $N_l$ is to achieve \emph{global omniscience}, i.e., to learn all packets in $X$, possibly with the help of users in $N\setminus N_l$. (Each user $i\in N_l$ needs to learn all packets in $\overline{X}_i$.) Note that, for $\ell=1$, SLO and SGO reduce to CDE.
Let $r^{(l)}_i$ be the transmission rate of user $i$ in the $l$th round. Let $r^{(l)}_S\triangleq\sum_{i\in S} r^{(l)}_i$, $\forall S\subseteq N$, be the sum-rate of all users in $S$ in the $l$th round, and let $r^{(0)}_S\triangleq 0$, $\forall S\subseteq N$. In SLO and SGO, the problem is to find the minimum $r^{(l)}_N$ for each $1\leq l\leq \ell$, subject to minimizing $r^{(l-1)}_N$. (Note that this is equivalent to finding the minimum $r^{(l)}_N$ for each $1\leq l\leq \ell$, subject to minimizing $r^{(m)}_N$ for all $1\leq m<l$.) Our goal is to solve this problem for any given problem instance $\{X_i\}$.
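For concreteness, the following Python sketch builds a small instance (the packet sets below are hypothetical, chosen purely for illustration) and computes the sets $X^{(l)}$ and $\overline{X}^{(l)}_i$ defined above.
\begin{verbatim}
# Toy instance illustrating the notation (values are hypothetical):
# n = 4 users, k = 6 packets, ell = 2 nested groups N_1 = {1,2}, N_2 = N.
X = {1: {1, 2}, 2: {2, 3, 4}, 3: {4, 5}, 4: {1, 6}}  # X_i: packets of user i
groups = [{1, 2}, {1, 2, 3, 4}]                      # N_1 subset of N_2 = N

for l, N_l in enumerate(groups, start=1):
    X_l = set().union(*(X[i] for i in N_l))  # X^{(l)}: packets held by N_l
    for i in N_l:
        missing = X_l - X[i]                 # bar{X}^{(l)}_i: what user i lacks
        print("round", l, "user", i, "must learn", sorted(missing))
\end{verbatim}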
\section{Arbitrary Problem Instances}\label{sec:MR}
Using techniques similar to those used for CDE in \cite{CW:2014}, each round in SLO and SGO can be reduced to a multicast network coding scenario. Thus, the necessary and sufficient conditions for achieving \emph{local} and \emph{global} omniscience in the $l$th round are given by the following cut-set constraints: \[\sum_{m=1}^{l} r^{(m)}_S\geq \left|\cap_{i\in N_l\setminus S} \overline{X}^{(l)}_i \right|, \forall S\subsetneq N_l,\] and \[\sum_{m=1}^{l} r^{(m)}_S\geq \left|\cap_{i\in N\setminus S} \overline{X}_i \right|, \forall S\subsetneq N, N_{l-1}\subset S, N_{l}\not\subset S,\] respectively. (We sketch the proof of necessity of these constraints in the proofs of Theorems~\ref{thm:ArbitrarySLO} and~\ref{thm:ArbitrarySGO}, and omit the proof of their sufficiency, which relies on the standard network-coding argument \cite{CW:2014}.) Based on these constraints, for any instance $\{X_i\}$, one can find a solution to SLO or SGO by solving a multi-objective linear program (MOLP) (see Theorem~\ref{thm:ArbitrarySLO} and Theorem~\ref{thm:ArbitrarySGO}).
The special cases of the following results for $\ell=2$ were previously presented in \cite{CAZDLS:2016} and \cite{HS2:2016}.
\begin{theorem}\label{thm:ArbitrarySLO}
For any instance $\{X_i\}$, any solution to the SLO problem is a solution to the following MOLP (and vice versa):
\begin{eqnarray}\label{eq:LPSLO}
\mathrm{min} && \hspace{-1.25em} r^{(\ell)}_N \\[-0.25em] \nonumber \label{eq:SLOC1}
&& \hspace{-1.25em} \dots\\[-0.25em]
\label{eq:SLOO2} \mathrm{min} && \hspace{-1.25em} r^{(2)}_N \\ \label{eq:SLOO1} \mathrm{min} && \hspace{-1.25em} r^{(1)}_N \\[-0.25em]
\mathrm{s.t.}
&& \label{eq:SLOC2} \hspace{-1.25em} r^{(1)}_S\geq \bigg|\bigcap_{i\in N_1\setminus S} \overline{X}^{(1)}_i\bigg|, \forall S\subsetneq N_1\\[-0.25em]
&& \label{eq:SLOC3} \hspace{-1.25em} \sum_{l=1}^{2} r^{(l)}_S\geq \bigg|\bigcap_{i\in N_{2}\setminus S}\overline{X}^{(2)}_i\bigg|, \forall S\subsetneq N_{2}\hspace{1.75em}\\ \nonumber
&& \hspace{-1.25em} \dots \\
&& \label{eq:SLOC4} \hspace{-1.25em} \sum_{l=1}^{\ell} r^{(l)}_S\geq \bigg|\bigcap_{i\in N_{\ell}\setminus S}\overline{X}^{(\ell)}_i\bigg|, \forall S\subsetneq N_{\ell}\hspace{1.5em}\\[-0.25em] \nonumber
&& \hspace{-1.25em} (r^{(l)}_i\geq 0, \forall i\in N_l, \forall 1\leq l\leq \ell)\\ \nonumber
&& \hspace{-1.25em} (r^{(l)}_i= 0, \forall i\in N\setminus N_l, \forall 1\leq l\leq \ell)
\end{eqnarray}
\end{theorem}
\begin{proof}[Proof (Sketch)]
In the first round, all users in $N_1$ need to learn $X^{(1)}$. Thus, for any (proper) subset $S$ of users in $N_1$, the corresponding constraint $r^{(1)}_S\geq |\cap_{i\in N_1\setminus S} \overline{X}^{(1)}_i|$ is necessary. This is due to the fact that, for any $S\subsetneq N_1$, each user $i\in N_1\setminus S$ needs to learn $\overline{X}^{(1)}_i$. This yields the constraints in~\eqref{eq:SLOC2}. For any other $S$, the corresponding constraint is, however, unnecessary. This comes from the fact that $r^{(1)}_S =r^{(1)}_{S\cap N_1} + r^{(1)}_{S\setminus N_1}= r^{(1)}_{S\cap N_1} \geq |\cap_{i\in N_1\setminus (S\cap N_1)} \overline{X}^{(1)}_i|=|\cap_{i\in N_1\setminus S} \overline{X}^{(1)}_i|$.
In the second round, all users in $N_2$ need to learn $X^{(2)}$. Similarly as above, for any (proper) subset $S$ of users in $N_2$, the corresponding constraint $r^{(1)}_S+r^{(2)}_S\geq |\cap_{i\in N_2\setminus S} \overline{X}^{(2)}_i|$ imposes a necessary constraint, and hence the constraints in~\eqref{eq:SLOC3}. However, for any other $S$, the corresponding constraint is unnecessary. This is because $r^{(1)}_S+r^{(2)}_S = r^{(1)}_{S\cap N_2}+ r^{(2)}_{S\cap N_2} \geq |\cap_{i\in N_2\setminus (S\cap N_2)} \overline{X}^{(2)}_i|=|\cap_{i\in N_2\setminus S} \overline{X}^{(2)}_i|$.
Repeating the same argument as above, it follows that the necessary constraints for all users in $N_l$ to learn $X^{(l)}$ in the $l$th round are $\sum_{1\leq m\leq l} r^{(m)}_S\geq |\cap_{i\in N_l\setminus S} \overline{X}^{(l)}_i|$, $\forall S\subsetneq N_l$.
\end{proof}
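As a sanity check of Theorem~\ref{thm:ArbitrarySLO}, the MOLP can be solved lexicographically with off-the-shelf tools: minimize $r^{(1)}_N$ first, freeze the optimum as an equality constraint, then minimize $r^{(2)}_N$, and so on. The following Python sketch does this with SciPy's \texttt{linprog} on a hypothetical toy instance; it enumerates all $O(2^{n}\cdot\ell)$ cut-set constraints explicitly and is therefore only practical for small $n$.
\begin{verbatim}
# Lexicographic solution of the SLO MOLP of Theorem 1 -- a sketch on a
# hypothetical instance. Requires SciPy >= 1.6 (for method="highs").
from itertools import chain, combinations

import numpy as np
from scipy.optimize import linprog

X = {1: {1, 2}, 2: {2, 3, 4}, 3: {4, 5}, 4: {1, 6}}  # X_i (hypothetical)
groups = [{1, 2}, {1, 2, 3, 4}]                      # N_1 subset of N_2 = N
n, ell = len(X), len(groups)
users = sorted(X)

def idx(l, i):                      # flat index of the variable r^{(l)}_i
    return (l - 1) * n + users.index(i)

def proper_subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)))

A_ub, b_ub = [], []
for l, N_l in enumerate(groups, start=1):
    X_l = set().union(*(X[i] for i in N_l))
    for S in proper_subsets(N_l):
        # sum_{m <= l} r^{(m)}_S >= |intersection of bar{X}_i over N_l \ S|
        rhs = len(set.intersection(*[X_l - X[i] for i in N_l - set(S)]))
        row = np.zeros(n * ell)
        for m in range(1, l + 1):
            for i in S:
                row[idx(m, i)] = -1.0        # negate: linprog uses A x <= b
        A_ub.append(row)
        b_ub.append(-rhs)

bounds = [(0, 0)] * (n * ell)                # forces r^{(l)}_i = 0 ...
for l, N_l in enumerate(groups, start=1):    # ... unless i is in N_l
    for i in N_l:
        bounds[idx(l, i)] = (0, None)

A_eq, b_eq = [], []
for l, N_l in enumerate(groups, start=1):    # rounds in priority order
    c = np.zeros(n * ell)
    for i in N_l:
        c[idx(l, i)] = 1.0                   # objective: r^{(l)}_N
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq) if A_eq else None,
                  b_eq=b_eq if b_eq else None,
                  bounds=bounds, method="highs")
    print("round", l, "minimum sum-rate:", round(res.fun, 4))
    A_eq.append(c)                           # freeze this round's optimum
    b_eq.append(res.fun)
\end{verbatim}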
\begin{theorem}\label{thm:ArbitrarySGO}
For any instance $\{X_i\}$, any solution to the SGO problem is a solution to the following MOLP (and vice versa):
\begin{eqnarray}\label{eq:LPSGO}
\mathrm{min} && \hspace{-1.25em} r^{(\ell)}_N \\[-0.25em] \nonumber \label{eq:SGOC1}
&& \hspace{-1.25em} \dots\\[-0.25em]
\mathrm{min} && \hspace{-1.25em} r^{(2)}_N \\ \mathrm{min} && \hspace{-1.25em} r^{(1)}_N \\[-0.25em]
\mathrm{s.t.}
&& \label{eq:SGOC2} \hspace{-1.25em} r^{(1)}_S\geq \bigg|\bigcap_{i\in N\setminus S} \overline{X}_i\bigg|, \forall S\subsetneq N, N_1\not\subset S\\[-0.25em]
&& \label{eq:SGOC3} \hspace{-1.25em} \sum_{l=1}^{2} r^{(l)}_S\geq \bigg|\bigcap_{i\in N\setminus S}\overline{X}_i\bigg|, \forall S\subsetneq N, N_1\subset S, N_2\not\subset S \hspace{1.75em}\\
\nonumber
&& \hspace{-1.25em} \dots \\
&& \label{eq:SGOC4} \hspace{-1.25em} \sum_{l=1}^{\ell} r^{(l)}_S\geq \bigg|\bigcap_{i\in N\setminus S}\overline{X}_i\bigg|, \forall S\subsetneq N, N_{\ell-1}\subset S, N_{\ell}\not\subset S \hspace{2em}\\[-0.25em] \nonumber
&& \hspace{-1.25em} (r^{(l)}_i\geq 0, \forall i\in N, \forall 1\leq l\leq \ell)
\end{eqnarray}
\end{theorem}
\begin{proof}[Proof (Sketch)]
In the first round, all users in $N_1$ need to learn $X$ and none of the users in $N\setminus N_1$ needs to learn $X$. Thus, for any (proper) subset $S$ of users in $N$ not containing $N_1$, the corresponding constraint $r^{(1)}_S\geq |\cap_{i\in N\setminus S} \overline{X}_i|$ is necessary, and hence~\eqref{eq:SGOC2}. For any $S$ containing $N_1$, the corresponding constraint is, however, unnecessary since $N\setminus S$ consists only of users which need not learn $X$ in the first round.
In the second round, all users in $N_2$ need to learn $X$. Since all users in $N_1$ learn $X$ in the first round, for any $S$ containing $N_1$ but not $N_2$, the corresponding constraint $r^{(1)}_S+r^{(2)}_S\geq |\cap_{i\in N\setminus S} \overline{X}_i|$ imposes a necessary constraint, and hence~\eqref{eq:SGOC3}. For any $S$ containing $N_2$, the corresponding constraint is unnecessary since $N\setminus S$ consists only of users not in $N_2$, and none of these users need to learn $X$ in the second round. Note that, for any (proper) $S$ not containing $N_1$, the corresponding constraint is redundant since $N\setminus S$ includes some user(s) in $N_1$, and such users learn $X$ in the first round.
By using a similar argument as above, it follows that the necessary constraints for all users in $N_l$ to learn $X$ in the $l$th round are $\sum_{1\leq m\leq l} r^{(m)}_S\geq |\cap_{i\in N\setminus S} \overline{X}_i|$, $\forall S\subsetneq N$, $N_{l-1}\subset S$, $N_l\not\subset S$.
\end{proof}
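Note that the constraint families of Theorem~\ref{thm:ArbitrarySGO} partition the proper non-empty subsets of $N$ according to the first round in which they impose a constraint. The following Python sketch (with hypothetical group sizes) enumerates, for each round $l$, the subsets $S\subsetneq N$ with $N_{l-1}\subset S$ and $N_l\not\subset S$; the constraint for $S=\emptyset$ is trivial and skipped.
\begin{verbatim}
from itertools import combinations

def sgo_subsets(groups):
    N = groups[-1]
    prev = set()                           # N_0 = empty set
    for l, N_l in enumerate(groups, start=1):
        rest = sorted(N - prev)
        for r in range(len(rest) + 1):
            for extra in combinations(rest, r):
                S = prev | set(extra)      # S contains N_{l-1} by construction
                if not S or N_l <= S:      # skip empty S and S containing N_l
                    continue
                yield l, S
        prev = N_l

for l, S in sgo_subsets([{1, 2}, {1, 2, 3}, {1, 2, 3, 4}]):
    print("round", l, "S =", sorted(S))
\end{verbatim}
For $N_1=\{1,2\}$, $N_2=\{1,2,3\}$ and $N_3=N=\{1,2,3,4\}$, the three families contain $11$, $2$ and $1$ subsets, respectively, which together cover all $2^4-2$ proper non-empty subsets of $N$.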
\section{Random Packet Distribution}
In this section, we assume that each packet is available at each user, independently of other packets and other users, with probability $0<p<1$. (This model is referred to as the \emph{random packet distribution} in~\cite{CW:2014,HS:2015,HS1:2016}.)
Theorems~\ref{thm:SLORandom} and~\ref{thm:SGORandom} characterize, with probability approaching $1$ (w.p.~$\rightarrow 1$) as $k$ tends to infinity ($k\rightarrow \infty$), a solution to SLO and SGO, respectively, by a system of linear equations (SLE) for any \emph{random} problem instance under the assumption above.
\begin{theorem}\label{thm:SLORandom}
For any random instance $\{X_i\}$, w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, a solution to the SLO problem is given by the following SLE:\vspace{-0.5em}
\begin{eqnarray}
&& \label{eq:SLE1E1}\hspace{-2em} r^{(1)}_{N_1\setminus \{i\}} = \left|\overline{X}^{(1)}_{i}\right|, \forall i\in N_1\\
&& \label{eq:SLE1E2}\hspace{-2em} r^{(l)}_{N_{l,j}} = \bigg|\bigcap_{i\in N_l\setminus N_{l,j}}\overline{X}^{(l)}_{i}\bigg|, \forall 1\leq j\leq d_l, \forall 1<l\leq \ell\\
&& \label{eq:SLE1E3}\hspace{-2em} r^{(l)}_i = 0, \forall i\in N\setminus N_l, \forall 1\leq l\leq \ell\\
&& \label{eq:SLE1E4}\hspace{-2em} r^{(l)}_i = 0, \forall i\in N_{l-1}, \forall 1<l\leq\ell
\end{eqnarray} where $d_l\triangleq n_l-n_{l-1}$ for all $1<l\leq \ell$ and $N_{l,j}\triangleq \{n_{l-1}+1,\dots,n_{l-1}+j\}$ for all $1<l\leq \ell$ and all $1\leq j\leq d_l$.
\end{theorem}
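The SLE of Theorem~\ref{thm:SLORandom} can be solved directly: summing the equations in~\eqref{eq:SLE1E1} over $i\in N_1$ yields the round-one rates in closed form, and the equations in~\eqref{eq:SLE1E2} telescope over the newcomers $N_l\setminus N_{l-1}$ of each later round. The following Python sketch implements this on a hypothetical instance; all rates not assigned below are zero by~\eqref{eq:SLE1E3} and~\eqref{eq:SLE1E4}.
\begin{verbatim}
# Hypothetical instance; the SLE of Theorem 3 is exact only w.h.p. as k
# grows, so on a tiny instance the values merely illustrate the mechanics.
X = {1: {1, 2, 5}, 2: {2, 3}, 3: {3, 4}, 4: {1, 4, 6}}
groups = [{1, 2}, {1, 2, 3, 4}]

rates, prev = {}, set()
for l, N_l in enumerate(groups, start=1):
    X_l = set().union(*(X[i] for i in N_l))
    miss = {i: X_l - X[i] for i in N_l}         # bar{X}^{(l)}_i
    if l == 1:
        n1 = len(N_l)                           # r_{N_1\{i}} = |bar{X}_i| =>
        total = sum(len(miss[i]) for i in N_l)  # r_i = total/(n1-1) - |bar{X}_i|
        for i in N_l:
            rates[(1, i)] = total / (n1 - 1) - len(miss[i])
    else:
        newcomers, last = sorted(N_l - prev), 0
        for j, i in enumerate(newcomers, start=1):
            # r^{(l)}_{N_{l,j}} equals the cut value, so rates telescope:
            N_lj = set(newcomers[:j])
            cut = len(set.intersection(*[miss[u] for u in N_l - N_lj]))
            rates[(l, i)], last = cut - last, cut
    prev = N_l

for (l, i), r in sorted(rates.items()):
    print("r^(%d)_%d = %.3f" % (l, i, r))
\end{verbatim}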
\begin{theorem}\label{thm:SGORandom}
For any random instance $\{X_i\}$, w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, a solution to the SGO problem is given by the following SLE:\vspace{-0.5em}
\begin{eqnarray}
&& \label{eq:SLE2E1}\hspace{-2.5em} \sum_{m=1}^{m_l} r^{(m)}_{N_l\setminus \{j_l\}} = \bigg|\bigcap_{i\not\in N_l\setminus \{j_l\}}\overline{X}_{i}\bigg|, \forall 1\leq l<\ell\\
&& \label{eq:SLE2E2}\hspace{-2.5em} \sum_{m=1}^{l} r^{(m)}_{N\setminus \{i\}} = \left|\overline{X}_i\right|, \forall i\in N_l\setminus N_{l-1}, \forall 1\leq l\leq \ell\\
&& \label{eq:SLE2E3}\hspace{-2.5em} r^{(l)}_i = 0, \forall i\in N\setminus N_{l-1}, \forall 1< l\leq \ell\\
&& \label{eq:SLE2E4}\hspace{-2.5em} r^{(l)}_i = r^{(l)}_j, \forall i,j\in N_{l-1}, \forall 1<l\leq\ell
\end{eqnarray} where for any $1\leq l< \ell$, $j_l\in N_{m_{l}}\setminus N_{m_{l}-1}$ (for some $1\leq m_{l}\leq l$) such that $\sum_{i\in N_l\setminus\{j_l\}}\left|\overline{X}_i\right|+\left|\cap_{i\not\in N_l\setminus \{j_l\}} \overline{X}_i\right|\geq \sum_{i\in N_l\setminus\{j\}} \left|\overline{X}_i\right|+\left|\cap_{i\not\in N_l\setminus\{j\}} \overline{X}_i\right|$ for all $j\in N_l$.
\end{theorem}
Note that~\eqref{eq:SLE1E3} and~\eqref{eq:SLE1E4} imply that for SLO, in the first round, only users in $N_1$ may transmit, and in each round $l>1$, only users in $N_l\setminus N_{l-1}$ need to transmit. A closer look at~\eqref{eq:SLE1E1} and~\eqref{eq:SLE1E2} reveals that for SLO, only the users in $N_1$ may need to transmit at fractional rates, and it suffices for the rest of the users in $N\setminus N_1$ to transmit at integral rates. Moreover,~\eqref{eq:SLE2E3} and~\eqref{eq:SLE2E4} imply that for SGO, in the first round, all users in $N$ may transmit, but in each round $l>1$, only users in $N_{l-1}$ need to transmit, and they all can transmit at the same rate.
Theorems~\ref{thm:SLORandomSpecial} and~\ref{thm:SGORandomSpecial} give a closed-form solution to the SLE's in~\eqref{eq:SLE1E1}-\eqref{eq:SLE1E4} and~\eqref{eq:SLE2E1}-\eqref{eq:SLE2E4} for the special case of $\ell=2$.
\begin{theorem}\label{thm:SLORandomSpecial}
For any random instance $\{X_i\}$, w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, a solution to the SLO problem for $\ell=2$ is given by
\begin{equation*}\label{eq:SLOr1}
\tilde{r}^{(1)}_i=\left\{
\begin{array}{ll}
\hspace{-0.25em}\frac{1}{n_1-1}\sum_{j\in N_1} |\overline{X}^{(1)}_j|-|\overline{X}^{(1)}_i|, & \hspace{-0.25em} i\in N_1\\
\hspace{-0.25em}0, & \hspace{-0.25em} i\not\in N_1
\end{array}
\right.\end{equation*} and
\begin{equation*}\label{eq:SLOr2}
\tilde{r}^{(2)}_i=\left\{
\begin{array}{ll}
\hspace{-0.25em}0, & \hspace{-0.25em} i\in N_1\\
\hspace{-0.25em}|\cap_{j\not\in N_{2,i-n_1}}\overline{X}^{(2)}_j|-|\cap_{j\not\in N_{2,i-n_1-1}}\overline{X}^{(2)}_j|, & \hspace{-0.25em} i\not\in N_1
\end{array}
\right.\end{equation*}
\end{theorem}
\begin{theorem}\cite[Theorem~2]{HS2:2016}\label{thm:SGORandomSpecial}
For any random instance $\{X_i\}$, w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, a solution to the SGO problem for $\ell=2$ is given by
\begin{equation*}\label{eq:SGOr1}
\hspace{0.5em}\tilde{r}^{(1)}_i=\left\{
\begin{array}{ll}
\hspace{-0.25em}\frac{1}{n_1-1}\hspace{-0.25em}\left(\sum_{j\in M} |\overline{X}_j|+|\cap_{j\not\in M}\overline{X}_j|\right)-|\overline{X}_i|, & \hspace{-0.25em} i\in N_1\\
\hspace{-0.25em}\frac{1}{n-n_1}\hspace{-0.25em}\left(\sum_{j\not\in M} |\overline{X}_j|-|\cap_{j\not\in M}\overline{X}_j|\right)-|\overline{X}_i|, & \hspace{-0.25em} i\not\in N_1
\end{array}
\right.\end{equation*} and
\begin{equation*}\label{eq:SGOr2}
\hspace{0.25em}\tilde{r}^{(2)}_i=\left\{
\begin{array}{ll}
\hspace{-0.25em} \frac{\sum_{j\not\in M} \left|\overline{X}_j\right|}{n_1(n-n_1)}\hspace{-0.125em}-\frac{\sum_{j\in M} \left|\overline{X}_j\right|}{n_1(n_1-1)}\hspace{-0.125em}-\frac{\hspace{-0.125em}(n\hspace{-0.075em}-\hspace{-0.075em}1)\left|\cap_{j\not\in M}\overline{X}_j\right|}{n_1(n_1-1)(n-n_1)}, & \hspace{-0.25em} i\in N_1\\
\hspace{-0.25em} 0, & \hspace{-0.25em} i\not\in N_1
\end{array}
\right.\end{equation*} where $M\triangleq N_1\setminus\{j_1\}$.
\end{theorem}
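To illustrate the mechanics of Theorem~\ref{thm:SGORandomSpecial}, the following Python sketch assembles $\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ on a hypothetical instance, choosing $j_1$ by the maximization stated after Theorem~\ref{thm:SGORandom} (restricted here to $j\in N_1$). Since the theorem holds only w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, on a small instance the resulting rates need not be feasible; the code merely shows how the formulas fit together.
\begin{verbatim}
X = {1: {1, 2}, 2: {2, 3, 5}, 3: {3, 4}, 4: {1, 4, 6}}  # hypothetical
N, N1 = set(X), {1, 2}
n, n1 = len(N), len(N1)
Xall = set().union(*X.values())
miss = {i: Xall - X[i] for i in N}                      # bar{X}_i

def score(j):                   # selection rule for j_1 (restricted to N_1)
    M = N1 - {j}
    inter = set.intersection(*[miss[i] for i in N - M])
    return sum(len(miss[i]) for i in M) + len(inter)

j1 = max(N1, key=score)
M = N1 - {j1}
inter = len(set.intersection(*[miss[i] for i in N - M]))
sumM = sum(len(miss[i]) for i in M)             # sum of |bar{X}_j|, j in M
sumO = sum(len(miss[i]) for i in N - M)         # sum of |bar{X}_j|, j not in M

r1 = {i: (sumM + inter) / (n1 - 1) - len(miss[i]) if i in N1
      else (sumO - inter) / (n - n1) - len(miss[i]) for i in N}
r2 = {i: sumO / (n1 * (n - n1)) - sumM / (n1 * (n1 - 1))
      - (n - 1) * inter / (n1 * (n1 - 1) * (n - n1)) if i in N1 else 0.0
      for i in N}
print("round 1:", r1)
print("round 2:", r2)
\end{verbatim}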
Such closed-form results lead to several interesting observations as follows. First, the minimum required number of chunks per packet for SLO is $n_1-1$, and this quantity for SGO is $\mathrm{LCM}(n_1-1,n-n_1)$. Note that this quantity for CDE is $n-1$ \cite{CW:2014}. Second, for any random instance $\{X_i\}$, the total sum-rate (normalized by the total number of packets ($k$)) is tightly concentrated around\vspace{-0.125em} \[r_{\text{SLO}} \triangleq \frac{n_1(q-q^{n_1})}{(n_1-1)(1-q^{n_1})}+\frac{q^{n_1}-q^n}{1-q^{n}},\vspace{-0.125em}\] and\vspace{-0.125em} \[r_{\text{SGO}} \triangleq \frac{(n-n_1+1)q-(n-n_1)q^{n}-q^{n-n_1+1}}{(n-n_1)(1-q^{n})},\vspace{-0.125em}\] in SLO and SGO, respectively, where $q\triangleq 1-p$. Note that this quantity for CDE is $r_{\text{CDE}}\triangleq\frac{n(q-q^n)}{(n-1)(1-q^n)}$ \cite{CW:2014}.
Let $e_{\text{SLO}} \triangleq ({r_{\text{SLO}}-r_{\text{CDE}}})/{r_{\text{CDE}}}$ and $e_{\text{SGO}}\triangleq ({r_{\text{SGO}}-r_{\text{CDE}}})/{r_{\text{CDE}}}$ be the excess rate of SLO over CDE and the excess rate of SGO over CDE, respectively. Fig.~\ref{fig:eSLOvseSGO} depicts $e_{\text{SLO}}$ and $e_{\text{SGO}}$ versus $p$ for $n=6$ and $n_1=2,\dots,5$.
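The following Python sketch evaluates $r_{\text{SLO}}$, $r_{\text{SGO}}$ and $r_{\text{CDE}}$ from the expressions above and prints the excess rates $e_{\text{SLO}}$ and $e_{\text{SGO}}$ for the parameters of Fig.~\ref{fig:eSLOvseSGO} ($n=6$, $n_1=2,\dots,5$); the sample point $p=0.5$ is chosen arbitrarily.
\begin{verbatim}
def r_cde(n, p):
    q = 1 - p
    return n * (q - q**n) / ((n - 1) * (1 - q**n))

def r_slo(n, n1, p):
    q = 1 - p
    return (n1 * (q - q**n1) / ((n1 - 1) * (1 - q**n1))
            + (q**n1 - q**n) / (1 - q**n))

def r_sgo(n, n1, p):
    q = 1 - p
    return (((n - n1 + 1) * q - (n - n1) * q**n - q**(n - n1 + 1))
            / ((n - n1) * (1 - q**n)))

n, p = 6, 0.5
base = r_cde(n, p)
for n1 in range(2, 6):
    e_slo = (r_slo(n, n1, p) - base) / base
    e_sgo = (r_sgo(n, n1, p) - base) / base
    print("n1=%d: e_SLO=%+.4f, e_SGO=%+.4f" % (n1, e_slo, e_sgo))
\end{verbatim}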
Comparing $e_{\text{SLO}}(n,n_1,p)$ and $e_{\text{SGO}}(n,n_1,p)$ yields the following non-trivial observations. First, for any $1<n_1\leq \frac{n}{2}$ and any $0<p<1$, $e_{\text{SLO}}\geq e_{\text{SGO}}$, and for any $\frac{n}{2}<n_1\leq n-1$, there exists some $0<p^{*}<1$ such that for any $0<p\leq p^{*}$, $e_{\text{SLO}}\geq e_{\text{SGO}}$ and for any $p^{*}<p<1$, $e_{\text{SLO}}< e_{\text{SGO}}$. Second, for any $1< n_1\leq \frac{n}{2}$, there exists some $0<p_{*}<1$ such that $e_{\text{SLO}}$ decreases as $p$ increases from $0$ to $p_{*}$, and then the trend changes, i.e., $e_{\text{SLO}}$ increases as $p$ increases from $p_{*}$ to $1$. For any $\frac{n}{2}<n_1\leq n-1$, $e_{\text{SLO}}$ decreases as $p$ increases. This is in contrast to $e_{\text{SGO}}$ which, for any $1<n_1\leq n-1$, increases monotonically as $p$ increases. Third, for any $1<n_1\leq n-1$, $e_{\text{SLO}}(n,n_1,p)$ and $e_{\text{SGO}}(n,n-n_1,p)$ converge to the same limit as $p$ approaches $1$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{eSLOvseSGO.eps}\vspace{-0.5em}
\caption{The excess rate of SLO and SGO over CDE.}\label{fig:eSLOvseSGO}\vspace{-1em}
\end{figure}
\section{Proofs}\label{sec:Proofs}
In this section, we prove Theorem~\ref{thm:SLORandomSpecial}, and refer the reader to \cite[Theorem~2]{HS2:2016} for the proof of Theorem~\ref{thm:SGORandomSpecial}. The proofs of Theorems~\ref{thm:SLORandom} and~\ref{thm:SGORandom} use similar techniques, and are deferred to an extended version of this work due to space limitations.
The proof of Theorem~\ref{thm:SLORandomSpecial} consists of two parts: feasibility of $\{\tilde{r}^{(1)}_i, \tilde{r}^{(2)}_i\}$ with respect to (w.r.t.)~\eqref{eq:SLOC2} and~\eqref{eq:SLOC3} (Lemma~\ref{lem:feasibility}), and optimality of $\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ w.r.t.~LP~\eqref{eq:SLOO1} and LP~\eqref{eq:SLOO2} (Lemma~\ref{lem:optimality}).
The proofs rely on the following two lemmas. (The proofs of these lemmas can be found in \cite{HS2:2016}.)
\begin{lemma}\cite[Lemma~1]{HS2:2016}\label{lem:concentration}
For any $1\leq l\leq \ell$ and any $S\subsetneq N_l$,
\[\Bigg|\frac{1}{k}\bigg|\bigcap_{i\in N_l\setminus S} \overline{X}^{(l)}_i\bigg|-z_{|N_l|,|S|}\Bigg|<\epsilon,\] for any $\epsilon>0$, w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, where
\begin{equation}\label{eq:zms}
z_{m,s} \triangleq \frac{(1-p)^{m-s}-(1-p)^{m}}{1-(1-p)^{m}},\end{equation} for any $0\leq s<m$.
\end{lemma}
\begin{lemma} \cite[Lemma~2]{HS2:2016}\label{lem:PV}
For any $0<p<1$ and any $0< s_1<s_2<m$, $\frac{z_{m,s_1}}{s_1}<\frac{z_{m,s_2}}{s_2}$.
\end{lemma}
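Lemma~\ref{lem:PV} is also easy to check numerically; the following Python sketch evaluates $z_{m,s}/s$ from~\eqref{eq:zms} for a sample $(m,p)$ (values chosen arbitrarily) and asserts strict monotonicity in $s$.
\begin{verbatim}
def z(m, s, p):                 # z_{m,s} as defined in the lemma above
    q = 1 - p
    return (q**(m - s) - q**m) / (1 - q**m)

m, p = 7, 0.3                   # arbitrary sample parameters
ratios = [z(m, s, p) / s for s in range(1, m)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
print([round(r, 4) for r in ratios])
\end{verbatim}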
The rest of the results hold ``w.p.~$\rightarrow 1$ as $k\rightarrow\infty$,'' and hereafter we omit this statement for brevity.
\begin{lemma}\label{lem:feasibility}
$\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ is feasible w.r.t.~\eqref{eq:SLOC2} and~\eqref{eq:SLOC3}.
\end{lemma}
\begin{proof}
We need to show that: (i) $\tilde{r}^{(1)}_S\geq |\cap_{i\in N_1\setminus S} \overline{X}^{(1)}_i|$, $\forall S\subsetneq N_1$, and (ii) $\tilde{r}^{(1)}_S+\tilde{r}^{(2)}_S\geq |\cap_{i\in N\setminus S} \overline{X}^{(2)}_i|$, $\forall S\subsetneq N$. First, consider the inequality (i). Take an arbitrary $S\subsetneq N_1$. Let $s\triangleq |S|$. First, suppose that $s=n_1-1$. Then, $S = N_1\setminus \{i\}$ for some $i\in N_1$. Since $\tilde{r}^{(1)}_S=|\overline{X}^{(1)}_i|$ and $|\cap_{i\in N_1\setminus S} \overline{X}^{(1)}_i| = |\overline{X}^{(1)}_i|$, the inequality (i) holds. Next, suppose that $1\leq s<n_1-1$. Note that $\tilde{r}^{(1)}_S=\frac{s}{n_1-1}\sum_{i\in N_1}|\overline{X}^{(1)}_i|-\sum_{i\in S}|\overline{X}^{(1)}_i|$. By applying Lemma~\ref{lem:concentration}, $\frac{\tilde{r}^{(1)}_S}{k}>(\frac{s}{n_1-1}) z_{n_1,n_1-1}-\epsilon$ and $\frac{1}{k}|\cap_{i\in N_1\setminus S} \overline{X}^{(1)}_i|<z_{n_1,s}+\epsilon$, for any $\epsilon>0$. Thus, the inequality (i) holds so long as $\frac{z_{n_1,n_1-1}}{n_1-1}>\frac{z_{n_1,s}}{s}$, and this inequality follows from Lemma~\ref{lem:PV} since $1\leq s<n_1-1$ (by assumption).
Next, consider the inequality (ii). Take an arbitrary $S\subsetneq N$. Let $S_1\triangleq S\cap N_1$ and $S_2\triangleq S\setminus N_1$. Obviously, $\tilde{r}^{(1)}_S+\tilde{r}^{(2)}_S=\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2}$. Let $s \triangleq |S_1|$ and $t\triangleq |S_2|$. (Note that $0\leq s\leq n_1$, $0\leq t\leq n-n_1$, and $0<s+t<n$.) First, suppose that $s=0$. There are two sub-cases: (a) $S_2=\{n_1+1,\dots,n_1+t\}$ ($=N_{2,t}$), $1\leq t\leq n-n_1$, and (b) $S_2=\{n_1+i_1,\dots,n_1+i_{t}\}$, $1\leq t< n-n_1$, $1\leq i_1<\dots<i_t\leq n-n_1$, such that $i_j>j$ for some $1\leq j\leq t$. In the case (a), $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2}=\tilde{r}^{(2)}_{S_2}=\tilde{r}^{(2)}_{N_{2,t}}$ and $|\cap_{j\in N\setminus S}\overline{X}^{(2)}_j| = |\cap_{j\in N\setminus N_{2,t}}\overline{X}^{(2)}_j|=\tilde{r}^{(2)}_{N_{2,t}}$. Thus, the inequality (ii) holds. In the case (b), $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2}=\tilde{r}^{(2)}_{S_2}=\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1}}\overline{X}^{(2)}_j|-\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1-1}}\overline{X}^{(2)}_j|$. Again by applying Lemma~\ref{lem:concentration}, $\frac{\tilde{r}^{(2)}_{S_2}}{k}>\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}-\epsilon$ and $\frac{1}{k} |\cap_{j\in N\setminus S}\overline{X}^{(2)}_j|<z_{n,t}+\epsilon$, for any $\epsilon>0$. Thus, the inequality (ii) holds so long as
\begin{equation}\label{eq:zzz}
\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}>z_{n,t}.\end{equation}
By rewriting~\eqref{eq:zzz} according to~\eqref{eq:zms}, it follows that~\eqref{eq:zzz} holds so long as $\sum_{i\in S_2} (1-p)^{t-i}>\sum_{1\leq j\leq t} (1-p)^{t-j}$. The latter inequality holds since $(1-p)^{t-i_j}\geq (1-p)^{t-j}$ for all $1\leq j\leq t$, and $(1-p)^{t-i_j}>(1-p)^{t-j}$ for some $1\leq j\leq t$, noting that $i_j>j$ for some $1\leq j\leq t$ (by assumption).
Next, suppose that $s=n_1-1$. Note that $S_1 = N_1\setminus\{i\}$ for some $i\in N_1$. There are two sub-cases: (a) $S_2=\{n_1+1,\dots,n\}$, and (b) $S_2=\{n_1+i_1,\dots,n_1+i_t\}$, $0\leq t<n-n_1$, $1\leq i_1<\dots<i_t\leq n-n_1$. (For $t=0$, $S_2=\emptyset$.) In the case (a), $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2}=|\overline{X}^{(1)}_i|+|\cap_{j\not\in N_{2,n-n_1}}\overline{X}^{(2)}_j| = |\overline{X}^{(1)}_i|+|\cap_{j\in N_{1}}\overline{X}^{(2)}_j|$, and $|\cap_{j\in N\setminus S}\overline{X}^{(2)}_j| = |\overline{X}^{(2)}_i|$. Since $|\overline{X}^{(1)}_i|+|\cap_{j\in N_{1}}\overline{X}^{(2)}_j| = |\overline{X}^{(2)}_i|$ for all $i\in N_1$, the inequality (ii) holds. In the case (b), $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2} = |\overline{X}^{(1)}_i|+\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1}}\overline{X}^{(2)}_j|-\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1-1}}\overline{X}^{(2)}_j|$. Again by applying Lemma~\ref{lem:concentration}, $\frac{1}{k}(\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2})>z_{n_1,n_1-1} + \sum_{i\in S_2} z_{n,i} - \sum_{i\in S_2} z_{n,i-1}-\epsilon$ and $\frac{1}{k}|\cap_{j\in N\setminus S}\overline{X}^{(2)}_j|<z_{n,n_1-1+t}+\epsilon$, for any $\epsilon>0$. Thus, the inequality (ii) holds so long as
\begin{equation}\label{eq:zzzz}
z_{n_1,n_1-1}+\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}>z_{n,n_1-1+t}.
\end{equation} Again, rewriting~\eqref{eq:zzzz}, this inequality holds so long as $1+(1-p)^{n-1}+p(1-p)^{n-1}\sum_{i\in S_2} (1-p)^{-i}>(1-p)^{n-n_1-t}+(1-p)^{n_1-1}$. Note that $i_j\geq j$ for all $1\leq j\leq t$. Thus, $\sum_{i\in S_2} (1-p)^{-i}\geq \sum_{1\leq j\leq t} (1-p)^{-j}=\frac{(1-p)^{-t}-1}{p}$. Thus,~\eqref{eq:zzzz} holds so long as $(1-p)^{n}((1-p)^{-n_1-t}-(1-p)^{-t-1})<1-(1-p)^{n_1-1}$. Obviously, $(1-p)^{n}((1-p)^{-n_1-t}-(1-p)^{-t-1})<(1-p)^{n_1+t+1}((1-p)^{-n_1-t}-(1-p)^{-t-1})=(1-p)-(1-p)^{n_1}$ since $n>n_1+t$ (by assumption). Thus,~\eqref{eq:zzzz} holds so long as $(1-p)-(1-p)^{n_1}<1-(1-p)^{n_1-1}$, and this inequality holds since $n_1>1$ (by assumption).
Now, suppose that $0< s<n_1-1$. (Note that for $t=0$, $S_2=\emptyset$.) Note that $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2}=\frac{s}{n_1-1}\sum_{i\in N_1} |\overline{X}^{(1)}_i|-\sum_{i\in S_1} |\overline{X}^{(1)}_i|$ $+$ $\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1}}\overline{X}^{(2)}_j|$ $-$ $\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1-1}}\overline{X}^{(2)}_j|$. Similarly as above, by applying Lemma~\ref{lem:concentration}, $\frac{1}{k}(\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2})>(\frac{s}{n_1-1})z_{n_1,n_1-1}+\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}-\epsilon$ and $\frac{1}{k}|\cap_{j\in N\setminus S} \overline{X}^{(2)}_j|<z_{n,s+t}+\epsilon$, for any $\epsilon>0$. Thus, the inequality (ii) holds so long as $(\frac{s}{n_1-1})z_{n_1,n_1-1}+\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}>z_{n,s+t}$. Note that $\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}>z_{n,n_1-1+t}-z_{n_1,n_1-1}$ (by~\eqref{eq:zzzz}). Thus, the inequality (ii) holds so long as
\begin{equation}\label{eq:zzz2}
z_{n,n_1-1+t}-\left(\frac{n_1-1-s}{n_1-1}\right)z_{n_1,n_1-1}>z_{n,s+t}.\end{equation} By rewriting~\eqref{eq:zzz2}, this inequality holds so long as $(1-p)^{n-t}((1-p)^{-n_1+1}-(1-p)^{-s})>(\frac{n_1-1-s}{n_1-1})((1-p)-(1-p)^{n_1})$. Obviously, $(1-p)^{-n_1+1}-(1-p)^{-s}>0$ since $s<n_1-1$. Thus, the inequality~\eqref{eq:zzz2} holds so long as $(1-p)^{n_1}((1-p)^{-n_1+1}-(1-p)^{-s})>(\frac{n_1-1-s}{n_1-1})((1-p)-(1-p)^{n_1})$ since $(1-p)^{n-t}\geq (1-p)^{n_1}$. Thus, the inequality (ii) holds so long as $\frac{(1-p)-(1-p)^{n_1-s}}{n_1-1-s}>\frac{(1-p)-(1-p)^{n_1}}{n_1-1}$. Since $n_1>1$ and $0< s<n_1-1$ (by assumption), the latter inequality holds so long as $\frac{1}{m}-\frac{1}{m+1}>\frac{(1-p)^{m}}{m}-\frac{(1-p)^{m+1}}{m+1}$, for any integer $m\geq 1$, and this inequality holds since $(1-p)^{m+1}>1-(m+1)p$ for any integer $m\geq 1$ (by Bernoulli's inequality).
Lastly, suppose that $s=n_1$. Note that $S_1=N_1$, and $S_2=\{n_1+i_1,\dots,n_1+i_t\}$, $0\leq t<n-n_1$, $1\leq i_1<\dots<i_t\leq n-n_1$. (Note, again, that for $t=0$, $S_2=\emptyset$.) Using similar techniques as above, it can be shown that $\frac{n_1}{n_1-1}z_{n_1,n_1-1}+\sum_{i\in S_2} z_{n,i}-\sum_{i\in S_2} z_{n,i-1}>z_{n,n_1+t}$. By applying this inequality along with an application of Lemma~\ref{lem:concentration}, one can see that $\tilde{r}^{(1)}_{S_1}+\tilde{r}^{(2)}_{S_2} = \frac{1}{n_1-1}\sum_{i\in N_1} |\overline{X}^{(1)}_i|+$ $\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1}}\overline{X}^{(2)}_j|-$ $\sum_{i\in S_2} |\cap_{j\not\in N_{2,i-n_1-1}}\overline{X}^{(2)}_j|\geq |\cap_{j\in N\setminus S}\overline{X}^{(2)}_j|$.
\end{proof}
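Lemma~\ref{lem:feasibility} can also be checked empirically under the random packet distribution: the following Python sketch draws a random instance with a large $k$, computes the rates of Theorem~\ref{thm:SLORandomSpecial}, and verifies the cut-set inequalities (i) and (ii) by exhaustive enumeration. The parameters are arbitrary, and since the lemma holds only w.p.~$\rightarrow 1$ as $k\rightarrow\infty$, the check may occasionally fail at finite $k$.
\begin{verbatim}
import random
from itertools import chain, combinations

random.seed(1)
n, n1, k, p = 5, 3, 20000, 0.4        # arbitrary sample parameters
N, N1 = set(range(1, n + 1)), set(range(1, n1 + 1))
X = {i: {x for x in range(k) if random.random() < p} for i in N}

X1 = set().union(*(X[i] for i in N1))
X2 = set().union(*(X[i] for i in N))
m1 = {i: X1 - X[i] for i in N1}       # bar{X}^{(1)}_i
m2 = {i: X2 - X[i] for i in N}        # bar{X}^{(2)}_i

tot = sum(len(m1[i]) for i in N1)     # Theorem 5, round-1 rates
r1 = {i: tot / (n1 - 1) - len(m1[i]) if i in N1 else 0.0 for i in N}

def cut(j):     # |intersection of bar{X}^{(2)}_u over u not in N_{2,j}|
    rest = N - set(range(n1 + 1, n1 + 1 + j))
    return len(set.intersection(*[m2[u] for u in rest]))

r2 = {i: cut(i - n1) - cut(i - n1 - 1) if i not in N1 else 0.0 for i in N}

def subsets(s):                       # non-empty proper subsets
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s)))

ok1 = all(sum(r1[i] for i in S)
          >= len(set.intersection(*[m1[i] for i in N1 - set(S)]))
          for S in subsets(N1))
ok2 = all(sum(r1[i] + r2[i] for i in S)
          >= len(set.intersection(*[m2[i] for i in N - set(S)]))
          for S in subsets(N))
print("(i) holds:", ok1, " (ii) holds:", ok2)
\end{verbatim}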
\begin{lemma}\label{lem:optimality}
$\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ is optimal w.r.t.~LP~\eqref{eq:SLOO1} and LP~\eqref{eq:SLOO2}.
\end{lemma}
\begin{proof}
The dual of LP~\eqref{eq:SLOO1} is given by
\begin{eqnarray}\label{eq:dualLP1}
\mathrm{max.} && \nonumber \hspace{-1.5em} \sum_{S\subsetneq N_1} \bigg|\bigcap_{i\in N_1\setminus S}\hspace{-0.25em}\overline{X}^{(1)}_i\bigg|s_{S}+\hspace{-0.25em}\sum_{S\not\subset N_1} \bigg|\bigcap_{i\in N\setminus S}\overline{X}^{(2)}_i\bigg| s_{S}\hspace{2em}\\
\mathrm{s.t.} && \hspace{-1.5em} \label{eq:LP1C1} \sum_{S\subsetneq N} s_{S}\mathds{1}_{\{i\in S\}}\leq 1, \hspace{0.5em} \forall i\in N_1\\
&& \hspace{-1.5em} \label{eq:LP1C2} \sum_{S\not\subset N_1} s_{S}\mathds{1}_{\{i\in S\}}\leq 0, \hspace{0.5em} \forall i\in N\\ \nonumber
&& \hspace{-1.5em} ({s}_{S}\geq 0, \forall S\subsetneq N).
\end{eqnarray} Take $\tilde{s}_{S}=\frac{1}{n_1-1}$ $\forall S\subsetneq N_1$, $|S|=n_1-1$, and $\tilde{s}_{S}=0$ for any other $S$. Note that $\{\tilde{s}_{S}\}$ meets~\eqref{eq:LP1C1} and~\eqref{eq:LP1C2} with equality, and thus, it is feasible w.r.t.~\eqref{eq:LP1C1} and~\eqref{eq:LP1C2}. Note, also, that $\sum_{S\subsetneq N_1} |\cap_{i\in N_1\setminus S}\overline{X}^{(1)}_i|\tilde{s}_{S}+\sum_{S\subsetneq N: S\not\subset N_1} |\cap_{i\in N\setminus S}\overline{X}^{(2)}_i| \tilde{s}_{S} = \tilde{r}^{(1)}_N$. By the duality principle, $\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ is thus optimal w.r.t.~LP~\eqref{eq:SLOO1}. Note that the optimal value is $r_{*}\triangleq\frac{1}{n_1-1}\sum_{i\in N_1}|\overline{X}^{(1)}_i|$. Moreover, the dual of LP~\eqref{eq:SLOO2} is given by
\begin{eqnarray}\label{eq:dualLP12}
\mathrm{max.} && \nonumber \hspace{-1.5em} \sum_{S\subsetneq N_1} \bigg|\bigcap_{i\in N_1\setminus S}\hspace{-0.25em}\overline{X}^{(1)}_i\bigg|s_{S}+\hspace{-0.25em}\sum_{S\not\subseteq N_1} \bigg|\bigcap_{i\in N\setminus S}\overline{X}^{(2)}_i\bigg| s_{S}+r_{*} s_{*}\\
\mathrm{s.t.} && \hspace{-1.5em} \label{eq:LP12C1} \sum_{S\neq N_1} s_{S}\mathds{1}_{\{i\in S\}}+s_{*}\leq 0, \hspace{0.5em} \forall i\in N_1\\
&& \hspace{-1.5em} \label{eq:LP12C2} \sum_{S\not\subseteq N_1} s_{S}\mathds{1}_{\{i\in S\}}\leq 1, \hspace{0.5em} \forall i\in N\\ \nonumber
&& \hspace{-1.5em} ({s}_{S}\geq 0, \forall S\subsetneq N, S\neq N_1),
\end{eqnarray} where $s_{*}$ is unrestricted in sign. Take $\tilde{s}_{S}=\frac{1}{n_1-1}$ $\forall S\subsetneq N_1$, $|S|=n_1-1$, $\tilde{s}_{N\setminus N_1}=1$, $\tilde{s}_{*}=-1$, and $\tilde{s}_{S}=0$ for any other $S$. Note that $\{\{\tilde{s}_{S}\},\tilde{s}_{*}\}$ meets~\eqref{eq:LP12C1} and~\eqref{eq:LP12C2} with equality. Thus, $\{\{\tilde{s}_{S}\},\tilde{s}_{*}\}$ is feasible w.r.t.~\eqref{eq:LP12C1} and~\eqref{eq:LP12C2}. Note, also, that
$\sum_{S\subsetneq N_1} |\cap_{i\in N_1\setminus S}\overline{X}^{(1)}_i|\tilde{s}_{S}+\sum_{S\not\subseteq N_1} |\cap_{i\in N\setminus S}\overline{X}^{(2)}_i| \tilde{s}_{S}+r_{*}\tilde{s}_{*} = \tilde{r}^{(2)}_N$. By the duality principle, $\{\tilde{r}^{(1)}_i,\tilde{r}^{(2)}_i\}$ is thus optimal w.r.t.~LP~\eqref{eq:SLOO2}, and the optimal value is $|\cap_{i\in N_1} \overline{X}^{(2)}_i|$.
\end{proof}
\bibliographystyle{IEEEtran}